CN110537922A - Method and system for recognizing lower limb movement during human walking based on deep learning - Google Patents

Method and system for recognizing lower limb movement during human walking based on deep learning

Info

Publication number
CN110537922A
Authority
CN
China
Prior art keywords
neural network, preprocessed, dimensional, preprocessed electromyographic, signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910847000.8A
Other languages
Chinese (zh)
Other versions
CN110537922B (en)
Inventor
王兴坚
池小楷
王少萍
安麦灵
苗忆南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingzhi test dimension (Beijing) Technology Co.,Ltd.
Original Assignee
Beijing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Aeronautics and Astronautics filed Critical Beijing University of Aeronautics and Astronautics
Priority to CN201910847000.8A priority Critical patent/CN110537922B/en
Publication of CN110537922A publication Critical patent/CN110537922A/en
Application granted granted Critical
Publication of CN110537922B publication Critical patent/CN110537922B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1071 Measuring physical dimensions, e.g. size of the entire body or parts thereof, measuring angles, e.g. using goniometers
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/112 Gait analysis
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/389 Electromyography [EMG]
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/7235 Details of waveform analysis
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Power Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method and a system for recognizing lower limb movement during human walking based on deep learning. In the method, wireless sensors collect leg electromyographic signals of a subject; the acquired signals are filtered and standardized, and the time domain and frequency domain characteristics of the preprocessed electromyographic signals are then extracted. The time domain characteristics, the frequency domain characteristics and the original data together serve as the input of a deep neural network model. The model processes this mixed input as follows: the original data are processed by a one-dimensional convolutional neural network and a recurrent neural network, the frequency domain characteristics are processed by a two-dimensional convolutional neural network, and the time domain characteristics are processed by a recurrent neural network; all processed signals finally pass through a fully connected network, which outputs the recognition results for human walking gait and joint angles. The method can quickly and accurately identify the walking gait and joint angles of the human body, and thus provides accurate wearer motion information for an exoskeleton robot.

Description

Method and system for recognizing lower limb movement during human walking based on deep learning
Technical Field
The invention relates to the technical field of computer pattern recognition, and in particular to a method and a system for recognizing lower limb movement during human walking based on deep learning.
Background
According to biological analysis, when a human body moves, skeletal muscles continuously contract and relax with the limb movement under the control of the cerebellum and brainstem. During this movement, neurons in the muscles generate action potentials, which are conducted along the neurons and superimposed in time and space to form the electromyography (EMG) signal. The surface electromyographic signal (sEMG) is acquired by placing electrodes on the surface of the skin; corresponding features can be extracted from it to obtain the motion information of the lower limbs of the human body. The electromyographic signal is a complex, non-stationary signal with strong ambiguity; it is easily disturbed by the environment and is affected by multiple factors such as the degree of muscle fatigue and irrelevant actions, so its generation mechanism is very complex. Because the mapping between electromyographic signal characteristics and human motion commands is strongly nonlinear, a complete functional mapping is difficult to find. A nonlinear method can instead be adopted to construct a multi-dimensional dynamic model at a higher level and extract more hidden detail information.
For human motion recognition based on electromyographic signals, deep neural network methods offer strong nonlinear modeling ability, strong adaptive capacity and excellent recognition performance. The translation invariance of a convolutional neural network is used to identify local patterns in the sequence: because the same transformation is applied at every position of the sequence, a pattern learned at one position can be recognized at other positions, which uses the data efficiently, reduces the number of network parameters and improves computational efficiency. The dependence of sequence data on time is another important issue; a recurrent neural network can selectively retain and transmit information across the time steps of its hidden state. Effectively combining a convolutional neural network with a recurrent neural network gives the system memory and parameter sharing and allows it to learn the nonlinear characteristics of the electromyographic signal sequence efficiently. A deep recurrent convolutional neural network designed around the characteristics of the electromyographic signal can therefore effectively solve the problem of modeling the mapping between electromyographic signals and human behavior, accurately and quickly recognize the human commands carried by the electromyographic signals, and improve the control precision of a mechanical exoskeleton.
Traditional gait recognition is mainly applied in the field of intelligent video surveillance, where the aim is to identify a person by his or her walking posture. Its technical limitations are low accuracy for fine human motion recognition, poor real-time performance and the inability to recognize continuously.
Disclosure of Invention
The invention aims to provide a method and a system for recognizing lower limb movement during human walking based on deep learning, so as to solve the problems that traditional gait recognition methods have low accuracy for fine human motion recognition, poor real-time performance and no capability for continuous recognition.
To achieve the above purpose, the invention provides the following scheme:
a lower limb movement identification method in a human body walking process based on deep learning comprises the following steps:
acquiring an original electromyographic signal; the original electromyographic signals are acquired by wireless sensors which are stuck to the surfaces of eight muscles on the right side of a thigh of a human body;
Preprocessing the original electromyographic signals to generate preprocessed electromyographic signals;
extracting time domain characteristics and frequency domain characteristics of the preprocessed electromyographic signals; the time domain characteristics comprise waveform length, average absolute value, variance, root mean square and zero crossing point number; the frequency domain characteristic is a frequency spectrogram of the preprocessed electromyographic signals;
acquiring a deep neural network model; the deep neural network model comprises a convolutional neural network, a cyclic neural network and a fully-connected neural network; the convolutional neural network comprises a two-dimensional convolutional neural network and a one-dimensional convolutional neural network; the recurrent neural network comprises a first recurrent neural network and a second recurrent neural network;
taking the time domain characteristics of the preprocessed electromyographic signals as the input of the first cyclic neural network, and outputting a first one-dimensional vector after the time domain characteristics are processed by the first cyclic neural network;
The frequency domain characteristics of the preprocessed electromyographic signals are used as the input of the two-dimensional convolutional neural network, and a second one-dimensional vector is output after the two-dimensional convolutional neural network is processed;
The preprocessed electromyographic signals are processed by the one-dimensional convolutional neural network and the second cyclic neural network in sequence, and then a third one-dimensional vector is output;
taking a combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as an input of the fully-connected neural network, and outputting a lower limb movement recognition result of the human body in a walking process after the combined vector is processed by the fully-connected neural network; the lower limb movement recognition result comprises a predicted walking gait and a joint angle; the walking gait is divided into a swing period, a support early period, a support middle period and a support final period; the joint angles include hip, knee and ankle joint angles.
optionally, the preprocessing the raw electromyographic signal to generate a preprocessed electromyographic signal includes:
performing band-pass filtering on the original myoelectric signal by adopting a second-order Butterworth filter to generate a band-pass filtered myoelectric signal;
eliminating power frequency interference in the band-pass filtered electromyographic signals by adopting a second-order band-stop Butterworth filter to generate interference-removed electromyographic signals;
and standardizing the data of the electromyographic signals after interference removal by adopting a zero-mean standardization method to generate preprocessed electromyographic signals.
Optionally, the extracting the time domain feature of the preprocessed electromyographic signal specifically includes:
dividing the preprocessed electromyographic signals into a plurality of preprocessed electromyographic signal segments;
calculating the waveform length WL of the preprocessed electromyographic signal segment by adopting the formula WL = Σ_{k=1}^{N-1} |x_{k+1} - x_k|; wherein x_k represents the amplitude of the k-th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; x_{k+1} represents the amplitude of the (k+1)-th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; N is the number of preprocessed electromyographic signal sample points in the preprocessed electromyographic signal segment;
calculating the average absolute value MAV of the preprocessed electromyographic signal segments by adopting the formula MAV = (1/N) Σ_{k=1}^{N} |x_k|;
calculating the variance VAR of the preprocessed electromyographic signal segments by adopting the formula VAR = (1/(N-1)) Σ_{k=1}^{N} x_k^2;
calculating the root mean square RMS of the preprocessed electromyographic signal segments by adopting the formula RMS = √((1/N) Σ_{k=1}^{N} x_k^2);
calculating the number of zero crossings ZC of the preprocessed electromyographic signal segments by adopting the formula ZC = Σ_{k=1}^{N-1} f(x_k, x_{k+1}), where f(x_k, x_{k+1}) = 1 if x_k·x_{k+1} < 0 and |x_k - x_{k+1}| ≥ th, and 0 otherwise; where th represents a small threshold.
Optionally, the preprocessed electromyographic signal is processed by the one-dimensional convolutional neural network and the second cyclic neural network in sequence, and then a third one-dimensional vector is output, which specifically includes:
extracting effective characteristics in the preprocessed electromyographic signals by adopting the one-dimensional convolutional neural network;
inputting the effective features into the second recurrent neural network, and outputting a third one-dimensional vector after the effective features are processed by the second recurrent neural network; the second recurrent neural network comprises three gated recurrent unit (GRU) recurrent layers.
optionally, the step of taking a combined vector of the first one-dimensional vector, the second one-dimensional vector, and the third one-dimensional vector as an input of the fully-connected neural network, and outputting a lower limb movement recognition result of a human body in a walking process after the combined vector is processed by the fully-connected neural network specifically includes:
combining the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector into a one-dimensional combined vector [ a1, a2, a3] as an input of the fully-connected neural network; the fully-connected neural network comprises a first fully-connected layer, a first output layer, a Dropout layer, a second fully-connected layer and a second output layer;
The combined vector [ a1, a2, a3] is processed by the first fully connected layer to obtain a predicted joint angle, which is output by the first output layer;
The combined vector [ a1, a2, a3] is processed by the first fully connected layer, the Dropout layer, and the second fully connected layer in sequence to generate a predicted walking gait, which is output by the second output layer.
A human walking process lower limb movement recognition system based on deep learning, the system comprising:
The original signal acquisition module is used for acquiring an original electromyographic signal; the original electromyographic signals are acquired by wireless sensors attached to the skin surfaces of eight muscles of the right leg of a human body;
The signal preprocessing module is used for preprocessing the original electromyographic signals to generate preprocessed electromyographic signals;
the characteristic extraction module is used for extracting time domain characteristics and frequency domain characteristics of the preprocessed electromyographic signals; the time domain characteristics comprise waveform length, average absolute value, variance, root mean square and zero crossing point number; the frequency domain characteristic is a frequency spectrogram of the preprocessed electromyographic signals;
the deep neural network model establishing module is used for acquiring a deep neural network model; the deep neural network model comprises a convolutional neural network, a cyclic neural network and a fully-connected neural network; the convolutional neural network comprises a two-dimensional convolutional neural network and a one-dimensional convolutional neural network; the recurrent neural network comprises a first recurrent neural network and a second recurrent neural network;
the time domain characteristic processing module is used for taking the time domain characteristics of the preprocessed electromyographic signals as the input of the first cyclic neural network, and outputting a first one-dimensional vector after the time domain characteristics are processed by the first cyclic neural network;
the frequency domain characteristic processing module is used for taking the frequency domain characteristics of the preprocessed electromyographic signals as the input of the two-dimensional convolutional neural network, and outputting a second one-dimensional vector after the frequency domain characteristics are processed by the two-dimensional convolutional neural network;
The electromyographic signal processing module is used for sequentially processing the preprocessed electromyographic signals through the one-dimensional convolutional neural network and the second cyclic neural network and outputting a third one-dimensional vector;
The motion recognition module is used for taking a combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as the input of the fully-connected neural network, and outputting a lower limb motion recognition result of the human body in the walking process after the combined vector is processed by the fully-connected neural network; the lower limb movement recognition result comprises a predicted walking gait and a joint angle; the walking gait is divided into a swing period, a support early period, a support middle period and a support final period; the joint angles include hip, knee and ankle joint angles.
Optionally, the signal preprocessing module specifically includes:
The signal filtering unit is used for performing band-pass filtering on the original myoelectric signal by adopting a second-order Butterworth filter to generate a band-pass filtered myoelectric signal;
the interference elimination unit is used for eliminating power frequency interference in the band-pass filtered electromyographic signals by adopting a second-order band-stop Butterworth filter to generate interference-removed electromyographic signals;
And the zero-mean standardization unit is used for standardizing the data of the electromyographic signals after interference removal by adopting a zero-mean standardization method to generate preprocessed electromyographic signals.
optionally, the feature extraction module specifically includes:
the signal dividing unit is used for dividing the preprocessed electromyographic signals into a plurality of preprocessed electromyographic signal segments;
the waveform length calculating unit is used for calculating the waveform length WL of the preprocessed electromyographic signal segment by adopting the formula WL = Σ_{k=1}^{N-1} |x_{k+1} - x_k|; wherein x_k represents the amplitude of the k-th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; x_{k+1} represents the amplitude of the (k+1)-th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; N is the number of preprocessed electromyographic signal sample points in the preprocessed electromyographic signal segment;
the average absolute value calculating unit is used for calculating the average absolute value MAV of the preprocessed electromyographic signal segments by adopting the formula MAV = (1/N) Σ_{k=1}^{N} |x_k|;
The variance calculating unit is used for calculating the variance VAR of the preprocessed electromyographic signal segments by adopting the formula VAR = (1/(N-1)) Σ_{k=1}^{N} x_k^2;
The root mean square calculation unit is used for calculating the root mean square RMS of the preprocessed electromyographic signal segments by adopting the formula RMS = √((1/N) Σ_{k=1}^{N} x_k^2);
a zero crossing point calculating unit, configured to calculate the number of zero crossings ZC of the preprocessed electromyographic signal segments by adopting the formula ZC = Σ_{k=1}^{N-1} f(x_k, x_{k+1}), where f(x_k, x_{k+1}) = 1 if x_k·x_{k+1} < 0 and |x_k - x_{k+1}| ≥ th, and 0 otherwise; where th represents a small threshold.
Optionally, the electromyographic signal processing module specifically includes:
The effective feature extraction unit is used for extracting effective features in the preprocessed electromyographic signals by adopting the one-dimensional convolutional neural network;
the effective feature processing unit is used for inputting the effective features into the second recurrent neural network, and outputting a third one-dimensional vector after the effective features are processed by the second recurrent neural network; the second recurrent neural network comprises three gated recurrent unit (GRU) recurrent layers.
optionally, the motion recognition module specifically includes:
a vector combination unit, configured to combine the first one-dimensional vector, the second one-dimensional vector, and the third one-dimensional vector into a one-dimensional combination vector [ a1, a2, a3] as an input of the fully-connected neural network; the fully-connected neural network comprises a first fully-connected layer, a first output layer, a Dropout layer, a second fully-connected layer and a second output layer;
A joint angle prediction unit for processing the combined vector [ a1, a2, a3] through the first fully connected layer to obtain a predicted joint angle, the joint angle being output by the first output layer;
A walking gait prediction unit, configured to generate a predicted walking gait after processing the combined vector [ a1, a2, a3] sequentially through the first fully connected layer, the Dropout layer, and the second fully connected layer, where the walking gait is output by the second output layer.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
The invention provides a method and a system for recognizing lower limb movement during human walking based on deep learning. In the method, the original electromyographic signals acquired by wireless sensors are preprocessed to generate preprocessed electromyographic signals, and the time domain and frequency domain characteristics of the preprocessed electromyographic signals are extracted. The time domain characteristics, the frequency domain characteristics and the preprocessed electromyographic signals are then taken as input; a convolutional neural network and a recurrent neural network respectively process the different inputs and extract effective information, the different pieces of information are flattened and combined into a one-dimensional combined vector, and the recognition result of the lower limb movement during human walking is obtained through fully connected layers. Gait recognition and joint angle recognition are innovatively combined to jointly predict the lower limb movement trend, which gives the method high recognition accuracy; and since the predicted hip, knee and ankle joint angles are essentially consistent with the actual angles captured by the VICON system, the recognition method also offers good real-time performance and continuous recognition of the hip, knee and ankle joint angles.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a basic schematic block diagram of a method for identifying lower limb movement during walking of a human body according to the present invention;
FIG. 2 is a flowchart of a method for identifying lower limb movement during human walking based on deep learning according to the present invention;
FIG. 3 is a schematic diagram of the pasting position of the surface electromyography electrode provided by the invention;
FIG. 4 is a schematic view of the attachment positions of the foot switches according to the present invention;
FIG. 5 is a schematic diagram of a position of a mark point of a VICON sensor according to the present invention;
FIG. 6 is a schematic diagram of the correspondence between the electromyographic signals and the human body movement information (plantar pressure signals, joint angle signals) provided by the invention;
FIG. 7 is a frequency distribution diagram of electromyographic signals of eight muscles of the right lower limb according to the present invention;
FIG. 8 is a schematic structural diagram of a deep neural network model provided in the present invention;
FIG. 9 is a diagram of a GRU recurrent neural network architecture provided by the present invention;
FIG. 10 is a schematic structural diagram of a two-dimensional convolutional neural network provided by the present invention;
FIG. 11 is a schematic structural diagram of a one-dimensional convolutional neural network provided in the present invention;
FIG. 12 is a schematic diagram of the results of four successive gait recognition provided by the invention;
FIG. 13 is a schematic diagram of the continuous identification result of the hip, knee and ankle joint angles provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method and a system for recognizing lower limb movement during human walking based on deep learning, which quickly and accurately recognize human walking gait and joint angles and provide accurate input for an exoskeleton robot.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, embodiments are described in further detail below with reference to the accompanying figures.
Fig. 1 is a basic principle block diagram of the method for identifying lower limb movement during human walking provided by the invention. As shown in Fig. 1, the basic principle framework of the method is as follows:
Step 1: acquire the electromyographic signals, plantar pressure signals and joint angles.
In the stage of acquiring the original electromyographic signals, healthy subjects are selected to walk on a treadmill at normal speed; wireless sensors collect the electromyographic signals of eight muscles of the subject's right leg together with the plantar pressure signals of both feet, and the acquired signals are transmitted to a desktop receiver in real time and stored as raw data. According to the different values of the plantar pressure signals, the data are divided into four walking gaits: a swing period, a support early period, a support middle period and a support final period. Meanwhile, the optical motion capture system VICON captures the motion trajectories of the subject's legs during walking, from which the sagittal-plane motion angles of the hip, knee and ankle joints are obtained.
Step 2: preprocess the data.
This step mainly extracts effective time domain and frequency domain characteristics from the preprocessed electromyographic signals.
Step 3: process the data with the neural network and identify the motion result.
The time domain characteristics, the frequency domain characteristics and the preprocessed electromyographic signals are used as input. A convolutional neural network and a recurrent neural network respectively process the different inputs and extract effective information; the different pieces of information are then flattened and combined into a one-dimensional vector, and the recognition results are obtained through fully connected layers. The recognition results are the predicted walking gait and joint angles: the walking gait is one of the swing period (S0), support early period (S1), support middle period (S2) and support final period (S3), and the joint angles are the angles of the hip, knee and ankle joints.
Step 4: verify the motion recognition results.
Comparing the real value and the predicted value of the walking gait to obtain the identification accuracy of the method; and comparing the identification results of the joint angles to obtain the identification precision of the method.
Fig. 2 is a flowchart of a method for identifying lower limb movement in a human walking process based on deep learning provided by the invention. Referring to fig. 2, the method for recognizing the lower limb movement in the walking process of the human body provided by the invention specifically comprises the following steps:
Step 201: acquire the raw electromyographic signals.
Human walking is mainly driven by the muscles of the thigh and calf. Surface electromyographic electrodes are attached to the right leg to obtain the electromyographic signals of the corresponding muscles, effective time domain and frequency domain characteristics are extracted from the preprocessed electromyographic signals, and these characteristics together with the preprocessed electromyographic signals serve as the input of the recognizer; a neural network structure is built as the recognizer, so that the recognition of human walking gait can be achieved.
In the embodiment of the invention, three healthy subjects were selected to walk on a treadmill at speeds of 3, 3.5 and 4 km/h. Each speed was tested 3 times, and each test lasted 6 minutes. During each experiment, the electromyographic signals and plantar pressure signals were collected using surface electromyographic electrodes and a wireless sensor system (Noraxon Desktop DTS-8) at a sampling rate of 1.5 kHz.
Fig. 3 is a schematic diagram of the attachment positions of the surface electromyographic electrodes provided by the invention. As shown in Fig. 3, eight pairs of surface electromyographic electrodes are placed on eight muscles of the right lower limb: the rectus femoris (RF), vastus lateralis (VLO), vastus medialis (VMO), biceps femoris (BF), semitendinosus (ST), tibialis anterior (TIA), lateral gastrocnemius (LGA) and medial gastrocnemius (MGA).
Fig. 4 is a schematic diagram of the positions of the foot switches according to the present invention. As shown in Fig. 4, four foot switches are attached to the sole of each foot of the subject to collect the plantar pressure signals. The four attachment positions are the big toe, the first metatarsal, the fifth metatarsal and the heel. Meanwhile, marker points are attached to both legs of the subject so that the VICON optical motion capture system can capture the motion trajectories of both legs during walking. The VICON marker point positions are shown in Fig. 5.
The electromyographic signals and plantar pressure signals collected by the wireless sensor and the foot switches are sent to the desktop receiver in real time and stored as raw data.
Step 202: preprocess the original electromyographic signals to generate preprocessed electromyographic signals.
Fig. 6 is a schematic diagram of the correspondence between the electromyographic signals and the human body movement information (plantar pressure signals and joint angle signals). As shown in Fig. 6, five muscles of the thigh (CH1-CH5) are mainly responsible for driving the hip and knee joint movements, and three muscles of the calf (CH6-CH8) are related to ankle joint movement. As can be seen from Fig. 6, the electromyographic signals of different positions show a certain degree of difference across the phases of the same walking cycle. In the swing period and the early support period, the thigh electromyographic signals show more obvious amplitude changes, because during the swing period of the walking cycle the thigh muscles must pull the hip joint to swing while the knee joint is bent, so the thigh muscles exert force in a concentrated way; in the early support period, the knee bears the weight of the body and the muscles must drive the knee to provide a supporting reaction force. Before, during and after the support phase, the calf electromyographic signals show relatively higher-frequency amplitude changes, because the calf must provide supporting force and the ankle joint rotation moment.
The energy of the electromyographic signals is concentrated between 0 and 500 Hz, so the original electromyographic signals are band-pass filtered with a second-order Butterworth filter whose passband covers this range, generating the band-pass filtered electromyographic signals. In addition, a second-order band-stop Butterworth filter with a stopband between 49 and 51 Hz is used to eliminate the power frequency interference in the band-pass filtered electromyographic signals, generating the interference-removed electromyographic signals. The interference-removed electromyographic signals are then standardized with the zero-mean normalization method shown in equation (1), so that the data follow a standard normal distribution, giving the preprocessed electromyographic signals. The expression of the zero-mean normalization method is as follows:
x_norm = (x_f - μ) / σ    (1)
where x_f is the interference-removed electromyographic signal, and μ and σ are the mean and standard deviation of the interference-removed electromyographic signal, respectively.
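A minimal Python sketch of this preprocessing step is given below, assuming a 10 Hz lower passband edge (the text only states a 0-500 Hz passband, which a band-pass filter cannot realize literally), per-channel normalization and scipy's zero-phase filtering; it is an illustration under these assumptions, not the patent's exact implementation.

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 1500.0  # sampling rate of the wireless EMG sensor (Hz)

    def preprocess_emg(raw, low_hz=10.0, high_hz=500.0):
        """Band-pass filter, notch filter and zero-mean normalize raw EMG.

        raw: array of shape (n_samples, 8), one column per muscle channel.
        """
        nyq = FS / 2.0
        # second-order Butterworth band-pass (assumed 10-500 Hz passband)
        b_bp, a_bp = butter(2, [low_hz / nyq, high_hz / nyq], btype="bandpass")
        x = filtfilt(b_bp, a_bp, raw, axis=0)
        # second-order Butterworth band-stop at 49-51 Hz to remove power-line interference
        b_bs, a_bs = butter(2, [49.0 / nyq, 51.0 / nyq], btype="bandstop")
        x = filtfilt(b_bs, a_bs, x, axis=0)
        # zero-mean normalization of equation (1): (x_f - mu) / sigma, per channel
        return (x - x.mean(axis=0)) / x.std(axis=0)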
Step 203: extract the time domain and frequency domain characteristics of the preprocessed electromyographic signals.
The time domain and frequency domain characteristics of the preprocessed electromyographic signals are extracted as part of the input of the deep neural network. The time domain characteristics comprise the waveform length, average absolute value, variance, root mean square and number of zero crossings; the frequency domain characteristic is the spectrogram of the preprocessed electromyographic signals.
To meet the real-time requirement, the preprocessed electromyographic signal data are divided into a number of preprocessed electromyographic signal segments (segments for short); each segment comprises 250 sample points (166 ms), and adjacent segments overlap by 235 preprocessed electromyographic signal sample points (sample points for short). The feature extraction process for any preprocessed electromyographic signal segment is as follows:
(a) time domain feature extraction
(1) Waveform length (WL): this feature provides information about the cumulative change in amplitude within each segment. The waveform length is defined as:
WL = Σ_{k=1}^{N-1} |x_{k+1} - x_k|    (2)
where WL represents the waveform length of the preprocessed electromyographic signal segment; x_k represents the amplitude of the k-th preprocessed electromyographic signal sample point in the segment; x_{k+1} represents the amplitude of the (k+1)-th preprocessed electromyographic signal sample point in the segment; and N is the number of preprocessed electromyographic signal sample points in the segment, where N is 250.
(2) Mean absolute value (MAV): this feature reflects energy information. For a segment, the mean absolute value MAV can be expressed as:
MAV = (1/N) Σ_{k=1}^{N} |x_k|    (3)
(3) Variance (VAR for short): the variance describes the deviation from the mathematical expectation. For each segment, the variance VAR can be calculated as:
VAR = (1/(N-1)) Σ_{k=1}^{N} x_k^2    (4)
(4) Root mean square (RMS): this feature can be used to analyze the data noise in each preprocessed electromyographic signal segment. The root mean square RMS is calculated as:
RMS = √((1/N) Σ_{k=1}^{N} x_k^2)    (5)
(5) Zero crossing number (ZC for short): a simple frequency measure can be obtained by counting the zero crossings of the signal. The number of zero crossings ZC is defined as:
ZC = Σ_{k=1}^{N-1} f(x_k, x_{k+1}),  where f(x_k, x_{k+1}) = 1 if x_k·x_{k+1} < 0 and |x_k - x_{k+1}| ≥ th, and 0 otherwise    (6)
where th represents a small threshold.
Finally, the time domain features can be represented by a 5 × 8 matrix, i.e.:
F = [ WL_RF   WL_VLO   WL_VMO   WL_BF   WL_ST   WL_TIA   WL_LGA   WL_MGA
      MAV_RF  MAV_VLO  MAV_VMO  MAV_BF  MAV_ST  MAV_TIA  MAV_LGA  MAV_MGA
      VAR_RF  VAR_VLO  VAR_VMO  VAR_BF  VAR_ST  VAR_TIA  VAR_LGA  VAR_MGA
      RMS_RF  RMS_VLO  RMS_VMO  RMS_BF  RMS_ST  RMS_TIA  RMS_LGA  RMS_MGA
      ZC_RF   ZC_VLO   ZC_VMO   ZC_BF   ZC_ST   ZC_TIA   ZC_LGA   ZC_MGA ]    (7)
The elements WL, MAV, VAR, RMS and ZC in the time-domain feature matrix F are the time-domain features calculated using equations (2)-(6), and the element subscripts RF, VLO, VMO, BF, ST, TIA, LGA and MGA indicate that the feature is calculated from the preprocessed electromyographic signal of the corresponding muscle position. For example, WL_RF in the matrix represents the waveform length calculated from the preprocessed electromyographic signal of the rectus femoris (RF) position, and RMS_MGA represents the root mean square calculated from the preprocessed electromyographic signal of the medial gastrocnemius (MGA) position.
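The following Python sketch illustrates how the 5 × 8 matrix F of equation (7) can be computed for the overlapping segments described above (250-point windows, 235-point overlap, i.e., a 15-point step); the numerical value of the zero-crossing threshold th is an assumption.

    import numpy as np

    def sliding_windows(emg, win=250, step=15):
        """Split preprocessed EMG of shape (n_samples, 8) into overlapping segments."""
        starts = range(0, emg.shape[0] - win + 1, step)
        return np.stack([emg[s:s + win] for s in starts])    # (n_segments, 250, 8)

    def time_domain_features(segment, th=1e-3):
        """Compute the 5 x 8 time-domain feature matrix F for one 250 x 8 segment."""
        x = segment
        wl = np.abs(np.diff(x, axis=0)).sum(axis=0)          # waveform length, eq. (2)
        mav = np.abs(x).mean(axis=0)                         # mean absolute value, eq. (3)
        var = (x ** 2).sum(axis=0) / (x.shape[0] - 1)        # variance, eq. (4)
        rms = np.sqrt((x ** 2).mean(axis=0))                 # root mean square, eq. (5)
        sign_change = (x[:-1] * x[1:]) < 0
        large_enough = np.abs(x[:-1] - x[1:]) >= th
        zc = (sign_change & large_enough).sum(axis=0)        # zero crossings, eq. (6)
        return np.vstack([wl, mav, var, rms, zc])            # rows: WL, MAV, VAR, RMS, ZC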
(b) frequency domain feature extraction
The spectrogram of each segment is extracted as a frequency domain feature and forms part of the classifier input. The invention extracts the spectrogram of each segment using a 64-point short-time Fourier transform (STFT) with a Hann window and 32-point overlap. The spectrogram of each segment therefore has 33 distinct frequency bins (0-750 Hz) and 9 time bins, i.e., it is a 33 × 9 × 8 (frequency × time × channel) matrix.
FIG. 7 is a frequency distribution diagram of the electromyographic signals of the eight muscles of the right lower limb according to the present invention. Since most of the energy of the preprocessed electromyographic signals is distributed in the frequency range of 0 to 200 Hz, as shown in FIG. 7, only the first 9 rows (0-187.5 Hz) of the spectrogram are retained. Finally, the spectrogram matrix (frequency domain feature matrix) of each segment has dimensions 9 × 9 × 8 (frequency × time × channel).
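A short Python sketch of the frequency-domain feature extraction, using scipy's STFT with a 64-point Hann window and 32-point overlap and keeping only the first 9 frequency rows; the use of the spectrogram magnitude and scipy's default zero-padded boundary handling (which yields the 9 time bins mentioned above) are assumptions.

    import numpy as np
    from scipy.signal import stft

    FS = 1500.0  # Hz

    def spectrogram_features(segment):
        """Frequency-domain input for one preprocessed 250 x 8 segment."""
        # 64-point STFT, Hann window, 32-point overlap, computed along the time axis
        f, t, Z = stft(segment, fs=FS, window="hann", nperseg=64, noverlap=32, axis=0)
        # Z has shape (33 frequency bins, 8 channels, 9 time bins);
        # take the magnitude and reorder to (frequency, time, channel)
        spec = np.abs(Z).transpose(0, 2, 1)
        return spec[:9]    # keep 0-187.5 Hz -> shape (9, 9, 8)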
According to the method, the time domain feature matrix F calculated from the preprocessed electromyographic signals is used as the time domain feature, the 9 × 9 × 8 spectrogram matrix is used as the frequency domain feature, and the two together form part of the input of the deep neural network model.
Finally, besides the time domain feature matrix and the spectrogram matrix, the input of the deep neural network model also contains the preprocessed electromyographic signals themselves, i.e., a 250 × 8 matrix composed of the 250 sample points of the 8 muscle-position channels.
Step 204: obtain the deep neural network model.
In order to contain more information from electromyographic signals, the invention establishes a multi-level deep neural network model. Fig. 8 is a schematic structural diagram of a deep neural network model provided by the present invention. As shown in FIG. 8, the deep neural network model established by the present invention includes a convolutional neural network, a cyclic neural network and a fully-connected neural network.
The convolutional neural network comprises a two-dimensional convolutional neural network and a one-dimensional convolutional neural network; each convolutional neural network includes two convolutional layers and two pooling layers. Because the convolutional and pooling layers exploit the translation invariance of the input, convolutional neural networks are widely used for feature extraction and can greatly reduce the amount of computation during deep neural network training.
the recurrent neural network includes a first recurrent neural network and a second recurrent neural network, each of which includes three recurrent layers (recurrent layer 1, recurrent layer 2, and recurrent layer 3).
the fully-connected neural network includes a first fully-connected layer (fully-connected layer 1), a first output layer (output layer 1), a Dropout layer, a second fully-connected layer (fully-connected layer 2), and a second output layer (output layer 2).
As shown in Fig. 8, the deep neural network model of the invention adopts a multi-branch neural network structure. The first part of the structure is the first recurrent neural network (recurrent neural network 1), whose input is the time domain characteristics of a segment of preprocessed electromyographic signals; the second part is the two-dimensional convolutional neural network, whose input is the frequency domain characteristics of the same segment; the third part is the combination of the one-dimensional convolutional neural network and the second recurrent neural network (recurrent neural network 2), whose input is the preprocessed electromyographic signals of the same segment. The final outputs of the three parts are three one-dimensional vectors, which are combined into a one-dimensional combined vector that forms the input of the fully connected neural network; the output of the fully connected neural network gives the predicted values of the walking gait and the joint angles.
Step 205: take the time domain characteristics of the preprocessed electromyographic signals as the input of the first recurrent neural network, which processes them and outputs a first one-dimensional vector.
The first part of the deep neural network model structure takes the time domain features as input. Because the dependence of sequence data on time is an important problem, and a recurrent neural network can selectively retain and transmit information across the time steps of its hidden state, the invention processes the time domain characteristics of the preprocessed electromyographic signals with a recurrent neural network built from three layers of gated recurrent units (GRU) to achieve sequence sensitivity.
Fig. 9 is a structural diagram of the recurrent neural network with three GRU layers provided by the invention. The basic structure of the recurrent neural networks of the invention (both the first and the second recurrent neural network) is shown in Fig. 9 and comprises three gated recurrent unit (GRU) recurrent layers.
The gated recurrent unit GRU uses an update gate and a reset gate to decide what information is passed to the output. At time t, the update gate z_t is calculated as follows:
z_t = σ(W_z·x_t + U_z·h_{t-1})    (8)
h_{t-1} stores the information of the previous time step t-1 and, multiplied by the first weight U_z, helps the model determine how much past information (from time t-1) is passed into the future; at the same time the input information x_t is multiplied by the second weight W_z. The two terms are added, and the result is compressed to between 0 and 1 with the sigmoid activation function σ. The sigmoid activation function used in equation (8) is the common activation function f(x) = 1/(1 + e^(-x)), where x is the input and f(x) is the output.
In equation (8), the input information x_t is the time domain feature matrix extracted in step 203, and the information h_{t-1} of the previous time step t-1 is the value obtained from the previous segment through a series of calculations; this is an iterative process.
The model uses a reset gate r_t to determine how much past information to forget; the calculation formula is as follows:
r_t = σ(W_r·x_t + U_r·h_{t-1})    (9)
Equation (9) is similar to the update gate equation (8), except that different weights are used: the input information x_t and h_{t-1} are multiplied by the third weight W_r and the fourth weight U_r respectively, the results are summed, and the sum is compressed to between 0 and 1 with the sigmoid function σ. The reset gate r_t is used to form a new memory content, i.e., the current memory content, which uses the reset gate to store the relevant past information; the calculation formula is as follows:
h̃_t = tanh(W_h·x_t + r_t ⊙ (U_h·h_{t-1}))    (10)
The current memory content determines what is removed from the previous time step by computing the Hadamard product between the reset gate r_t and U_h·h_{t-1}; when the neural network is near the end of the sequence, it learns to set the r_t vector close to 0, clearing the past and focusing only on the last part of the sequence. Meanwhile, the current memory content multiplies the input information x_t and h_{t-1} by the sixth weight W_h and the fifth weight U_h respectively and applies the nonlinear activation function tanh (hyperbolic tangent) to obtain the required information. The weights W_z, U_z, W_r, U_r, W_h and U_h are continuously updated during the training of the recurrent neural network to obtain better results.
Finally, the network determines the final information h_t of the current time step and passes it to the next time step. The update gate z_t is therefore introduced to weigh the information h_{t-1} collected from the previous time against the current memory content; the calculation formula is as follows:
h_t = z_t ⊙ h_{t-1} + (1 - z_t) ⊙ h̃_t    (11)
For example, when the most relevant information of the sequence is at its beginning, the model can set the update gate z_t close to 1 to retain most of the previous information; 1 - z_t is then close to 0, so most of the current content is omitted and irrelevant information is discarded.
The GRU alleviates the vanishing gradient problem of recurrent neural networks, because the model does not discard its state at every new input but retains the relevant information and passes it on to the next time step. After the network is trained, GRUs perform very well even in complex situations.
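The following sketch implements one GRU time step as written in equations (8)-(11) above, using NumPy; biases are omitted because the description only names the six weights, so this is an illustrative simplification.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
        """One GRU step; x_t is the input vector and h_prev is h_{t-1}."""
        z_t = sigmoid(Wz @ x_t + Uz @ h_prev)                 # update gate, eq. (8)
        r_t = sigmoid(Wr @ x_t + Ur @ h_prev)                 # reset gate, eq. (9)
        h_tilde = np.tanh(Wh @ x_t + r_t * (Uh @ h_prev))     # current memory content, eq. (10)
        return z_t * h_prev + (1.0 - z_t) * h_tilde           # final state of the step, eq. (11)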
After the time domain characteristics of the preprocessed electromyographic signals are processed by the first recurrent neural network with its three GRU recurrent layers, a first one-dimensional vector a1 is output.
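A Keras sketch of this first recurrent branch is shown below. The patent does not state the hidden sizes of the first recurrent neural network, so the 64/128/256 units of the second one are reused here as an assumption, and treating the five feature rows of F as the sequence dimension is also an assumption.

    from tensorflow.keras import layers, models

    def build_time_feature_branch(hidden=(64, 128, 256)):
        """Three stacked GRU layers over the 5 x 8 time-domain feature matrix F."""
        inp = layers.Input(shape=(5, 8), name="time_domain_features")
        x = inp
        for i, units in enumerate(hidden):
            # return the full sequence for all but the last GRU layer
            x = layers.GRU(units, return_sequences=(i < len(hidden) - 1))(x)
        return models.Model(inp, x, name="time_feature_branch")    # output: vector a1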
Step 206: take the frequency domain characteristics of the preprocessed electromyographic signals as the input of the two-dimensional convolutional neural network, which processes them and outputs a second one-dimensional vector.
Fig. 10 is a schematic structural diagram of the two-dimensional convolutional neural network provided by the invention. The second part of the deep neural network model structure uses a two-dimensional convolutional neural network to process the spectrogram of the frequency domain characteristics. As shown in Figs. 8 and 10, the two-dimensional convolutional neural network used in the invention is composed of the following parts. First, it includes two convolutional layers (two-dimensional convolutional layer 1 and two-dimensional convolutional layer 2) composed of 32 and 16 filters respectively, each filter of size 3 × 3; each convolutional layer is followed by a sub-sampling layer used as a pooling layer (two-dimensional pooling layer 1 and two-dimensional pooling layer 2 respectively), which performs max pooling with 2 × 2 filters. Next, the two-dimensional convolutional neural network also includes a flattening layer (not shown) that flattens the data for the later merging step. This network serves as an additional feature input in each training step; because of the high dimensionality and complexity of the frequency domain features, it does not contain a recurrent layer to convey historical information. In this way, each spectrogram in the frequency domain features is treated as an image whose color channels are modeled with the 8 signal channels.
As shown in Fig. 10, the layers of the two-dimensional convolutional neural network are connected in sequence as convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2 and flattening layer. The input of convolutional layer 1 is the input of the two-dimensional convolutional neural network, i.e., the frequency domain feature (the 9 × 9 × 8 spectrogram); from pooling layer 1 to the flattening layer, the input of each layer is the output of the previous layer. After the frequency domain characteristics of the preprocessed electromyographic signals are processed by the two-dimensional convolutional neural network, a second one-dimensional vector a2 is output.
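A Keras sketch of this two-dimensional convolutional branch, following the layer sizes given above (32 and 16 filters of size 3 × 3, 2 × 2 max pooling, then flattening); the "same" padding and the ReLU activations are assumptions.

    from tensorflow.keras import layers, models

    def build_spectrogram_branch():
        """2-D CNN branch for the 9 x 9 x 8 spectrogram input."""
        inp = layers.Input(shape=(9, 9, 8), name="spectrogram")
        x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inp)   # 2-D conv layer 1
        x = layers.MaxPooling2D((2, 2))(x)                                      # 2-D pooling layer 1
        x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(x)     # 2-D conv layer 2
        x = layers.MaxPooling2D((2, 2))(x)                                      # 2-D pooling layer 2
        x = layers.Flatten()(x)                                                 # flattening layer
        return models.Model(inp, x, name="spectrogram_branch")    # output: vector a2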
Step 207: the preprocessed electromyographic signals are processed by the one-dimensional convolutional neural network and the second recurrent neural network in sequence, and a third one-dimensional vector is output.
The third part of the deep neural network model structure takes the preprocessed electromyographic signal data as input and extracts the effective features of the preprocessed electromyographic signals in a data-driven way, in order to reduce the information loss caused by hand-crafted feature extraction. The one-dimensional convolutional neural network extracts one-dimensional sequence segments from the electromyographic signal sequence; because the network applies the same transformation to every sequence segment, a pattern learned at an earlier position can be recognized at different later positions.
For the electromyographic signal sequence, since the acquired preprocessed electromyographic signal data have 8 channels, they are expressed as U = {[x_{i,1}, x_{i,2}, ..., x_{i,N}] | i = 1, 2, ..., I; N = 8}, where N is the number of channels of the preprocessed electromyographic signals (N = 8 in the invention); I is the number of input mappings, i.e., the size of the time slice (I = 250 in the invention); x_{i,n} represents the i-th sample point of the preprocessed electromyographic signal corresponding to the n-th muscle position; and U represents the matrix formed by the preprocessed electromyographic signals of the 8 muscle positions within one time slice.
Fig. 11 is a schematic structural diagram of a one-dimensional convolutional neural network provided by the present invention. As shown in fig. 8 and 11, the one-dimensional convolutional network used in the present invention includes two one-dimensional convolutional layers (including one-dimensional convolutional layer 1 and one-dimensional convolutional layer 2) and two one-dimensional pooling layers (including one-dimensional pooling layer 1 and one-dimensional pooling layer 2), and the one-dimensional convolutional layer 1, the one-dimensional pooling layer 1, the one-dimensional convolutional layer 2, and the one-dimensional pooling layer 2 are connected in this order.
The one-dimensional convolutional layer can be represented as:
y_{i,k} = σ( Σ_{n=1}^{N} Σ_{j} ω_{j,n}·u_{i+j-1,n} + b_{i,k} )    (12)
A one-dimensional max pooling layer (i.e., a one-dimensional pooling layer) follows each one-dimensional convolutional layer and is calculated as:
p_{l,k} = max_{(l-1)·S < h ≤ l·S} y_{h,k}    (13)
where y_{i,k} and y_{h,k} represent outputs of the convolutional layer and σ is the sigmoid activation function; N is the number of channels of the preprocessed electromyographic signals (N = 8); i = 1, 2, ..., I, where I is the number of input mappings; ω_{j,n} represents the weight of the convolution kernel in the convolutional layer; u_{j,n} represents the element in the j-th row and n-th column of the convolutional layer input matrix (i.e., the data matrix of the preprocessed electromyographic signals); b_{i,k} represents the bias; p_{l,k} represents the element in the l-th row and k-th column of the pooling layer output; S is the pooling unit size, where S is 3; and k indexes the K output channels of the convolutional layer.
The output of one-dimensional convolutional layer 1 is used as the input of one-dimensional max pooling layer 1, the output of one-dimensional max pooling layer 1 is the input of one-dimensional convolutional layer 2, and the output of one-dimensional convolutional layer 2 is the input of one-dimensional max pooling layer 2. 250 consecutive sample points are selected from the sampling sequence according to the length of the time segment; since there are 8 electromyographic signal channels in total, the input data matrix of preprocessed electromyographic signals is a 250 × 8 matrix, and the corresponding effective features are output after the convolutional and pooling layers. The effective features extracted by the one-dimensional convolutional layers are used as the input of the second recurrent neural network, whose structure is the same as that of the first recurrent neural network: it comprises three gated recurrent unit (GRU) recurrent layers, and the numbers of hidden units in the three recurrent layers are 64, 128 and 256 respectively.
Effective features are extracted from the preprocessed electromyographic signals by the one-dimensional convolutional neural network; the effective features are input into the second recurrent neural network, which outputs a third one-dimensional vector a3 after processing.
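For completeness, the PyTorch sketch below shows how this branch could be assembled; it is my own reconstruction, not the patent's implementation. Only the 8 input channels, the 250-sample time slice, the sigmoid activation, the pooling size of 3 and the 64/128/256 GRU hidden units come from the description above; the kernel sizes and filter counts are assumptions.

```python
import torch
import torch.nn as nn

class RawEmgBranch(nn.Module):
    def __init__(self, in_channels=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5),   # assumed 16 filters, kernel 5
            nn.Sigmoid(),
            nn.MaxPool1d(kernel_size=3),                 # pooling unit size S = 3
            nn.Conv1d(16, 32, kernel_size=5),            # assumed 32 filters, kernel 5
            nn.Sigmoid(),
            nn.MaxPool1d(kernel_size=3),
        )
        # three stacked recurrent layers with 64, 128 and 256 hidden units respectively
        self.gru1 = nn.GRU(32, 64, batch_first=True)
        self.gru2 = nn.GRU(64, 128, batch_first=True)
        self.gru3 = nn.GRU(128, 256, batch_first=True)

    def forward(self, x):                       # x: (batch, 250, 8) time slice
        h = self.features(x.transpose(1, 2))    # -> (batch, 32, time)
        h = h.transpose(1, 2)                   # -> (batch, time, 32) for the GRUs
        h, _ = self.gru1(h)
        h, _ = self.gru2(h)
        _, last = self.gru3(h)
        return last.squeeze(0)                  # a3: (batch, 256) one-dimensional vector

a3 = RawEmgBranch()(torch.randn(4, 250, 8))     # shape (4, 256)
```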
Step 208: taking a combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as the input of the fully-connected neural network, and outputting a lower limb movement recognition result of the human body in the walking process after the combined vector is processed by the fully-connected neural network.
The first one-dimensional vector a1, the second one-dimensional vector a2 and the third one-dimensional vector a3 are combined into a one-dimensional combined vector [a1, a2, a3] as the input of the fully-connected neural network. As shown in fig. 8, the fully-connected neural network includes a first fully-connected layer (fully-connected layer 1), a first output layer (output layer 1), a Dropout layer, a second fully-connected layer (fully-connected layer 2) and a second output layer (output layer 2). The input of the Dropout layer is the output of fully-connected layer 1, and the output of the Dropout layer is an intermediate quantity of the network's calculation. The input of fully-connected layer 2 is the output of the Dropout layer, and the output of fully-connected layer 2 feeds the output layer corresponding to the walking gait (output layer 2).
The final outputs of the three-part structure of the deep neural network model are three one-dimensional vectors, which are combined into the one-dimensional combined vector [a1, a2, a3] forming the input of the fully-connected neural network. One part of the output of fully-connected layer 1 yields the predicted joint angle value; the error obtained by subtracting the predicted joint angle value from the actual joint angle value is returned to the deep neural network model to update the model parameters and obtain a better deep neural network model. The other part is passed to the Dropout layer as an intermediate quantity of the calculation.
the combined vector [ a1, a2, a3] is processed by the first full connected layer to obtain a predicted joint angle, which is output by the first output layer. Meanwhile, the complete connection layer 1 is connected with the Dropout layer, passes through the complete connection layer 2 with soft-max unit as an activation function, and finally reaches the output layer 2 to obtain walking gait, and the walking gait is output by the second output layer.
The lower limb movement recognition result comprises the walking gait and the joint angle prediction value; the walking gait is divided into a swing period, a support early period, a support middle period and a support final period; the joint angles include hip, knee and ankle joint angles.
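A hedged PyTorch sketch of this two-head fully-connected part follows; it is my own reading, not the patent's implementation. The description above says that part of fully-connected layer 1's output gives the joint angles and part feeds the Dropout layer; the sketch approximates that with a shared hidden layer plus a separate linear head for the angles, and the hidden width, ReLU activation and dropout rate are assumptions.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, in_dim, hidden=128, num_angles=3, num_gaits=4):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)             # fully-connected layer 1
        self.angle_out = nn.Linear(hidden, num_angles)   # output layer 1: hip/knee/ankle angles
        self.dropout = nn.Dropout(p=0.5)                 # assumed dropout rate
        self.fc2 = nn.Linear(hidden, num_gaits)          # fully-connected layer 2 (soft-max head)

    def forward(self, a1, a2, a3):
        h = torch.relu(self.fc1(torch.cat([a1, a2, a3], dim=1)))   # combined vector [a1, a2, a3]
        angles = self.angle_out(h)                 # joint-angle prediction (regression branch)
        gait_logits = self.fc2(self.dropout(h))    # output layer 2: apply soft-max / cross-entropy
        return angles, gait_logits
```

In training, the joint-angle head would typically be driven by a regression loss against the measured angles, and the gait head by a cross-entropy loss over the four gait states.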
Verifying motion recognition results
According to the deep neural network model designed by the invention, the lower limb movement is simplified into four gaits using the plantar pressure signals obtained from the foot switches, namely the swing period (S0), the support early period (S1), the support middle period (S2) and the support final period (S3), with state values of 0, 1, 2 and 3 respectively. The final gait recognition result is shown in fig. 12. As can be seen from fig. 12, the gait recognition result predicted by the method of the present invention substantially conforms to the actually measured states, which indicates that the prediction is highly accurate. Occasional unstable recognition points in between can be corrected by the subsequent control, so that the exoskeleton can accurately follow the walking movement of the human body.
The classification performance is visualized by a confusion matrix M, in which each row represents percentages with respect to the actual instances and each column represents percentages with respect to the predictions; the matrix M is defined as the 4 × 4 matrix of elements C_{xy} with x, y ∈ {0, 1, 2, 3}:
Since correct predictions lie on the diagonal of the confusion matrix, incorrect predictions can be analysed visually from the off-diagonal positions. For example, the element C_{01} of the matrix is calculated as C_{01} = N_{01} / N_{0}:
where N_{01} represents the number of samples whose actual state is S0 that are predicted as state S1, and N_{0} represents the total number of predictions for state S0. The other elements of the matrix are calculated in the same way as C_{01}, using the formula C_{xy} = N_{xy} / N_{x}:
In the formula, x and y take the values 0, 1, 2 and 3, corresponding to states S0, S1, S2 and S3 respectively; N_{xy} represents the number of samples whose actual state is x that are predicted as state y, and N_{x} represents the total number of predictions for state x.
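The row-normalised confusion matrix can be computed as in the NumPy sketch below. This is a sketch under the assumption that N_{x} counts the samples whose actual state is x, consistent with each row of M representing percentages of the actual instances.

```python
import numpy as np

def confusion_matrix(actual, predicted, num_states=4):
    """Return M with M[x, y] = N_xy / N_x, rows indexed by the actual state."""
    counts = np.zeros((num_states, num_states))
    for a, p in zip(actual, predicted):
        counts[a, p] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_totals, 1)      # each row sums to 1 (percentages if x100)

actual    = [0, 0, 1, 2, 3, 3, 2, 1]               # toy gait states, not patent data
predicted = [0, 1, 1, 2, 3, 2, 2, 1]
print(confusion_matrix(actual, predicted))
```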
Table 1 lists the mean and variance of the confusion matrix, and it can be seen from Table 1 that the method of the present invention has a high prediction accuracy for each state in successive gaits.
TABLE 1 prediction accuracy of four gaits
The joint angle prediction results during walking are shown in fig. 13. As can be seen from fig. 13, the angles predicted by the method are substantially consistent with the actual angles, so the method can be used to control the exoskeleton movement angles in real time. The prediction results of the invention are evaluated using the root mean square error η and the correlation coefficient ρ, calculated as follows:
where θ̂_i denotes the ith value of the predicted angle and θ_i denotes the ith value of the true angle; the respective mean values of the predicted and true angles also enter the correlation coefficient, and N is the total number of comparison points.
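These two metrics correspond to the standard root-mean-square-error and Pearson-correlation formulas; the NumPy sketch below (run on synthetic data, not the patent's measurements) shows the computation.

```python
import numpy as np

def rmse(pred, true):
    """Root mean square error eta between predicted and true joint angles."""
    return np.sqrt(np.mean((pred - true) ** 2))

def correlation(pred, true):
    """Pearson correlation coefficient rho between predicted and true joint angles."""
    dp, dt = pred - pred.mean(), true - true.mean()
    return np.sum(dp * dt) / np.sqrt(np.sum(dp ** 2) * np.sum(dt ** 2))

true = np.sin(np.linspace(0, 2 * np.pi, 500)) * 30        # synthetic joint-angle trace (deg)
pred = true + np.random.normal(0, 1.5, true.shape)        # synthetic prediction
print(rmse(pred, true), correlation(pred, true))
```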
Table 2 lists the root mean square error and the correlation coefficient for the hip, knee and ankle joint angles respectively. As can be seen from Table 2, the root mean square error between the predicted recognition results and the true values is small and the correlation coefficients are close to 1, so the method performs well in joint angle prediction.
TABLE 2 Joint Angle identification results
Traditional gait recognition is mainly applied in the field of intelligent video surveillance and aims at identifying people by their walking posture; its technical limitations are low accuracy for fine human movements, poor real-time performance and the inability to recognize continuously. The deep-learning-based method for recognizing lower limb movement during human walking provided by the invention supplies real-time, accurate and continuous input for the exoskeleton robot: electromyographic signals are collected to predict the gait and the joint angles, plantar pressure signals are collected to represent the gait, and gait recognition and joint angle recognition are innovatively combined to jointly predict the lower limb movement trend, giving the method high recognition accuracy. The predicted hip, knee and ankle joint angles substantially match the actual angles captured by VICON, so the recognition method has good real-time performance and can continuously recognize the hip, knee and ankle joint angles. Human motion input commands can be extracted quickly and accurately through this recognition method to control the exoskeleton robot.
Based on the method for recognizing the lower limb movement in the human body walking process based on deep learning, the invention also provides a system for recognizing the lower limb movement in the human body walking process based on deep learning, the system comprising:
the original signal acquisition module is used for acquiring an original electromyographic signal; the original electromyographic signals are acquired by wireless sensors which are stuck to the surfaces of eight muscles on the right side of a thigh of a human body;
The signal preprocessing module is used for preprocessing the original electromyographic signals to generate preprocessed electromyographic signals;
the characteristic extraction module is used for extracting time domain characteristics and frequency domain characteristics of the preprocessed electromyographic signals; the time domain characteristics comprise waveform length, average absolute value, variance, root mean square and zero crossing point number; the frequency domain characteristic is a frequency spectrogram of the preprocessed electromyographic signals;
the deep neural network model establishing module is used for acquiring a deep neural network model; the deep neural network model comprises a convolutional neural network, a cyclic neural network and a fully-connected neural network; the convolutional neural network comprises a two-dimensional convolutional neural network and a one-dimensional convolutional neural network; the recurrent neural network comprises a first recurrent neural network and a second recurrent neural network;
the time domain characteristic processing module is used for taking the time domain characteristics of the preprocessed electromyographic signals as the input of the first cyclic neural network, and outputting a first one-dimensional vector after the time domain characteristics are processed by the first cyclic neural network;
The frequency domain characteristic processing module is used for taking the frequency domain characteristics of the preprocessed electromyographic signals as the input of the two-dimensional convolutional neural network, and outputting a second one-dimensional vector after the frequency domain characteristics are processed by the two-dimensional convolutional neural network;
the electromyographic signal processing module is used for sequentially processing the preprocessed electromyographic signals through the one-dimensional convolutional neural network and the second cyclic neural network and outputting a third one-dimensional vector;
the motion recognition module is used for taking a combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as the input of the fully-connected neural network, and outputting a lower limb motion recognition result of the human body in the walking process after the combined vector is processed by the fully-connected neural network; the lower limb movement recognition result comprises a predicted walking gait and a joint angle; the walking gait is divided into a swing period, a support early period, a support middle period and a support final period; the joint angles include hip, knee and ankle joint angles.
Wherein, the signal preprocessing module specifically comprises:
the signal filtering unit is used for performing band-pass filtering on the original myoelectric signal by adopting a second-order Butterworth filter to generate a band-pass filtered myoelectric signal;
The interference elimination unit is used for eliminating power frequency interference in the myoelectric signals subjected to the band-pass filtering by adopting a second-order band-stop Butterworth filter to generate interference-removed myoelectric signals;
and the zero-mean standardization unit is used for standardizing the data of the electromyographic signals after interference removal by adopting a zero-mean standardization method to generate preprocessed electromyographic signals (a sketch of this preprocessing chain is given below).
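The SciPy sketch referenced above outlines such a preprocessing chain: second-order Butterworth band-pass filtering, a second-order band-stop Butterworth filter for power-frequency interference, and zero-mean standardization. The sampling rate, the 20–450 Hz pass band and the 50 Hz mains frequency are assumptions not stated in this section.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0                                       # assumed sampling rate (Hz)

def preprocess(raw_emg: np.ndarray) -> np.ndarray:
    # second-order Butterworth band-pass (assumed 20-450 Hz pass band)
    b_bp, a_bp = butter(2, [20, 450], btype='bandpass', fs=FS)
    x = filtfilt(b_bp, a_bp, raw_emg, axis=0)
    # second-order band-stop Butterworth around the assumed 50 Hz mains frequency
    b_bs, a_bs = butter(2, [49, 51], btype='bandstop', fs=FS)
    x = filtfilt(b_bs, a_bs, x, axis=0)
    # zero-mean standardization per channel
    return (x - x.mean(axis=0)) / x.std(axis=0)

emg = preprocess(np.random.randn(5000, 8))        # 8-channel raw EMG, 5 s at 1 kHz
```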
the feature extraction module specifically comprises:
the signal dividing unit is used for dividing the preprocessed electromyographic signals into a plurality of preprocessed electromyographic signal segments;
the waveform length calculating unit is used for calculating the waveform length WL of the preprocessed electromyographic signal segment by adopting a formula; wherein xk represents the amplitude of the kth preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; xk +1 represents the amplitude of the (k + 1) th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; n is the number of preprocessed electromyographic signal sample points in the preprocessed electromyographic signal segment;
The average absolute value calculating unit is used for calculating the average absolute value MAV of the preprocessed electromyographic signal segments by adopting a formula;
The variance calculating unit is used for calculating the variance VAR of the preprocessed electromyographic signal segments by adopting a formula;
the root mean square calculation unit is used for calculating the root mean square RMS of the preprocessed electromyographic signal segments by adopting a formula;
a zero crossing point calculating unit, configured to calculate the zero crossing points ZC of the preprocessed electromyographic signal segments by using a formula; where th denotes a small threshold value (a sketch of these time-domain features is given after this list).
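The sketch referenced above is given here; it is my own formulation of the five time-domain features for a single preprocessed EMG segment, and the exact variance and zero-crossing conventions are assumptions.

```python
import numpy as np

def time_domain_features(x: np.ndarray, th: float = 1e-3) -> dict:
    """Compute WL, MAV, VAR, RMS and ZC for one preprocessed EMG segment x."""
    diffs = np.diff(x)
    return {
        'WL':  np.sum(np.abs(diffs)),                          # waveform length
        'MAV': np.mean(np.abs(x)),                             # mean absolute value
        'VAR': np.var(x),                                      # variance
        'RMS': np.sqrt(np.mean(x ** 2)),                       # root mean square
        # zero crossings: sign changes whose amplitude step exceeds the threshold th
        'ZC':  int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(diffs) > th))),
    }

segment = np.random.randn(250)            # one 250-sample segment from one channel
print(time_domain_features(segment))
```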
The electromyographic signal processing module specifically comprises:
the effective feature extraction unit is used for extracting effective features in the preprocessed electromyographic signals by adopting the one-dimensional convolutional neural network;
the effective feature processing unit is used for inputting the effective features into the second recurrent neural network, and outputting a third one-dimensional vector after the effective features are processed by the second recurrent neural network; the second recurrent neural network comprises three gated neural unit recurrent layers.
the motion recognition module specifically comprises:
A vector combination unit, configured to combine the first one-dimensional vector, the second one-dimensional vector, and the third one-dimensional vector into a one-dimensional combination vector [ a1, a2, a3] as an input of the fully-connected neural network; the fully-connected neural network comprises a first fully-connected layer, a first output layer, a Dropout layer, a second fully-connected layer and a second output layer;
A joint angle prediction unit for subjecting the combined vector [ a1, a2, a3 ] to the first fully connected layer processing to obtain a predicted joint angle, the joint angle being output by the first output layer;
A walking gait prediction unit, configured to generate a predicted walking gait after processing the combined vector [ a1, a2, a3] sequentially through the first fully connected layer, the Dropout layer, and the second fully connected layer, where the walking gait is output by the second output layer.
the embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A lower limb movement identification method in a human body walking process based on deep learning is characterized by comprising the following steps:
Acquiring an original electromyographic signal; the original electromyographic signals are acquired by wireless sensors which are stuck to the surfaces of eight muscles on the right side of a thigh of a human body;
Preprocessing the original electromyographic signals to generate preprocessed electromyographic signals;
extracting time domain characteristics and frequency domain characteristics of the preprocessed electromyographic signals; the time domain characteristics comprise waveform length, average absolute value, variance, root mean square and zero crossing point number; the frequency domain characteristic is a frequency spectrogram of the preprocessed electromyographic signals;
acquiring a deep neural network model; the deep neural network model comprises a convolutional neural network, a cyclic neural network and a fully-connected neural network; the convolutional neural network comprises a two-dimensional convolutional neural network and a one-dimensional convolutional neural network; the recurrent neural network comprises a first recurrent neural network and a second recurrent neural network;
taking the time domain characteristics of the preprocessed electromyographic signals as the input of the first cyclic neural network, and outputting a first one-dimensional vector after the time domain characteristics are processed by the first cyclic neural network;
the frequency domain characteristics of the preprocessed electromyographic signals are used as the input of the two-dimensional convolutional neural network, and a second one-dimensional vector is output after the two-dimensional convolutional neural network is processed;
the preprocessed electromyographic signals are processed by the one-dimensional convolutional neural network and the second cyclic neural network in sequence, and then a third one-dimensional vector is output;
taking a combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as an input of the fully-connected neural network, and outputting a lower limb movement recognition result of the human body in a walking process after the combined vector is processed by the fully-connected neural network; the lower limb movement recognition result comprises a predicted walking gait and a joint angle; the walking gait is divided into a swing period, a support early period, a support middle period and a support final period; the joint angles include hip, knee and ankle joint angles.
2. The method for recognizing the lower limb movement in the walking process of the human body according to claim 1, wherein the preprocessing is performed on the original electromyographic signals to generate preprocessed electromyographic signals, and specifically comprises the following steps:
Performing band-pass filtering on the original myoelectric signal by adopting a second-order Butterworth filter to generate a band-pass filtered myoelectric signal;
Eliminating power frequency interference in the myoelectric signals subjected to band-pass filtering by adopting a second-order band-stop Butterworth filter to generate interference-removed myoelectric signals;
And standardizing the data of the electromyographic signals after interference removal by adopting a zero-mean standardization method to generate preprocessed electromyographic signals.
3. The method for recognizing the lower limb movement in the walking process of the human body according to claim 2, wherein the extracting of the time domain features of the preprocessed electromyographic signals specifically comprises:
dividing the preprocessed electromyographic signals into a plurality of preprocessed electromyographic signal segments;
calculating the waveform length WL of the preprocessed electromyographic signal segment by adopting a formula; wherein xk represents the amplitude of the kth preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; xk +1 represents the amplitude of the (k + 1) th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; n is the number of preprocessed electromyographic signal sample points in the preprocessed electromyographic signal segment;
calculating the average absolute value MAV of the preprocessed electromyographic signal segments by adopting a formula;
calculating the variance VAR of the preprocessed electromyographic signal segments by adopting a formula;
Calculating the root mean square RMS of the preprocessed electromyographic signal segments by adopting a formula;
Calculating zero crossing points ZC of the preprocessed electromyographic signal segments by adopting a formula; where th represents a minimum value.
4. The method for recognizing the lower limb movement in the walking process of the human body according to claim 3, wherein the preprocessed electromyographic signals are processed by the one-dimensional convolutional neural network and the second cyclic neural network in sequence and then output a third one-dimensional vector, and specifically comprises the following steps:
Extracting effective characteristics in the preprocessed electromyographic signals by adopting the one-dimensional convolutional neural network;
Inputting the effective features into the second recurrent neural network, and outputting a third one-dimensional vector after the effective features are processed by the second recurrent neural network; the second recurrent neural network comprises three gated neural unit recurrent layers.
5. The method according to claim 4, wherein the step of outputting the result of the lower limb motion recognition during the walking process of the human body after the processing of the fully-connected neural network, which is performed by using the combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as the input of the fully-connected neural network, specifically comprises:
Combining the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector into a one-dimensional combined vector [ a1, a2, a3] as an input of the fully-connected neural network; the fully-connected neural network comprises a first fully-connected layer, a first output layer, a Dropout layer, a second fully-connected layer and a second output layer;
The combined vector [ a1, a2, a3] is processed by the first full connected layer to obtain a predicted joint angle, which is output by the first output layer;
the combined vector [ a1, a2, a3] is processed by the first fully connected layer, the Dropout layer, and the second fully connected layer in sequence to generate a predicted walking gait, which is output by the second output layer.
6. A human walking process lower limb movement recognition system based on deep learning is characterized by comprising the following steps:
the original signal acquisition module is used for acquiring an original electromyographic signal; the original electromyographic signals are acquired by wireless sensors which are stuck to the surfaces of eight muscles on the right side of a thigh of a human body;
The signal preprocessing module is used for preprocessing the original electromyographic signals to generate preprocessed electromyographic signals;
The characteristic extraction module is used for extracting time domain characteristics and frequency domain characteristics of the preprocessed electromyographic signals; the time domain characteristics comprise waveform length, average absolute value, variance, root mean square and zero crossing point number; the frequency domain characteristic is a frequency spectrogram of the preprocessed electromyographic signals;
the deep neural network model establishing module is used for acquiring a deep neural network model; the deep neural network model comprises a convolutional neural network, a cyclic neural network and a fully-connected neural network; the convolutional neural network comprises a two-dimensional convolutional neural network and a one-dimensional convolutional neural network; the recurrent neural network comprises a first recurrent neural network and a second recurrent neural network;
The time domain characteristic processing module is used for taking the time domain characteristics of the preprocessed electromyographic signals as the input of the first cyclic neural network, and outputting a first one-dimensional vector after the time domain characteristics are processed by the first cyclic neural network;
the frequency domain characteristic processing module is used for taking the frequency domain characteristics of the preprocessed electromyographic signals as the input of the two-dimensional convolutional neural network, and outputting a second one-dimensional vector after the frequency domain characteristics are processed by the two-dimensional convolutional neural network;
The electromyographic signal processing module is used for sequentially processing the preprocessed electromyographic signals through the one-dimensional convolutional neural network and the second cyclic neural network and outputting a third one-dimensional vector;
The motion recognition module is used for taking a combined vector of the first one-dimensional vector, the second one-dimensional vector and the third one-dimensional vector as the input of the fully-connected neural network, and outputting a lower limb motion recognition result of the human body in the walking process after the combined vector is processed by the fully-connected neural network; the lower limb movement recognition result comprises a predicted walking gait and a joint angle; the walking gait is divided into a swing period, a support early period, a support middle period and a support final period; the joint angles include hip, knee and ankle joint angles.
7. The system for recognizing lower limb movement in the walking process of human body according to claim 6, wherein the signal preprocessing module specifically comprises:
The signal filtering unit is used for performing band-pass filtering on the original myoelectric signal by adopting a second-order Butterworth filter to generate a band-pass filtered myoelectric signal;
the interference elimination unit is used for eliminating power frequency interference in the myoelectric signals subjected to the band-pass filtering by adopting a second-order band-stop Butterworth filter to generate interference-removed myoelectric signals;
and the zero-mean standardization unit is used for standardizing the data of the electromyographic signals after interference removal by adopting a zero-mean standardization method to generate preprocessed electromyographic signals.
8. The system for recognizing human walking process lower limb movement according to claim 7, wherein the feature extraction module specifically comprises:
the signal dividing unit is used for dividing the preprocessed electromyographic signals into a plurality of preprocessed electromyographic signal segments;
the waveform length calculating unit is used for calculating the waveform length WL of the preprocessed electromyographic signal segment by adopting a formula; wherein xk represents the amplitude of the kth preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; xk +1 represents the amplitude of the (k + 1) th preprocessed electromyographic signal sample point in the preprocessed electromyographic signal segment; n is the number of preprocessed electromyographic signal sample points in the preprocessed electromyographic signal segment;
the average absolute value calculating unit is used for calculating the average absolute value MAV of the preprocessed electromyographic signal segments by adopting a formula;
the variance calculating unit is used for calculating the variance VAR of the preprocessed electromyographic signal segments by adopting a formula;
the root mean square calculation unit is used for calculating the root mean square RMS of the preprocessed electromyographic signal segments by adopting a formula;
a zero crossing point calculating unit, configured to calculate zero crossing points ZC of the preprocessed electromyographic signal segments by using a formula; where th represents a minimum value.
9. the human walking process lower limb movement recognition system of claim 8, wherein the electromyographic signal processing module specifically comprises:
The effective feature extraction unit is used for extracting effective features in the preprocessed electromyographic signals by adopting the one-dimensional convolutional neural network;
The effective feature processing unit is used for inputting the effective features into the second recurrent neural network, and outputting a third one-dimensional vector after the effective features are processed by the second recurrent neural network; the second recurrent neural network comprises three gated neural unit recurrent layers.
10. the human walking process lower limb movement recognition system of claim 9, wherein the movement recognition module specifically comprises:
a vector combination unit, configured to combine the first one-dimensional vector, the second one-dimensional vector, and the third one-dimensional vector into a one-dimensional combination vector [ a1, a2, a3] as an input of the fully-connected neural network; the fully-connected neural network comprises a first fully-connected layer, a first output layer, a Dropout layer, a second fully-connected layer and a second output layer;
a joint angle prediction unit for subjecting the combined vector [ a1, a2, a3 ] to the first fully connected layer processing to obtain a predicted joint angle, the joint angle being output by the first output layer;
A walking gait prediction unit, configured to generate a predicted walking gait after processing the combined vector [ a1, a2, a3] sequentially through the first fully connected layer, the Dropout layer, and the second fully connected layer, where the walking gait is output by the second output layer.
CN201910847000.8A 2019-09-09 2019-09-09 Human body walking process lower limb movement identification method and system based on deep learning Active CN110537922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910847000.8A CN110537922B (en) 2019-09-09 2019-09-09 Human body walking process lower limb movement identification method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910847000.8A CN110537922B (en) 2019-09-09 2019-09-09 Human body walking process lower limb movement identification method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN110537922A true CN110537922A (en) 2019-12-06
CN110537922B CN110537922B (en) 2020-09-04

Family

ID=68712908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910847000.8A Active CN110537922B (en) 2019-09-09 2019-09-09 Human body walking process lower limb movement identification method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN110537922B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111515938A (en) * 2020-05-28 2020-08-11 河北工业大学 Lower limb exoskeleton walking trajectory tracking method based on inheritance type iterative learning control
CN111611859A (en) * 2020-04-21 2020-09-01 河北工业大学 Gait recognition method based on GRU
CN111653366A (en) * 2020-07-28 2020-09-11 上海海事大学 Tennis elbow recognition method based on electromyographic signals
CN112515657A (en) * 2020-12-02 2021-03-19 吉林大学 Plantar pressure analysis method based on lower limb exoskeleton neural network control
CN112842825A (en) * 2021-02-24 2021-05-28 郑州铁路职业技术学院 Training device for lower limb rehabilitation recovery
CN112906673A (en) * 2021-04-09 2021-06-04 河北工业大学 Lower limb movement intention prediction method based on attention mechanism
CN112906457A (en) * 2021-01-06 2021-06-04 南昌大学 Walking gait signal preprocessing method based on mobile phone acceleration sensor
CN113780106A (en) * 2021-08-24 2021-12-10 电信科学技术第五研究所有限公司 Deep learning signal detection method based on radio waveform data input
CN114159080A (en) * 2021-12-07 2022-03-11 东莞理工学院 Training and recognition method and device for upper limb rehabilitation robot movement intention recognition model
CN114259223A (en) * 2021-12-17 2022-04-01 南昌航空大学 Human motion state monitoring system based on D-type plastic optical fiber
CN114783069A (en) * 2022-06-21 2022-07-22 中山大学深圳研究院 Method, device, terminal equipment and storage medium for identifying object based on gait
CN114872040A (en) * 2022-04-20 2022-08-09 中国科学院自动化研究所 Musculoskeletal robot control method and device based on cerebellum prediction and correction
CN114932536A (en) * 2022-05-31 2022-08-23 山东大学 Walking active mechanical device
CN115019393A (en) * 2022-06-09 2022-09-06 天津理工大学 Exoskeleton robot gait recognition system and method based on convolutional neural network
WO2022242133A1 (en) * 2021-05-18 2022-11-24 中国科学院深圳先进技术研究院 Gesture classification and recognition method and application thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590432A (en) * 2017-07-27 2018-01-16 北京联合大学 A kind of gesture identification method based on circulating three-dimensional convolutional neural networks
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590432A (en) * 2017-07-27 2018-01-16 北京联合大学 A kind of gesture identification method based on circulating three-dimensional convolutional neural networks
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YU HU 等: "A novel attention-based hybrid CNN-RNN architecture for sEMG-based gesture recognition", 《PLOS ONE》 *
郑毅 (ZHENG Yi) et al.: "Human posture detection method based on long short-term memory networks", 《计算机应用》 (Journal of Computer Applications) *
郝沙沙 (HAO Shasha): "Research on hand motion recognition methods based on surface electromyographic signals", 《中国优秀硕士学位论文全文数据库 基础科学辑》 (China Master's Theses Full-text Database, Basic Sciences) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611859A (en) * 2020-04-21 2020-09-01 河北工业大学 Gait recognition method based on GRU
CN111515938B (en) * 2020-05-28 2022-11-18 河北工业大学 Lower limb exoskeleton walking trajectory tracking method based on inheritance type iterative learning control
CN111515938A (en) * 2020-05-28 2020-08-11 河北工业大学 Lower limb exoskeleton walking trajectory tracking method based on inheritance type iterative learning control
CN111653366A (en) * 2020-07-28 2020-09-11 上海海事大学 Tennis elbow recognition method based on electromyographic signals
CN112515657A (en) * 2020-12-02 2021-03-19 吉林大学 Plantar pressure analysis method based on lower limb exoskeleton neural network control
CN112906457A (en) * 2021-01-06 2021-06-04 南昌大学 Walking gait signal preprocessing method based on mobile phone acceleration sensor
CN112842825A (en) * 2021-02-24 2021-05-28 郑州铁路职业技术学院 Training device for lower limb rehabilitation recovery
CN112842825B (en) * 2021-02-24 2023-06-09 郑州铁路职业技术学院 Training device for rehabilitation and recovery of lower limbs
CN112906673A (en) * 2021-04-09 2021-06-04 河北工业大学 Lower limb movement intention prediction method based on attention mechanism
WO2022242133A1 (en) * 2021-05-18 2022-11-24 中国科学院深圳先进技术研究院 Gesture classification and recognition method and application thereof
CN113780106A (en) * 2021-08-24 2021-12-10 电信科学技术第五研究所有限公司 Deep learning signal detection method based on radio waveform data input
CN113780106B (en) * 2021-08-24 2024-02-27 电信科学技术第五研究所有限公司 Deep learning signal detection method based on radio waveform data input
CN114159080B (en) * 2021-12-07 2022-06-24 东莞理工学院 Training and recognition method and device for upper limb rehabilitation robot movement intention recognition model
CN114159080A (en) * 2021-12-07 2022-03-11 东莞理工学院 Training and recognition method and device for upper limb rehabilitation robot movement intention recognition model
CN114259223A (en) * 2021-12-17 2022-04-01 南昌航空大学 Human motion state monitoring system based on D-type plastic optical fiber
CN114872040A (en) * 2022-04-20 2022-08-09 中国科学院自动化研究所 Musculoskeletal robot control method and device based on cerebellum prediction and correction
CN114872040B (en) * 2022-04-20 2024-04-16 中国科学院自动化研究所 Musculoskeletal robot control method and device based on cerebellum prediction and correction
CN114932536A (en) * 2022-05-31 2022-08-23 山东大学 Walking active mechanical device
CN115019393A (en) * 2022-06-09 2022-09-06 天津理工大学 Exoskeleton robot gait recognition system and method based on convolutional neural network
CN114783069A (en) * 2022-06-21 2022-07-22 中山大学深圳研究院 Method, device, terminal equipment and storage medium for identifying object based on gait

Also Published As

Publication number Publication date
CN110537922B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN110537922B (en) Human body walking process lower limb movement identification method and system based on deep learning
Shen et al. Movements classification of multi-channel sEMG based on CNN and stacking ensemble learning
CN110141239A (en) A kind of motion intention identification and installation method for lower limb exoskeleton
Shi et al. Feature extraction and classification of lower limb motion based on sEMG signals
CN106308809A (en) Method for recognizing gait of thigh amputation subject
Wang et al. sEMG-based consecutive estimation of human lower limb movement by using multi-branch neural network
Lu et al. Evaluation of classification performance in human lower limb jump phases of signal correlation information and LSTM models
CN108681685A (en) A kind of body work intension recognizing method based on human body surface myoelectric signal
Wojtczak et al. Hand movement recognition based on biosignal analysis
Li et al. Gait recognition based on EMG with different individuals and sample sizes
Sun et al. Continuous estimation of human knee joint angles by fusing kinematic and myoelectric signals
Huihui et al. Estimation of ankle angle based on multi-feature fusion with random forest
Hussain et al. Amputee walking mode recognition based on mel frequency cepstral coefficients using surface electromyography sensor
Kumar et al. Human hand prosthesis based on surface EMG signals for lower arm amputees
Delgado et al. Estimation of joint angle from sEMG and inertial measurements based on deep learning approach
Li et al. Continuous angle prediction of lower limb knee joint based on semg
Cene et al. Upper-limb movement classification through logistic regression sEMG signal processing
Lu et al. Channel-distribution hybrid deep learning for sEMG-based gesture recognition
Ghalyan et al. Human gait cycle classification improvements using median and root mean square filters based on EMG signals
wafa Talha et al. Myoelectric Signal Analysis and Processing in View Hand Muscle Movement Detection
Si et al. Recognition of Lower Limb Movements Baesd on Electromyography (EMG) Texture Maps
He et al. Static hand posture classification based on the biceps brachii muscle synergy features
Krishnapriya et al. Surface electromyography based hand gesture signal classification using 1d cnn
Shi et al. A novel method for predicting action switching in continuous motion based on semg signals
Issa et al. Lower limb activity prediction using EMG signals and broad learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220309

Address after: 101113 No. 6275, building 6, No. 17, Yunshan South Road, Industrial Development Zone, Tongzhou District, Beijing

Patentee after: Jingzhi test dimension (Beijing) Technology Co.,Ltd.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University