CN110236550B - Human gait prediction device based on multi-mode deep learning - Google Patents
- Publication number: CN110236550B (granted); application number: CN201910464986.0A
- Authority
- CN
- China
- Prior art keywords
- data
- neural network
- deep neural
- module
- gait
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/1036—Measuring load distribution, e.g. podologic studies
- A61B5/1038—Measuring plantar pressure during gait
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/112—Gait analysis
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/04—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means
- G01C21/08—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means involving use of the magnetic field of the earth
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/18—Stabilised platforms, e.g. by gyroscope
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0219—Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
Abstract
The invention provides a human gait prediction device based on multi-modal deep learning, belonging to the fields of gait prediction and deep learning. The device comprises: an inertial sensor module, a pressure sensor module, a sound sensor module, an inertial sensor data acquisition and preprocessing module, a pressure sensor data acquisition and preprocessing module, a sound data acquisition and preprocessing module, and a deep neural network processing module. The device uses inertial sensors, plantar pressure sensors and sound sensors to collect the acceleration, angular velocity, angle and geomagnetic-field component signals of human lower-limb movement, together with plantar pressure and walking sound data; after preprocessing, the collected data are fed into the deep neural network processing module, which outputs the human gait prediction result. The device is simple and convenient to wear, can accommodate different users, and can be applied to gait prediction for exoskeleton robots in the medical rehabilitation and military fields.
Description
Technical Field
The invention relates to a human body gait prediction device based on multi-mode deep learning, and belongs to the field of gait prediction and deep learning.
Background
With the development of artificial intelligence, and especially the rise of deep learning in recent years, intelligent human-machine collaboration has become an important field of artificial intelligence. The exoskeleton robot is an important representative of human-machine intelligent cooperation: it combines human intelligence with robot strength and has great development potential in the medical rehabilitation and military fields. The exoskeleton robot captures human motion gait in real time through a sensor sensing system, and a controller generates control signals that drive the mechanical skeleton to move with the human body. However, because data acquisition, signal processing and actuator response all take time, the mechanical skeleton's motion gait lags behind the human motion gait, which degrades the wearer's comfort and human-machine coordination. To solve this problem, the exoskeleton robot must predict human gait accurately and in real time, so that the reference signal of the control system leads the human body's motion gait and the wearer's gait is followed in real time.
The essence of gait prediction is to use historical data to predict the gait data and its trend over the next period of time; it is a time-series signal prediction problem. Exoskeleton robots are usually equipped with wearable sensors, so gait prediction devices based on wearable sensors need to be studied. Currently, most gait prediction devices are image-based or rely on a single-modality sensor, such as an inertial sensor. Image-based prediction devices often struggle to obtain accurate human gait and are unsuitable for high-precision exoskeleton gait control. Most existing single-modality prediction devices require manual extraction of gait features, and their algorithms suffer from low computational efficiency, low prediction accuracy and poor robustness.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a human gait prediction device based on multi-modal deep learning. The device uses multi-modal sensors, namely inertial sensors, plantar pressure sensors and sound sensors, to collect the acceleration, angular velocity, angle and geomagnetic-field component signals of human lower-limb movement together with plantar pressure and walking sound data, and realizes human gait prediction with a multi-modal deep learning algorithm. It is simple and convenient to wear, can accommodate different users, and can be applied to gait prediction for exoskeleton robots in the medical rehabilitation and military fields.
The invention provides a human gait prediction device based on multi-modal deep learning, characterized by comprising: an inertial sensor module, a pressure sensor module, a sound sensor module, an inertial sensor data acquisition and preprocessing module, a pressure sensor data acquisition and preprocessing module, a sound sensor data acquisition and preprocessing module, and a deep neural network processing module;
The inertial sensor module comprises 7 inertial sensors, each connected to the inertial sensor data acquisition and preprocessing module in a wired parallel manner; the pressure sensor module comprises 12 pressure sensors, each connected to the pressure sensor data acquisition and preprocessing module in a wired parallel manner; the sound sensor module comprises 2 sound sensors, each connected to the sound sensor data acquisition and preprocessing module in a wired parallel manner; and the inertial sensor data acquisition and preprocessing module, the pressure sensor data acquisition and preprocessing module and the sound sensor data acquisition and preprocessing module are each connected to the deep neural network processing module in a wired parallel manner;
The 7 inertial sensors are arranged at the user's lower back, left thigh, right thigh, left calf, right calf, left instep and right instep respectively; each inertial sensor acquires 3-dimensional acceleration, 3-dimensional angular velocity, 3-dimensional angle and 3-dimensional magnetic field data of its corresponding part and sends the acquired data to the inertial sensor data acquisition and preprocessing module;
The inertial sensor data acquisition and preprocessing module is used to receive the data acquired by each inertial sensor, perform filtering and normalization preprocessing, and send the preprocessed inertial sensor data to the deep neural network processing module;
The 12 pressure sensors are distributed in insole form: 1 insole is placed under each of the left and right soles, with 6 pressure sensors on each insole; each pressure sensor collects the plantar pressure at its corresponding position and sends the collected data to the pressure sensor data acquisition and preprocessing module;
The pressure sensor data acquisition and preprocessing module is used to receive the data acquired by each pressure sensor, perform filtering and normalization preprocessing, and send the preprocessed pressure sensor data to the deep neural network processing module;
The 2 sound sensors are arranged on the left and right insteps respectively, to collect the footstep sound data of human walking and send the collected data to the sound sensor data acquisition and preprocessing module;
The sound sensor data acquisition and preprocessing module is used to receive the data sent by each sound sensor, perform filtering and normalization preprocessing, and send the preprocessed sound sensor data to the deep neural network processing module;
the deep neural network processing module is used for receiving the preprocessed inertial sensor data, pressure sensor data and sound sensor data, predicting the gait of the received data by using the deep neural network and outputting a gait prediction result;
The deep neural network processing module realizes gait prediction through the following steps:

1) A tester wears the different sensors to acquire multi-modal data; the multi-modal data are preprocessed to establish a data sample set, which is divided into a training data set, a validation data set and a test data set. The specific steps are as follows:
1-1) The tester wears an inertial sensor module consisting of 7 inertial sensors, a pressure sensor module consisting of 12 pressure sensors, and a sound sensor module consisting of 2 sound sensors. The 7 inertial sensors are arranged at 7 positions of the tester (the lower back, left thigh, right thigh, left calf, right calf, left instep and right instep) to acquire 3-dimensional acceleration, 3-dimensional angular velocity, 3-dimensional angle and 3-dimensional magnetic field data of different parts of the lower limb. The 12 pressure sensors are distributed in insole form, with 1 insole under each of the left and right soles; each insole contains 6 pressure sensor data acquisition points, collecting plantar pressure data at 12 points in total. One sound sensor is worn on each of the left and right insteps to collect the footstep sound of human walking;
1-2) After wearing is complete, the tester performs 5 human gait behaviors in each of 5 walking environments. The walking environments comprise: tile, cement, asphalt, sand and grass; the gait behaviors comprise: slow level walking, fast level walking, ascending and descending stairs, ascending and descending slopes, and turning left and right. Ascending and descending stairs is performed only in the tile walking environment and ascending and descending slopes only in the asphalt walking environment, yielding 17 environment-gait combinations; the duration of a single environment-gait combination is 10-60 minutes;
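As a quick consistency check, the combination count stated above can be reproduced; the environment and gait names below are illustrative English labels for the categories in the text.

```python
# Enumerate the environment-gait combinations described above: 5 x 5 = 25
# candidate pairs, minus the pairs excluded by the two restrictions
# (stairs only on tile, slopes only on asphalt).
environments = ["tile", "cement", "asphalt", "sand", "grass"]
gaits = ["slow walk", "fast walk", "stairs", "slopes", "turning"]

combos = [
    (env, gait)
    for env in environments
    for gait in gaits
    if not (gait == "stairs" and env != "tile")
    and not (gait == "slopes" and env != "asphalt")
]
print(len(combos))  # 25 - 8 excluded pairs = 17
```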
1-3) under each environment gait combination, at each sampling moment, 84-dimensional data including 7 groups of 3-dimensional acceleration, 3-dimensional angular velocity, 3-dimensional angle and 3-dimensional magnetic field are acquired by 7 inertial sensors and sent to an inertial sensor acquisition and preprocessing module, 12 pressure sensors acquire 12-dimensional plantar pressure data and send to a pressure sensor data acquisition and preprocessing module, and 2 sound sensors acquire 2-dimensional walking sound data and send to a sound sensor data acquisition and preprocessing module; the sampling frequency of each sensor is 20-100 Hz;
All data at a single sampling instant constitute one 1 × 98 raw data sample x^i_j, i = 1, 2, …, 17, j = 1, 2, 3, …, where x^i_{j,k} is the k-th-dimension raw datum of the j-th raw data sample under the i-th environment-gait combination, k = 1, 2, …, 98; the 98 dimensions are arranged in the order 21-dimensional acceleration, 21-dimensional angular velocity, 21-dimensional angle, 21-dimensional magnetic field, 12-dimensional pressure and 2-dimensional sound. All raw data samples collected under a single environment-gait combination form the set X^i_Raw, and the samples of all 17 environment-gait combinations form the raw data sample set X_Raw, whose total number of data samples is N;
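The assembly of one 98-dimensional raw sample can be sketched as follows; the random arrays merely stand in for one synchronized set of sensor readings, and only the dimension ordering follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder readings for one sampling instant:
# 7 IMUs x (3 accel + 3 gyro + 3 angle + 3 magnetic), 12 pressure, 2 sound.
imu = rng.standard_normal((7, 4, 3))     # [sensor, modality, axis]
pressure = rng.standard_normal(12)
sound = rng.standard_normal(2)

# Arrange in the order stated in the text: 21-dim acceleration,
# 21-dim angular velocity, 21-dim angle, 21-dim magnetic field,
# 12-dim pressure, 2-dim sound.
sample = np.concatenate([
    imu[:, 0, :].ravel(),   # 21 acceleration values
    imu[:, 1, :].ravel(),   # 21 angular-velocity values
    imu[:, 2, :].ravel(),   # 21 angle values
    imu[:, 3, :].ravel(),   # 21 magnetic-field values
    pressure,               # 12 plantar-pressure values
    sound,                  # 2 walking-sound values
])
print(sample.shape)  # (98,)
```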
1-4) Each data acquisition and preprocessing module filters and normalizes its corresponding data in all raw data samples of X_Raw. The filtering method is the Kalman filter. Each dimension x^i_{j,k}, k = 1, 2, …, 98, of a single raw data sample is normalized as:

x̃^i_{j,k} = (x^i_{j,k} - mean_k) / (max_k - min_k)

where x̃^i_{j,k} is the normalized value of the k-th-dimension raw data of the j-th raw data sample under the i-th environment-gait combination, x^i_{j,k} is the corresponding raw value, max_k is the maximum of all k-th-dimension raw data, min_k is the minimum of all k-th-dimension raw data, and mean_k is the mean of all k-th-dimension raw data;
After all raw data samples are preprocessed, the data sample set X_Norm is obtained and sent to the deep neural network processing module;
1-5) The deep neural network processing module divides X_Norm into a training data set X_Train, a validation data set X_Validate and a test data set X_Test according to set proportions, where the proportion of the training data set is not less than 75%, the proportion of the validation data set is not less than 5%, and the proportion of the test data set is not less than 5%;
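A minimal sketch of the preprocessing and split of steps 1-4) and 1-5), assuming the per-dimension normalization (x - mean) / (max - min), one common reading of the formula in step 1-4), and illustrative 80/10/10 proportions (the text only bounds the proportions); the Kalman filtering stage is omitted.

```python
import numpy as np

def normalize(X):
    """Per-dimension normalization over the whole sample set:
    (x - mean) / (max - min) for each of the 98 dimensions."""
    mx, mn, mean = X.max(axis=0), X.min(axis=0), X.mean(axis=0)
    return (X - mean) / (mx - mn)

def split(X, train=0.8, validate=0.1):
    """Chronological split into training / validation / test sets."""
    n = len(X)
    a, b = int(n * train), int(n * (train + validate))
    return X[:a], X[a:b], X[b:]

rng = np.random.default_rng(1)
X_raw = rng.uniform(-5.0, 5.0, size=(1000, 98))  # stand-in for X_Raw
X_norm = normalize(X_raw)
X_train, X_val, X_test = split(X_norm)
print(len(X_train), len(X_val), len(X_test))  # 800 100 100
```

After this normalization, every dimension has a value spread of exactly 1, which keeps the sensor modalities on comparable scales for the network.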
2) constructing a deep neural network based on a time convolution network in a deep neural network processing module; the method comprises the following specific steps:
2-1) determining a deep neural network structure;
adopting a time convolution network to construct a deep neural network, wherein the deep neural network is divided into a transition time prediction network and a target time prediction network;
let time 0 < t1<t2<t3<t4<t5In data sample set XNormIn, select t1Time t2Taking the data sample of the moment as input data x (t) of the deep neural network1)…x(t2),t3Time t4Data sample creation for a time instant is a transition time instant sample label y (t)3)…y(t4),t5The data sample of a time of day is created as a target time of day sample label z (t)5);
The input data of the transition-instant prediction network are the data samples x(t1) … x(t2) from time t1 to time t2, and its output are the predicted data ŷ(t3) … ŷ(t4) for times t3 to t4. The target-instant prediction network takes all or part of x(t1) … x(t2), denoted x′(t1) … x′(t2), together with ŷ(t3) … ŷ(t4) as input, and outputs the predicted value ẑ(t5) for time t5;
Let t2 = t1 + 7·T_sample, t3 = t2 + T_sample, t4 = t3 + T_sample and t5 = t4 + T_sample, where T_sample is the data sampling interval. That is, the transition-instant prediction network takes the data sequence x(t1) … x(t2) of 8 sampling instants as input and predicts the data ŷ(t3), ŷ(t4) of 2 sampling instants, while the target-instant prediction network takes the 8-instant data sequence x′(t1) … x′(t2) and the 2-instant transition predictions ŷ(t3), ŷ(t4) as input and predicts the data ẑ(t5) of 1 sampling instant;
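The building block of the time convolution network used by both prediction networks is the causal dilated convolution. The sketch below is a minimal NumPy illustration of that operation, not the patent's full network; the kernel weights and dilation value are arbitrary.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal dilated convolution: the output at time t depends only
    on x[t], x[t-d], x[t-2d], ... (the past is left-padded with zeros),
    so no future sample leaks into a prediction."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

x = np.arange(8.0)  # 8 input sampling instants, as in the text
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2] -> [0, 1, 2, 4, 6, 8, 10, 12]
```

Stacking such layers with growing dilation lets a short 8-instant window feed a network whose receptive field covers the whole window.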
2-2) determining a loss function of the deep neural network;
The loss function L of the deep neural network is:

L = w_y · L_y + w_z · L_z

where L_y and L_z are the loss functions of the transition-instant prediction network and the target-instant prediction network respectively, ŷ and y are the predicted value and label value output by the transition-instant prediction network, ẑ and z are the predicted value and label value output by the target-instant prediction network, and w_y and w_z are the weight coefficients of L_y and L_z. L_y and L_z may each be either the L1 loss function or the L2 loss function:

L1(û, u) = (1/N_B) Σ_j |û_j - u_j|,  L2(û, u) = (1/N_B) Σ_j (û_j - u_j)²

where N_B is the number of samples in a batch, taking the value 32, 64, 128 or 256, û_j is the j-th predicted value output by the network, and u_j is the corresponding label value;
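The weighted two-part loss can be sketched directly; the weight values below are illustrative, since the text does not fix w_y and w_z.

```python
import numpy as np

def l1_loss(pred, label):
    """Mean absolute error over the batch (the L1 option)."""
    return np.abs(pred - label).mean()

def l2_loss(pred, label):
    """Mean squared error over the batch (the L2 option)."""
    return ((pred - label) ** 2).mean()

def total_loss(y_hat, y, z_hat, z, w_y=0.5, w_z=0.5, base=l1_loss):
    """Weighted sum L = w_y*L_y + w_z*L_z of the transition-instant and
    target-instant losses (the weights here are illustrative)."""
    return w_y * base(y_hat, y) + w_z * base(z_hat, z)

y_hat, y = np.array([1.0, 2.0]), np.array([1.5, 2.5])
z_hat, z = np.array([3.0]), np.array([2.0])
print(total_loss(y_hat, y, z_hat, z))  # 0.5*0.5 + 0.5*1.0 = 0.75
```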
2-3) determining parameters and structural hyper-parameters of the deep neural network;
The transition-instant prediction network parameters comprise the convolutional-layer weights W_yc and biases B_yc and the fully-connected-layer weights W_yf and biases B_yf;

The target-instant prediction network parameters comprise the convolutional-layer weights W_zc and biases B_zc and the fully-connected-layer weights W_zf and biases B_zf;
The structural hyper-parameters of the deep neural network comprise Block number, channel number, node number, convolution kernel length, void coefficient and Dropout coefficient;
The number of Blocks is an integer in [5,10], the number of channels is an integer in [30,200], the number of nodes is an integer in [50,500], the convolution kernel length is 3 or 5, the dilation (void) coefficient is 1 or 2, and the Dropout coefficient takes a value in [0,1];
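One practical implication of these ranges is the temporal receptive field of the network. The sketch below assumes each Block is a standard TCN residual block with two dilated causal convolutions sharing one kernel length and dilation; that per-Block structure is an assumption, not stated in the text.

```python
def receptive_field(num_blocks, kernel_len, dilation, convs_per_block=2):
    """Receptive field (in past samples) of a stack of TCN residual
    Blocks, assuming `convs_per_block` causal convolutions per Block
    with a shared kernel length and dilation. Each convolution widens
    the field by (kernel_len - 1) * dilation samples."""
    return 1 + num_blocks * convs_per_block * (kernel_len - 1) * dilation

# Smallest configuration in the stated ranges: 5 Blocks, kernel 3, dilation 1.
print(receptive_field(5, 3, 1))   # 21 samples
# Largest: 10 Blocks, kernel 5, dilation 2.
print(receptive_field(10, 5, 2))  # 161 samples
```

Even the smallest configuration comfortably covers the 8-instant input window defined in step 2-1).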
3) training the deep neural network constructed in the step 2) to obtain the trained deep neural network and corresponding optimal parameters; the method comprises the following specific steps:
3-1) training a deep neural network;
Determine the training parameters of the deep neural network, comprising the number of training rounds N_Epochs and the learning rate α, where one training round passes over all data samples of the training data set; N_Epochs takes a value not less than 100, and α takes a value in the range (0,1];
The deep neural network parameters W_yc, B_yc, W_yf, B_yf, W_zc, B_zc, W_zf and B_zf are initialized by a random method; the training data set X_Train is used to train the network, and a standard stochastic gradient descent method is adopted to update W_yc, B_yc, W_yf, B_yf, W_zc, B_zc, W_zf and B_zf. Every N_V training rounds, the validation data set X_Validate is used to validate the deep neural network once, and the network parameters with the minimum error on X_Validate are automatically saved as the current network parameters;
If the error on the validation data set no longer decreases, or the number of training rounds reaches the specified number N_Epochs, training ends and step 3-2) is entered;
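The training scheme above (random initialization, SGD updates, validation every N_V rounds, keeping the best-so-far parameters) can be sketched generically. The linear toy model, data sizes and the concrete values of α, N_Epochs and N_V below are placeholders standing in for the patent's deep network and its settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the network: a noiseless linear model trained by
# plain per-sample SGD, with the periodic-validation / keep-best scheme.
w_true = np.array([1.0, -2.0, 0.5, 3.0])
X_train = rng.standard_normal((200, 4)); y_train = X_train @ w_true
X_val = rng.standard_normal((50, 4));    y_val = X_val @ w_true

w = np.zeros(4)                        # random/zero initialization
alpha, n_epochs, n_v = 0.01, 100, 5    # learning rate, max rounds, validate every n_v rounds
best_w, best_err = w.copy(), np.inf

for epoch in range(1, n_epochs + 1):
    for xi, yi in zip(X_train, y_train):        # stochastic gradient descent
        w -= alpha * 2 * (xi @ w - yi) * xi
    if epoch % n_v == 0:                        # periodic validation
        err = np.abs(X_val @ w - y_val).mean()
        if err < best_err:                      # keep best-so-far parameters
            best_err, best_w = err, w.copy()

print(best_err < 1e-3)  # True: the kept parameters fit the validation set
```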
3-2) Using test data set XTestTesting the deep neural network after training is finished, and evaluating the optimal deep neural network parameters;
The evaluation criterion is the mean error value p, computed as:

p = (1/N_Test) Σ_i |ẑ_i - z_i| / |z_i|

where N_Test is the number of samples in the test data set, and ẑ_i and z_i are the i-th predicted value and label value output by the target-instant prediction network respectively;
If the evaluated mean error value p < 3%, the evaluation is complete, the current network parameters are saved as the optimal deep neural network parameters W_yc*, B_yc*, W_yf*, B_yf*, W_zc*, B_zc*, W_zf* and B_zf*, and step 4) is entered; if p ≥ 3%, return to step 3-1) and retrain the deep neural network;
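The acceptance test of step 3-2) can be sketched as follows. Since the exact expression for p is not legible in the source, the sketch uses a mean relative error, one common reading of a percentage criterion; the sample values are invented.

```python
import numpy as np

def mean_relative_error(z_hat, z):
    """Mean relative error between predictions and labels on the test
    set, used here as the acceptance criterion (p < 3%)."""
    return (np.abs(z_hat - z) / np.abs(z)).mean()

z = np.array([10.0, 20.0, 30.0])        # invented label values
z_hat = np.array([10.1, 19.9, 30.3])    # invented predictions
p = mean_relative_error(z_hat, z)
print(p < 0.03)  # True: deviations are well under 3% of each label
```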
4) predicting human gait by using the trained deep neural network; the method comprises the following specific steps:
4-1) selecting a new tester, and repeating the step 1-1), so that the tester wears the inertial sensor module, the pressure sensor module and the sound sensor module respectively;
4-2) Randomly select 1 walking environment from the 5 walking environments of step 1-2) and 1 human gait behavior from the 5 human gait behaviors of step 1-2), where ascending and descending stairs is performed only in the tile walking environment and ascending and descending slopes only in the asphalt walking environment. Repeat step 1-3): with the tester wearing the three sensor modules, collect raw data samples under this environment-gait combination in real time and send them to the corresponding data acquisition and preprocessing modules; all data from a single sampling pass are arranged to form one 1 × 98 raw data sample x_new, where x_{new,k} is the k-th-dimension raw datum of the sample, k = 1, 2, …, 98;
4-3) Repeat step 1-4): preprocess x_new, obtain the preprocessed data sample, denoted x̃_new, and send x̃_new to the deep neural network processing module;
4-4) In the deep neural network processing module, x̃_new and the preprocessed data samples of the preceding 7 sampling instants form a new t1-to-t2 input, which is fed into the deep neural network trained in step 3); the network outputs the tester's gait prediction ẑ(t5) at time t5 in real time, where ẑ_k(t5) is the k-th dimension of the gait prediction result, k = 1, 2, …, 98.
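The real-time loop of step 4-4), where each new preprocessed sample joins the previous 7 to form the 8-instant network input, can be sketched with a fixed-size buffer; the integer samples and the echo predictor below are placeholders for 98-dimensional samples and the trained network.

```python
from collections import deque

# Sliding 8-sample input window for real-time prediction: each new
# preprocessed sample joins the previous 7 to form the network input.
WINDOW = 8
buffer = deque(maxlen=WINDOW)   # old samples fall off automatically

def on_new_sample(sample, predict):
    """Append a preprocessed sample; once 8 samples are buffered,
    call the (user-supplied) trained network on the full window."""
    buffer.append(sample)
    if len(buffer) == WINDOW:
        return predict(list(buffer))
    return None   # still warming up

# Stand-in predictor that just echoes the newest buffered sample.
outputs = [on_new_sample(i, lambda win: win[-1]) for i in range(10)]
print(outputs)  # [None]*7, then predictions for samples 7, 8, 9
```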
The invention has the characteristics and beneficial effects that:
1. the human gait prediction device based on the multi-mode deep learning can effectively collect sensor data of three modes of an inertial sensor, a plantar pressure sensor and a sound sensor, carries out human gait prediction by using a deep neural network algorithm, and can predict 3-dimensional acceleration, 3-dimensional angular velocity, 3-dimensional angle, 3-dimensional magnetic field, 12-dimensional plantar pressure and 2-dimensional walking sound of 5 human gait behaviors (such as flat ground slow walking, flat ground fast walking, up and down stairs, up and down slopes, left and right turning) in 5 walking environments (such as tile ground, cement ground, asphalt ground, sand ground and grassland).
2. The human gait prediction device based on multi-modal deep learning adopts a time convolution network to construct the deep neural network for gait prediction. No feature extractor needs to be designed by hand to extract gait features; instead, feature learning and gait prediction are integrated automatically, which improves the accuracy and robustness of human gait prediction.
3. The human body gait prediction device based on the multi-mode deep learning is simple and convenient to wear, can meet different human body requirements, is suitable for gait prediction of most of different human bodies, and can be applied to gait prediction of exoskeletal robots in the fields of medical rehabilitation and military in the future.
Drawings
FIG. 1 is a schematic view of the structure of the apparatus of the present invention.
Fig. 2 is a schematic view of the sensor wear of the device of the present invention.
Fig. 3 is a schematic diagram of left sole pressure acquired by the insole type sole pressure sensor of the device of the invention.
Fig. 4 is a diagram of a deep neural network in the apparatus of the present invention.
FIG. 5 is a Block diagram of the deep neural network of the device of the present invention.
In the figure, 1-7 are inertial sensors, 8-9 are sound sensors, 10-11 are insole type plantar pressure sensors, and ① - ⑥ are distribution positions of the plantar pressure sensors.
Detailed Description
The invention provides a human gait prediction device based on multi-modal deep learning, which is further described in detail below by combining the accompanying drawings and specific embodiments.
The invention provides a human gait prediction device based on multi-mode deep learning, which has a structure shown in figure 1 and comprises: the device comprises an inertial sensor module, a pressure sensor module, a sound sensor module, an inertial sensor data acquisition and preprocessing module, a pressure sensor data acquisition and preprocessing module, a sound sensor data acquisition and preprocessing module and a deep neural network processing module.
The inertial sensor module comprises 7 inertial sensors, each inertial sensor is connected with an inertial sensor data acquisition and preprocessing module in a wired parallel mode respectively, the pressure sensor module comprises 12 pressure sensors, each pressure sensor is connected with a pressure sensor data acquisition and preprocessing module in a wired parallel mode respectively, the sound sensor module comprises 2 sound sensors, each sound sensor is connected with a sound sensor data acquisition and preprocessing module in a wired parallel mode respectively, and the inertial sensor data acquisition and preprocessing module, the pressure sensor data acquisition and preprocessing module and the sound sensor data acquisition and preprocessing module are connected with a deep neural network processing module in a wired parallel mode respectively.
The inertial sensor is a conventional sensor which simultaneously integrates a three-axis gyroscope, a three-axis accelerometer, a three-axis angle meter and a three-axis electronic compass. The 7 inertial sensors are respectively arranged at the positions of the waist back, the left thigh, the right thigh, the left calf, the right calf, the left instep and the right instep of a user, as shown in fig. 2, each inertial sensor is respectively used for collecting 3-dimensional acceleration data, 3-dimensional angular velocity data, 3-dimensional angle data and 3-dimensional magnetic field data of different parts of the lower limb of a human body, and sending the collected data to the inertial sensor data collecting and preprocessing module.
The inertial sensor acquisition and preprocessing module is used for receiving the data acquired by each inertial sensor, performing data preprocessing of filtering and normalization, and sending the preprocessed inertial sensor data to the deep neural network processing module. The inertial sensor data acquisition and preprocessing module adopts a processing framework based on a conventional MCU and can be installed at any position of a human body. In this embodiment, the inertial sensor data acquisition and preprocessing module is installed at the back position.
The pressure sensors are conventional film-type pressure sensors, which can statically and dynamically measure the pressure on any contact surface. The 12 pressure sensors are distributed in insole form, as shown in fig. 3: 1 insole is placed under each of the left and right soles, each insole carrying 6 data acquisition points. The 12 acquisition points each collect the plantar pressure at their corresponding positions, yielding 12-dimensional plantar pressure data, which are sent to the pressure sensor data acquisition and preprocessing module.
The pressure sensor data acquisition and preprocessing module is used for receiving data acquired by each pressure sensor, performing data preprocessing of filtering and normalization, and sending the preprocessed pressure sensor data to the deep neural network processing module. The pressure sensor data acquisition and preprocessing module adopts a processing framework based on a conventional MCU and can be installed at any position of a human body. In this embodiment, the pressure sensor data acquisition and preprocessing module is installed at the back position.
The sound sensor employs a conventional sound-sensitive condenser electret microphone sensor. The 2 sound sensors are worn on the insteps, 1 on each of the left and right insteps as shown in fig. 2, and are used for collecting the plantar sound data of human walking and sending the collected data to the sound sensor data acquisition and preprocessing module.
The sound sensor data acquisition and preprocessing module is used for receiving data sent by each sound sensor, performing data preprocessing of filtering and normalization, and sending the preprocessed sound sensor data to the deep neural network processing module. The data acquisition and preprocessing module of the sound sensor adopts a processing framework based on a conventional MCU and can be installed at any position of a human body. In this embodiment, the sound sensor data acquisition and preprocessing module is installed at the back position.
The deep neural network processing module is used for receiving the preprocessed inertial sensor data, pressure sensor data and sound sensor data, predicting the gait of the received data by using the deep neural network and outputting a gait prediction result. The deep neural network processing module adopts a processing framework based on a conventional GPU or FPGA and is used for improving the calculation efficiency of the deep neural network; meanwhile, an output interface of a USB or a serial port is adopted for outputting a gait prediction result for interactive use with external equipment or an exoskeleton system.
The deep neural network module adopts a Temporal Convolutional Network (TCN) to construct the deep neural network, and the network structure is divided into a transition time prediction network and a target time prediction network, as shown in fig. 4, where fig. 4(a) is the transition time prediction network and fig. 4(b) is the target time prediction network. The preprocessed inertial sensor data, pressure sensor data and sound sensor data received by the deep neural network form a data sample set X_Norm;
Let times 0 < t_1 < t_2 < t_3 < t_4 < t_5. In the data sample set X_Norm, the data samples from time t_1 to time t_2 are selected as the input data x(t_1)…x(t_2) of the deep neural network, the data samples from time t_3 to time t_4 are created as the transition time sample labels y(t_3)…y(t_4), and the data sample at time t_5 is created as the target time sample label z(t_5);
The input data of the transition time prediction network is x(t_1)…x(t_2), and the output prediction data is ŷ(t_3)…ŷ(t_4), whose dimensions may be the same as or different from those of x(t_1)…x(t_2). The target time prediction network takes all or part of x(t_1)…x(t_2), denoted x′(t_1)…x′(t_2), together with ŷ(t_3)…ŷ(t_4) as input, and outputs the prediction data ẑ(t_5); the sensor data types and dimensions of x′(t_1)…x′(t_2) are the same as those of x(t_1)…x(t_2), and the data type and dimension of ẑ(t_5) may be the same as or different from those of x′(t_1)…x′(t_2) and ŷ(t_3)…ŷ(t_4). In gait prediction, the usual approach is to predict ẑ(t_5) directly from x(t_1)…x(t_2); the invention adds the transition process ŷ(t_3)…ŷ(t_4), so that the network can learn more of the variation trend, prediction inaccuracy caused by random errors at individual moments is reduced, and the prediction effect is improved.
Each Block in the deep neural network adopts a residual structure, in which dilated causal convolution, weight normalization, ReLU and Dropout operations are executed in sequence, and the sequence is then executed once more; the specific operation flow is shown in FIG. 5. The 1 × 1 convolution in the Block structure of the TCN is an optional module: the convolution operation is executed when the input and output dimensions of the residual differ, and when they are the same it is replaced by an identity mapping. The residual structure effectively reduces the loss of information in the convolutional network and makes program expansion more convenient.
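The Block structure above can be sketched in simplified form. The following is a minimal single-channel NumPy sketch, not the patented implementation: it executes (dilated causal convolution → ReLU → Dropout) twice and adds the residual; weight normalization and the optional 1 × 1 convolution are omitted because input and output here have the same single-channel dimension, so the residual is the identity mapping.

```python
import numpy as np

def relu(u):
    # ReLU activation: f(u) = max(0, u)
    return np.maximum(0.0, u)

def dilated_causal_conv(x, kernel, d):
    # Single-channel dilated causal convolution:
    # F(s) = sum_i kernel[i] * x[s - d*i], zero-padded before t = 0.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for s in range(len(x)):
        for i in range(len(kernel)):
            if s - d * i >= 0:
                out[s] += kernel[i] * x[s - d * i]
    return out

def tcn_block(x, k1, k2, d, p_drop=0.0, rng=None):
    # (conv -> ReLU -> Dropout) executed twice, then the residual
    # connection; p_drop = 0 disables Dropout (e.g. at inference).
    rng = rng or np.random.default_rng(0)
    h = relu(dilated_causal_conv(x, k1, d))
    if p_drop > 0.0:
        h *= rng.random(h.shape) >= p_drop   # randomly drop activations
    h = relu(dilated_causal_conv(h, k2, d))
    if p_drop > 0.0:
        h *= rng.random(h.shape) >= p_drop
    return np.asarray(x, dtype=float) + h    # identity residual
```

With a length-1 identity kernel, the block output is simply x + ReLU(ReLU(x)), which makes the residual path easy to check by hand.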
The calculation formula of the dilated causal convolution operation F acting on the s-th output neuron is:

F(s) = Σ_{i=0}^{k−1} f(i) · x_{s−d·i}

in the formula: x is the input layer sequence x(t_1)…x(t_2), x_{s−d·i} is the (s−d·i)-th input in the input layer sequence, f is the convolution kernel, d is the dilation coefficient, and k is the length of the convolution kernel.
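A small numeric check of this formula (pure Python; inputs before the start of the sequence are assumed zero-padded, which enforces causality):

```python
# Dilated causal convolution: F(s) = sum_{i=0}^{k-1} f(i) * x[s - d*i]
def dilated_causal_conv(x, f, d):
    k = len(f)
    out = []
    for s in range(len(x)):
        acc = 0.0
        for i in range(k):
            idx = s - d * i
            if idx >= 0:          # indices before t = 0 contribute nothing
                acc += f[i] * x[idx]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0]
f = [1.0, 1.0]                    # k = 2, simple summing kernel
print(dilated_causal_conv(x, f, d=2))  # → [1.0, 2.0, 4.0, 6.0, 8.0]
```

With d = 2, each output sums x[s] and x[s−2], so no future sample ever influences F(s).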
The ReLU (Rectified Linear Unit) function is calculated as follows:
f(u)=max(0,u)
where u is the input of the ReLU function; the derivative of the function is 1 when u > 0 and 0 when u < 0, which gives the function its nonlinearity.
The Dropout operation randomly discards the activation values of some neurons in the input, to avoid overfitting and improve the generalization capability of the convolutional neural network. The Dropout coefficient has a value range of [0, 1].
The weight normalization operation re-parameterizes each weight vector w of the neural network through a vector parameter v and a scalar parameter g, and performs stochastic gradient descent on the newly introduced parameters, so as to accelerate the convergence of the optimization process. The weight vector w can be expressed as:

w = g · v / ‖v‖

where v is a k-dimensional vector, g is a scalar, and ‖·‖ represents the Euclidean norm. This re-parameterization fixes the Euclidean norm of the weight vector w, so that ‖w‖ = g, independent of the parameter v.
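This reparameterization is easy to verify numerically; the sketch below assumes nothing beyond the formula w = g·v/‖v‖:

```python
import numpy as np

def weight_norm(v, g):
    # Reparameterize w = g * v / ||v||, so ||w|| = g regardless of v:
    # the direction comes from v, the Euclidean norm from the scalar g.
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])   # ||v|| = 5
w = weight_norm(v, g=2.0)
print(w)                    # → [1.2 1.6], and ||w|| = 2 = g
```

During training, gradients flow into v and g separately, which decouples the weight's direction from its magnitude.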
The working principle of the device is as follows:
1) enabling a tester to wear different sensors to acquire multi-modal data, preprocessing the multi-modal data, establishing a data sample set, and dividing the data sample set into a training data set, a verification data set and a test data set; the method comprises the following specific steps:
1-1) a tester respectively wears an inertial sensor module consisting of 7 inertial sensors, a pressure sensor module consisting of 12 pressure sensors and a sound sensor module consisting of 2 sound sensors; the 7 inertial sensors are respectively arranged at 7 positions of the tester, namely the lower back, the left thigh, the right thigh, the left calf, the right calf, the left instep and the right instep, and are used for acquiring 3-dimensional acceleration data, 3-dimensional angular velocity data, 3-dimensional angle data and 3-dimensional magnetic field data of different parts of the lower limb of a human body; the 12 pressure sensors are distributed in the form of insoles, 1 insole at each of the left sole and the right sole, and each insole comprises 6 pressure sensor data acquisition points for acquiring plantar pressure data at the 12 data points; the 2 sound sensors are worn on the insteps, 1 on each of the left and right insteps, and are used for collecting the plantar sound of human walking;
1-2) after wearing is finished, the tester performs 5 human gait behaviors in each of 5 walking environments; the walking environments comprise: tile, cement, asphalt, sand and grass, and the gait behaviors comprise: walking slowly on flat ground, walking quickly on flat ground, going up and down stairs, going up and down slopes, and turning left and right; going up and down stairs is performed only in the tile walking environment and going up and down slopes only in the asphalt walking environment, giving 17 environment-gait combinations; the duration of a single environment-gait combination is 10-60 minutes;
1-3) under each environment gait combination, at each sampling moment, 84-dimensional data including 7 groups of 3-dimensional acceleration, 3-dimensional angular velocity, 3-dimensional angle and 3-dimensional magnetic field are acquired by 7 inertial sensors and sent to an inertial sensor acquisition and preprocessing module, 12 pressure sensors acquire 12-dimensional plantar pressure data and send to a pressure sensor data acquisition and preprocessing module, and 2 sound sensors acquire 2-dimensional walking sound data and send to a sound sensor data acquisition and preprocessing module; the sampling frequency of each sensor is 20-100 Hz;
all data at a single sampling moment constitute 1 original data sample of 1 × 98, x_ij, i = 1, 2, …, 17, j = 1, 2, 3, …, where x_ij^k is the k-th dimension raw datum in the j-th original data sample under the i-th environment-gait combination, k = 1, 2, …, 98; the 98-dimensional data are arranged in the order of 21-dimensional acceleration, 21-dimensional angular velocity, 21-dimensional angle, 21-dimensional magnetic field, 12-dimensional pressure and 2-dimensional sound. All original data samples x_ij obtained by sampling a single environment-gait combination form a set X_i, and the sets X_i of all 17 environment-gait combinations form the raw data sample set X_Raw, whose total number of data samples is N;
1-4) each data acquisition and preprocessing module filters and normalizes the corresponding data in all the original data samples of X_Raw; the filtering method is the standard Kalman filtering method, and each dimension datum x_ij^k (k = 1, 2, …, 98) of a single original data sample x_ij is normalized as follows:

x̄_ij^k = (x_ij^k − mean_k) / (max_k − min_k)

in the formula: x̄_ij^k is the normalized datum of the k-th dimension raw datum of the j-th original data sample under the i-th environment-gait combination, x_ij^k is the corresponding k-th dimension raw datum, max_k is the maximum of all the k-th dimension raw data, min_k is the minimum of all the k-th dimension raw data, and mean_k is the mean of all the k-th dimension raw data;
after all the original data samples are preprocessed, the data sample set X_Norm is obtained and sent to the deep neural network processing module;
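A minimal sketch of the normalization step (Kalman filtering omitted), assuming the per-dimension formula (x − mean)/(max − min) implied by the terms listed above:

```python
import numpy as np

def normalize_samples(X_raw):
    # Per-dimension normalization (x - mean) / (max - min), applied
    # column-wise over all raw samples. X_raw: (N, 98) array; here the
    # shape is illustrative, only the column-wise statistics matter.
    mx = X_raw.max(axis=0)
    mn = X_raw.min(axis=0)
    mu = X_raw.mean(axis=0)
    return (X_raw - mu) / (mx - mn)

X = np.array([[0.0, 10.0],
              [2.0, 30.0]])
print(normalize_samples(X))  # each column centered and range-scaled
```

Each column is centered at its mean and scaled by its range, so all 98 dimensions enter the network on a comparable scale.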
1-5) the deep neural network processing module divides X_Norm in set proportions into a training data set X_Train, a verification data set X_Validate and a test data set X_Test; the proportion of the training data set X_Train is not less than 75%, the proportion of the verification data set is not less than 5%, and the proportion of the test data set is not less than 5%;
2) constructing a deep neural network based on a time convolution network in a deep neural network processing module; the method comprises the following specific steps:
2-1) determining a deep neural network structure;
adopting a time convolution network to construct a deep neural network, wherein the deep neural network is divided into a transition time prediction network and a target time prediction network;
let times 0 < t_1 < t_2 < t_3 < t_4 < t_5. In the data sample set X_Norm, the data samples from time t_1 to time t_2 are selected as the input data x(t_1)…x(t_2) of the neural network, the data samples from time t_3 to time t_4 are created as the transition time sample labels y(t_3)…y(t_4), and the data sample at time t_5 is created as the target time sample label z(t_5);
The input sequence data of the transition time prediction network are the data samples x(t_1)…x(t_2) from time t_1 to time t_2, and the output predicted sequence data are the data sample predicted values ŷ(t_3)…ŷ(t_4) from time t_3 to time t_4; the target time prediction network takes all or part of x(t_1)…x(t_2), denoted x′(t_1)…x′(t_2), together with ŷ(t_3)…ŷ(t_4) as input, and outputs the predicted sequence data, the predicted value ẑ(t_5) at time t_5;
Let t_2 = t_1 + 7T_sample, t_3 = t_2 + T_sample, t_4 = t_3 + T_sample, t_5 = t_4 + T_sample, where T_sample is the data sampling interval; that is, the transition time prediction network takes the data sequence x(t_1)…x(t_2) of 8 sampling moments as input and predicts and outputs the data ŷ(t_3), ŷ(t_4) of 2 sampling moments, and the target time prediction network takes the data sequence x′(t_1)…x′(t_2) of 8 sampling moments and the transition time prediction data ŷ(t_3), ŷ(t_4) of 2 sampling moments as input and predicts and outputs the data ẑ(t_5) of 1 sampling moment;
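The windowing above (8 input moments, 2 transition-label moments, 1 target-label moment, i.e. 11 consecutive samples per training example) can be sketched as follows; `make_windows` is an illustrative helper name, not from the patent:

```python
def make_windows(samples):
    # Slice a time-ordered sample stream into (input, transition label,
    # target label) triples: 8 input steps, the next 2 steps as the
    # transition labels y(t3), y(t4), and the step after those as the
    # target label z(t5). Each window spans 11 consecutive steps.
    windows = []
    for start in range(len(samples) - 10):
        x = samples[start:start + 8]        # x(t1)..x(t2)
        y = samples[start + 8:start + 10]   # y(t3), y(t4)
        z = samples[start + 10]             # z(t5)
        windows.append((x, y, z))
    return windows

ws = make_windows(list(range(12)))  # 12 time steps yield 2 windows
```

A stream of T samples yields T − 10 overlapping windows, so almost every sample is reused across many training examples.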
2-2) determining a loss function of the deep neural network;
the loss function L of the deep neural network is:

L = w_y L_y + w_z L_z

in the formula, L_y and L_z respectively represent the loss functions of the transition time prediction network and the target time prediction network, ŷ and y respectively represent the predicted value and label value output by the transition time prediction network, ẑ and z respectively represent the predicted value and label value output by the target time prediction network, and w_y and w_z are the weight coefficients of L_y and L_z respectively; L_y and L_z each select either the L1 loss function or the L2 loss function:

L1 = (1/N_B) Σ_{j=1}^{N_B} |û_j − u_j|,  L2 = (1/N_B) Σ_{j=1}^{N_B} (û_j − u_j)²

in the formula, N_B represents the number of samples in batch processing and takes the value 32, 64, 128 or 256, û_j is the j-th predicted value output by the network, u_j is the corresponding label value, and j indexes the j-th output value of the network;
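A sketch of the combined loss, with w_y = w_z = 0.5 chosen only for illustration (the patent leaves the weight coefficients free):

```python
import numpy as np

def l1_loss(pred, label):
    # L1: mean absolute error over the batch
    return float(np.mean(np.abs(pred - label)))

def l2_loss(pred, label):
    # L2: mean squared error over the batch
    return float(np.mean((pred - label) ** 2))

def total_loss(y_pred, y, z_pred, z, w_y=0.5, w_z=0.5, loss=l1_loss):
    # L = w_y * L_y + w_z * L_z: transition-time and target-time losses
    # combined; both sub-losses use the same L1 or L2 form here.
    return w_y * loss(y_pred, y) + w_z * loss(z_pred, z)
```

Weighting the transition loss lets the training signal reward accurate intermediate predictions, not only the final target-time output.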
2-3) determining parameters and structural hyper-parameters of the deep neural network;
the parameters to be optimized of the transition time prediction network comprise the convolutional layer weights W_yc and biases B_yc, and the fully connected layer weights W_yf and biases B_yf;
the parameters to be optimized of the target time prediction network comprise the convolutional layer weights W_zc and biases B_zc, and the fully connected layer weights W_zf and biases B_zf;
The structural hyper-parameters of the deep neural network comprise the Block number, channel number, node number, convolution kernel length, dilation coefficient and Dropout coefficient;
the Block number is an integer in the range [5, 10], the channel number is an integer in the range [30, 200], the node number is an integer in the range [50, 500], the convolution kernel length is 3 or 5, the dilation coefficient is 1 or 2, and the Dropout coefficient is in the range [0, 1];
3) training the deep neural network constructed in the step 2) to obtain the trained deep neural network and corresponding optimal parameters; the method comprises the following specific steps:
3-1) training a deep neural network;
determining the training parameters of the deep neural network, comprising: the number of training rounds N_Epochs and the learning rate α, where one round trains all data samples of the training data set; the value range of N_Epochs is N_Epochs ≥ 100, and the value range of the learning rate α is (0, 1];
the parameters W_yc, B_yc, W_yf, B_yf, W_zc, B_zc, W_zf, B_zf of the deep neural network are initialized by a random method; the training data set X_Train is used for training the deep neural network parameters, and the standard stochastic gradient descent method is adopted to update the parameters W_yc, B_yc, W_yf, B_yf, W_zc, B_zc, W_zf, B_zf; every N_V training rounds, the verification data set X_Validate is used to verify the deep neural network once, and the network parameters with the minimum error on the verification data set X_Validate are automatically saved as the current network parameters;
if the verification data set error no longer decreases or the number of training rounds reaches the specified number N_Epochs, the training ends and step 3-2) is entered;
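The training-and-validation procedure can be sketched as the skeleton below; `update_step`, `validate` and the `patience` stopping rule are illustrative stand-ins (the patent only states that training stops when the validation error no longer decreases or the round budget is reached):

```python
def train(update_step, validate, n_epochs=100, n_v=5, patience=3):
    # Training skeleton: every n_v rounds, validate and keep the
    # parameters with the lowest validation error; stop when the error
    # has failed to improve `patience` consecutive checks, or when the
    # round budget n_epochs is exhausted. update_step() stands in for
    # one SGD pass over X_Train; validate(params) for the X_Validate
    # error of those parameters.
    best_err, best_params, stale = float("inf"), None, 0
    for epoch in range(1, n_epochs + 1):
        params = update_step()
        if epoch % n_v == 0:
            err = validate(params)
            if err < best_err:
                best_err, best_params, stale = err, params, 0
            else:
                stale += 1
                if stale >= patience:
                    break            # validation error no longer decreases
    return best_params, best_err
```

Saving the best-so-far parameters at each check means the returned network is the one that generalized best, not the last one trained.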
3-2) the test data set X_Test is used for testing the trained deep neural network and evaluating the optimal deep neural network parameters;
the evaluation criterion is the average error value p, whose calculation expression is:

p = (1/N_Test) Σ_{i=1}^{N_Test} |ẑ_i − z_i| / |z_i|

in the formula, N_Test is the number of samples in the test data set, and ẑ_i and z_i respectively represent the i-th predicted value and label value output by the target time prediction network;
if the evaluated average error value p < 3%, the evaluation ends, the current network parameters are saved as the optimal parameters W_yc*, B_yc*, W_yf*, B_yf*, W_zc*, B_zc*, W_zf*, B_zf* of the deep neural network, and step 4) is entered; if the evaluated average error value p ≥ 3%, the process returns to step 3-1) and the deep neural network is retrained;
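A sketch of the evaluation criterion, assuming the relative form of the average error (consistent with the percentage threshold of 3%):

```python
import numpy as np

def mean_relative_error(z_pred, z):
    # Average error p = (1/N_Test) * sum(|z_pred_i - z_i| / |z_i|);
    # the relative form is an assumption here, since the 3% criterion
    # reads as a percentage of the label magnitude.
    z_pred = np.asarray(z_pred, dtype=float)
    z = np.asarray(z, dtype=float)
    return float(np.mean(np.abs(z_pred - z) / np.abs(z)))

p = mean_relative_error([1.02, 1.98], [1.0, 2.0])
print(p < 0.03)  # both predictions within 2 percent of their labels
```

If p ≥ 3%, the evaluation loop above sends the network back to training rather than accepting the parameters.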
4) predicting human gait by using the trained deep neural network; the method comprises the following specific steps:
4-1) selecting a new tester, and repeating the step 1-1), so that the tester wears the inertial sensor module, the pressure sensor module and the sound sensor module respectively;
4-2) 1 walking environment is randomly selected from the 5 walking environments of step 1-2), and 1 human gait behavior is randomly selected from the 5 human gait behaviors of step 1-2), wherein going up and down stairs is performed only in the tile walking environment and going up and down slopes only in the asphalt walking environment; step 1-3) is repeated, the original data samples under this environment-gait combination are collected in real time after the tester wears the three sensor modules and are respectively sent to the corresponding data acquisition and preprocessing modules, and all the data of one sampling are arranged to form 1 original data sample x_new of 1 × 98, where x_new^k is the k-th dimension raw datum in the raw data sample x_new, k = 1, 2, …, 98;
4-3) step 1-4) is repeated, x_new is preprocessed, the preprocessed data sample is recorded as x̄_new, and x̄_new is sent to the deep neural network processing module;
4-4) in the deep neural network processing module, the data samples corresponding to the first 7 sampling moments before the current sampling moment and x̄_new form a new input of data from time t_1 to time t_2; the input is fed into the deep neural network trained in step 3), and the network outputs the gait prediction ẑ(t_5) of the tester at time t_5 in real time, where ẑ^k(t_5) is the k-th dimension of the gait prediction result data, k = 1, 2, ….
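Real-time prediction as in step 4-4) amounts to a sliding buffer over the preprocessed sample stream; `predict` below stands in for the trained two-stage transition/target network:

```python
from collections import deque

def stream_predict(sample_stream, predict, window=8):
    # Real-time use: keep the last `window` preprocessed samples in a
    # buffer; once the buffer is full, run the trained network
    # (`predict`) on every newly arrived sample to emit the t5 gait
    # prediction for that moment.
    buf = deque(maxlen=window)
    for sample in sample_stream:
        buf.append(sample)          # newest sample replaces the oldest
        if len(buf) == window:
            yield predict(list(buf))
```

Because `deque(maxlen=8)` discards the oldest sample automatically, each new sample triggers exactly one prediction once the first 8 samples have arrived.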
According to the human body gait prediction device based on multi-mode deep learning, the output gait prediction result can be directly transmitted to an exoskeleton robot or other systems to be used for gait control.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto; any modification or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention falls within the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (1)
1. A human gait prediction device based on multi-modal deep learning is characterized by comprising: the system comprises an inertial sensor module, a pressure sensor module, a sound sensor module, an inertial sensor data acquisition and preprocessing module, a pressure sensor data acquisition and preprocessing module, a sound sensor data acquisition and preprocessing module and a deep neural network processing module;
the system comprises an inertial sensor module, a pressure sensor module, a deep neural network processing module, a pressure sensor data acquisition and preprocessing module, a deep neural network processing module and a deep neural network processing module, wherein the inertial sensor module comprises 7 inertial sensors, each inertial sensor is connected with the inertial sensor data acquisition and preprocessing module in a wired parallel mode respectively, the pressure sensor module comprises 12 pressure sensors, each pressure sensor is connected with the pressure sensor data acquisition and preprocessing module in a wired parallel mode respectively, the sound sensor module comprises 2 sound sensors, each sound sensor is connected with the sound sensor data acquisition and preprocessing module in a wired parallel mode respectively, and the inertial sensor data acquisition and preprocessing module, the pressure sensor data acquisition and preprocessing module and the sound sensor data acquisition and preprocessing module are connected with the deep neural network processing module in a wired parallel mode respectively;
the 7 inertial sensors are respectively arranged at the positions of the waist back, the left thigh, the right thigh, the left calf, the right calf, the left instep and the right instep of a user, and each inertial sensor is respectively used for acquiring 3-dimensional acceleration data, 3-dimensional angular velocity data, 3-dimensional angle data and 3-dimensional magnetic field data of a corresponding part and sending the acquired data to the inertial sensor data acquisition and preprocessing module;
the inertial sensor acquisition and preprocessing module is used for receiving data acquired by each inertial sensor, performing data preprocessing of filtering and normalization, and sending the preprocessed inertial sensor data to the deep neural network processing module;
the 12 pressure sensors are distributed in an insole mode, 1 insole is respectively placed on the left sole and the right sole, 6 pressure sensors are respectively arranged on each insole, each pressure sensor collects sole pressure at a corresponding position, and the collected data are sent to the pressure sensor data collecting and preprocessing module;
the pressure sensor data acquisition and preprocessing module is used for receiving data acquired by each pressure sensor, performing data preprocessing of filtering and normalization, and sending the preprocessed pressure sensor data to the deep neural network processing module;
the 2 sound sensors are respectively arranged on the left instep and the right instep and used for collecting the sole sound data of the walking of the human body and sending the collected data to the sound sensor data collecting and preprocessing module;
the sound sensor data acquisition and preprocessing module is used for receiving data sent by each sound sensor, performing data preprocessing of filtering and normalization, and sending the preprocessed sound sensor data to the deep neural network processing module;
the deep neural network processing module is used for receiving the preprocessed inertial sensor data, pressure sensor data and sound sensor data, predicting the gait of the received data by using the deep neural network and outputting a gait prediction result;
1) enabling a tester to wear different sensors to acquire multi-modal data, preprocessing the multi-modal data, establishing a data sample set, and dividing the data sample set into a training data set, a verification data set and a test data set; the method comprises the following specific steps:
1-1) a tester respectively wears an inertial sensor module consisting of 7 inertial sensors, a pressure sensor module consisting of 12 pressure sensors and a sound sensor module consisting of 2 sound sensors; the 7 inertial sensors are respectively arranged at 7 positions of the tester, namely the lower back, the left thigh, the right thigh, the left calf, the right calf, the left instep and the right instep, and are used for acquiring 3-dimensional acceleration data, 3-dimensional angular velocity data, 3-dimensional angle data and 3-dimensional magnetic field data of different parts of the lower limb of a human body; the 12 pressure sensors are distributed in the form of insoles, 1 insole at each of the left sole and the right sole, and each insole comprises 6 pressure sensor data acquisition points for acquiring plantar pressure data at the 12 data points; the 2 sound sensors are worn on the insteps, 1 on each of the left and right insteps, and are used for collecting the plantar sound of human walking;
1-2) after wearing is finished, the tester performs 5 human gait behaviors in each of 5 walking environments; the walking environments comprise: tile, cement, asphalt, sand and grass, and the gait behaviors comprise: walking slowly on flat ground, walking quickly on flat ground, going up and down stairs, going up and down slopes, and turning left and right; going up and down stairs is performed only in the tile walking environment and going up and down slopes only in the asphalt walking environment, giving 17 environment-gait combinations; the duration of a single environment-gait combination is 10-60 minutes;
1-3) under each environment gait combination, at each sampling moment, 84-dimensional data including 7 groups of 3-dimensional acceleration, 3-dimensional angular velocity, 3-dimensional angle and 3-dimensional magnetic field are acquired by 7 inertial sensors and sent to an inertial sensor acquisition and preprocessing module, 12 pressure sensors acquire 12-dimensional plantar pressure data and send to a pressure sensor data acquisition and preprocessing module, and 2 sound sensors acquire 2-dimensional walking sound data and send to a sound sensor data acquisition and preprocessing module; the sampling frequency of each sensor is 20-100 Hz;
all data at a single sampling moment constitute 1 original data sample of 1 × 98, x_ij, i = 1, 2, …, 17, j = 1, 2, 3, …, where x_ij^k is the k-th dimension raw datum in the j-th original data sample under the i-th environment-gait combination, k = 1, 2, …, 98; the 98-dimensional data are arranged in the order of 21-dimensional acceleration, 21-dimensional angular velocity, 21-dimensional angle, 21-dimensional magnetic field, 12-dimensional pressure and 2-dimensional sound. All original data samples x_ij obtained by sampling a single environment-gait combination form a set X_i, and the sets X_i of all 17 environment-gait combinations form the raw data sample set X_Raw, whose total number of data samples is N;
1-4) each data acquisition and preprocessing module filters and normalizes the corresponding data in all the original data samples of X_Raw; the filtering method is the Kalman filtering method, and each dimension datum x_ij^k (k = 1, 2, …, 98) of a single original data sample x_ij is normalized as follows:

x̄_ij^k = (x_ij^k − mean_k) / (max_k − min_k)

in the formula: x̄_ij^k is the normalized datum of the k-th dimension raw datum of the j-th original data sample under the i-th environment-gait combination, x_ij^k is the corresponding k-th dimension raw datum, max_k is the maximum of all the k-th dimension raw data, min_k is the minimum of all the k-th dimension raw data, and mean_k is the mean of all the k-th dimension raw data;
after all the original data samples are preprocessed, the data sample set X_Norm is obtained and sent to the deep neural network processing module;
1-5) the deep neural network processing module divides X_Norm in set proportions into a training data set X_Train, a verification data set X_Validate and a test data set X_Test; the proportion of the training data set X_Train is not less than 75%, the proportion of the verification data set is not less than 5%, and the proportion of the test data set is not less than 5%;
2) constructing a deep neural network based on a time convolution network in a deep neural network processing module; the method comprises the following specific steps:
2-1) determining a deep neural network structure;
adopting a time convolution network to construct a deep neural network, wherein the deep neural network is divided into a transition time prediction network and a target time prediction network;
let times 0 < t_1 < t_2 < t_3 < t_4 < t_5. In the data sample set X_Norm, the data samples from time t_1 to time t_2 are selected as the input data x(t_1)…x(t_2) of the deep neural network, the data samples from time t_3 to time t_4 are created as the transition time sample labels y(t_3)…y(t_4), and the data sample at time t_5 is created as the target time sample label z(t_5);
The input data of the transition time prediction network are the data samples x(t_1)…x(t_2) from time t_1 to time t_2, and the output prediction data are the data sample predicted values ŷ(t_3)…ŷ(t_4) from time t_3 to time t_4; the target time prediction network takes all or part of x(t_1)…x(t_2), denoted x′(t_1)…x′(t_2), together with ŷ(t_3)…ŷ(t_4) as input, and outputs the predicted value ẑ(t_5) at time t_5;
Let t_2 = t_1 + 7T_sample, t_3 = t_2 + T_sample, t_4 = t_3 + T_sample, t_5 = t_4 + T_sample, where T_sample is the data sampling interval; that is, the transition time prediction network takes the data sequence x(t_1)…x(t_2) of 8 sampling moments as input and predicts and outputs the data ŷ(t_3), ŷ(t_4) of 2 sampling moments, and the target time prediction network takes the data sequence x′(t_1)…x′(t_2) of 8 sampling moments and the transition time prediction data ŷ(t_3), ŷ(t_4) of 2 sampling moments as input and predicts and outputs the data ẑ(t_5) of 1 sampling moment;
2-2) determining a loss function of the deep neural network;
the loss function L of the deep neural network is:

L = w_y L_y + w_z L_z

in the formula, L_y and L_z respectively represent the loss functions of the transition time prediction network and the target time prediction network, ŷ and y respectively represent the predicted value and label value output by the transition time prediction network, ẑ and z respectively represent the predicted value and label value output by the target time prediction network, and w_y and w_z are the weight coefficients of L_y and L_z respectively; L_y and L_z each select either the L1 loss function or the L2 loss function:

L1 = (1/N_B) Σ_{j=1}^{N_B} |û_j − u_j|,  L2 = (1/N_B) Σ_{j=1}^{N_B} (û_j − u_j)²

in the formula, N_B represents the number of samples in batch processing and takes the value 32, 64, 128 or 256, û_j is the j-th predicted value output by the network, u_j is the corresponding label value, and j indexes the j-th output value of the network;
2-3) determining parameters and structural hyper-parameters of the deep neural network;
the parameters of the transition time prediction network comprise the convolutional layer weights W_yc and biases B_yc, and the fully connected layer weights W_yf and biases B_yf;
the parameters of the target time prediction network comprise the convolutional layer weights W_zc and biases B_zc, and the fully connected layer weights W_zf and biases B_zf;
The structural hyper-parameters of the deep neural network comprise the Block number, channel number, node number, convolution kernel length, dilation coefficient and Dropout coefficient;
the Block number is an integer in the range [5, 10], the channel number is an integer in the range [30, 200], the node number is an integer in the range [50, 500], the convolution kernel length is 3 or 5, the dilation coefficient is 1 or 2, and the Dropout coefficient is in the range [0, 1];
3) training the deep neural network constructed in the step 2) to obtain the trained deep neural network and corresponding optimal parameters; the method comprises the following specific steps:
3-1) training a deep neural network;
determining the training parameters of the deep neural network, comprising: the number of training rounds N_Epochs and the learning rate α, where one round trains all data samples of the training data set; the value range of N_Epochs is N_Epochs ≥ 100, and the value range of the learning rate α is (0, 1];
the parameters W_yc, B_yc, W_yf, B_yf, W_zc, B_zc, W_zf, B_zf of the deep neural network are initialized by a random method; the training data set X_Train is used for training the deep neural network parameters, and the standard stochastic gradient descent method is adopted to update the parameters W_yc, B_yc, W_yf, B_yf, W_zc, B_zc, W_zf, B_zf; every N_V training rounds, the verification data set X_Validate is used to verify the deep neural network once, and the network parameters with the minimum error on the verification data set X_Validate are automatically saved as the current network parameters;
if the verification data set error no longer decreases or the number of training rounds reaches the specified number N_Epochs, the training ends and step 3-2) is entered;
3-2) the test data set X_Test is used for testing the trained deep neural network and evaluating the optimal deep neural network parameters;
the evaluation criterion is the average error value p, whose calculation expression is:

p = (1/N_Test) Σ_{i=1}^{N_Test} |ẑ_i − z_i| / |z_i|

in the formula, N_Test is the number of samples in the test data set, and ẑ_i and z_i respectively represent the i-th predicted value and label value output by the target time prediction network;
if the evaluated mean error value p < 3%, the evaluation is complete: the current network parameters are saved as the optimal deep neural network parameters W_yc*, B_yc*, W_yf*, B_yf*, W_zc*, B_zc*, W_zf*, B_zf*, and the method proceeds to step 4); if the evaluated mean error value p ≥ 3%, return to step 3-1) and retrain the deep neural network;
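The evaluation of step 3-2) reduces to a mean relative error between predictions and labels. A sketch (the exact expression was lost in extraction and is reconstructed from the surrounding text, so treat the formula as an assumption; the function returns a fraction, compared against 0.03 for the 3% threshold):

```python
def mean_relative_error(z_hat, z):
    """Mean error p: average of |z_hat_i - z_i| / |z_i| over the test set."""
    assert len(z_hat) == len(z) and len(z) > 0
    return sum(abs(p - t) / abs(t) for p, t in zip(z_hat, z)) / len(z)
```

For example, predictions [1.02, 0.99] against labels [1.0, 1.0] give p = 1.5%, which passes the p < 3% criterion.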
4) predicting human gait by using the trained deep neural network; the method comprises the following specific steps:
4-1) selecting a new tester, and repeating the step 1-1), so that the tester wears the inertial sensor module, the pressure sensor module and the sound sensor module respectively;
4-2) randomly select 1 walking environment from the 5 walking environments of step 1-2) and 1 human gait behavior from the 5 human gait behaviors of step 1-2), with the constraint that ascending and descending stairs are performed only in the tile-floor walking environment, and ascending and descending slopes only in the asphalt-ground walking environment; repeat step 1-3): collect in real time the raw data samples of the tester wearing the three sensor modules under this environment-gait combination and send them to the corresponding data collection and preprocessing modules; all data from one sampling pass are arranged to form 1 raw data sample of dimension 1 × 98, the k-th dimension of which is the k-th raw datum, k = 1, 2, ..., 98;
4-3) repeat step 1-4) to preprocess the raw data sample; the preprocessed data sample is obtained and sent to the deep neural network processing module;
4-4) in the deep neural network processing module, the preprocessed data samples corresponding to the 7 preceding sampling instants and the current preprocessed data sample together form the new input data, which are fed into the deep neural network trained in step 3); the network outputs the tester's gait prediction at the target instant in real time, the k-th dimension of the gait prediction result data being the k-th predicted value, k = 1, 2, ....
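The window assembly of step 4-4) (the current sample plus the 7 preceding sampling instants) can be sketched with a bounded buffer. This is an illustrative sketch, not the patent's code; the class and method names, and the sample width of 98 from step 4-2), are stated assumptions.

```python
from collections import deque

class GaitWindow:
    """Keep the last `window` preprocessed samples as the network input."""

    def __init__(self, window=8, dim=98):
        self.dim = dim
        self.buf = deque(maxlen=window)    # oldest sample drops automatically

    def push(self, sample):
        """Add one preprocessed sample; return True once the window is full."""
        assert len(sample) == self.dim
        self.buf.append(sample)
        return self.ready()

    def ready(self):
        return len(self.buf) == self.buf.maxlen

    def as_input(self):
        """window x dim matrix to feed to the trained network."""
        return [list(s) for s in self.buf]
```

Once 8 samples have been pushed, `as_input()` yields the 8 × 98 input; each subsequent `push` slides the window forward by one sampling instant.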
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910464986.0A CN110236550B (en) | 2019-05-30 | 2019-05-30 | Human gait prediction device based on multi-mode deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110236550A CN110236550A (en) | 2019-09-17 |
CN110236550B true CN110236550B (en) | 2020-07-10 |
Family
ID=67885473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910464986.0A Active CN110236550B (en) | 2019-05-30 | 2019-05-30 | Human gait prediction device based on multi-mode deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110236550B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110893100A (en) * | 2019-12-16 | 2020-03-20 | 广东轻工职业技术学院 | Device and method for monitoring posture change based on plantar pressure sensor |
CN111590544A (en) * | 2020-04-10 | 2020-08-28 | 南方科技大学 | Method and device for determining output force of exoskeleton |
CN111820530B (en) * | 2020-07-23 | 2021-07-27 | 东莞市喜宝体育用品科技有限公司 | Shoes with bradyseism braced system |
CN113576467A (en) * | 2021-08-05 | 2021-11-02 | 天津大学 | Wearable real-time gait detection system integrating plantar pressure sensor and IMU |
CN113658707A (en) * | 2021-08-26 | 2021-11-16 | 华南理工大学 | Foot varus angle detection modeling method and system |
CN114343617A (en) * | 2021-12-10 | 2022-04-15 | 中国科学院深圳先进技术研究院 | Patient gait real-time prediction method based on edge cloud cooperation |
CN114176577A (en) * | 2021-12-30 | 2022-03-15 | 北京航空航天大学 | Method and device for detecting motor nerve diseases and readable storage medium |
CN115227238A (en) * | 2022-08-04 | 2022-10-25 | 河北工业大学 | Gait recognition system based on wearable strain sensor and construction method thereof |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2556795A1 (en) * | 2011-08-09 | 2013-02-13 | Nederlandse Organisatie voor toegepast -natuurwetenschappelijk onderzoek TNO | Method and system for feedback on running style |
CN103976739B (en) * | 2014-05-04 | 2019-06-04 | 宁波麦思电子科技有限公司 | Wearable fall dynamic real-time detection method and device |
CN106175778B (en) * | 2016-07-04 | 2019-02-01 | 中国科学院计算技术研究所 | Method for establishing a gait data set and gait analysis method |
CN106344031A (en) * | 2016-08-29 | 2017-01-25 | 常州市钱璟康复股份有限公司 | Sound feedback-based gait training and estimating system |
CN109431510A (en) * | 2018-11-08 | 2019-03-08 | 华东师范大学 | Flexible gait monitoring device based on artificial intelligence computing |
CN109784412A (en) * | 2019-01-23 | 2019-05-21 | 复旦大学 | The multiple sensor signals fusion method based on deep learning for gait classification |
- 2019-05-30 CN CN201910464986.0A patent/CN110236550B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110236550A (en) | 2019-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110236550B (en) | Human gait prediction device based on multi-mode deep learning | |
CN110232412B (en) | Human gait prediction method based on multi-mode deep learning | |
CN110334573B (en) | Human motion state discrimination method based on dense connection convolutional neural network | |
CN106156524A | Online gait planning system and method for an intelligent lower-limb power-assist device | |
CN110659677A (en) | Human body falling detection method based on movable sensor combination equipment | |
CN104656112B | Personal positioning method and device based on combined surface electromyogram signals and MEMS inertial measurement | |
CN106874874A (en) | Motion state identification method and device | |
CN108958482B (en) | Similarity action recognition device and method based on convolutional neural network | |
CN110755085B (en) | Motion function evaluation method and equipment based on joint mobility and motion coordination | |
CN109846487A (en) | Thigh measuring method for athletic posture and device based on MIMU/sEMG fusion | |
CN110193830B (en) | Ankle joint gait prediction method based on RBF neural network | |
CN114495267A (en) | Old people falling risk assessment method based on multi-dimensional data fusion | |
Wang et al. | A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu | |
Wang et al. | Inertial odometry using hybrid neural network with temporal attention for pedestrian localization | |
Tao et al. | Attention-based sensor fusion for human activity recognition using imu signals | |
CN110705599B (en) | Human body action recognition method based on online transfer learning | |
CN116597940A (en) | Modeling method of movement disorder symptom quantitative evaluation model | |
CN111419237A (en) | Cerebral apoplexy hand motion function Carroll score prediction method | |
Yang et al. | Inertial sensing for lateral walking gait detection and application in lateral resistance exoskeleton | |
CN113229806A (en) | Wearable human body gait detection and navigation system and operation method thereof | |
Qian et al. | A Pedestrian Navigation Method Based on Construction of Adapted Virtual Inertial Measurement Unit Assisted by Gait Type Classification | |
CN116502066A (en) | Exoskeleton swing period prediction system and method based on BP neural network | |
CN115904086A (en) | Sign language identification method based on wearable calculation | |
CN116206358A (en) | Lower limb exoskeleton movement mode prediction method and system based on VIO system | |
CN115615432A (en) | Indoor pedestrian inertial navigation method based on deep neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||