CN114983447B - Human action recognition, analysis and storage wearable device based on AI technology - Google Patents

Human action recognition, analysis and storage wearable device based on AI technology

Info

Publication number
CN114983447B
CN114983447B
Authority
CN
China
Prior art keywords
layer
signal
input signal
fusion
deep learning
Prior art date
Legal status
Active
Application number
CN202210913498.5A
Other languages
Chinese (zh)
Other versions
CN114983447A (en)
Inventor
许笑傲
王维
李宇佳
Current Assignee
Guangdong Ocean University
Original Assignee
Guangdong Ocean University
Priority date
Filing date
Publication date
Application filed by Guangdong Ocean University
Priority to CN202210913498.5A
Publication of CN114983447A
Application granted
Publication of CN114983447B

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/389 Electromyography [EMG]
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02438 Detecting, measuring or recording pulse rate or heart rate with portable devices, e.g. worn by the patient
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14542 Measuring characteristics of blood in vivo for measuring blood gases
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements specially adapted to be attached to or worn on the body surface
    • A61B5/6802 Sensor mounted on worn items
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Evolutionary Computation (AREA)
  • Cardiology (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a wearable device for human action recognition, analysis and storage based on AI technology. A wearable device acquires biological reference signals, namely an electromyographic signal, a blood dissolved oxygen signal and a heart rate signal, and an action prediction depth model predicts actions from these signals. Through this intelligent model, actions can be pre-judged during daily monitoring, and management or early-warning information can be provided to a monitoring platform or to the wearer. The invention offers a novel recognition target, accurate prediction and recognition, reduced monitoring risk and high learning speed.

Description

Human action recognition, analysis and storage wearable device based on AI technology
Technical Field
The invention relates to the technical field of wearable electronic devices, and in particular to a human action recognition, analysis and storage wearable device based on AI technology.
Background
Amid the rapid development of microelectronics and materials technology, a stream of novel, highly integrated microelectronic devices with diverse functions is continually being developed, showing unprecedented application prospects across many fields of daily life. A human-machine interface naturally extends the communication channel between a person and external equipment, linking human intention to machine behaviour and thereby realizing human-machine interaction. To date, the bioelectric signals applied to human-machine interfaces include neuronal signals as well as cortical, electroencephalographic, electromyographic and electrooculographic signals. Because of their low signal-to-noise ratio and poor stability, human-machine interfaces based on biopotential signals have developed slowly; developing wearable epidermal sensors that can quickly and effectively recognize human intent is therefore of great importance. The prior art, such as Chinese patent publication No. CN113607042B, discloses a wearable epidermal sensor for recognizing human intention, comprising an encapsulation layer, an electrode layer, a friction layer, an adhesion layer and an isolation layer arranged in sequence from top to bottom. The electrode layer comprises an inner circuit and electrodes; the inner circuit is dendritic, with a plurality of electrodes arranged at the ends of its branches; the encapsulation layer and the friction layer completely cover the electrode layer from both sides and match its shape; the adhesion layer is arranged at the lower end of the friction layer without covering the positions of the electrodes; and the isolation layer is embedded in the adhesion layer. Through the structural design of the patterned adhesion and isolation layers, that device attaches more effectively to the arm muscles and achieves a better contact-separation effect when the hand muscles deform, yielding clear signal output, better recognition of human intention, and improved wearability, comfort and practicality.
Disclosure of Invention
The invention aims to provide a human action recognition, analysis and storage method and a wearable device based on AI technology that offer a novel recognition target, accurate prediction and recognition, low monitoring risk and high learning speed.
A human action recognition and analysis method based on AI technology comprises:
acquiring biological reference signals with a wearable device,
the biological reference signals comprising an electromyographic signal, a blood dissolved oxygen signal and a heart rate signal;
and predicting actions from the biological reference signals with an action prediction depth model. Through this intelligent model, actions can be pre-judged during daily monitoring, and management or early-warning information can be provided to a monitoring platform or to the wearer.
To optimize the technical scheme, a further measure is adopted: the action prediction depth model trains its deep learning network on acquired biological reference signals paired with the corresponding actions as a training data set. Conventionally a single electromyographic or heart rate signal is monitored, and such monitoring conveys only a single meaning. By learning from several biological reference signals together, the action prediction depth model can predict high-risk actions and issue prompts in advance.
The electromyographic signal, the blood dissolved oxygen signal and the heart rate signal serve as the first, second and third input signals respectively; in general, the first input signal is any one of the biological reference signals, and the second and third input signals are the remaining two. Learning from and predicting over too many signals increases the computational burden, so one signal is used as the main signal and the other two are fused into an auxiliary signal, which reduces the learning complexity of the action prediction depth model and improves the efficiency of the whole system.
The recognition method comprises at least the following steps (sketched in code after this list):
1) inputting the first input signal training data and the fusion signal training data into the deep learning network for training, the fusion signal training data being obtained by processing the second and third input signal training data with the DFRWT fusion method;
2) inputting first input signal test data and fusion signal test data into the action prediction depth model obtained after training of the deep learning network, the fusion signal test data being obtained by processing the second and third input signal test data with the DFRWT fusion method;
3) outputting action prediction result data from the action prediction depth model.
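As a concrete illustration of steps 1) to 3), the following minimal sketch pairs the main signal with the fused auxiliary signal for training and prediction. All identifiers (dfrwt_fuse, the scikit-learn-style fit/predict interface) are illustrative assumptions rather than names from the patent, and the placeholder fusion is elaborated later in this document.

```python
import numpy as np

def dfrwt_fuse(sig_a: np.ndarray, sig_b: np.ndarray) -> np.ndarray:
    """Placeholder for the DFRWT fusion of the second and third input
    signals; a simple average stands in here (a wavelet-domain sketch
    appears later in this document)."""
    return (sig_a + sig_b) / 2.0

def train_and_predict(model, first_train, second_train, third_train,
                      action_labels, first_test, second_test, third_test):
    # Step 1: train the deep learning network on the first input signal
    # training data plus the fused (second + third) training data.
    fused_train = dfrwt_fuse(second_train, third_train)
    model.fit(np.stack([first_train, fused_train], axis=-1), action_labels)

    # Step 2: build the test inputs the same way and feed them to the
    # trained action prediction depth model.
    fused_test = dfrwt_fuse(second_test, third_test)

    # Step 3: the model outputs action prediction result data.
    return model.predict(np.stack([first_test, fused_test], axis=-1))
```

Stacking the main and auxiliary streams along the last axis matches the later statement that the signals enter the network as corresponding dimensions.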
The deep learning network comprises six layers: the first to fifth layers contain convolution kernel, local response normalization, max pooling and rectified linear unit operations; the sixth layer contains convolution kernel, global average pooling and convolution kernel operations. In the adopted architecture, the first to fifth layers are convolutional layers and the sixth layer plays the role of the fully connected layer. Seven-layer models, i.e. five convolutional layers (the first to fifth) and two fully connected layers (the sixth and seventh), are commonly used in the art, with the output produced after a final Softmax layer. The present scheme removes one layer by adding local response normalization (LRN) and a max pooling layer (MXP) to the third and fourth layers and adding local response normalization (LRN) to the fifth layer. Once acquired, the first input signal training data and the fusion signal training data are convolved continuously from the first layer to the sixth layer for learning. The input and output of each convolutional layer are designed so that the convolution kernel size is 3*1 in the first to fifth layers, while the sixth layer uses 3*1 and 1*1 kernels; the number of kernels in layer n is 2^(4+n). In the prior-art AlexNet, the LRN layer realizes local inhibition and improves the generalization ability of the model; the max pooling layer performs feature fusion and dimension reduction; the fully connected (FC) layer contains a large number of neural network nodes and is responsible for logical inference; and a Dropout layer is inserted into the FC stage to speed up operation and prevent overfitting. The single sixth layer of this scheme thus replaces the two FC layers required in the prior art.
The invention also discloses a computer device comprising one or more processors, a memory, and one or more computer programs stored in the memory, the programs comprising instructions which, when executed by the device, cause the device to perform the method described above.
A computer storage medium storing one or more computer programs which, when executed, perform the method described above.
A wearable device, characterized in that the computer storage medium described above is installed in it.
The invention acquires biological reference signals with a wearable device, the signals comprising an electromyographic signal, a blood dissolved oxygen signal and a heart rate signal, and an action prediction depth model predicts actions from them. Through this intelligent model, actions can be pre-judged during daily monitoring and management or early-warning information provided to a monitoring platform or to the wearer; using one signal as the main signal and the fusion of the other two as the auxiliary signal reduces the learning complexity of the action prediction depth model and improves the efficiency of the whole system. The invention therefore offers a novel recognition target, accurate prediction and recognition, reduced monitoring risk and high learning speed.
Drawings
FIG. 1 is a schematic diagram of a method according to an embodiment of the present invention;
FIG. 2 is a schematic comparison of model architectures according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating learning convergence according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below in connection with the following examples.
Example 1:
A human action recognition and analysis method based on AI technology comprises:
acquiring biological reference signals with a wearable device,
the biological reference signals comprising an electromyographic signal, a blood dissolved oxygen signal and a heart rate signal;
and predicting actions from the biological reference signals with an action prediction depth model. Through this intelligent model, actions can be pre-judged during daily monitoring, and management or early-warning information can be provided to a monitoring platform or to the wearer.
To optimize the technical scheme, a further measure is adopted: the action prediction depth model trains its deep learning network on acquired biological reference signals paired with the corresponding actions as a training data set. Conventionally a single electromyographic or heart rate signal is monitored, and such monitoring conveys only a single meaning. By learning from several biological reference signals together, the action prediction depth model can predict high-risk actions and issue prompts in advance.
The electromyographic signal, the blood dissolved oxygen signal and the heart rate signal serve as the first, second and third input signals respectively; in general, the first input signal is any one of the biological reference signals, and the second and third input signals are the remaining two. Learning from and predicting over too many signals increases the computational burden, so one signal is used as the main signal and the other two are fused into an auxiliary signal, which reduces the learning complexity of the action prediction depth model and improves the efficiency of the whole system.
The recognition method comprises at least the following steps:
1) inputting the first input signal training data and the fusion signal training data into the deep learning network for training, the fusion signal training data being obtained by processing the second and third input signal training data with the DFRWT fusion method;
2) inputting first input signal test data and fusion signal test data into the action prediction depth model obtained after training of the deep learning network, the fusion signal test data being obtained by processing the second and third input signal test data with the DFRWT fusion method;
3) outputting action prediction result data from the action prediction depth model.
The deep learning network comprises six layers: the first to fifth layers contain convolution kernel, local response normalization, max pooling and rectified linear unit operations; the sixth layer contains convolution kernel, global average pooling and convolution kernel operations. In the adopted architecture, the first to fifth layers are convolutional layers and the sixth layer plays the role of the fully connected layer. Seven-layer models, i.e. five convolutional layers (the first to fifth) and two fully connected layers (the sixth and seventh), are commonly used in the art, with the output produced after a final Softmax layer. The present scheme removes one layer by adding local response normalization (LRN) and a max pooling layer (MXP) to the third and fourth layers and adding local response normalization (LRN) to the fifth layer. Once acquired, the first input signal training data and the fusion signal training data are convolved continuously from the first layer to the sixth layer for learning. The input and output of each convolutional layer are designed so that the convolution kernel size is 3*1 in the first to fifth layers, while the sixth layer uses 3*1 and 1*1 kernels; the number of kernels in layer n is 2^(4+n). In the prior-art AlexNet, the LRN layer realizes local inhibition and improves the generalization ability of the model; the max pooling layer performs feature fusion and dimension reduction; the fully connected (FC) layer contains a large number of neural network nodes and is responsible for logical inference; and a Dropout layer is inserted into the FC stage to speed up operation and prevent overfitting. The single sixth layer of this scheme thus replaces the two FC layers required in the prior art.
The invention also discloses a computer device comprising one or more processors, a memory, and one or more computer programs stored in the memory, the programs comprising instructions which, when executed by the device, cause the device to perform the method described above.
A computer storage medium storing one or more computer programs which, when executed, perform the method described above.
A wearable device, characterized in that the computer storage medium described above is installed in it.
Modifying the classical AlexNet network improves its sensitivity when classifying large volumes of near- and long-term image signals. The technical scheme needs to focus more on deep, abstract data-signal features so that detailed information about action intention is fully captured and the network gains the ability to classify and pre-judge. The classical AlexNet model requires input images of size 227*227*3, whereas the generated wavelet time-frequency diagram is 200*400*1; since larger images require more training parameters, the original AlexNet model must be improved. Aimed at the characteristics of the premature-beat time-frequency diagram, the original AlexNet model is improved in the dimensions of the input image and the size and number of convolution kernels, among other aspects, and the improved architecture is one layer shallower, comprising six convolutional layers and one fully connected layer. The abbreviations relevant to the present invention are: convolution kernel (Conv.), local response normalization (LRN), max pooling layer (MXP), rectified linear unit (ReLU) excitation function, fully connected (FC) layer, and global average pooling (GAP).
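The text above fixes the model input as a 200*400*1 wavelet time-frequency diagram but does not spell out how it is produced. The following is a sketch under common assumptions (a continuous wavelet transform with the Morlet mother wavelet, magnitude image, simple resampling); every parameter choice here is ours, not the patent's.

```python
import numpy as np
import pywt

def wavelet_time_frequency_image(signal: np.ndarray,
                                 n_scales: int = 200,
                                 n_times: int = 400) -> np.ndarray:
    """Turn a 1-D biosignal into a 200*400*1 wavelet time-frequency image.

    The Morlet CWT is a common choice for such diagrams; the patent does
    not name the exact transform used.
    """
    # Resample the signal to the 400 time steps of the target image.
    xs = np.linspace(0, len(signal) - 1, n_times)
    resampled = np.interp(xs, np.arange(len(signal)), signal)
    # One row per scale -> (200, 400) coefficient magnitudes.
    coeffs, _ = pywt.cwt(resampled, scales=np.arange(1, n_scales + 1),
                         wavelet="morl")
    return np.abs(coeffs)[..., np.newaxis]   # shape (200, 400, 1)
```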
To extract finer features from the data, the convolutional structures of the third to fifth layers in the prior art are changed to the structure of the first layer, and a two-dimensional convolution module combined with global average pooling replaces the sixth and seventh fully connected layers of the prior-art model, extracting features once more from the high-order features already produced by the convolutional layers, so that the model can learn the differences between premature atrial beat signals and normal signals under different conditions. Meanwhile, the convolution kernel size of every convolutional layer (the first to sixth layers) is reduced to 3*1, and the number of convolution kernels is increased to extract deeper features.
The layer-by-layer data sizes adopted in this technical scheme are as follows (height*width*channels; a code reconstruction follows the table):

Layer | Input | Conv. output | ReLU output | Pooling output | LRN output
1 | 200*400*1 | 198*398*32 | 198*398*32 | 97*197*32 | 97*197*32
2 | 97*197*32 | 97*197*64 | 97*197*64 | 47*97*64 | 47*97*64
3 | 47*97*64 | 47*97*128 | 47*97*128 | 22*47*128 | 22*47*128
4 | 22*47*128 | 22*47*256 | 22*47*256 | 9*22*256 | 9*22*256
5 | 9*22*256 | 9*22*512 | 9*22*512 | 3*9*512 | 3*9*512
6 | 3*9*512 | 1*7*1024 | (none) | 1*1*1024 (GAP) | (none)
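The table fixes every intermediate size but not the pooling hyper-parameters. A 3*3 convolution (no padding in layers one and six, padding 1 in layers two to five) with max pooling of kernel 5 and stride 2 reproduces the published sizes exactly, so the sketch below uses those assumed values; it is a plausible reconstruction, not the patent's verified implementation, and the class name and softmax head are our additions.

```python
import torch
import torch.nn as nn

class SixLayerNet(nn.Module):
    """Reconstruction of the six-layer network described above.

    Layers 1-5: Conv -> ReLU -> MaxPool -> LRN (the order implied by the
    size table); layer 6: Conv -> GAP -> 1*1 Conv, replacing the FC layers.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        layers = []
        in_ch = 1
        for n in range(1, 6):                       # layers 1..5
            out_ch = 2 ** (4 + n)                   # 32, 64, 128, 256, 512 kernels
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=0 if n == 1 else 1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=5, stride=2),  # assumed pool params
                nn.LocalResponseNorm(size=5),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # Layer 6: conv -> global average pooling -> 1*1 conv (no FC layers).
        self.conv6 = nn.Conv2d(512, 1024, kernel_size=3)    # 3*9*512 -> 1*7*1024
        self.gap = nn.AdaptiveAvgPool2d(1)                  # -> 1*1*1024
        self.classifier = nn.Conv2d(1024, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # (N, 512, 3, 9), matching the table
        x = self.gap(self.conv6(x))   # (N, 1024, 1, 1)
        x = self.classifier(x)        # (N, num_classes, 1, 1)
        return torch.softmax(x.flatten(1), dim=1)

# Shape check against the table: a 200*400*1 input yields (1, num_classes).
probs = SixLayerNet()(torch.randn(1, 1, 200, 400))
```

Every intermediate shape of this sketch matches the published table, including the channel counts 2^(4+n) for n = 1..6.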
In the prior art, local response normalization and other normalization schemes have been replaced by instance normalization, and this scheme adopts the same approach. On one hand, convolution over time-series signals consumes a large amount of memory; by normalizing each sample individually during model training, this scheme avoids the biased estimates of the mean (Equation 1) and standard deviation (Equation 2) of the complete data that an insufficient batch size would otherwise cause. On the other hand, instance normalization keeps the distributions of different channels independent during normalization.
\mu_{nc} = \frac{1}{T} \sum_{t=1}^{T} x_{ntc} \qquad (1)

\sigma_{nc} = \sqrt{ \frac{1}{T} \sum_{t=1}^{T} \left( x_{ntc} - \mu_{nc} \right)^{2} } \qquad (2)
For input time-series data x ∈ R^(N×T×C), instance normalization computes the mean μ_nc over the time dimension T of each sample (Equation 1) and the standard deviation σ_nc (Equation 2), while the batch dimension N and the channel dimension C are preserved; that is, the mean and standard deviation are solved only within each channel, with t indexing time. The simplified form of Equation 2 removes the batch-size problem while reducing the data volume of the operation and saving device power, without noticeably affecting the screening of example samples. Compared with the learning behaviour of the existing AlexNet network, this technical scheme converges rapidly and achieves high accuracy. A sketch follows.
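The following sketch is one direct reading of Equations 1 and 2 for x ∈ R^(N×T×C); the function name and the small eps constant are our additions for numerical stability, not part of the patent's formulas.

```python
import torch

def instance_normalize(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Instance normalization for time-series data x of shape (N, T, C).

    The mean mu_nc (Eq. 1) and standard deviation sigma_nc (Eq. 2) are
    computed over the time dimension T only, so every sample n and
    channel c keeps its own statistics and the batch and channel
    dimensions stay independent.
    """
    mu = x.mean(dim=1, keepdim=True)                  # mu_nc, shape (N, 1, C)
    var = x.var(dim=1, keepdim=True, unbiased=False)  # biased variance, as in Eq. 2
    return (x - mu) / torch.sqrt(var + eps)           # eps for numerical stability

# Example: a batch of 8 windows, 400 time steps, 3 biosignal channels.
z = instance_normalize(torch.randn(8, 400, 3))
```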
The prior art generally targets a single electrocardiographic signal and performs a dimension conversion on the one-dimensional data. The technical scheme of the invention instead feeds the electromyographic signal, the blood dissolved oxygen signal and the heart rate signal into the neural network model as corresponding dimensions; the invention therefore needs no two-dimensional conversion algorithm for electrocardiographic signals, saving computing power and energy.
Further, two of the three signals may be fused while the remaining signal merely undergoes simple filtering and smoothing. For example, the raw heart rate signal is subjected in turn to band-pass filtering, median filtering and then Gaussian smoothing, while the electromyographic signal and the blood dissolved oxygen signal are fused. The signal fusion adopts DFRWT fusion: a p-value selection step is added on top of the multi-resolution analysis framework of the traditional DWT fusion method; the registered source images are decomposed into different fractional domains to obtain sub-band coefficients at different p orders; fusion coefficients are obtained from the image features using corresponding fusion rules; and an inverse DFRWT of the fused coefficients yields a composite image that retains the salient features of the source images. Because the fusion effect differs under different p values, the DFRWT method is flexible and widens the space for selecting a fusion result. The input to the DFRWT fusion may be the signals themselves or normalized and standardized data-image signals; normalization and standardization belong to the prior art and, for reasons of space, are not repeated here. A sketch of this preprocessing and fusion chain follows.
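The sketch below covers the heart-rate preprocessing chain and a wavelet-domain fusion in the spirit of the DFRWT method. PyWavelets provides only the ordinary (integer-order) DWT, so the fractional p-order decomposition is approximated here by a standard multi-resolution DWT with a max-absolute fusion rule; the filter cut-offs, kernel sizes and the fusion rule are all our assumptions, not values from the patent.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter1d
from scipy.signal import butter, filtfilt, medfilt

def preprocess_heart_rate(sig: np.ndarray, fs: float) -> np.ndarray:
    """Band-pass filtering -> median filtering -> Gaussian smoothing,
    in the order stated above. The 0.5-4 Hz band, kernel size and sigma
    are illustrative values only."""
    b, a = butter(4, [0.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    sig = filtfilt(b, a, sig)                 # band-pass filtering
    sig = medfilt(sig, kernel_size=5)         # median filtering
    return gaussian_filter1d(sig, sigma=2.0)  # Gaussian smoothing

def wavelet_fuse(sig_a: np.ndarray, sig_b: np.ndarray,
                 wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """DWT stand-in for DFRWT fusion of two registered signals: decompose
    both sources, keep the larger-magnitude coefficient in every sub-band
    (max-absolute rule), then inverse-transform the fused coefficients."""
    coeffs_a = pywt.wavedec(sig_a, wavelet, level=level)
    coeffs_b = pywt.wavedec(sig_b, wavelet, level=level)
    fused = [np.where(np.abs(ca) >= np.abs(cb), ca, cb)
             for ca, cb in zip(coeffs_a, coeffs_b)]
    return pywt.waverec(fused, wavelet)
```

A genuine DFRWT implementation would additionally sweep the fractional order p and select the order giving the best fusion result, which is the flexibility the paragraph above describes.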
While the invention has been described in connection with the preferred embodiments, it is not intended to be limited thereto; those skilled in the art will understand that various changes, substitutions and alterations may be made to the subject matter set forth herein without departing from the spirit and scope of the invention, whose scope is defined by the appended claims.

Claims (3)

1. A human action recognition and analysis method based on AI technology, characterized in that:
biological reference signals are acquired with a wearable device,
the biological reference signals comprise an electromyographic signal, a blood dissolved oxygen signal and a heart rate signal;
an action prediction depth model predicts actions according to the biological reference signals;
the electromyographic signal, the blood dissolved oxygen signal and the heart rate signal are used as a first input signal, a second input signal and a third input signal respectively;
the action prediction depth model adopts a deep learning network, which is trained on the acquired biological reference signals and the corresponding actions as a training data set;
the deep learning network comprises six layers, wherein the first to fifth layers comprise convolution kernel, local response normalization, max pooling and rectified linear unit operations, and the sixth layer comprises convolution kernel, global average pooling and convolution kernel operations; the deep learning network turns a seven-layer deep learning network into a six-layer one by adding local response normalization and a max pooling layer to the third and fourth layers and adding local response normalization to the fifth layer;
first input signal training data and fusion signal training data are input into the deep learning network for training, the fusion signal training data being obtained by processing second input signal training data and third input signal training data with a DFRWT fusion method;
after the first input signal training data and the fusion signal training data are acquired, the first to sixth layers of the deep learning network are convolved continuously to learn; the input and output of each convolutional layer are designed such that the convolution kernel size of the first to fifth layers is 3*1, the sixth layer uses 3*1 and 1*1 kernels, and the number of convolution kernels in the n-th layer is 2^(4+n), where n is the layer index;
first input signal test data and fusion signal test data are input into the action prediction depth model obtained after training of the deep learning network, the fusion signal test data being obtained by processing second input signal test data and third input signal test data with the DFRWT fusion method; the first input signal serves as the main signal, the second and third input signals are fused into an auxiliary signal, and inputting the main and auxiliary signals into the action prediction depth model reduces the complexity of the action prediction depth model;
and the action prediction depth model outputs action prediction result data according to the input first input signal test data and fusion signal test data.
2. A computer device, characterized by: including one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions that, when executed by the computer device, cause the computer device to perform the AI-technology-based human action recognition analysis method of claim 1.
3. A computer storage medium, characterized in that: the computer storage medium stores one or more computer programs which, when executed, perform the AI-technology-based human action recognition analysis method of claim 1.
CN202210913498.5A 2022-08-01 2022-08-01 Human action recognition, analysis and storage wearable device based on AI technology Active CN114983447B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210913498.5A | 2022-08-01 | 2022-08-01 | Human action recognition, analysis and storage wearable device based on AI technology

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210913498.5A | 2022-08-01 | 2022-08-01 | Human action recognition, analysis and storage wearable device based on AI technology

Publications (2)

Publication Number | Publication Date
CN114983447A | 2022-09-02
CN114983447B | 2023-06-20

Family

ID=83021528

Family Applications (1)

Application Number | Status | Priority Date | Filing Date | Title
CN202210913498.5A (CN114983447B) | Active | 2022-08-01 | 2022-08-01 | Human action recognition, analysis and storage wearable device based on AI technology

Country Status (1)

Country Link
CN (1) CN114983447B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115438705B * | 2022-11-09 | 2023-04-07 | Wuchang University of Technology | Human body action prediction method based on wearable equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP2329470B1 (en) * | 2008-08-28 | 2013-04-03 | Koninklijke Philips Electronics N.V. | Fall detection and/or prevention systems
CN106569607A * | 2016-11-08 | 2017-04-19 | Shanghai Jiao Tong University | Head action identifying system based on myoelectricity and motion sensor
US20180177436A1 * | 2016-12-22 | 2018-06-28 | Lumo BodyTech, Inc | System and method for remote monitoring for elderly fall prediction, detection, and prevention
CN109011508A * | 2018-07-30 | 2018-12-18 | Samsung Electronics (China) R&D Center | Intelligent coach system and method
CN111259699A * | 2018-12-02 | 2020-06-09 | 程昔恩 | Human body action recognition and prediction method and device
CN110675596B * | 2019-10-09 | 2023-10-27 | Taizhou Yijian Technology Co., Ltd. | Fall detection method applied to wearable terminal
CN114332166A * | 2021-12-31 | 2022-04-12 | Anhui University | Visible light infrared target tracking method and device based on modal competition cooperative network

Also Published As

Publication number | Publication date
CN114983447A | 2022-09-02


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant