WO2021184619A1 - Human body motion posture recognition and evaluation method and system - Google Patents
Human body motion posture recognition and evaluation method and system
- Publication number: WO2021184619A1 (PCT/CN2020/103074)
- Authority: WO (WIPO (PCT))
- Prior art keywords: human body, neural network, video image, features, body motion
- Prior art date
Classifications
- G06V 40/20 — Recognition of biometric, human-related or animal-related patterns in image or video data; Movements or behaviour, e.g. gesture recognition
- G06F 18/214 — Pattern recognition; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N 3/044 — Computing arrangements based on biological models; Neural networks; Recurrent networks, e.g. Hopfield networks
- G06N 3/045 — Computing arrangements based on biological models; Neural networks; Combinations of networks
Definitions
- the present invention relates to the technical field of gesture recognition, and more specifically to a method and system for identifying and evaluating human motion gestures.
- Existing gesture recognition methods fall into two main categories: human gesture recognition based on motion sensors, and human gesture recognition based on image analysis.
- Sensor-based recognition collects motion data from subjects who carry sensors, mainly accelerometers, magnetoresistive sensors, and gyroscopes; once the motion information is obtained, the subject's posture is recognized with an associated learning method. The recognition result depends heavily on the feature extraction method, i.e. the choice of sensors and classifiers, so the accuracy of gesture recognition is limited.
- Image-based methods extract features from images of the subject for analysis, typically contour features such as the aspect ratio of the silhouette, changes in shape complexity, and eccentricity, and then apply a classifier such as K-means or SVM to judge the pose category.
- These traditional methods find it difficult to achieve a good classification effect on large numbers of complex and similar samples.
- the present invention provides a human body motion posture recognition and evaluation method and system, which effectively solves the insufficient accuracy and slow speed of prior-art human motion posture recognition, and further solves the prior-art problem that human motion postures cannot be objectively evaluated.
- a method for recognizing and evaluating human motion gestures including the following steps:
- S01 Collect the video image test data set, and perform data processing on the data in the video image test data set;
- S02 Input the processed test data into the trained LSTM neural network model to recognize the human body motion posture, and output the recognition result;
- S03 Compare the output recognition result with standard motion data, and evaluate the standard degree of the recognized human motion posture according to the comparison result;
- the training process of the LSTM neural network model includes the following steps:
- S11 Obtain a video image sample data set;
- S12 Input the sample data from the video image sample data set into the LSTM neural network model, introduce constraints on the weights connecting joint points and neurons into the objective function of the model, classify the data of different frames and different joint points according to these weights, and complete the learning that assigns importance to different frames and different joint points based on content type;
- S13 Backpropagate the obtained classification result to update the weights, and execute S12 cyclically.
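The objective described in S12 and S13 can be sketched as a classification loss plus penalty terms on the joint (spatial) and frame (temporal) attention weights. This is an illustrative sketch, not the patent's exact formulation: the regularizer forms and the coefficients `lambda_s` and `lambda_t` are assumed hyperparameters.

```python
# Hedged sketch: cross-entropy plus constraints on spatial (joint) and
# temporal (frame) attention weights, in the spirit of S12. The exact
# regularizers and coefficients are assumptions, not from the patent.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def regularized_loss(logits, label, joint_scores, frame_scores,
                     lambda_s=0.01, lambda_t=0.01):
    """Cross-entropy plus attention regularizers.

    - The spatial term keeps the joint attention close to uniform on
      average, discouraging the network from ignoring most joints.
    - The temporal term penalizes large frame-attention magnitudes so
      a few frames cannot dominate.
    """
    probs = softmax(logits)
    ce = -math.log(probs[label])

    alpha = softmax(joint_scores)            # spatial attention over joints
    k = len(alpha)
    spatial_reg = sum((1.0 - a * k) ** 2 for a in alpha) / k

    beta = frame_scores                      # temporal attention per frame
    temporal_reg = sum(b * b for b in beta) / len(beta)

    return ce + lambda_s * spatial_reg + lambda_t * temporal_reg

loss = regularized_loss(
    logits=[2.0, 0.5, -1.0], label=0,
    joint_scores=[0.2, 0.1, 0.4, 0.3], frame_scores=[0.9, 1.1, 1.0])
```

During S13 this loss would be backpropagated to update both the classifier and the attention sub-network weights, and S12 repeated over the sample set.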
- the data processing content in S01 includes: time-domain segmentation and content type determination of the collected video image sample data set; and preprocessing the segmented video sequence to obtain RGB images and the optical flow of video frames.
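The time-domain segmentation step above can be sketched as splitting a video into fixed-length, possibly overlapping clips before per-frame preprocessing. The clip length and stride below are illustrative choices, not values given in the patent; the optical-flow computation itself would typically use a dense method (e.g. Farneback) on consecutive grayscale frames and is not reproduced here.

```python
# Minimal sketch of temporal segmentation: split a video (a list of
# frames) into fixed-length overlapping clips. clip_len and stride are
# hypothetical parameters for illustration.

def segment_video(frames, clip_len=16, stride=8):
    """Return a list of clips, each a list of `clip_len` frames.

    Short tail clips are padded by repeating the last frame so every
    clip has the same length (one common convention among several).
    """
    clips = []
    for start in range(0, max(len(frames) - 1, 1), stride):
        clip = frames[start:start + clip_len]
        if not clip:
            break
        while len(clip) < clip_len:        # pad short tail clips
            clip.append(clip[-1])
        clips.append(clip)
        if start + clip_len >= len(frames):
            break
    return clips

clips = segment_video(list(range(40)), clip_len=16, stride=8)
```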
- the specific content of S02 includes:
- step (1) Extract temporal stream features and spatial stream features, extract spatio-temporal information to form a fixed-length feature vector, extract the depth features of video frames, and fuse all extracted features with a spatio-temporal feature fusion strategy;
- step (2) According to the sequence content type, perform spatial attention calculation and temporal attention calculation on the fused feature vector to obtain spatial features and temporal features respectively;
- step (3) Merge the features obtained in step (1) and step (2) to obtain the classification result and complete the human body action recognition.
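The final merge-and-classify step can be sketched as follows. The patent does not specify the fusion operator, so this sketch assumes simple concatenation of the stream features with the attention features followed by a linear classifier; the weights are toy values for illustration only.

```python
# Hedged sketch of step (3): concatenate stream, spatial-attention and
# temporal-attention features, apply a linear layer, softmax the logits,
# and return the predicted action class. W and b are toy parameters.
import math

def fuse_and_classify(stream_feat, spatial_feat, temporal_feat, W, b):
    fused = stream_feat + spatial_feat + temporal_feat   # concatenation
    logits = [sum(w_i * x for w_i, x in zip(row, fused)) + b_j
              for row, b_j in zip(W, b)]
    m = max(logits)
    es = [math.exp(l - m) for l in logits]
    s = sum(es)
    probs = [e / s for e in es]
    return probs.index(max(probs)), probs

# toy 2-class classifier over a 6-dimensional fused vector
W = [[0.5, -0.2, 0.1, 0.0, 0.3, -0.1],
     [-0.4, 0.3, 0.0, 0.2, -0.1, 0.2]]
b = [0.0, 0.1]
pred, probs = fuse_and_classify([1.0, 0.5], [0.2, 0.1], [0.3, 0.4], W, b)
```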
- a human motion posture recognition and evaluation system including: image acquisition module, data processing module, LSTM neural network model, model training module, data center and posture evaluation module;
- the image acquisition module is used to acquire a video image test data set
- the data processing module is used to perform data processing on the collected video image test data set
- the LSTM neural network model is used to recognize the human body motion posture from the test data after data processing, and to output the recognition result;
- the model training module is used to train the LSTM neural network model
- the data center is used to store standard sports data
- the posture evaluation module is used to retrieve standard motion data in the data center, and compare the output recognition result with the standard motion data to obtain an evaluation result of the recognized standard degree of human motion posture.
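The posture evaluation module's comparison can be sketched as scoring a recognized pose sequence against the stored standard motion data. The per-joint distance metric, the tolerance, and the mapping to a 0–100 "standard degree" are assumptions for illustration; the patent only states that the recognition result is compared with standard data to produce an evaluation.

```python
# Hedged sketch of the evaluation: average per-joint Euclidean distance
# between recognized and standard poses, mapped to a 0-100 score. The
# formula and the tolerance value are hypothetical.
import math

def standard_degree(recognized, standard, tolerance=0.5):
    """recognized/standard: lists of frames; each frame is a list of
    (x, y) joint coordinates. Returns a score in [0, 100]."""
    assert len(recognized) == len(standard)
    total, count = 0.0, 0
    for rec_frame, std_frame in zip(recognized, standard):
        for (rx, ry), (sx, sy) in zip(rec_frame, std_frame):
            total += math.hypot(rx - sx, ry - sy)
            count += 1
    mean_dist = total / count
    return max(0.0, 100.0 * (1.0 - mean_dist / tolerance))

perfect = standard_degree([[(0, 0), (1, 1)]], [[(0, 0), (1, 1)]])
off = standard_degree([[(0.1, 0), (1, 1)]], [[(0, 0), (1, 1)]])
```

A real system would first align the two sequences in time (for example by resampling or dynamic time warping) before scoring.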
- the data processing module is specifically configured to perform time-domain segmentation and content type determination on the collected video image sample data set, and to preprocess the segmented video sequence to obtain RGB images and the optical flow of video frames.
- the LSTM neural network model includes an LSTM main network, a spatial attention sub-network, a temporal attention sub-network, and a feature fusion module;
- the LSTM main network is used to extract temporal flow features and spatial flow features, extract spatio-temporal information to form a fixed-length feature vector, and extract depth features of video frames, and at the same time use a spatiotemporal feature fusion strategy to fuse all the extracted features;
- the spatial attention sub-network is used to automatically learn to assign importance to different joint points for different content types, and to perform spatial attention calculation during recognition to obtain spatial features;
- the temporal attention sub-network is used to automatically learn to assign importance to different frames for different content types, and to perform temporal attention calculation during recognition to obtain temporal features;
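The temporal attention calculation described for this sub-network can be sketched as scoring each frame feature, normalizing the scores with a softmax, and pooling the clip by the resulting weights. The scalar scoring vector below is a stand-in for learned sub-network weights; it is an assumption for illustration.

```python
# Minimal sketch of temporal attention: per-frame scores from a
# (nominally learned) scoring vector, softmax-normalized into frame
# importances, then an attention-weighted sum of the frame features.
import math

def temporal_attention(frame_feats, score_vec):
    scores = [sum(w * x for w, x in zip(score_vec, f)) for f in frame_feats]
    m = max(scores)
    es = [math.exp(s - m) for s in scores]
    z = sum(es)
    weights = [e / z for e in es]            # importance of each frame
    dim = len(frame_feats[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, frame_feats))
              for d in range(dim)]
    return weights, pooled

frames = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]   # toy per-frame features
weights, pooled = temporal_attention(frames, score_vec=[1.0, 1.0])
```

The spatial attention sub-network would apply the same pattern over joint points within a frame instead of over frames.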
- the feature fusion module is used to control the feature fusion to obtain the final classification result.
- the model training module is specifically configured to obtain a video image sample data set, input its sample data into the LSTM neural network model, introduce constraints on the weights connecting joint points and neurons into the objective function of the model, and further control the spatial attention sub-network and the temporal attention sub-network to complete the learning of the importance distribution over different joint points and different frames.
- the present disclosure provides a human body motion gesture recognition and evaluation method and system.
- the LSTM model itself is a recurrent neural network that, by preserving time-series information, can capture long-term temporal and spatial dependencies, and it also effectively avoids the vanishing-gradient phenomenon.
- the present invention also adds a spatial attention mechanism and a temporal attention mechanism on top of the LSTM network, so that the method and system can not only capture long-term temporal information but also capture the complex spatio-temporal cues of human movements through the temporal and spatial attention mechanisms, which greatly improves the accuracy of human motion posture recognition.
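The long-term memory property claimed above comes from the LSTM cell state, which is updated additively through gates rather than repeatedly squashed. A single step can be sketched with scalar gates (real layers use vectors and learned weight matrices; the parameter values below are chosen only to demonstrate the effect):

```python
# One LSTM step, sketched to show why the cell state carries long-term
# information: c is updated additively (f * c_prev + i * g), so with the
# forget gate open the state survives many steps without decaying away.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """p holds scalar weights (input, hidden, bias) for each gate."""
    f = sigmoid(p['wf'] * x + p['uf'] * h_prev + p['bf'])   # forget gate
    i = sigmoid(p['wi'] * x + p['ui'] * h_prev + p['bi'])   # input gate
    g = math.tanh(p['wg'] * x + p['ug'] * h_prev + p['bg']) # candidate
    o = sigmoid(p['wo'] * x + p['uo'] * h_prev + p['bo'])   # output gate
    c = f * c_prev + i * g        # additive cell-state update
    h = o * math.tanh(c)
    return h, c

# With the forget gate saturated open and the input gate nearly shut,
# the cell state is preserved across many steps (long-term memory path).
p = {'wf': 0, 'uf': 0, 'bf': 10, 'wi': 0, 'ui': 0, 'bi': -10,
     'wg': 1, 'ug': 0, 'bg': 0, 'wo': 0, 'uo': 0, 'bo': 0}
h, c = 0.0, 1.0
for t in range(50):
    h, c = lstm_step(x=0.5, h_prev=h, c_prev=c, p=p)
```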
- the present invention can also compare the recognized human body posture with the standard posture to obtain an objective and accurate evaluation, which solves the prior-art problems of subjective action evaluation and difficulty in unifying standards.
- Fig. 1 is a schematic flowchart of a method for recognizing and evaluating human motion gestures provided by the present invention.
- Fig. 2 is a schematic diagram of the structure of a human body motion gesture recognition and evaluation system provided by the present invention.
- the embodiment of the present invention discloses a human body motion gesture recognition and evaluation method, as shown in FIG. 1, including the following steps:
- S01 Collect the video image test data set, and perform data processing on the data in the video image test data set;
- S02 Input the processed test data into the trained LSTM neural network model to recognize the human body motion posture, and output the recognition result;
- S03 Compare the output recognition result with standard motion data, and evaluate the standard degree of the recognized human motion posture according to the comparison result;
- the training process of the LSTM neural network model includes the following steps:
- S11 Obtain a video image sample data set;
- S12 Input the sample data from the video image sample data set into the LSTM neural network model, introduce constraints on the weights connecting joint points and neurons into the objective function of the model, classify the data of different frames and different joint points according to these weights, and complete the learning that assigns importance to different frames and different joint points based on content type;
- S13 Backpropagate the obtained classification result to update the weights, and execute S12 cyclically.
- the data processing content in S01 includes: time-domain segmentation and content type determination of the collected video image sample data set; and preprocessing the segmented video sequence to obtain RGB images and the optical flow of video frames.
- the specific content of S02 includes:
- step (1) Extract temporal stream features and spatial stream features, extract spatio-temporal information to form a fixed-length feature vector, extract the depth features of video frames, and fuse all extracted features with a spatio-temporal feature fusion strategy;
- step (2) According to the sequence content type, perform spatial attention calculation and temporal attention calculation on the fused feature vector to obtain spatial features and temporal features respectively;
- step (3) Merge the features obtained in step (1) and step (2) to obtain the classification result and complete the human body action recognition.
- a human body motion gesture recognition and evaluation system includes: an image acquisition module, a data processing module, an LSTM neural network model, a model training module, a data center, and a gesture evaluation module;
- the image acquisition module is used to acquire the video image test data set;
- the data processing module is used to perform data processing on the collected video image test data set;
- the LSTM neural network model is used to recognize the human body motion posture from the processed test data, and to output the recognition result;
- the model training module is used to train the LSTM neural network model;
- the data center is used to store standard motion data;
- the posture evaluation module is used to retrieve the standard motion data in the data center, and compare the output recognition result with the standard motion data to obtain an evaluation result of the standard degree of the recognized human motion posture.
- the data processing module is specifically used to perform time-domain segmentation and content type judgment on the collected video image sample data set, and to preprocess the segmented video sequence to obtain RGB images and the optical flow of video frames.
- the LSTM neural network model includes the LSTM main network, the spatial attention sub-network, the temporal attention sub-network and the feature fusion module;
- the LSTM main network is used to extract temporal flow features and spatial flow features, extract spatio-temporal information to form a fixed-length feature vector, extract the depth features of video frames, and at the same time use a spatio-temporal feature fusion strategy to fuse all the extracted features.
- the spatial attention sub-network is used to automatically learn to assign the importance of different joint points for different content types, and calculate the spatial attention during the recognition process to obtain the spatial characteristics;
- the time-domain attention sub-network is used to automatically learn to assign the importance of different frames for different content types, and perform time-domain attention calculations during the recognition process to obtain time-domain features;
- the feature fusion module is used to control the feature fusion to obtain the final classification result.
- the model training module is specifically used to obtain a video image sample data set, input its sample data into the LSTM neural network model, introduce constraints on the weights connecting joint points and neurons into the objective function of the model, and further control the spatial attention sub-network and the temporal attention sub-network to complete the learning of the importance distribution over different joint points and different frames.
Claims (7)
- 1. A human body motion posture recognition and evaluation method, characterized by comprising the following steps: S01: collect a video image test data set, and perform data processing on the data in the video image test data set; S02: input the processed test data into the trained LSTM neural network model to recognize the human body motion posture, and output the recognition result; S03: compare the output recognition result with standard motion data, and evaluate the standard degree of the recognized human motion posture according to the comparison result; wherein the training process of the LSTM neural network model comprises the following steps: S11: obtain a video image sample data set; S12: input the sample data from the video image sample data set into the LSTM neural network model, introduce constraints on the weights connecting joint points and neurons into the objective function of the neural network model, classify the data of different frames and different joint points according to the magnitude of the weights, and complete the learning that assigns importance to different frames and different joint points based on content type; S13: backpropagate the obtained classification result to update the weights, and execute S12 cyclically.
- 2. The human body motion posture recognition and evaluation method according to claim 1, characterized in that the data processing in S01 comprises: performing time-domain segmentation and content type judgment on the collected video image sample data set, and preprocessing the segmented video sequence to obtain RGB images and the optical flow of video frames.
- 3. The human body motion posture recognition and evaluation method according to claim 2, characterized in that the specific content of S02 comprises: (1) extract temporal stream features and spatial stream features, extract spatio-temporal information to form a fixed-length feature vector, extract the depth features of video frames, and fuse all extracted features with a spatio-temporal feature fusion strategy; (2) according to the sequence content type, perform spatial attention calculation and temporal attention calculation on the fused feature vector to obtain spatial features and temporal features respectively; (3) fuse the features obtained in steps (1) and (2) to obtain the classification result and complete the human body action recognition.
- 4. A human body motion posture recognition and evaluation system, characterized by comprising: an image acquisition module, a data processing module, an LSTM neural network model, a model training module, a data center and a posture evaluation module; the image acquisition module is used to collect a video image test data set; the data processing module is used to perform data processing on the collected video image test data set; the LSTM neural network model is used to recognize the human body motion posture from the processed test data and output the recognition result; the model training module is used to train the LSTM neural network model; the data center is used to store standard motion data; the posture evaluation module is used to retrieve the standard motion data from the data center and compare the output recognition result with the standard motion data to obtain an evaluation of the standard degree of the recognized human motion posture.
- 5. The human body motion posture recognition and evaluation system according to claim 4, characterized in that the data processing module is specifically used to perform time-domain segmentation and content type judgment on the collected video image sample data set, and to preprocess the segmented video sequence to obtain RGB images and the optical flow of video frames.
- 6. The human body motion posture recognition and evaluation system according to claim 4, characterized in that the LSTM neural network model comprises an LSTM main network, a spatial attention sub-network, a temporal attention sub-network and a feature fusion module; the LSTM main network is used to extract temporal stream features and spatial stream features, extract spatio-temporal information to form a fixed-length feature vector, extract the depth features of video frames, and fuse all extracted features with a spatio-temporal feature fusion strategy; the spatial attention sub-network is used to automatically learn to assign importance to different joint points for different content types, and to perform spatial attention calculation during recognition to obtain spatial features; the temporal attention sub-network is used to automatically learn to assign importance to different frames for different content types, and to perform temporal attention calculation during recognition to obtain temporal features; the feature fusion module is used to control the fusion of the features to obtain the final classification result.
- 7. The human body motion posture recognition and evaluation system according to claim 6, characterized in that the model training module is specifically used to obtain a video image sample data set, input the sample data into the LSTM neural network model, introduce constraints on the weights connecting joint points and neurons into the objective function of the neural network model, and further control the spatial attention sub-network and the temporal attention sub-network to complete the learning of the importance distribution over different joint points and different frames.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010196951.6 | 2020-03-19 | ||
CN202010196951.6A CN111401270A (zh) | 2020-03-19 | 2020-03-19 | Human body motion posture recognition and evaluation method and system
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021184619A1 true WO2021184619A1 (zh) | 2021-09-23 |
Family
ID=71432707
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/103074 WO2021184619A1 (zh) | 2020-03-19 | 2020-07-20 | Human body motion posture recognition and evaluation method and system
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111401270A (zh) |
WO (1) | WO2021184619A1 (zh) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113902995A (zh) * | 2021-11-10 | 2022-01-07 | University of Science and Technology of China | Multi-modal human behavior recognition method and related device |
CN114310954A (zh) * | 2021-12-31 | 2022-04-12 | Beijing Institute of Technology | Adaptive lifting control method and system for a nursing robot |
CN114863556A (zh) * | 2022-04-13 | 2022-08-05 | Shanghai University | Continuous action recognition method based on skeleton posture using multi-neural-network fusion |
CN114913594A (zh) * | 2022-03-28 | 2022-08-16 | Beijing Institute of Technology | FMS action classification method and system based on human joint points |
CN114926761A (zh) * | 2022-05-13 | 2022-08-19 | Inspur Zhuoshu Big Data Industry Development Co., Ltd. | Action recognition method based on a spatio-temporal smooth feature network |
CN115019233A (zh) * | 2022-06-15 | 2022-09-06 | Wuhan University of Technology | Method for discriminating mental retardation based on posture detection |
CN115068919A (zh) * | 2022-05-17 | 2022-09-20 | Taishan Sports Industry Group Co., Ltd. | Assessment method for the horizontal bar event and device for implementing it |
CN116458852A (zh) * | 2023-06-16 | 2023-07-21 | Shandong Xiehe University | Rehabilitation training system and method based on a cloud platform and a lower-limb rehabilitation robot |
CN117423166A (zh) * | 2023-12-14 | 2024-01-19 | Guangzhou Huaxia Huihai Technology Co., Ltd. | Action recognition method and system based on human posture image data |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401270A (zh) * | 2020-03-19 | 2020-07-10 | Nanjing Weiai Information Technology Co., Ltd. | Human body motion posture recognition and evaluation method and system |
CN111738218B (zh) * | 2020-07-27 | 2020-11-24 | Chengdu Ruiyan Technology Co., Ltd. | Human abnormal behavior recognition system and method |
CN112434608B (zh) * | 2020-11-24 | 2023-02-28 | Shandong University | Human behavior recognition method and system based on a combined two-stream network |
CN112686111B (zh) * | 2020-12-23 | 2021-07-27 | China University of Mining and Technology (Beijing) | Traffic police gesture recognition method based on an attention-mechanism multi-view adaptive network |
CN112843647A (zh) * | 2021-01-09 | 2021-05-28 | Jishou University | Stretching training control system and method for cheerleading |
CN113408349B (zh) * | 2021-05-17 | 2023-04-18 | Zhejiang Dahua Technology Co., Ltd. | Training method of an action evaluation model, action evaluation method and related device |
CN113255554B (zh) * | 2021-06-04 | 2022-05-27 | Fuzhou University | Method for recognizing the instantaneous firing action in shooting training and auxiliary evaluation of its standardness |
CN113239897B (zh) * | 2021-06-16 | 2023-08-18 | Shijiazhuang Tiedao University | Human action evaluation method based on combined regression of spatio-temporal features |
CN114067436B (zh) * | 2021-11-17 | 2024-03-05 | Shandong University | Fall detection method and system based on wearable sensors and video surveillance |
CN114119753A (zh) * | 2021-12-08 | 2022-03-01 | Beiwan Technology (Wuhan) Co., Ltd. | 6D pose estimation method for transparent objects for robotic-arm grasping |
CN114494962A (zh) * | 2022-01-24 | 2022-05-13 | Shanghai SenseTime Intelligent Technology Co., Ltd. | Object recognition method, network training method, apparatus, device and medium |
CN116958190A (zh) * | 2022-04-14 | 2023-10-27 | Huawei Technologies Co., Ltd. | Dynamic scene processing method, neural network model training method and apparatus |
CN117671784A (zh) * | 2023-12-04 | 2024-03-08 | Beijing Zhonghang Zhixin Construction Engineering Co., Ltd. | Human behavior analysis method and system based on video analysis |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107330362A (zh) * | 2017-05-25 | 2017-11-07 | Peking University | Video classification method based on spatio-temporal attention |
US20180082677A1 (en) * | 2016-09-16 | 2018-03-22 | Apptek, Inc. | Centered, left- and right-shifted deep neural networks and their combinations |
CN108764050A (zh) * | 2018-04-28 | 2018-11-06 | Institute of Automation, Chinese Academy of Sciences | Angle-independence-based skeleton behavior recognition method, system and device |
CN108846332A (zh) * | 2018-05-30 | 2018-11-20 | Southwest Jiaotong University | CLSTA-based railway driver behavior recognition method |
CN110197235A (zh) * | 2019-06-28 | 2019-09-03 | Zhejiang University City College | Human activity recognition method based on a distinctiveness attention mechanism |
CN110826453A (zh) * | 2019-10-30 | 2020-02-21 | Xi'an Polytechnic University | Behavior recognition method by extracting coordinates of human joint points |
CN111401270A (zh) * | 2020-03-19 | 2020-07-10 | Nanjing Weiai Information Technology Co., Ltd. | Human body motion posture recognition and evaluation method and system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190236416A1 (en) * | 2018-01-31 | 2019-08-01 | Microsoft Technology Licensing, Llc | Artificial intelligence system utilizing microphone array and fisheye camera |
CN108875708A (zh) * | 2018-07-18 | 2018-11-23 | Guangdong University of Technology | Video-based behavior analysis method, apparatus, device, system and storage medium |
CN110222665B (zh) * | 2019-06-14 | 2023-02-24 | University of Electronic Science and Technology of China | Human action recognition method in surveillance based on deep learning and pose estimation |
CN110472554B (zh) * | 2019-08-12 | 2022-08-30 | Nanjing University of Posts and Telecommunications | Table tennis action recognition method and system based on posture segmentation and key-point features |
2020
- 2020-03-19 CN CN202010196951.6A patent/CN111401270A active Pending
- 2020-07-20 WO PCT/CN2020/103074 patent/WO2021184619A1 active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111401270A (zh) | 2020-07-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20925792; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20925792; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.04.2023) |