CN114267086A - Execution quality evaluation method for complex continuous motion in motion - Google Patents

Execution quality evaluation method for complex continuous motion in motion

Info

Publication number
CN114267086A
CN114267086A
Authority
CN
China
Prior art keywords
human body
key points
action
motion
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111646483.9A
Other languages
Chinese (zh)
Inventor
罗昊
吴哲
陈安成
唐得君
彭博
张俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU YOUTU TECHNOLOGY CO LTD
Southwest Petroleum University
Original Assignee
CHENGDU YOUTU TECHNOLOGY CO LTD
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU YOUTU TECHNOLOGY CO LTD, Southwest Petroleum University filed Critical CHENGDU YOUTU TECHNOLOGY CO LTD
Priority to CN202111646483.9A
Publication of CN114267086A
Legal status: Pending

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method for evaluating the execution quality of complex continuous actions in motion, comprising the following steps. S1: extracting human body key points with a human posture recognition algorithm. S2: combining the obtained key points with the center of gravity of the human body, dividing the key points into four levels according to the flexibility of each joint, connecting corresponding key points level by level to form vectors, applying stabilizing filtering to the moduli of these vectors, and re-establishing the human body key points with a key point reconstruction formula. S3: constructing motion parameters such as angle, speed, and angular speed from the key points reconstructed in step S2 to represent the training action, and then fusing the parameters. S4: quantifying the action along four axes: whether it is executed better or worse than the standard (quality), and faster or slower than the standard (speed). The method keeps the extracted human body key points essentially stable, and its individually designed parameters can represent and quantify continuous, complex actions.

Description

Execution quality evaluation method for complex continuous motion in motion
Technical Field
The invention relates to the technical field of human body movement, in particular to an execution quality evaluation method for complex continuous actions in movement.
Background
Limb injuries have become increasingly frequent in the population. The traditional approach is surgical treatment, but doctors generally agree that surgery is only the beginning of functional reconstruction: recovering limb function requires scientific functional training or exercise rehabilitation under a doctor's guidance. In recent years, most exercise rehabilitation has been carried out with instruments or with manual assistance. Instruments mainly serve to standardize the trainee's actions and avoid secondary injury to the body. Manual assistance means the trainee completes the rehabilitation actions under personal guidance, with the training actions evaluated manually and subjectively.
With the improvement of living standards, more and more people take part in sports; however, incorrect posture can impair an athlete's performance and even injure the body, so regulating athletes' actions is important. In most existing sports, athletes improve their action quality under the guidance of a professional coach, and results are judged by referees. In daily life, dangerous actions such as prohibited behavior in specific places or elderly people falling occur frequently, and to prevent them most relevant departments currently rely on human supervision. The above examples share the following disadvantages: (1) high cost in manpower and material resources; (2) rehabilitation patients or athletes must complete training actions in specific places; (3) the evaluation of action results involves human subjectivity, which may introduce errors.
Disclosure of Invention
Addressing the insufficient quantification of actions in motion in the prior art, the invention aims to provide a method for evaluating the execution quality of complex continuous actions in motion.

The invention provides a method for evaluating the execution quality of complex continuous actions in motion, comprising the following steps:

S1: extract the key points of the human body and their position information using a human posture recognition algorithm.

S2: combine the obtained key points with the center of gravity of the human body, divide the key points into different levels according to the flexibility of each joint, connect corresponding key points level by level to form vectors, apply stabilizing filtering to the moduli of these vectors, and re-establish the human body key points with a key point reconstruction formula. The specific steps are:

S21: divide the human body into 13 rigid bodies and obtain the center of gravity of the human body:

$$X = \frac{\sum_{i=1}^{13} P_i X_i}{P}, \qquad Y = \frac{\sum_{i=1}^{13} P_i Y_i}{P}$$

where $P_i$ is the weight of the $i$-th rigid body, $X_i$ and $Y_i$ are the abscissa and ordinate of the center of gravity of the $i$-th rigid body, $P$ is the weight of the whole body, and $X$ and $Y$ are the abscissa and ordinate of the body's center of gravity, respectively.

S22: combine the center of gravity obtained in step S21 with the key points obtained in step S1, and divide the key points into different levels according to the flexibility of each joint of the human body.

S23: combining the graded key points with the structure of the human body, connect pairs of key points level by level to form vectors, and divide the vectors into different levels according to the flexibility of the human joints, each level having a corresponding filter coefficient.

S24: filter the moduli of the vectors of each level; an infinite impulse response (IIR) filter or a finite impulse response (FIR) filter can be used.

S25: re-establish each human body key point from the filtered vector lengths. The reconstruction formula is:

$$x(p'_i) = x(p'_{i-1}) + d'_x, \qquad y(p'_i) = y(p'_{i-1}) + d'_y$$

where $x(p'_i)$ and $y(p'_i)$ are the reconstructed abscissa and ordinate of the level-$i$ key point, $x(p'_{i-1})$ and $y(p'_{i-1})$ are those of the reconstructed level-$(i-1)$ key point, and $d'_x$ and $d'_y$ are the projections of the filtered vector onto the x- and y-axes.

S3: construct motion parameters such as angle, speed, and angular speed from the human body key points reconstructed in step S2 to represent the training action, then perform parameter fusion. This comprises two substeps:

S31: construct angle, speed, and angular speed parameters from the key point data obtained in step S2, each parameter corresponding to a curve that varies with time;

S32: fuse the parameter curves of S31 according to the characteristics of the action.

S4: quantify the action along four axes: whether it is executed better or worse than the standard (quality), and faster or slower than the standard (speed). Specifically:

S41: every action has a standard action curve; after high- and low-standard action curves are automatically generated from it, the quality of the action is judged by computing the distance between the user's curve and each standard curve;

S42: the speed of the action is judged from the positions of the points along the path produced by the similarity algorithm.
Compared with the prior art, the invention has the following advantages:

The method keeps the extracted human body key points essentially stable, and its individually designed parameters can represent and quantify continuous, complex actions. A trainee's movement can be evaluated smoothly and stably while the exercise is being performed. Training need not take place in a specific environment, and operation is simple; no human or mechanical assistance is required, which saves considerable cost; and the quantification of actions is accurate, free of interference from human subjectivity. This solves the problems of the prior art: restriction of the use environment; jitter of key points in human posture recognition; single, insufficiently customizable actions; and insufficient quantification of actions.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
Fig. 1, an example of human skeletal key point extraction.
Fig. 2, key point stabilization flow chart.
Fig. 3, key point filtering diagram.
Fig. 4, key point angle schematic.
Fig. 5, key point velocity diagram.
Fig. 6, key point velocity diagram.
FIG. 7, multi-standard curve diagram.
Fig. 8, multi-parameter scoring flow chart.
Fig. 9, action scoring flow chart.
Detailed Description
The preferred embodiments of the present invention are described below in conjunction with the accompanying drawings; it should be understood that they serve only to illustrate and explain the invention, not to limit it.
The invention relates to a method for evaluating the execution quality of complex continuous actions in motion, which comprises the following steps:
s1: extracting key points
There are many methods for identifying human body key points; the most representative is the open-source project OpenPose from Carnegie Mellon University, which locates and identifies both individual parts of the human body (such as the hands and face) and the whole body, and has good robustness. The following description takes the human body key points output by OpenPose as an example, as shown in Fig. 1. The execution quality evaluation method, however, is applicable to any human body keypoint recognition algorithm.
S2: key point stabilization
During recognition, body movement causes unstable jitter of the located points at flexible parts of the body, which degrades positioning and the accuracy of the subsequent quantification. The key points therefore need to be stabilized. As shown in Fig. 2, the stabilization workflow divides into four steps: finding the center of gravity of the human body, grading the human body key points, setting the filter coefficients, and calculating the key point coordinates by level-wise filtering. The four steps are as follows:

S21: because the center of gravity of the human body represents a person's position well, the center of gravity is found first:

$$X = \frac{\sum_{i=1}^{13} P_i X_i}{P}, \qquad Y = \frac{\sum_{i=1}^{13} P_i Y_i}{P}$$

The following takes one particular human keypoint algorithm as an example. From the 18 extracted human body key points (see Fig. 3), the body can be divided into 13 rigid bodies, where $P_i$ is the weight of the $i$-th rigid body, $X_i$ and $Y_i$ are the abscissa and ordinate of the center of gravity of the $i$-th rigid body, $P$ is the weight of the whole body, and $X$ and $Y$ are the abscissa and ordinate of the body's center of gravity, respectively.
S22: due to the different flexibility of the key points, the key points need to be classified, and the subsequent filtering processing is facilitated. The less flexible head does not participate in the stabilization process, i.e., the key points 0, 14, 15, 16, 17 do not participate in the stabilization process. The remaining keypoints are then divided into four levels, each level having a corresponding filter coefficient.
Zero-level key point: the center of gravity of the human body;
First-level key points: 1, 2, 5, 8, 11;
Second-level key points: 3, 6, 9, 12;
Third-level key points: 4, 7, 10, 13.
Level zero is defined as the lowest level and level three as the highest; the lower a key point's level, the lower its flexibility.
In order to satisfy the constraints of human-body mechanics, the object of the filtering is the modulus of each vector, i.e., the vector length, rather than a single key point. As shown in Fig. 3, key points at adjacent levels are connected; for example, the first-level key points, with coordinates $(p_1, p_2, p_5, p_8, p_{11})$, are connected to the center of gravity, giving the distances $(d1_{o1}, d1_{o2}, d1_{o5}, d1_{o8}, d1_{o11})$. Because there are four levels of key points, the distances fall into three levels:

Zero-level vector lengths: $(d1_{o1}, d1_{o2}, d1_{o5}, d1_{o8}, d1_{o11})$, filter coefficient $w_0$;

First-level vector lengths: $(d2_{23}, d2_{56}, d2_{89}, d2_{11\,12})$, filter coefficient $w_1$;

Second-level vector lengths: $(d3_{34}, d3_{67}, d3_{9\,10}, d3_{12\,13})$, filter coefficient $w_2$.

The filter coefficients are set according to the flexibility of the key points: the lower the flexibility, the heavier the filtering and the smaller the corresponding filter coefficient, which gives $w_0 < w_1 < w_2$. The specific coefficients are $w_0 = 0.3$, $w_1 = 0.4$, $w_2 = 0.5$, and the inter-frame distance-difference reference values are 15 mm for level zero, 20 mm for level one, and 25 mm for level two. When the average vector-length change of a level exceeds that level's reference value, the filter coefficient increases by 0.05 for every 5 mm of excess.
The filtering method is:

$$d'_n = d'_{n-1} \times (1 - w) + d_n \times w$$

where $d_n$ is the raw distance at frame $n$, $d'_{n-1}$ is the filtered distance at frame $n-1$, $d'_n$ is the filtered distance at frame $n$, and $w$ is the filter coefficient. After the filtering, the coordinates of the human body key points are re-established.

The example above is an infinite impulse response (IIR) filter; a finite impulse response (FIR) filter or other common filters may also be used.
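A minimal sketch of the level-wise smoothing, assuming the IIR recurrence above; the coefficients are the w0–w2 values of this section, while the class itself and the omission of the adaptive 0.05-per-5-mm adjustment are simplifications for illustration.

```python
class VectorLengthFilter:
    """Exponential (IIR) smoothing of one vector length: d'_n = (1-w)*d'_{n-1} + w*d_n."""

    def __init__(self, w):
        self.w = w           # filter coefficient of this level
        self.prev = None     # previous filtered length d'_{n-1}

    def update(self, d_n):
        if self.prev is None:     # first frame: no history, pass through
            self.prev = d_n
        else:
            self.prev = (1.0 - self.w) * self.prev + self.w * d_n
        return self.prev

# Per-level coefficients from the text: less flexible levels are smoothed harder.
LEVEL_COEFFS = {0: 0.3, 1: 0.4, 2: 0.5}

f = VectorLengthFilter(LEVEL_COEFFS[0])
for d in [100.0, 103.0, 98.0, 101.0]:     # invented torso-link lengths (pixels)
    print(round(f.update(d), 2))
```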
The reconstruction formula is:

$$x(p'_i) = x(p'_{i-1}) + d'_x, \qquad y(p'_i) = y(p'_{i-1}) + d'_y$$

where $x(p'_i)$ and $y(p'_i)$ are the reconstructed abscissa and ordinate of the level-$i$ key point, $x(p'_{i-1})$ and $y(p'_{i-1})$ are those of the reconstructed level-$(i-1)$ key point, and $d'_x$ and $d'_y$ are the projections of the filtered vector onto the x- and y-axes.
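The reconstruction itself could be sketched as below: starting from the center of gravity as the zero-level anchor, each key point is re-placed along its raw direction from its parent at the filtered distance. The parent-child link table is an assumption inferred from the level structure and Fig. 3, not taken verbatim from the patent.

```python
import math

# Assumed parent -> child links following the level structure described above
# (center of gravity "o" -> level-1 points -> level-2 -> level-3), ordered parent-first.
LINKS = [("o", 1), ("o", 2), ("o", 5), ("o", 8), ("o", 11),
         (2, 3), (5, 6), (8, 9), (11, 12),
         (3, 4), (6, 7), (9, 10), (12, 13)]

def reconstruct(points, filtered_len):
    """points: {id: (x, y)} raw keypoints, including the center of gravity "o";
    filtered_len: {(parent, child): d'} filtered vector lengths.
    Returns the re-established keypoints."""
    rebuilt = {"o": points["o"]}              # the zero level anchors the chain
    for parent, child in LINKS:
        px, py = rebuilt[parent]              # reconstructed parent position
        ox, oy = points[parent]
        cx, cy = points[child]
        dx, dy = cx - ox, cy - oy             # raw direction parent -> child
        norm = math.hypot(dx, dy) or 1.0
        d = filtered_len[(parent, child)]
        # x(p'_i) = x(p'_{i-1}) + d'_x, with d'_x the x-projection of the filtered vector
        rebuilt[child] = (px + d * dx / norm, py + d * dy / norm)
    return rebuilt
```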
S3: the motion parameters are designed in a personalized mode, and the angle, the speed and the angular speed are used for representing the motion.
S31: angle of rotation
When the human body key points are extracted, their coordinates are obtained, and from these coordinates angles can be constructed to assess the accuracy of the action.

Taking the angle formed at key point 5 by key points 1, 5, and 6 as an example (see Fig. 4), the coordinates of the three points are written as complex numbers: $p_1 = x_1 + iy_1$, $p_5 = x_5 + iy_5$, $p_6 = x_6 + iy_6$. By the two-point distance formula, the three side lengths are $l_{15} = |p_1 - p_5|$, $l_{16} = |p_1 - p_6|$, $l_{56} = |p_5 - p_6|$. From the law of cosines:

$$\theta_5 = \arccos\frac{l_{15}^2 + l_{56}^2 - l_{16}^2}{2\, l_{15}\, l_{56}}$$
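This transcribes directly into code; the sketch below assumes the angle is measured at key point 5.

```python
import math

def joint_angle(p1, p5, p6):
    """Angle at p5 formed by segments p5-p1 and p5-p6, in degrees (law of cosines)."""
    l15 = math.dist(p1, p5)
    l16 = math.dist(p1, p6)
    l56 = math.dist(p5, p6)
    cos_theta = (l15**2 + l56**2 - l16**2) / (2 * l15 * l56)
    cos_theta = max(-1.0, min(1.0, cos_theta))   # guard against rounding error
    return math.degrees(math.acos(cos_theta))

print(joint_angle((0, 0), (1, 0), (1, 1)))       # right angle at p5 -> 90.0
```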
S32: Speed
Similarly, human skeleton key points are extracted and their coordinates acquired (again taking OpenPose as the example). The displacement of a key point between consecutive frames can then be computed; taking key point 13 as an example, the inter-frame displacement is $\Delta x = p_{13}(n) - p_{13}(n-1)$, where $p_{13}(n)$ is the position of key point 13 at frame $n$, as shown in Figs. 5 and 6.

Next, the frame rate framesPerSec and the number of pixels contained per millimeter, pixelsPerMm, are determined. The final speed expression is:

$$\text{mmPerSec} = \frac{\Delta x \times \text{framesPerSec}}{\text{pixelsPerMm}}$$

where mmPerSec indicates how many millimeters the key point moves per second, i.e., the speed of the key point.
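The conversion is a one-liner in code; the frame rate and the pixelsPerMm calibration value below are invented for the example, since the patent does not specify how calibration is obtained.

```python
import math

def keypoint_speed_mm_per_sec(pos_curr, pos_prev, frames_per_sec, pixels_per_mm):
    """Speed of a key point in mm/s from its positions in two consecutive frames."""
    delta_px = math.dist(pos_curr, pos_prev)     # inter-frame displacement in pixels
    return delta_px * frames_per_sec / pixels_per_mm

# Invented numbers: 30 fps video, calibration of 4 pixels per millimeter.
print(keypoint_speed_mm_per_sec((120, 85), (118, 84), 30, 4.0))   # ~16.8 mm/s
```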
S33: angular velocity
After the angle in each frame and the frame rate are determined, the angular velocity of each angle can be calculated:

$$\omega = \text{framesPerSec} \times (\theta_n - \theta_{n-1})$$

where $\theta_n$ is the value of the angle at frame $n$ and $\theta_{n-1}$ is its value at frame $n-1$.
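The angular-velocity formula likewise transcribes directly:

```python
def angular_velocity(theta_n, theta_prev, frames_per_sec):
    """Angular velocity in degrees per second between two consecutive frames."""
    return frames_per_sec * (theta_n - theta_prev)

print(angular_velocity(92.0, 90.0, 30))   # 60.0 deg/s for invented angles at 30 fps
```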
S34: feature fusion
In order to compare with the standard action efficiently, the action parameters designed above are fused. Each parameter is first normalized:

$$\hat{D}_i(x) = \frac{D_i(x) - D_i(x)_{\min}}{D_i(x)_{\max} - D_i(x)_{\min}}$$

where $\hat{D}_i(x)$ is the normalized parameter function, $D_i(x)$ is the original parameter function, and $D_i(x)_{\min}$ and $D_i(x)_{\max}$ are the minimum and maximum of the original function.
For the weight assignment, a constant value $K$ is defined for the characteristic parameters. A parameter whose range divided by the constant gives a larger quotient is judged more important and is assigned a larger weight:

$$w_i = \frac{R_i}{K}, \qquad \alpha_i = \frac{w_i}{\sum_{j=1}^{n} w_j}$$

where $\alpha_i$ is the weight of the $i$-th feature, $R_i$ is the range of the $i$-th feature, $K$ is a constant, and $n$ is the number of parameters. Finally, the obtained curves are fused:

$$F(x) = \sum_{i=1}^{n} \alpha_i \hat{D}_i(x)$$
the above formula is merely a use example. According to the selected human key point identification model, m key points can be set, and then p characteristics exist:
Figure BDA0003445331510000066
the fused curve is then:
Figure BDA0003445331510000067
Figure BDA0003445331510000068
wherein, ViRepresenting the ith characteristic, g (x), f (x) are two functional expressions constructed according to specific action requirements, alphaiIs the weight of the ith feature.
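Under the weighted-sum reading reconstructed above, the normalization and fusion might be sketched as follows; the weights are taken proportional to each feature's range $R_i$ (the constant $K$ cancels once the weights are normalized), and the two input curves are invented for the example.

```python
import numpy as np

def fuse_parameters(curves):
    """curves: (p, T) array with one time-varying parameter curve per row.
    Returns the fused curve: a range-weighted sum of min-max-normalized rows."""
    curves = np.asarray(curves, dtype=float)
    mins = curves.min(axis=1, keepdims=True)
    maxs = curves.max(axis=1, keepdims=True)
    spans = np.where(maxs - mins == 0, 1.0, maxs - mins)   # guard constant curves
    normed = (curves - mins) / spans                       # min-max normalization
    ranges = (maxs - mins).ravel()                         # R_i, range of each feature
    alpha = ranges / ranges.sum()                          # weights alpha_i ~ R_i / K
    return alpha @ normed                                  # fused curve F(x)

angles = np.array([90, 95, 110, 120, 100], dtype=float)   # invented angle curve
speeds = np.array([5, 9, 14, 11, 6], dtype=float)         # invented speed curve
print(fuse_parameters([angles, speeds]))
```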
S4: quantifying actions
The approach taken is to quantify the action along four axes: whether it is executed better or worse than the standard (its quality) and faster or slower than the standard (its speed). A dynamic time warping algorithm is used to evaluate the action quantitatively along these four axes.

The algorithm calculates the similarity of two different time series: its input is two time series, and its output is a warping path and a difference value. The smaller the difference value, the more similar the two time series are.
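A textbook dynamic-time-warping sketch that returns both outputs named here, a difference value and a warping path, follows; it is a generic implementation, not code from the patent.

```python
import numpy as np

def dtw(q, c):
    """Dynamic time warping between 1-D series q and c.
    Returns (difference value, warping path as a list of (i, j) index pairs)."""
    m, n = len(q), len(c)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(q[i - 1] - c[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (m, n) to recover the warping path.
    path, i, j = [], m, n
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[m, n], path[::-1]

dist, path = dtw([1, 2, 3, 4], [1, 1, 2, 3, 4])
print(dist, path)   # 0.0 and a path hugging the diagonal
```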
(1) The quality of the action:
since the trainer may perform better or worse than the standard motion, in order to determine the difference, when evaluating the trainer motion, we evaluate from multiple standards, and when a standard curve is given, we will automatically generate two other curves, i.e. a low quality curve and a high quality curve, and the motion quality of the standard curve is between the low quality curve and the high quality curve, as shown in fig. 7. The specific curve generation method is as follows:
taking the angle parameter as an example, firstly, an angle curve M (x) of a standard action is given, when the angle is considered to be larger, the action is more standard, and the corresponding curve standard value is multiplied by a number larger than 1, and when the angle is considered to be smaller, the action is more standard, and the corresponding curve standard value is multiplied by a number between 0 and 1.
The multiplying coefficient varies with time; denote it by $k_H(n)$, where $n$ is the frame number. The high standard curve is then:

$$H(x) = k_H(x) \cdot M(x)$$

Likewise, the time variation of the coefficient multiplying the low standard curve can be written as a function $k_L(n)$, and the low standard curve is:

$$L(x) = k_L(x) \cdot M(x)$$
comparing the training time series T (x) with H (x), L (x) respectively, a curve most similar to the training time series T (x) can be obtained, if the training time series T (x) is similar to H (x), the curve is a superior difference, and if the training time series T (x) is similar to L (x), the curve is an inferior difference.
(2) Speed of action:
the input is two time series (Q, C) and the output is a path f (x). Suppose Q is a standard action curve, C is a user action curve, Q sequence length is m, and C sequence length is n. At a certain moment t in motion, if:
when in use
Figure BDA0003445331510000076
When, it means that the user action is slower than the standard action;
when in use
Figure BDA0003445331510000077
When, it means that the user action is equal to the standard action;
when in use
Figure BDA0003445331510000078
Time, indicates that the user action is faster than the standard action.
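Reading the pace off the warping path under this proportional-progress rule might look like the following sketch; the path is the 0-indexed (i, j) list produced by a DTW implementation such as the one sketched earlier.

```python
def pace_along_path(path, m, n):
    """Classify the user's pace at each warping-path point (q_t, c_t).
    path: (i, j) pairs from DTW between standard Q (length m) and user C (length n)."""
    labels = []
    for q_t, c_t in path:
        prog_std = (q_t + 1) / m      # fraction of the standard sequence consumed
        prog_usr = (c_t + 1) / n      # fraction of the user sequence consumed
        if prog_std > prog_usr:
            labels.append("slower")   # the standard is ahead, so the user lags
        elif prog_std < prog_usr:
            labels.append("faster")
        else:
            labels.append("on pace")
    return labels
```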
In order to provide training advice, the comparison between each of the trainee's parameters and the corresponding parameter of the standard action must be judged during training; each parameter of the trainee is therefore compared with its counterpart in the standard action, as shown in Fig. 8.

In Fig. 8, A stands for the standard action, B for the trainer, and 1 … n for the different parameters. Action parameters with scores below 60 are extracted and fed back to the trainer.

The score for the trainee's whole set of actions is calculated by comparing the two fused curves, as shown in Fig. 9.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A method for evaluating the execution quality of complex continuous motion in motion is characterized by comprising the following steps:
s1: acquiring key points of a human body;
s2: dividing key points of the human body into different grades according to the gravity center of the human body and the flexibility of each joint of the human body, connecting corresponding key points step by step to form different vectors, performing stable filtering processing on the modes of the vectors, and reestablishing the key points of the human body according to a key point reconstruction formula;
s3: designing a plurality of action parameters to represent the training action according to the human body key points reconstructed in step S2, and then fusing the parameters;
s4: quantifying the action along four axes, namely whether its quality is better or worse and whether its speed is faster or slower than the standard.
2. The method for evaluating the performance quality of a complex continuous motion in motion as claimed in claim 1, wherein said step S2 comprises the following sub-steps:
s21: the human body is divided into 13 rigid bodies, and the center of gravity of the human body is obtained:

$$X = \frac{\sum_{i=1}^{13} P_i X_i}{P}, \qquad Y = \frac{\sum_{i=1}^{13} P_i Y_i}{P}$$

wherein $P_i$ is the weight of the $i$-th rigid body, $X_i$ and $Y_i$ are the abscissa and ordinate of the center of gravity of the $i$-th rigid body, $P$ is the weight of the whole human body, and $X$ and $Y$ are the abscissa and ordinate of the center of gravity of the human body, respectively;
s22: combining the human body gravity center obtained in the step S21 with the human body key points obtained in the step S1, and dividing the key points into different grades according to the flexibility of each joint of the human body;
s23: combining key points of different grades with human body structures, combining every two key points into vectors step by step, and dividing the vectors into different grades according to the flexibility of human joints, wherein each grade has a corresponding filter coefficient;
s24: carrying out filtering processing on the modes of the vectors of different levels;
s25: and reestablishing each human body key point according to the vector distance after filtering.
3. The method for evaluating the performance quality of a complex continuous motion in motion according to claim 2, wherein in step S24, an infinite impulse response filter or a finite impulse response filter is used for the filtering.
4. The method for evaluating the execution quality of a complex continuous motion in motion as claimed in claim 2, wherein in step S25, the formula for re-establishing each human body key point is:

$$x(p'_i) = x(p'_{i-1}) + d'_x, \qquad y(p'_i) = y(p'_{i-1}) + d'_y$$

wherein $x(p'_i)$ and $y(p'_i)$ are the reconstructed abscissa and ordinate of the level-$i$ key point, $x(p'_{i-1})$ and $y(p'_{i-1})$ are those of the reconstructed level-$(i-1)$ key point, and $d'_x$ and $d'_y$ are the projections of the filtered vector onto the x- and y-axes.
5. The method for evaluating the performance quality of a complex continuous motion in motion as claimed in claim 1, wherein said step S3 comprises the following two substeps:
s31: constructing three parameters of angle, speed and angular speed according to the human body key point data obtained in the step S2, wherein each parameter corresponds to a curve changing along with time;
s32: and fusing the parameter curves in the step S31 according to the action characteristics.
6. The method for evaluating the performance quality of a complex sequence of movements in motion as claimed in claim 1, wherein said step S4 comprises:
s41: every action has a standard action curve; after high- and low-standard action curves are automatically generated from it, the quality of the action is judged by computing the distance to the user's curve;
s42: the speed of the action is judged from the positions of the points along the path produced by the similarity algorithm.
7. The method for evaluating the execution quality of a complex continuous motion in motion as claimed in claim 1, wherein in step S1, a human posture recognition algorithm is used to extract the key points and their position information in the human body.
CN202111646483.9A 2021-12-30 2021-12-30 Execution quality evaluation method for complex continuous motion in motion Pending CN114267086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111646483.9A CN114267086A (en) 2021-12-30 2021-12-30 Execution quality evaluation method for complex continuous motion in motion

Publications (1)

Publication Number Publication Date
CN114267086A (en) 2022-04-01

Family

ID=80831624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111646483.9A Pending CN114267086A (en) 2021-12-30 2021-12-30 Execution quality evaluation method for complex continuous motion in motion

Country Status (1)

Country Link
CN (1) CN114267086A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941990A (en) * 2019-10-22 2020-03-31 泰康保险集团股份有限公司 Method and device for evaluating human body actions based on skeleton key points
CN110738192A (en) * 2019-10-29 2020-01-31 腾讯科技(深圳)有限公司 Human motion function auxiliary evaluation method, device, equipment, system and medium
CN112686208A (en) * 2021-01-22 2021-04-20 上海喵眼智能科技有限公司 Motion recognition characteristic parameter algorithm based on machine vision
CN113505735A (en) * 2021-05-26 2021-10-15 电子科技大学 Human body key point stabilizing method based on hierarchical filtering
CN113762133A (en) * 2021-09-01 2021-12-07 哈尔滨工业大学(威海) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
彭亮: "Kinematic analysis of the scissors kick jump with 360-degree turn in competitive aerobics", 《西南石油大学学报(社会科学版)》 (Journal of Southwest Petroleum University, Social Sciences Edition), vol. 14, no. 5, 31 August 2012, pages 117-122 *
彭博 et al.: "Research on similarity functions for motion estimation in ultrasound image sequences", 《计算机仿真》 (Computer Simulation), vol. 29, no. 9, 15 September 2012, pages 258-261 *
熊成鑫 et al.: "Temporal-proposal optimization for temporal action detection", 《中国图象图形学报》 (Journal of Image and Graphics), vol. 25, no. 7, 16 July 2020, pages 1447-1458 *

Similar Documents

Publication Publication Date Title
CN108764120B (en) Human body standard action evaluation method
CN106650687B (en) Posture correction method based on depth information and skeleton information
Chaudhari et al. Yog-guru: Real-time yoga pose correction system using deep learning methods
Zhao et al. Realtime motion assessment for rehabilitation exercises: Integration of kinematic modeling with fuzzy inference
US20090042661A1 (en) Rule based body mechanics calculation
CN112101315B (en) Deep learning-based exercise judgment guidance method and system
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
Wang et al. Synthesis and evaluation of linear motion transitions
CN113705540A (en) Method and system for recognizing and counting non-instrument training actions
CN114550027A (en) Vision-based motion video fine analysis method and device
CN115131879B (en) Action evaluation method and device
CN110956141A (en) Human body continuous action rapid analysis method based on local recognition
CN113974612B (en) Automatic evaluation method and system for upper limb movement function of stroke patient
Agarwal et al. FitMe: a fitness application for accurate pose estimation using deep learning
CN114973048A (en) Method and device for correcting rehabilitation action, electronic equipment and readable medium
Yan et al. A review of basketball shooting analysis based on artificial intelligence
WO2016021152A1 (en) Orientation estimation method, and orientation estimation device
CN110070036B (en) Method and device for assisting exercise motion training and electronic equipment
CN114267086A (en) Execution quality evaluation method for complex continuous motion in motion
Trejo et al. Recognition of Yoga poses through an interactive system with Kinect based on confidence value
CN116740618A (en) Motion video action evaluation method, system, computer equipment and medium
CN116543455A (en) Method, equipment and medium for establishing parkinsonism gait damage assessment model and using same
CN115530814A (en) Child motion rehabilitation training method based on visual posture detection and computer deep learning
Sharma et al. A pilot study on human pose estimation for sports analysis
CN113505735A (en) Human body key point stabilizing method based on hierarchical filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination