CN114093032A - Human body action evaluation method based on action state information - Google Patents

Info

Publication number
CN114093032A
CN114093032A (application CN202111421516.XA)
Authority
CN
China
Prior art keywords
motion
rotation angle
limb
human body
state information
Prior art date
Legal status
Pending
Application number
CN202111421516.XA
Other languages
Chinese (zh)
Inventor
王新月
郭雨薇
徐乾锋
冉琳
钟代笛
仲元红
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202111421516.XA
Publication of CN114093032A

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision image processing, and in particular to a human body action evaluation method based on action state information, which comprises the following steps: acquiring a video to be tested of a tester; performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human-body keypoint coordinate graphs; selecting multiple optimal video frames based on the keypoint coordinate graphs and the corresponding action evaluation criteria; calculating the motion state information of the video to be tested based on the selected optimal video frames; and comparing the motion state information of the video to be tested with the motion standard information of a template video to generate a corresponding action evaluation result. The method effectively captures action continuity and action stability, thereby improving the accuracy of action evaluation.

Description

Human body action evaluation method based on action state information
Technical Field
The invention relates to the technical field of computer vision image processing, in particular to a human body action evaluation method based on action state information.
Background
With the wide application of Internet big-data technology, application scenarios for human behavior detection and recognition based on visual images are becoming increasingly common. By analyzing human action characteristics such as expression and posture, such technology can provide rich identification features for applications in public places or specific activity spaces, and it forms an important component of human-activity big data.
For example, in the fields of sports competition and health screening, human actions need to be recognized in order to evaluate how well they meet the standard. Action evaluation must be performed based on action evaluation criteria and an action evaluation decision basis. An early, commonly used assessment method was for an assessor to visually observe the tester's movements and manually compare them with a standard movement pattern to give a score; meanwhile, the assessor would record video with a handheld camera and store screenshots as a backup. This approach not only wastes manpower and material resources, but the subjectivity of manual judgment also makes the evaluation result insufficiently objective and accurate.
With the development of computer technology, methods for performing motion estimation based on video have appeared in the prior art. For example, chinese patent publication No. CN108205654B discloses "a video-based motion detection method and apparatus", and the method includes: detecting limb key points in a video frame of a video to be detected; acquiring corresponding behavior characteristics according to the limb key points; and detecting the completion degree of the action based on the behavior characteristics and a global action evaluation model to obtain a global detection result, wherein the global action evaluation model is used for detecting the completion degree of the human body action in the video relative to the standard action.
The motion detection method in this existing scheme obtains behavior characteristics from the body key points in each video frame and then performs action evaluation based on those characteristics and a global action evaluation model. Such evaluation generally compares each video frame of the video under test with the corresponding frame of a template video, calculates their similarity, and judges the completion of the action from that similarity; this is a static form of action evaluation. For some difficult actions, however, the duration and stability of the action must also be assessed. The existing static evaluation methods struggle to reflect the continuity and stability of the tester's action, resulting in low evaluation accuracy. How to design a human body action evaluation method that effectively captures action continuity and stability is therefore an urgent technical problem to be solved.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the technical problem to be solved by the invention is: how to provide a human body action evaluation method based on action state information that effectively captures action continuity and action stability, thereby improving the accuracy of action evaluation.
In order to solve the technical problems, the invention adopts the following technical scheme:
a human body action evaluation method based on action state information comprises the following steps:
s1: acquiring a video to be tested of a tester;
s2: performing skeleton analysis and posture analysis on a video frame of a video to be detected to generate a corresponding human body key point coordinate graph;
s3: selecting corresponding multi-frame optimal video frames based on the human body key point coordinate graph and corresponding action evaluation standards;
s4: calculating the motion state information of the video to be detected based on the optimal video frames of the multiple frames;
s5: and comparing the motion state information of the video to be detected with the motion standard information of the template video to generate a corresponding action evaluation result.
Preferably, in step S3, when selecting the optimal video frames, the angle information between key points is first calculated from the human-body keypoint coordinate graph; recommended key points are then determined from this angle information; finally, the optimal video frames are selected based on the recommended key points and the action evaluation criteria.
Preferably, the angle information between the key points is a counterclockwise rotation angle;
The counterclockwise rotation angle is calculated by the following steps:
S301: acquiring the keypoint coordinates A, B, C used to calculate the counterclockwise rotation angle;
S302: computing the keypoint vectors $\overrightarrow{BA}$ and $\overrightarrow{BC}$ from the coordinates A, B, C;
S303: rotating the keypoint vector $\overrightarrow{BA}$ counterclockwise until it coincides with the keypoint vector $\overrightarrow{BC}$; the angle through which $\overrightarrow{BA}$ rotates to reach $\overrightarrow{BC}$ is taken as the corresponding counterclockwise rotation angle.
Preferably, the recommended key points are determined by the following steps:
S311: calculating the variance of each counterclockwise rotation angle in the human-body keypoint coordinate graphs;
S312: calculating the corresponding motion information proportion from the variance of each counterclockwise rotation angle;
S313: selecting the key points corresponding to the counterclockwise rotation angle with the largest motion information proportion as the recommended key points.
Preferably, the variance of a counterclockwise rotation angle is calculated by the following formula:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(R_i - u_r\right)^2$$

The motion information proportion of a counterclockwise rotation angle is calculated by the following formula:

$$I_n = \frac{e^{\sigma_n^2}}{\sum_{j=1}^{n} e^{\sigma_j^2}}$$

In the above formulas: $\sigma^2$ denotes the variance of the counterclockwise rotation angle; $R_i$ denotes an observed value of the counterclockwise rotation angle; $u_r$ denotes the mean of the counterclockwise rotation angle over the human-body keypoint coordinate graphs; $n$ denotes the number of counterclockwise rotation angles in the keypoint coordinate graphs; $I_n$ denotes the motion information proportion of the $n$th counterclockwise rotation angle; $\sigma_n^2$ denotes the variance of the $n$th counterclockwise rotation angle; $e$ denotes the natural constant.
Preferably, in step S4, the motion state information of the video to be tested includes the limb movement speed, the limb movement stability, and the limb movement trend.
Preferably, the limb movement speed is calculated by the following formula:

$$V(N_1, N_2) = \frac{(R_2 - R_1)\cdot fps}{N_2 - N_1}$$

In the above formula: $V(N_1, N_2)$ denotes the movement speed of the corresponding limb between optimal video frames $N_1$ and $N_2$; $R_1$ and $R_2$ denote the counterclockwise rotation angles between the key points of the corresponding limb in optimal video frames $N_1$ and $N_2$ respectively; $fps$ denotes the frame rate of the video to be tested.
Preferably, the limb movement stability is judged by the following formula:

$$S^2 = e^{-\frac{1}{n}\sum_{i=1}^{n}\left(R_i - \bar{R}\right)^2}$$

In the above formula: $S^2$ denotes the stability characteristic value of the corresponding limb; when judging the limb movement stability, $S^2$ is compared with a preset stability threshold $Y$: if $S^2 \geq Y$, the limb movement is stable, otherwise it is unstable; $R = \{R_1, R_2, \ldots, R_n\}$ denotes the counterclockwise rotation angles between the key points of the corresponding limb across the optimal video frames, where $n$ denotes the number of optimal video frames; $\bar{R}$ denotes the mean of the counterclockwise rotation angles in $R$.
Preferably, the limb movement trend is judged by the following formula:

$$\theta = R_2 - R_1$$

In the above formula: $\theta$ denotes the difference between the counterclockwise rotation angles $R_1$ and $R_2$ of the limb in optimal video frames $N_1$ and $N_2$, and is used to judge the limb movement trend: if $\theta > 0$, the limb rotates counterclockwise; if $\theta < 0$, the limb rotates clockwise.
Preferably, in step S5, the motion standard information includes a standard movement-speed range and a standard movement trend;
when the limb movement speed falls within the standard range, the limb movement trend matches the standard movement trend, and the limb movement is stable, the action evaluation passes; otherwise, it fails.
Compared with the prior art, the human body action evaluation method has the following beneficial effects:
1. By selecting multiple optimal video frames and calculating the motion state information of the video to be tested, the method effectively reflects the tester's action continuity and action stability; the action evaluation result is then generated by comparing this motion state information with the motion standard information of the template video, which ensures the accuracy of human action evaluation.
2. The human-body keypoint coordinate graphs are generated through skeleton analysis and posture analysis, and the optimal video frames are then selected in combination with the action evaluation criteria, so that the selected frames are associated with the corresponding criteria. This ensures the accuracy of the optimal-frame selection and thus the accuracy and effectiveness of the action evaluation.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a logic block diagram of a human motion assessment method;
FIG. 2 is a schematic illustration of ten counterclockwise rotational angles on a human body;
FIG. 3 is a schematic view of the limb angle between the right upper arm and the right forearm;
FIG. 4 is a schematic diagram of the counterclockwise rotation angle between the right upper arm and the right forearm.
Detailed Description
The invention is described in further detail below through a specific embodiment.
Embodiment:
first, the meaning of the action evaluation criterion and the action evaluation decision basis will be explained.
Action evaluation criteria: refers to items that need to be evaluated when evaluating actions.
For example, for the deep squat, the action evaluation criteria include: 1) whether the test bar is directly above the top of the head; 2) whether the trunk is parallel to the shins or perpendicular to the ground; 3) whether the thighs drop below horizontal when squatting; 4) whether the knees stay aligned with the feet.
Action evaluation decision basis: refers to a scoring criterion at the time of action assessment.
Taking the squat as an example, the action evaluation decision basis includes: 1) the test bar is directly above the top of the head, the trunk is parallel to the shins or perpendicular to the ground, the thighs drop below horizontal when squatting, and the knees stay aligned with the feet: 3 points; 2) the required action cannot be completed as such, but can be completed with a board under the heels: 2 points; 3) the required action still cannot be completed even with a board under the heels: 1 point; 4) pain is reported in any part of the body during the test: 0 points.
Based on the above description, this embodiment discloses a human body action evaluation method based on action state information.
As shown in Fig. 1, the human body action evaluation method based on action state information includes:
S1: acquiring a video to be tested of a tester;
S2: performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human-body keypoint coordinate graphs; in this embodiment, the skeleton analysis and posture analysis are performed with the AlphaPose model from Shanghai Jiao Tong University;
S3: selecting multiple optimal video frames based on the keypoint coordinate graphs and the corresponding action evaluation criteria;
S4: calculating the motion state information of the video to be tested based on the selected optimal video frames;
S5: comparing the motion state information of the video to be tested with the motion standard information of a template video to generate a corresponding action evaluation result.
In the invention, selecting multiple optimal video frames and calculating the motion state information of the video to be tested effectively captures the tester's action continuity and action stability; the action evaluation result is then generated by comparing this motion state information with the motion standard information of the template video, ensuring the accuracy of human action evaluation. Meanwhile, because the human-body keypoint coordinate graphs are generated through skeleton analysis and posture analysis and the optimal video frames are selected in combination with the action evaluation criteria, the selected frames are associated with the corresponding criteria, which ensures the accuracy of the optimal-frame selection and thus the accuracy and effectiveness of the action evaluation.
In the specific implementation, when selecting the optimal video frames, the angle information between key points is first calculated from the human-body keypoint coordinate graph; recommended key points are then determined from this angle information; finally, the optimal video frames are selected based on the recommended key points and the action evaluation criteria.
Specifically, the angle information between key points is a counterclockwise rotation angle. Referring to Fig. 2, the ten counterclockwise rotation angles are: the angle between the left upper arm and the left forearm, between the left shoulder and the left upper arm, between the left upper arm and the trunk, between the trunk and the left thigh, and between the left thigh and the left calf, together with the five corresponding angles on the right side of the body.
The counterclockwise rotation angle is calculated by the following steps:
S301: acquiring the keypoint coordinates A, B, C used to calculate the counterclockwise rotation angle;
S302: computing the keypoint vectors $\overrightarrow{BA}$ and $\overrightarrow{BC}$ from the coordinates A, B, C;
S303: rotating the keypoint vector $\overrightarrow{BA}$ counterclockwise until it coincides with the keypoint vector $\overrightarrow{BC}$; the angle through which $\overrightarrow{BA}$ rotates to reach $\overrightarrow{BC}$ is taken as the corresponding counterclockwise rotation angle.
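To make the geometry concrete, the following is a minimal Python sketch of steps S301-S303. It assumes the two vectors are $\overrightarrow{BA}$ and $\overrightarrow{BC}$ as reconstructed above and measures the counterclockwise sweep with atan2; the coordinate convention (y up versus image y down) is an assumption the patent does not fix.

```python
import numpy as np

def ccw_rotation_angle(a, b, c):
    """Counterclockwise angle swept when rotating vector BA onto vector BC.

    a, b, c are (x, y) keypoint coordinates; the result lies in [0, 360).
    """
    ba = np.asarray(a, dtype=float) - np.asarray(b)   # S302: keypoint vector BA
    bc = np.asarray(c, dtype=float) - np.asarray(b)   # S302: keypoint vector BC
    # S303: signed angle from BA to BC, wrapped into [0, 360)
    angle = np.degrees(np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0]))
    return angle % 360.0

# Shoulder A, elbow B, wrist C forming a right angle at the elbow:
print(ccw_rotation_angle((0, 1), (0, 0), (1, 0)))  # 270.0 with y pointing up
```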
In the specific implementation, the recommended key points are determined by the following steps:
S311: calculating the variance of each counterclockwise rotation angle in the human-body keypoint coordinate graphs;
S312: calculating the corresponding motion information proportion from the variance of each counterclockwise rotation angle;
S313: selecting the key points corresponding to the counterclockwise rotation angle with the largest motion information proportion as the recommended key points.
In the specific implementation, the variance of a counterclockwise rotation angle is calculated by the following formula:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(R_i - u_r\right)^2$$

The motion information proportion of a counterclockwise rotation angle is calculated by the following formula:

$$I_n = \frac{e^{\sigma_n^2}}{\sum_{j=1}^{n} e^{\sigma_j^2}}$$

In the above formulas: $\sigma^2$ denotes the variance of the counterclockwise rotation angle; $R_i$ denotes an observed value of the counterclockwise rotation angle; $u_r$ denotes the mean of the counterclockwise rotation angle over the human-body keypoint coordinate graphs; $n$ denotes the number of counterclockwise rotation angles in the keypoint coordinate graphs; $I_n$ denotes the motion information proportion of the $n$th counterclockwise rotation angle; $\sigma_n^2$ denotes the variance of the $n$th counterclockwise rotation angle; $e$ denotes the natural constant.
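A small sketch of steps S311-S313 under the reconstructed formulas: per-angle variance across frames, a softmax-style motion information proportion, and selection of the angle (hence its key points) with the largest proportion. The function name and the `(num_angles, num_frames)` layout are illustrative assumptions, not the patent's API.

```python
import numpy as np

def recommended_angle_index(angle_series):
    """angle_series: shape (num_angles, num_frames), one row per
    counterclockwise rotation angle tracked across the video frames."""
    angles = np.asarray(angle_series, dtype=float)
    variances = angles.var(axis=1)                   # sigma_n^2 per angle (S311)
    # Softmax over variances (S312); subtracting the max keeps exp()
    # numerically stable without changing the resulting proportions.
    w = np.exp(variances - variances.max())
    proportions = w / w.sum()                        # motion information proportion I_n
    return int(np.argmax(proportions)), proportions  # S313: largest proportion wins

# Ten angles over 50 frames; the one limb that actually moves is recommended.
rng = np.random.default_rng(0)
series = rng.normal(90.0, 0.5, size=(10, 50))
series[3] += np.linspace(0.0, 60.0, 50)              # this limb performs the main motion
idx, _ = recommended_angle_index(series)
print(idx)  # 3
```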
In the invention, calculating the angle information, selecting the recommended key points from it, and finally selecting the optimal video frames from the recommended key points and the action evaluation criteria associates the selected frames with the corresponding criteria and thereby ensures the accuracy of the optimal-frame selection. Meanwhile, choosing recommended key points with large motion amplitude to drive the frame selection has two benefits: on the one hand, the recommended key points accurately reflect the limbs performing the main motion, which safeguards the accuracy of the action evaluation; on the other hand, key points belonging to limbs that do not move, or move only slightly, are excluded from the calculation, which reduces the computational cost of the evaluation.
In addition, the two-dimensional posture is obtained by performing skeleton analysis and posture analysis on the video frames, so the key points in the human-body keypoint coordinate graph are actually projections of the real posture onto a two-dimensional plane, and a plain limb angle is therefore hard-pressed to represent a moving limb accurately. As shown in Fig. 3, the limb angle between the right upper arm and the right forearm is the same whether the right arm is bent in front of the chest or at the side of the body. Judged from the data alone, the right arm's motion characteristics would appear identical because the limb angle is the same, when in fact the two motions differ greatly.
Therefore, direction information, namely the rotation direction, is added on top of the limb angle, so that the resulting counterclockwise rotation angle carries both angle information and direction information (as shown in Fig. 4). This compensates for the posture information lost when the real posture is projected onto a two-dimensional plane, allows the characteristics of the moving limb to be represented accurately, and thereby safeguards the accuracy of human action evaluation. Meanwhile, the ten counterclockwise rotation angles designed in the invention essentially cover the important moving-limb characteristics of the human posture, which further ensures the effectiveness of the evaluation.
In the specific implementation, the motion state information of the video to be tested includes the limb movement speed, the limb movement stability, and the limb movement trend. The motion standard information includes a standard movement-speed range and a standard movement trend: when the limb movement speed falls within the standard range, the limb movement trend matches the standard movement trend, and the limb movement is stable, the action evaluation passes; otherwise, it fails.
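A sketch of the step-S5 decision. The `(low, high)` speed range and the sign-encoded standard trend are assumed encodings; the patent leaves the representation of the template's motion standard information open.

```python
def action_passes(speed, theta, stable, speed_range, standard_trend):
    """All three checks must hold for the evaluation to pass."""
    speed_ok = speed_range[0] <= speed <= speed_range[1]   # speed within standard range
    trend_ok = (theta > 0) == (standard_trend > 0)         # rotation direction matches
    return speed_ok and trend_ok and stable

print(action_passes(12.0, 5.0, True, (8.0, 20.0), +1))  # True
```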
The limb movement speed is calculated by the following formula:

$$V(N_1, N_2) = \frac{(R_2 - R_1)\cdot fps}{N_2 - N_1}$$

In the above formula: $V(N_1, N_2)$ denotes the movement speed of the corresponding limb between optimal video frames $N_1$ and $N_2$; $R_1$ and $R_2$ denote the counterclockwise rotation angles between the key points of the corresponding limb in optimal video frames $N_1$ and $N_2$ respectively; $fps$ denotes the frame rate of the video to be tested.
In the specific implementation, the limb movement stability is judged by the following formula:

$$S^2 = e^{-\frac{1}{n}\sum_{i=1}^{n}\left(R_i - \bar{R}\right)^2}$$

In the above formula: $S^2$ denotes the stability characteristic value of the corresponding limb; when judging the limb movement stability, $S^2$ is compared with a preset stability threshold $Y$: if $S^2 \geq Y$, the limb movement is stable, otherwise it is unstable; $R = \{R_1, R_2, \ldots, R_n\}$ denotes the counterclockwise rotation angles between the key points of the corresponding limb across the optimal video frames, where $n$ denotes the number of optimal video frames; $\bar{R}$ denotes the mean of the counterclockwise rotation angles in $R$.
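A sketch of the stability check under the reconstruction above (the patent's image for this formula is not reproduced, so $S^2 = e^{-\mathrm{Var}(R)}$ and the threshold semantics are assumptions): low angle variance across the optimal frames yields $S^2$ near 1.

```python
import numpy as np

def limb_stable(angles, threshold):
    """angles: counterclockwise rotation angles of one limb across the
    optimal frames. Returns (is_stable, stability characteristic S^2)."""
    r = np.asarray(angles, dtype=float)
    s2 = np.exp(-r.var())        # assumed stability characteristic value
    return s2 >= threshold, s2

print(limb_stable([90.0, 90.5, 89.8, 90.2], threshold=0.8))  # (True, ~0.94)
```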
In the specific implementation, the limb movement trend is judged by the following formula:

$$\theta = R_2 - R_1$$

In the above formula: $\theta$ denotes the difference between the counterclockwise rotation angles $R_1$ and $R_2$ of the limb in optimal video frames $N_1$ and $N_2$, and is used to judge the limb movement trend: if $\theta > 0$, the limb rotates counterclockwise; if $\theta < 0$, the limb rotates clockwise.
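The trend judgment translates directly; only the sign of $\theta$ matters. A minimal sketch:

```python
def limb_trend(r1, r2):
    """theta = R2 - R1; positive -> counterclockwise, negative -> clockwise."""
    theta = r2 - r1
    if theta > 0:
        return "counterclockwise"
    if theta < 0:
        return "clockwise"
    return "no rotation"

print(limb_trend(30.0, 75.0))  # counterclockwise
```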
In the invention, the limb movement speed, stability, and trend together reflect the tester's motion state well, so the tester's action continuity and stability are effectively captured, and the action evaluation result generated by comparing the motion state information of the video to be tested with the motion standard information of the template video is correspondingly accurate. The calculation formulas and procedures above compute the limb movement speed, stability, and trend reliably and thus characterize the tester's motion state well.
Specifically, when generating a human-body keypoint coordinate graph, the corresponding video frame is first input into a pre-trained pose estimation model, which outputs a corresponding heat map; the keypoint coordinates are then calculated from the heat map to obtain the keypoint coordinate graph.
When training the pose estimation model, a pose data set is first acquired; the labels pre-annotated on the training images are then converted into corresponding heat-map labels to obtain label heat maps; finally, the pose estimation model is trained on the label heat maps.
When generating a label heat map, the heat-map size $W_h \times H_h$ is first set to produce a heat map of that size; the heat distribution of the pre-annotated label on the heat map is then calculated by the following formula to generate the corresponding label heat map:

$$G = e^{-\frac{(x-x_0)^2 + (y-y_0)^2}{2\sigma^2}}$$

When calculating the keypoint coordinates, the $W_h \times H_h$ heat map is obtained and flattened into a $1 \times (W_h \cdot H_h)$ one-dimensional heat map; the index of the maximum heat value of the corresponding key point is then calculated by the following formula; finally, the keypoint coordinates are recovered by combining this index with the heat-map size: dividing the index by $W_h$, the quotient is the row number $x$ of the key point in the $W_h \times H_h$ heat map and the remainder is the column number $y$, i.e. the keypoint coordinates are $(x, y)$:

$$\mathrm{index} = \sum_{i} i \cdot \frac{e^{\beta x_i}}{\sum_{j} e^{\beta x_j}}$$

In the above formulas: $G$ denotes the heat value; $x_0, y_0$ denote the real coordinates of the pre-annotated label; $x, y$ denote the coordinates of the label in the label heat map; $\sigma$ denotes the standard deviation, taken as 2 or 3; $e$ denotes the natural constant; $i, j$ denote indices of the one-dimensional heat map; $x_i, x_j$ denote the heat values at indices $i$ and $j$; $\beta$ denotes a calibration coefficient.
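The two heat-map formulas can be exercised end to end: build a Gaussian label heat map for a keypoint, then recover its coordinates with the softmax-weighted index and the quotient/remainder split. This sketch uses conventional x = column, y = row indexing (the patent's text names the quotient x and the remainder y), and beta = 10 is an assumed calibration value:

```python
import numpy as np

def label_heatmap(x0, y0, w_h, h_h, sigma=2.0):
    """Gaussian label heat map: G = exp(-((x-x0)^2 + (y-y0)^2) / (2*sigma^2))."""
    ys, xs = np.mgrid[0:h_h, 0:w_h]
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))

def decode_keypoint(heatmap, beta=10.0):
    """Flatten to 1 x (W_h*H_h), take the softmax-weighted index, then split
    it into row and column by dividing by the heat-map width W_h."""
    flat = heatmap.reshape(-1)
    w = np.exp(beta * (flat - flat.max()))        # stabilised softmax weights
    index = float(np.sum(np.arange(flat.size) * w) / w.sum())
    w_h = heatmap.shape[1]
    row, col = divmod(int(round(index)), w_h)     # quotient = row, remainder = column
    return col, row                               # (x, y) in heat-map pixels

hm = label_heatmap(x0=13, y0=7, w_h=48, h_h=64)
print(decode_keypoint(hm))  # (13, 7)
```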
It should be noted that the above embodiment illustrates rather than limits the invention; although the invention has been described with reference to a preferred embodiment, those skilled in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims. Common general knowledge in the embodiment, such as well-known detailed structures and characteristics, is not described at length. Finally, the scope of protection should be determined by the content of the claims, with the description and drawings used to interpret them.

Claims (10)

1. A human body action evaluation method based on action state information, characterized by comprising the following steps:
S1: acquiring a video to be tested of a tester;
S2: performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human-body keypoint coordinate graphs;
S3: selecting multiple optimal video frames based on the keypoint coordinate graphs and the corresponding action evaluation criteria;
S4: calculating the motion state information of the video to be tested based on the selected optimal video frames;
S5: comparing the motion state information of the video to be tested with the motion standard information of a template video to generate a corresponding action evaluation result.
2. The human body action evaluation method based on action state information according to claim 1, characterized in that: in step S3, when selecting the optimal video frames, the angle information between key points is first calculated from the human-body keypoint coordinate graph; recommended key points are then determined from this angle information; finally, the optimal video frames are selected based on the recommended key points and the action evaluation criteria.
3. The human body action evaluation method based on action state information according to claim 2, characterized in that: the angle information between the key points is a counterclockwise rotation angle;
the counterclockwise rotation angle is calculated by the following steps:
S301: acquiring the keypoint coordinates A, B, C used to calculate the counterclockwise rotation angle;
S302: computing the keypoint vectors $\overrightarrow{BA}$ and $\overrightarrow{BC}$ from the coordinates A, B, C;
S303: rotating the keypoint vector $\overrightarrow{BA}$ counterclockwise until it coincides with the keypoint vector $\overrightarrow{BC}$; the angle through which $\overrightarrow{BA}$ rotates to reach $\overrightarrow{BC}$ is taken as the corresponding counterclockwise rotation angle.
4. The human body action evaluation method based on action state information according to claim 3, characterized in that the recommended key points are determined by the following steps:
S311: calculating the variance of each counterclockwise rotation angle in the human-body keypoint coordinate graphs;
S312: calculating the corresponding motion information proportion from the variance of each counterclockwise rotation angle;
S313: selecting the key points corresponding to the counterclockwise rotation angle with the largest motion information proportion as the recommended key points.
5. The human body action evaluation method based on action state information according to claim 4, characterized in that:
the variance of a counterclockwise rotation angle is calculated by the following formula:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(R_i - u_r\right)^2$$

the motion information proportion of a counterclockwise rotation angle is calculated by the following formula:

$$I_n = \frac{e^{\sigma_n^2}}{\sum_{j=1}^{n} e^{\sigma_j^2}}$$

In the above formulas: $\sigma^2$ denotes the variance of the counterclockwise rotation angle; $R_i$ denotes an observed value of the counterclockwise rotation angle; $u_r$ denotes the mean of the counterclockwise rotation angle over the human-body keypoint coordinate graphs; $n$ denotes the number of counterclockwise rotation angles in the keypoint coordinate graphs; $I_n$ denotes the motion information proportion of the $n$th counterclockwise rotation angle; $\sigma_n^2$ denotes the variance of the $n$th counterclockwise rotation angle; $e$ denotes the natural constant.
6. The human body action evaluation method based on action state information according to claim 1, characterized in that: in step S4, the motion state information of the video to be tested includes the limb movement speed, the limb movement stability, and the limb movement trend.
7. The human body action evaluation method based on action state information according to claim 6, characterized in that the limb movement speed is calculated by the following formula:

$$V(N_1, N_2) = \frac{(R_2 - R_1)\cdot fps}{N_2 - N_1}$$

In the above formula: $V(N_1, N_2)$ denotes the movement speed of the corresponding limb between optimal video frames $N_1$ and $N_2$; $R_1$ and $R_2$ denote the counterclockwise rotation angles between the key points of the corresponding limb in optimal video frames $N_1$ and $N_2$ respectively; $fps$ denotes the frame rate of the video to be tested.
8. The human body action evaluation method based on action state information according to claim 6, characterized in that the limb movement stability is judged by the following formula:

$$S^2 = e^{-\frac{1}{n}\sum_{i=1}^{n}\left(R_i - \bar{R}\right)^2}$$

In the above formula: $S^2$ denotes the stability characteristic value of the corresponding limb; when judging the limb movement stability, $S^2$ is compared with a preset stability threshold $Y$: if $S^2 \geq Y$, the limb movement is stable, otherwise it is unstable; $R = \{R_1, R_2, \ldots, R_n\}$ denotes the counterclockwise rotation angles between the key points of the corresponding limb across the optimal video frames, where $n$ denotes the number of optimal video frames; $\bar{R}$ denotes the mean of the counterclockwise rotation angles in $R$.
9. The human body action evaluation method based on action state information according to claim 6, characterized in that the limb movement trend is judged by the following formula:

$$\theta = R_2 - R_1$$

In the above formula: $\theta$ denotes the difference between the counterclockwise rotation angles $R_1$ and $R_2$ of the limb in optimal video frames $N_1$ and $N_2$, and is used to judge the limb movement trend: if $\theta > 0$, the limb rotates counterclockwise; if $\theta < 0$, the limb rotates clockwise.
10. The human body action evaluation method based on action state information according to claim 6, characterized in that: in step S5, the motion standard information includes a standard movement-speed range and a standard movement trend;
when the limb movement speed falls within the standard range, the limb movement trend matches the standard movement trend, and the limb movement is stable, the action evaluation passes; otherwise, it fails.
CN202111421516.XA, priority and filing date 2021-11-26: Human body action evaluation method based on action state information (Pending, published as CN114093032A)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111421516.XA 2021-11-26 2021-11-26 Human body action evaluation method based on action state information

Publications (1)

Publication Number Publication Date
CN114093032A 2022-02-25

Family

ID=80304994

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373531A (en) * 2022-02-28 2022-04-19 深圳市旗扬特种装备技术工程有限公司 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
CN116630551A (en) * 2023-07-25 2023-08-22 凝动万生医疗科技(武汉)有限公司 Motion capturing and evaluating device and method thereof
CN116630551B (en) * 2023-07-25 2023-11-07 凝动万生医疗科技(武汉)有限公司 Motion capturing and evaluating device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination