CN111144217B - Motion evaluation method based on human body three-dimensional joint point detection - Google Patents
Motion evaluation method based on human body three-dimensional joint point detection
- Publication number
- CN111144217B (application number CN201911193095.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- motion
- similarity
- vector
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention relates to a motion evaluation method based on human body three-dimensional joint point detection, belonging to the field of computer vision and comprising the following steps: S1: detecting the three-dimensional joint points of the human body in each single-frame picture after video framing; S2: extracting a specified number of key frames from the video; S3: constructing motion vector features and joint kinetic energy features and extracting their values; S4: constructing a key frame action similarity comparison model by multi-feature fusion: the sub-features of step S3 are combined, and individual models are constructed for different types of actions; a motion vector feature similarity function is constructed based on cosine similarity, and a joint kinetic energy similarity function is constructed based on a weighting function; a key frame action similarity comparison model is obtained from the two similarity functions, the action to be detected is compared against the key frame set of the standard action, and the action similarity of the motion video is finally obtained. The method is more accurate and scientific and can be used for correcting and teaching physical fitness actions.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a motion evaluation method based on human body three-dimensional joint point detection.
Background
With advances in artificial intelligence algorithms and computer image processing performance, pose evaluation and behavior understanding of targets in videos have become hot problems in the field of computer vision, and have been applied in many fields such as sports training assistance, behavior abnormality detection, and gesture and gait recognition.
Human posture assessment can be widely applied to various sports: in sports and fitness teaching, human action recognition is used to capture and analyze actions, producing an individualized technical diagnosis report that serves as an auxiliary training tool for athletes and coaches and improves the level of the sporter.
The standardness of actions in sports training determines the quality of the training effect. At present, sports and fitness training usually relies on the observation and experience of coaches to provide technical guidance for athletes, and athletes lack intuitive feedback, which reduces training efficiency.
At present, intelligent sports analysis products mostly adopt a sensor-plus-app model, such as the Zepp tennis intelligent tracking and analysis sensor, which is installed at the bottom of the racket, records hitting speed, position, etc., and feeds the data back in real time through an app. Products of the same type also include the Zepp golf swing analyzer and the Coollang badminton sensor, among others. Besides sensors, there are also motion analysis products that use high-speed cameras, such as the SAP smart basketball coach, which analyzes the posture of the shooter (take-off height, angle, etc.) with the help of high-speed cameras and a powerful computing platform. Such equipment is expensive, complex to operate, and not suitable for ordinary sporters. With the improvement of computer image and video processing capability and the rapid development of deep learning algorithms, analyzing human motion postures through video has become feasible.
However, analyzing and evaluating the actions of a target person in sports videos still lacks an effective solution. The first difficulty is that two-dimensional human posture assessment is easily affected by occlusion, and its accuracy on unconventional actions such as crossed joints is low. The second difficulty is that individuals differ in build (fat or thin, tall or short), so evaluating actions by directly computing the Euclidean distance between joint points has low accuracy. The third difficulty is that different people perform the same action faster or slower, so the videos cannot be compared frame by frame.
In summary, at present, there is no mature algorithm and product for judging the action standard of the motion video.
Disclosure of Invention
In view of the above, the present invention is directed to a motion evaluation method based on human body three-dimensional joint point detection, which solves the problem of motion standard evaluation.
In order to achieve the purpose, the invention provides the following technical scheme:
a motion evaluation method based on human body three-dimensional joint point detection comprises the following steps:
s1: detecting three-dimensional joint points of the human body: detecting three-dimensional joint points of a human body on a single-frame picture after video framing;
s2: extracting key frames: extracting key frames with specified frame numbers of the video to realize time alignment of the video to be detected and the standard video;
s3: constructing and extracting features based on joint points: constructing two types of sub-features and extracting feature values, comprising the following steps:
constructing motion vector characteristics: considering the action postures of the human body including head movement, limb movement and chest and waist movement, limbs capable of expressing movement information are selected to form movement vector characteristics;
constructing joint kinetic energy characteristics: calculating the kinetic energy of each joint point in each frame according to the change amplitude of the coordinates in two adjacent frames of the video;
s4: constructing a key frame action similarity contrast model by multi-feature fusion: combining the sub-features in the step S3, and constructing personalized models aiming at different types of actions;
constructing a motion vector feature similarity function based on the cosine similarity, and constructing a joint kinetic energy similarity function based on the weighting function;
and obtaining a key frame action similarity comparison model based on the motion vector feature similarity function and the joint kinetic energy similarity function, and comparing the action to be detected with a key frame set of standard actions to finally obtain the action similarity of the motion video.
Further, in step S1, the three-dimensional joint point coordinates of the human body in the video are obtained using a three-dimensional joint point detection network based on a deep learning algorithm.
Further, in step S1, inputting a video, implementing 3D human body posture estimation from 2D joint point trajectories by using a time-space domain convolution algorithm, and outputting three-dimensional joint point coordinate information;
Among the three-dimensional joint point coordinate information, Loc_{i,t} = (x, y, z) represents the coordinate position of the human skeletal joint point numbered i in frame t; the 17 skeletal joint points are the top of the head, the nose, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the waist, the mid-hip, the left hip, the right hip, the left knee, the right knee, the left ankle and the right ankle.
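As an illustrative sketch only, this 17-joint numbering can be held in a small lookup table. The index order below is an assumption for the sketch; the patent names the 17 joints but does not publish their numeric order, which depends on the detection network used.

```python
# Hypothetical index assignment for the 17 skeletal joints named in the text.
# The actual numbering produced by a given 3D joint detection network may differ.
JOINTS = [
    "head_top", "nose", "neck",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",
    "waist", "mid_hip",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]
# Map joint name -> index i used in Loc_{i,t}
JOINT_INDEX = {name: i for i, name in enumerate(JOINTS)}
```

With such a table, a frame of coordinates can be stored as a (17, 3) array indexed by `JOINT_INDEX`.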
Further, in step S2, the key frames are extracted based on a clustering algorithm: the three-dimensional joint point coordinates are clustered, k clustering centers are selected, the distance between the joint coordinates of each frame and those of each clustering center is calculated, and the frame closest to each clustering center is selected as a key frame, giving k key frames in total; the key frames are sorted by time index to obtain the key frame set of the video.
Further, in the above step S2, the Euclidean distance O_{k,t} between the joint coordinates Loc_{i,t} of the t-th frame and the joint coordinates Loc_{i,k} of the frame where the k-th clustering center is located is calculated as

O_{k,t} = √( Σ_{i=1}^{n} ‖Loc_{i,t} − Loc_{i,k}‖² )

where n represents the number of joint points. For a standard action sequence X = {X_1, X_2, X_3, ..., X_N} and an action sequence to be detected Y = {Y_1, Y_2, Y_3, ..., Y_M}, where N and M are the action sequence lengths, the frame closest to each clustering center is selected as a key frame, and the key frames are sequenced by time index to obtain the key frame set {f_1, f_2, ..., f_k}, f_i ∈ X, and the key frame set {f_1', f_2', ..., f_k'}, f_i' ∈ Y.
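The clustering-based key frame extraction described above can be sketched as follows. This is a minimal illustrative implementation, not the patent's code: it assumes the video's poses arrive as a (T, 17, 3) NumPy array, flattens each frame, runs a plain Lloyd's k-means, and returns the time-sorted indices of the frames nearest each cluster center.

```python
import numpy as np

def extract_key_frames(poses, k, iters=50, seed=0):
    """Pick key frames by k-means over flattened joint coordinates.

    poses: (T, J, 3) array of 3D joint coordinates per frame.
    Returns time-sorted frame indices of the key frames (at most k;
    duplicates are merged if two centers share a nearest frame).
    """
    T = poses.shape[0]
    X = poses.reshape(T, -1)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(T, size=k, replace=False)]
    for _ in range(iters):
        # O_{k,t}: Euclidean distance between every frame and every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            X[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
            for c in range(k)
        ])
        if np.allclose(new_centers, centers):  # centers stopped moving
            break
        centers = new_centers
    # the frame nearest each final center becomes a key frame
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return sorted(set(d.argmin(axis=0).tolist()))
```

Running this on both the standard video and the video to be detected with the same k yields the two key frame sets used for time alignment.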
Further, in step S3, constructing a motion vector feature, and selecting a limb component capable of representing motion information, where the limb component includes 9 vectors, i.e., a neck vector, a left upper arm vector, a left lower arm vector, a right upper arm vector, a right lower arm vector, a left upper leg vector, a left lower leg vector, a right upper leg vector, and a right lower leg vector; the motion plane normal vector characteristics consist of the vector products, and comprise 6 vector characteristics of a left arm normal vector, a right arm normal vector, a left leg normal vector, a right leg normal vector, a chest normal vector and a hip normal vector; a total of 15 motion vector features.
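The 15-dimensional motion vector feature could be assembled as below. This is a sketch under stated assumptions: the joint indices and the exact endpoints of each limb vector (e.g. the neck vector running from mid-hip to neck) are choices made for illustration, since the patent names the vectors but not their endpoints; the plane normals are taken as vector (cross) products of adjacent limb or torso vectors, as the text describes.

```python
import numpy as np

# Hypothetical joint indices for the sketch; the patent fixes no numbering.
NECK, L_SHO, R_SHO = 2, 3, 4
L_ELB, R_ELB, L_WRI, R_WRI = 5, 6, 7, 8
MID_HIP, L_HIP, R_HIP = 10, 11, 12
L_KNE, R_KNE, L_ANK, R_ANK = 13, 14, 15, 16

def motion_vectors(pose):
    """Build the 9 limb vectors and 6 plane normal vectors for one frame.

    pose: (17, 3) array of joint coordinates. Returns a (15, 3) array.
    """
    limb = {
        "neck":        pose[NECK] - pose[MID_HIP],   # assumed endpoints
        "l_upper_arm": pose[L_ELB] - pose[L_SHO],
        "l_lower_arm": pose[L_WRI] - pose[L_ELB],
        "r_upper_arm": pose[R_ELB] - pose[R_SHO],
        "r_lower_arm": pose[R_WRI] - pose[R_ELB],
        "l_thigh":     pose[L_KNE] - pose[L_HIP],
        "l_shank":     pose[L_ANK] - pose[L_KNE],
        "r_thigh":     pose[R_KNE] - pose[R_HIP],
        "r_shank":     pose[R_ANK] - pose[R_KNE],
    }
    # Motion plane normals as cross products of the vectors spanning each plane
    normal = {
        "l_arm": np.cross(limb["l_upper_arm"], limb["l_lower_arm"]),
        "r_arm": np.cross(limb["r_upper_arm"], limb["r_lower_arm"]),
        "l_leg": np.cross(limb["l_thigh"], limb["l_shank"]),
        "r_leg": np.cross(limb["r_thigh"], limb["r_shank"]),
        "chest": np.cross(pose[L_SHO] - pose[NECK], pose[R_SHO] - pose[NECK]),
        "hip":   np.cross(pose[L_HIP] - pose[MID_HIP], pose[R_HIP] - pose[MID_HIP]),
    }
    return np.stack(list(limb.values()) + list(normal.values()))
```

Because the features are direction vectors rather than raw coordinates, the later cosine comparison is insensitive to the performer's build, which addresses the body-difference problem raised in the background section.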
Further, in step S3, the joint kinetic energy characteristic is calculated as

E_{i,t} = c · ‖Loc_{i,t} − Loc_{i,t−1}‖² / Δt²

which represents the kinetic energy characteristic value of the i-th joint point in frame t, where c is a kinetic energy parameter, Δt is the frame difference, characterizing the change of kinetic energy between two frames, and Loc_{i,t} is the coordinate position of the human skeletal joint point numbered i in frame t.
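The kinetic energy feature can be computed for a whole clip in one vectorized step. A minimal sketch, assuming poses are a (T, J, 3) array and using the squared displacement between adjacent frames divided by Δt², consistent with the formula above:

```python
import numpy as np

def joint_kinetic_energy(poses, c=1.0, dt=1.0):
    """Per-joint kinetic energy feature E_{i,t} = c * ||Loc_{i,t} - Loc_{i,t-1}||^2 / dt^2.

    poses: (T, J, 3) joint coordinates; returns a (T-1, J) array,
    one kinetic energy value per joint for every adjacent-frame pair.
    """
    disp = np.diff(poses, axis=0)  # coordinate change between adjacent frames
    return c * (np.linalg.norm(disp, axis=2) / dt) ** 2
```

A stationary joint yields zero energy, while fast, large-amplitude movements yield large values, which is what lets this feature capture the amplitude and frequency of motion changes.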
Further, in step S4, the features extracted in step S3 are fused to construct the key frame action similarity comparison model. The motion vector feature similarity function is

SV_t = (1/n) Σ_{i=1}^{n} [1 − (V_{i,t} · V'_{i,t}) / (‖V_{i,t}‖ ‖V'_{i,t}‖)]

which represents the motion vector feature similarity between the video to be detected and the standard video in frame t, where V_{i,t} is the i-th motion vector characteristic value of the video to be detected, V'_{i,t} is the i-th motion vector characteristic value of the standard video, and n is the number of motion vector features;
The joint kinetic energy similarity function is

SE_t = Σ_{j=1}^{m} w_j · |E_{j,t} − E'_{j,t}|

which represents the joint kinetic energy feature similarity between the video to be detected and the standard video in frame t, where E_{j,t} is the j-th joint kinetic energy characteristic value of the video to be detected, E'_{j,t} is the j-th joint kinetic energy characteristic value of the standard video, m is the number of joint kinetic energy features, and different weights w are assigned according to the action type;
A key frame action similarity comparison model is constructed based on the two similarity functions:

d(f_t, f_t') = SV_t + SE_t

which represents the pose similarity between the key frame f_t of the video to be detected and the key frame f_t' of the standard video;
Finally, an action sequence similarity evaluation function is obtained based on the key frame similarities:

D(X, Y) = (1/k) Σ_{t=1}^{k} d(f_t, f_t')

where D(X, Y) is the action difference distance between the video X to be detected and the standard video Y (the smaller the value, the more similar the actions) and k is the number of key frames.
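The whole fusion step can be sketched end to end. Note the hedge: the patent does not publish closed forms for SV_t and SE_t, so this sketch assumes mean (1 − cosine) for the vector term (so that smaller always means more similar, matching D above) and a weighted absolute difference for the kinetic energy term.

```python
import numpy as np

def sv_distance(V, V_std, eps=1e-8):
    """Motion vector term: mean (1 - cosine similarity) over the n vectors.
    V, V_std: (n, 3) motion vector features of one key frame pair."""
    cos = np.sum(V * V_std, axis=1) / (
        np.linalg.norm(V, axis=1) * np.linalg.norm(V_std, axis=1) + eps)
    return float(np.mean(1.0 - cos))

def se_distance(E, E_std, w):
    """Kinetic energy term: weighted absolute differences (assumed form).
    E, E_std: (m,) kinetic energy values; w: (m,) per-joint weights in [0, 1]."""
    return float(np.sum(w * np.abs(E - E_std)))

def sequence_distance(frames, frames_std, w):
    """D(X, Y) = (1/k) * sum_t d(f_t, f_t') with d = SV_t + SE_t.
    frames, frames_std: lists of (V, E) feature pairs, one per key frame."""
    k = len(frames)
    total = sum(sv_distance(V, Vs) + se_distance(E, Es, w)
                for (V, E), (Vs, Es) in zip(frames, frames_std))
    return total / k
```

Comparing a video against itself returns a distance near zero, and the distance grows as poses or movement energies diverge, matching the "smaller is more similar" convention.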
The invention has the following beneficial effects: compared with existing motion evaluation methods based on two-dimensional joint points, the method fully exploits deep convolutional neural networks and a temporal-spatial convolution algorithm to convert two-dimensional coordinate trajectories into three-dimensional coordinates, so the motion of the human body can be evaluated more accurately; the invention fuses multiple features to build the evaluation model, with the motion vector features representing the relative position and angle information of the joints and the joint kinetic energy features representing the amplitude and frequency of motion changes, so a more comprehensive evaluation of action standardness can be given; and the invention performs time alignment with a clustering-based key frame extraction method, solving the problem of unequal action durations.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a motion evaluation method based on human body three-dimensional joint point detection according to the present invention;
FIG. 2 is a schematic diagram of the detection of three-dimensional joint points of a human body according to the present invention;
FIG. 3 is a schematic diagram of 17 human skeletal joints according to the present invention;
FIG. 4 is a schematic diagram of key frame extraction provided by the present invention;
FIG. 5 is a diagram illustrating the motion vector characteristics provided by the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; for a better explanation of the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
As shown in fig. 1, the present invention provides a motion evaluation method based on human body joint point detection, comprising the following steps:
step S1: detecting human body joint points: and carrying out joint point detection on the single-frame picture after the video is framed.
Step S11: the two-dimensional position coordinates of the human skeleton are directly obtained from the images by means of a human joint point detection network, and then the two-dimensional coordinates are converted into three-dimensional coordinates by utilizing a time-space domain convolution algorithm according to the coordinate information of the images of adjacent frames.
The detected three-dimensional coordinates Loc_{i,t} = (x, y, z) represent the coordinate position of the human skeletal joint point numbered i in frame t; the 17 skeletal joint points are the top of the head, the nose, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the waist, the mid-hip, the left hip, the right hip, the left knee, the right knee, the left ankle and the right ankle.
Fig. 2 shows an example of the detection of three-dimensional joint points of a human body.
Step S12: storing the coordinate information of the articulated point, Loci,tAnd (x, y, z) represents the coordinate position of the human bone joint point numbered i in the t frame.
As shown in fig. 3, 17 human skeletal joint points are shown.
Step S2: extracting video key frames based on a joint point coordinate clustering algorithm: and extracting a specified number of key frames by using a k-means clustering algorithm.
Step S21: from the frame set X_N, select k frames as the initial clustering centers, expressed as C = {c_1, c_2, ..., c_k}.
Step S22: calculate the Euclidean distance between the joint coordinates of each frame and the joint coordinates of the frame at each cluster center,

O_{k,t} = √( Σ_{i=1}^{n} ‖Loc_{i,t} − Loc_{i,k}‖² )

and assign each frame to the cluster whose center is nearest. Here Loc_{i,k} denotes the joint coordinates of the frame where the k-th cluster center is located, n is the number of joint points, and k is the number of cluster centers.
Step S23: for each cluster c_i, recalculate its cluster center as the mean of the frames assigned to it,

c_i = (1/|c_i|) Σ_{X_t ∈ c_i} X_t

and repeat this process until the cluster center positions no longer change.
Step S24: select the frame nearest to each clustering center as a key frame and sequence the key frames by time index. For the standard action sequence X = {X_1, X_2, X_3, ..., X_N} and the action sequence to be detected Y = {Y_1, Y_2, Y_3, ..., Y_M}, where N and M are the action sequence lengths, this yields the key frame set {f_1, f_2, ..., f_k}, f_i ∈ X, and the key frame set {f_1', f_2', ..., f_k'}, f_i' ∈ Y.
Referring to fig. 4, a diagram illustrating key frame extraction is shown.
Step S3: constructing and extracting features based on joint points: and constructing subclass characteristics, and extracting the characteristics of the to-be-detected video and the standard video.
Step S31: extracting the motion vector features: considering that the action postures of the human body include head movement, limb movement and chest-and-waist movement, the 15 motion vector features described above (9 limb vectors and 6 motion plane normal vectors) are selected.
Referring to fig. 5, the figure shows the plane and vector features in three-dimensional space.
Step S32: extracting the joint kinetic energy features:

E_{i,t} = c · ‖Loc_{i,t} − Loc_{i,t−1}‖² / Δt²

where E_{i,t} is the kinetic energy of the i-th joint point in frame t; Loc_{i,t} is the three-dimensional coordinate of the i-th joint point; c is a kinetic energy parameter whose value differs according to the motion; and Δt is the time interval between two adjacent frames.
Step S4: and (5) fusing the sub-features to construct a key frame action similarity contrast model.
Step S41: constructing the motion vector feature similarity function based on cosine similarity:

SV_t = (1/n) Σ_{i=1}^{n} [1 − (V_{i,t} · V'_{i,t}) / (‖V_{i,t}‖ ‖V'_{i,t}‖)]

where V_{i,t} and V'_{i,t} respectively denote the motion vector characteristic values in frame t of the video to be detected and of the standard video, and n = 15 is the total number of feature values.
Step S42: constructing the joint kinetic energy feature similarity function based on a weighting function:

SE_t = Σ_{j=1}^{m} w_j · |E_{j,t} − E'_{j,t}|

where E_{j,t} and E'_{j,t} respectively denote the joint kinetic energy characteristic values in frame t of the video to be detected and of the standard video, and m = 17 is the total number of feature values. Different weights w are assigned to the feature values according to the action type, each weight lying in the range [0, 1].
Step S43: fusing the two similarity functions to obtain the key frame similarity comparison function d(f_t, f_t') = SV_t + SE_t, and calculating the action distance d(f_t, f_t') for each key frame of the video to be detected.
Step S44: evaluating the action sequences based on the key frame similarity model; the similarity of the two action sequences is obtained by

D(X, Y) = (1/k) Σ_{t=1}^{k} d(f_t, f_t')

where the value D(X, Y) is the action difference distance of the action sequences X and Y, and a smaller distance indicates more similar actions.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (7)
1. A motion evaluation method based on human body three-dimensional joint point detection is characterized by comprising the following steps: the method comprises the following steps:
s1: detecting three-dimensional joint points of the human body: detecting three-dimensional joint points of a human body on a single-frame picture after video framing;
s2: extracting key frames: extracting key frames with specified frame numbers of the video to realize time alignment of the video to be detected and the standard video;
s3: constructing and extracting features based on joint points: constructing two types of sub-features and extracting feature values, comprising the following steps:
constructing motion vector characteristics: considering that the action postures of the human body comprise head movement, limb movement and chest and waist movement, selecting limbs capable of expressing movement information to form a movement vector characteristic;
constructing joint kinetic energy characteristics: calculating the kinetic energy of each joint point in each frame according to the change amplitude of the coordinates in two adjacent frames of the video;
s4: constructing a key frame action similarity contrast model by multi-feature fusion: combining the sub-features in the step S3, and constructing personalized models aiming at different types of actions; constructing a motion vector feature similarity function based on the cosine similarity, and constructing a joint kinetic energy similarity function based on the weighting function; obtaining a key frame action similarity comparison model based on the motion vector feature similarity function and the joint kinetic energy similarity function, and comparing the action to be detected with a key frame set of standard actions to finally obtain the action similarity of the motion video; the method specifically comprises the following steps:
s41: using the features extracted in step S3, constructing the key frame action similarity comparison model by fusing multiple features; the motion vector feature similarity function is

SV_t = (1/n) Σ_{i=1}^{n} [1 − (V_{i,t} · V'_{i,t}) / (‖V_{i,t}‖ ‖V'_{i,t}‖)]

which represents the motion vector feature similarity between the video to be detected and the standard video in frame t, where V_{i,t} is the i-th motion vector characteristic value of the video to be detected, V'_{i,t} is the i-th motion vector characteristic value of the standard video, and n is the number of motion vector features;
s42: the joint kinetic energy similarity function is

SE_t = Σ_{j=1}^{m} w_j · |E_{j,t} − E'_{j,t}|

which represents the joint kinetic energy feature similarity between the video to be detected and the standard video in frame t, where E_{j,t} is the j-th joint kinetic energy characteristic value of the video to be detected, E'_{j,t} is the j-th joint kinetic energy characteristic value of the standard video, m is the number of joint kinetic energy features, and different weights w are assigned according to the action type;
s43: constructing the key frame action similarity comparison model based on the two similarity functions:

d(f_t, f_t') = SV_t + SE_t

which represents the pose similarity between the key frame f_t of the video to be detected and the key frame f_t' of the standard video;
s44: obtaining an action sequence similarity evaluation function based on the key frame similarities:

D(X, Y) = (1/k) Σ_{t=1}^{k} d(f_t, f_t')

where D(X, Y) is the action difference distance between the video X to be detected and the standard video Y (the smaller the value, the more similar the actions) and k is the number of key frames.
2. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that: in step S1, the three-dimensional joint point coordinates of the human body in the video are obtained using a three-dimensional joint point detection network based on a deep learning algorithm.
3. The motion evaluation method based on human body three-dimensional joint point detection according to claim 2, characterized in that: in the step S1, inputting a video, realizing 3D human body posture estimation from a 2D joint point track by using a time-space domain convolution algorithm, and outputting three-dimensional joint point coordinate information;
Among the three-dimensional joint point coordinate information, Loc_{i,t} = (x, y, z) represents the coordinate position of the human skeletal joint point numbered i in frame t; the 17 skeletal joint points are the top of the head, the nose, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist, the right wrist, the waist, the mid-hip, the left hip, the right hip, the left knee, the right knee, the left ankle and the right ankle.
4. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that: in step S2, the key frames are extracted based on a clustering algorithm: the three-dimensional joint point coordinates are clustered, k clustering centers are selected, the distance between the joint coordinates of each frame and those of each clustering center is calculated, and the frame closest to each clustering center is selected as a key frame, giving k key frames in total; the key frames are sorted by time index to obtain the key frame set of the video.
5. The motion evaluation method based on human body three-dimensional joint point detection according to claim 4, characterized in that: in step S2, the Euclidean distance O_{k,t} between the joint coordinates Loc_{i,t} of the t-th frame and the joint coordinates Loc_{i,k} of the frame where the k-th clustering center is located is calculated as

O_{k,t} = √( Σ_{i=1}^{n} ‖Loc_{i,t} − Loc_{i,k}‖² )

where n represents the number of joint points; for a standard action sequence X = {X_1, X_2, X_3, ..., X_N} and an action sequence to be detected Y = {Y_1, Y_2, Y_3, ..., Y_M}, where N and M are the action sequence lengths, the frame closest to each clustering center is selected as a key frame, and the key frames are sequenced by time index to obtain the key frame set {f_1, f_2, ..., f_k}, f_i ∈ X, and the key frame set {f_1', f_2', ..., f_k'}, f_i' ∈ Y.
6. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that: in step S3, constructing a motion vector feature, and selecting a limb component capable of representing motion information, including 9 vectors, including a neck vector, a left upper arm vector, a left lower arm vector, a right upper arm vector, a right lower arm vector, a left thigh vector, a left lower leg vector, a right upper leg vector, and a right lower leg vector; the motion plane normal vector feature is composed of the vector product, and comprises 6 vector features of a left arm normal vector, a right arm normal vector, a left leg normal vector, a right leg normal vector, a chest normal vector and a hip normal vector; a total of 15 motion vector features.
7. The motion evaluation method based on human body three-dimensional joint point detection according to claim 6, wherein: in step S3, the joint kinetic energy feature is calculated as

E_{i,t} = c · (‖Loc_{i,t} − Loc_{i,t−Δt}‖ / Δt)²

which gives the kinetic energy feature value of the i-th joint point at frame t, where c is a kinetic energy parameter, Δt is the frame difference, and Loc_{i,t} is the coordinate position of the human skeleton joint point numbered i at frame t; the feature describes the change of kinetic energy between two frames.
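The kinetic energy feature of claim 7 can be computed vectorized over all joints and frames. This is a sketch under the reconstructed formula above (the published formula image is not reproduced in this text): squared joint displacement per frame difference Δt, scaled by the parameter c.

```python
import numpy as np

def kinetic_energy(frames, c=1.0, dt=1):
    """frames: (T, n, 3) joint coordinates.
    Returns E of shape (T - dt, n), where E[t, i] = c * (||Loc_{i,t+dt} - Loc_{i,t}|| / dt)**2,
    i.e. the kinetic energy feature of joint i over the frame difference dt."""
    disp = frames[dt:] - frames[:-dt]                 # displacement over dt frames
    return c * (np.linalg.norm(disp, axis=2) / dt) ** 2
```

A stationary joint yields zero energy, while a joint moving at constant speed v yields the constant value c·v² per frame, so the feature tracks how vigorously each joint moves between frames.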
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911193095.2A CN111144217B (en) | 2019-11-28 | 2019-11-28 | Motion evaluation method based on human body three-dimensional joint point detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911193095.2A CN111144217B (en) | 2019-11-28 | 2019-11-28 | Motion evaluation method based on human body three-dimensional joint point detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111144217A CN111144217A (en) | 2020-05-12 |
CN111144217B true CN111144217B (en) | 2022-07-01 |
Family
ID=70517314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911193095.2A Active CN111144217B (en) | 2019-11-28 | 2019-11-28 | Motion evaluation method based on human body three-dimensional joint point detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111144217B (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7367775B2 (en) | 2019-12-24 | 2023-10-24 | 日本電気株式会社 | Feature learning system, feature learning method and program |
CN111931804B (en) * | 2020-06-18 | 2023-06-27 | 南京信息工程大学 | Human body action automatic scoring method based on RGBD camera |
CN111563487B (en) * | 2020-07-14 | 2020-10-23 | 平安国际智慧城市科技股份有限公司 | Dance scoring method based on gesture recognition model and related equipment |
CN114093021A (en) * | 2020-08-06 | 2022-02-25 | 上海哔哩哔哩科技有限公司 | Dance video motion extraction method and device, computer equipment and storage medium |
CN112205979A (en) * | 2020-08-18 | 2021-01-12 | 同济大学 | Device and method for measuring mechanical energy of moving human body in real time |
CN111967407B (en) * | 2020-08-20 | 2023-10-20 | 咪咕互动娱乐有限公司 | Action evaluation method, electronic device, and computer-readable storage medium |
CN112085105A (en) * | 2020-09-10 | 2020-12-15 | 上海庞勃特科技有限公司 | Motion similarity evaluation method based on human body shape and posture estimation |
CN111985853A (en) * | 2020-09-10 | 2020-11-24 | 成都拟合未来科技有限公司 | Interactive practice ranking evaluation method, system, terminal and medium |
CN112582064A (en) * | 2020-11-05 | 2021-03-30 | 中国科学院深圳先进技术研究院 | Action evaluation method, device, equipment and storage medium |
CN112487965B (en) * | 2020-11-30 | 2023-01-31 | 重庆邮电大学 | Intelligent fitness action guiding method based on 3D reconstruction |
CN112528823B (en) * | 2020-12-04 | 2022-08-19 | 燕山大学 | Method and system for analyzing batcharybus movement behavior based on key frame detection and semantic component segmentation |
CN112464847B (en) * | 2020-12-07 | 2021-08-31 | 北京邮电大学 | Human body action segmentation method and device in video |
US11625938B2 (en) | 2020-12-29 | 2023-04-11 | Industrial Technology Research Institute | Method and device for detecting human skeletons |
CN112842261B (en) * | 2020-12-30 | 2021-12-28 | 西安交通大学 | Intelligent evaluation system for three-dimensional spontaneous movement of infant based on complex network |
CN112989121B (en) * | 2021-03-08 | 2023-07-28 | 武汉大学 | Time sequence action evaluation method based on key frame preference |
CN113052138B (en) * | 2021-04-25 | 2024-03-15 | 广海艺术科创(深圳)有限公司 | Intelligent contrast correction method for dance and movement actions |
CN113033501A (en) * | 2021-05-06 | 2021-06-25 | 泽恩科技有限公司 | Human body classification method and device based on joint quaternion |
CN113239797B (en) * | 2021-05-12 | 2022-02-25 | 中科视语(北京)科技有限公司 | Human body action recognition method, device and system |
CN113221815A (en) * | 2021-05-25 | 2021-08-06 | 北京无垠创新科技有限责任公司 | Gait identification method based on automatic detection technology of skeletal key points |
CN113401774A (en) * | 2021-05-26 | 2021-09-17 | 杭州法维莱科技有限公司 | Elevator 90-degree vertically hinged door system with anti-pinch function |
CN113392745A (en) * | 2021-06-04 | 2021-09-14 | 北京格灵深瞳信息技术股份有限公司 | Abnormal action correction method, abnormal action correction device, electronic equipment and computer storage medium |
CN113486771B (en) * | 2021-06-30 | 2023-07-07 | 福州大学 | Video action uniformity evaluation method and system based on key point detection |
CN113327267A (en) * | 2021-07-15 | 2021-08-31 | 东南大学 | Action evaluation method based on monocular RGB video |
CN113780206B (en) * | 2021-09-16 | 2022-11-04 | 福建平潭瑞谦智能科技有限公司 | Video image analysis processing method |
WO2023106201A1 (en) * | 2021-12-09 | 2023-06-15 | NEC Solution Innovators, Ltd. | Play analysis device, play analysis method, and computer-readable storage medium |
CN114534224A (en) * | 2022-01-13 | 2022-05-27 | 上海凯视力成科技有限公司 | Intelligent mirror for golf swing |
CN114373531B (en) * | 2022-02-28 | 2022-10-25 | 深圳市旗扬特种装备技术工程有限公司 | Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium |
CN116805433B (en) * | 2023-06-27 | 2024-02-13 | 北京奥康达体育科技有限公司 | Human motion trail data analysis system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105912985A (en) * | 2016-04-01 | 2016-08-31 | 上海理工大学 | Human skeleton joint point behavior motion expression method based on energy function |
WO2017115887A1 (en) * | 2015-12-29 | 2017-07-06 | Kyungil University Industry-Academic Cooperation Foundation | Device for providing motion recognition-based game, method for same, and computer-readable recording medium on which said method is recorded |
WO2018066359A1 (en) * | 2016-10-07 | 2018-04-12 | パイオニア株式会社 | Examination device, examination method, computer program, and recording medium |
CN110096950A (en) * | 2019-03-20 | 2019-08-06 | 西北大学 | A kind of multiple features fusion Activity recognition method based on key frame |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014062441A1 (en) * | 2012-10-16 | 2014-04-24 | University Of Florida Research Foundation, Inc. | Screening for neurological disease using speech articulation characteristics |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017115887A1 (en) * | 2015-12-29 | 2017-07-06 | Kyungil University Industry-Academic Cooperation Foundation | Device for providing motion recognition-based game, method for same, and computer-readable recording medium on which said method is recorded |
CN105912985A (en) * | 2016-04-01 | 2016-08-31 | 上海理工大学 | Human skeleton joint point behavior motion expression method based on energy function |
WO2018066359A1 (en) * | 2016-10-07 | 2018-04-12 | パイオニア株式会社 | Examination device, examination method, computer program, and recording medium |
CN110096950A (en) * | 2019-03-20 | 2019-08-06 | 西北大学 | A kind of multiple features fusion Activity recognition method based on key frame |
Non-Patent Citations (4)
Title |
---|
Identity Based Approach Under a Unified Service Model for Secure Content Distribution in ICN; Jiangtao Luo et al.; 2018 1st IEEE International Conference on Hot Information-Centric Networking (HotICN); 2018-08-17; full text *
Depth-image-based target segmentation method for three-dimensional lidar point clouds; Xu Guoliang et al.; Chinese Journal of Lasers; 2019-07-31; full text *
Retrieval of human motion capture databases based on a motion energy model; Feng Lin et al.; Journal of Computer-Aided Design & Computer Graphics; 2007-08-15 (No. 08); full text *
Key frame extraction from motion video combining pose estimation and tracking; Shi Nianfeng et al.; Video Engineering; 2017-05-17; full text *
Also Published As
Publication number | Publication date |
---|---|
CN111144217A (en) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111144217B (en) | Motion evaluation method based on human body three-dimensional joint point detection | |
Kamal et al. | A hybrid feature extraction approach for human detection, tracking and activity recognition using depth sensors | |
CN107423730B (en) | Human gait behavior active detection and recognition system and method based on semantic folding | |
CN109344694B (en) | Human body basic action real-time identification method based on three-dimensional human body skeleton | |
CN105512621A (en) | Kinect-based badminton motion guidance system | |
Yang et al. | Human upper limb motion analysis for post-stroke impairment assessment using video analytics | |
Elaoud et al. | Skeleton-based comparison of throwing motion for handball players | |
CN110991268B (en) | Depth image-based Parkinson hand motion quantization analysis method and system | |
CN112668531A (en) | Motion posture correction method based on motion recognition | |
CN113255522B (en) | Personalized motion attitude estimation and analysis method and system based on time consistency | |
CN106846372B (en) | Human motion quality visual analysis and evaluation system and method thereof | |
CN106815855A (en) | Based on the human body motion tracking method that production and discriminate combine | |
Ko et al. | CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images | |
CN109993116A (en) | A kind of pedestrian mutually learnt based on skeleton recognition methods again | |
Almasi et al. | Investigating the Application of Human Motion Recognition for Athletics Talent Identification using the Head-Mounted Camera | |
CN111539364A (en) | Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting | |
CN114191797B (en) | Free skiing intelligent training system | |
CN114360052A (en) | Intelligent somatosensory coach system based on AlphaPose and joint point angle matching algorithm | |
Zhao et al. | Recognition of Volleyball Player's Arm Motion Trajectory and Muscle Injury Mechanism Analysis Based upon Neural Network Model | |
CN112364815A (en) | High jump posture detection method for high jump athletes based on three-dimensional model | |
Murthy et al. | DiveNet: Dive Action Localization and Physical Pose Parameter Extraction for High Performance Training | |
Maik et al. | Hierarchical pose classification based on human physiology for behaviour analysis | |
CN117671738B (en) | Human body posture recognition system based on artificial intelligence | |
Yadav et al. | An Efficient Deep Convolutional Neural Network Model For Yoga Pose Recognition Using Single Images | |
Drory | Computer Vision and Machine Learning for Biomechanics Applications: Human Detection, Pose and Shape Estimation and Tracking in Unconstrained Environment from Uncalibrated Images, Videos and Depth |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||