CN113537128A - Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment - Google Patents

Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment

Info

Publication number
CN113537128A
Authority
CN
China
Prior art keywords
video
area
comparing
key points
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110862230.9A
Other languages
Chinese (zh)
Inventor
刘师岐
马佳鑫
郭涛涛
戴旭强
赵懿博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Zhongjin Yuneng Education Technology Co ltd
Original Assignee
Guangzhou Zhongjin Yuneng Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Zhongjin Yuneng Education Technology Co ltd filed Critical Guangzhou Zhongjin Yuneng Education Technology Co ltd
Priority to CN202110862230.9A
Publication of CN113537128A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment, aiming at ensuring the accuracy of the comparative analysis. The method comprises the following steps: step 1: acquiring a video A serving as the evaluation standard and a video B to be evaluated; step 2: acquiring key points in each frame of video A and video B, and planning preset graphs containing the key points; step 3: between the start frame and the end frame of video A and video B, the movement track of the preset graph forms a first area and a second area; step 4: comparing the first area with the second area to obtain a matching degree. The invention belongs to the technical field of motion posture assessment.

Description

Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment
Technical Field
The invention belongs to the technical field of motion posture assessment, and particularly relates to a method, a system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment.
Background
At present, human body posture recognition is applied very widely, in fields such as human-computer interaction, film and television production, motion analysis, game entertainment and intelligent monitoring; existing approaches fall mainly into recognition based on computer vision and recognition based on motion capture technology.
Human body posture recognition based on computer vision can easily acquire information such as the trajectory and outline of human motion, but it cannot express the details of the movement and is prone to recognition errors caused by occlusion; human body posture recognition based on motion capture technology locates the joint points of the human body and stores their motion data to recognize the human motion trajectory.
Compared with recognition from the computer vision perspective, human body posture recognition based on motion capture technology reflects human posture information better, processes and records motion details more reliably, and its recognition of the motion trajectory is not affected by object color.
For example, the patent with publication No. CN108256433B discloses a method and system for evaluating a motion posture: motion posture information is compared with a posture feature library to obtain a first comparison result, key point information is compared with a key point feature library on the basis of the first comparison result to obtain a second comparison result, and feedback information on the key point information is output according to the second comparison result, thereby evaluating the motion posture.
Although that scheme evaluates the motion posture, it cannot perform refined measurement and calculation on the extracted frames, so the accuracy of the comparative analysis cannot be guaranteed.
Disclosure of Invention
The invention mainly aims to provide a method for comparing and analyzing continuous actions based on deep learning posture assessment, with the aim of ensuring the accuracy of the comparative analysis; the invention also provides a system and equipment based on the method.
According to a first aspect of the present invention, there is provided a method for comparing and analyzing continuous actions based on deep learning pose estimation, comprising the steps of:
step 1: acquiring a video A serving as an evaluation standard and a video B to be evaluated;
step 2: acquiring key points in each frame of the video A and the video B, and planning a preset graph containing the key points;
Step 3: between the start frame and the end frame of video A and video B, the movement track of the preset graph forms a first area and a second area;
Step 4: comparing the first area with the second area to obtain the matching degree.
In a specific embodiment of the present invention, there are one or more key points, and correspondingly one or more first areas and second areas;
whether the matching degree is within a preset threshold range is judged: if so, the key points in video A and video B match; if not, they do not match.
In a specific embodiment of the invention, in step 2 the key points associated with the action to be evaluated are selected as the acquired key points.
In a specific embodiment of the present invention, step 4 specifically comprises: comparing the first area with the second area, and obtaining the matching degree according to the ratio of the overlapping areas of the first area and the second area.
In a specific embodiment of the present invention, the user selects a video segment from an original video as video B;
video B contains n frames, where n ≥ 1.
In a specific embodiment of the present invention, the key points are human body joint points, and the preset graph is a square, a sector or a circle.
The invention also provides a system for comparing and analyzing continuous actions based on deep learning posture evaluation, which comprises the following modules:
a video acquisition module: used for acquiring a video A serving as the evaluation standard and a video B to be evaluated;
a posture evaluation module: used for acquiring the key points in each frame of video A and video B;
a first processing module: used for planning the preset graphs, each preset graph containing one key point;
a second processing module: used for forming a first area and a second area from the movement tracks of the preset graphs between the start frame and the end frame of video A and video B;
an analysis module: used for comparing the first area with the second area to obtain the matching degree.
In a specific embodiment of the present invention, the system further comprises a human-computer interaction module, used for displaying the matching degree and the threshold; the threshold can be modified through the human-computer interaction module; the human-computer interaction module can also interact with the video acquisition module and is used for processing and matching the videos acquired by the video acquisition module.
The invention also provides equipment for comparing and analyzing continuous actions based on deep learning posture assessment, comprising a memory and a processor, the memory being in communication connection with the processor; the memory stores program instructions executable by the processor, the processor runs the comparison and analysis system described above, and the program instructions are invoked to perform the comparison and analysis method described above.
One of the above technical solutions of the present invention has at least one of the following advantages or beneficial effects:
In the invention, a preset graph is planned at each key point of each frame, and from the start frame to the end frame the movement track of the preset graph forms an area: the movement track of the preset graph of a key point in video A forms a first area, and the movement track of the preset graph of the corresponding key point in video B forms a second area. By comparing the first area and the second area of corresponding key points, the matching degree between the track of a key point in video B and the track of the corresponding key point in video A is obtained; calculating the matching degree of each key point by comparing the first area and the second area ensures the accuracy of the comparative analysis.
Drawings
The invention is further described below with reference to the accompanying drawings and examples;
FIG. 1 is a flow chart of embodiment 1 of the present invention;
FIG. 2 is a structural diagram of embodiment 2 of the present invention;
FIG. 3 is a structural diagram of embodiment 3 of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including the stated number.
Furthermore, the terms "first" and "second" are used for descriptive purposes only, to distinguish technical features, and are not to be construed as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly: it may be a fixed or movable connection, a detachable or non-detachable connection, or an integral connection; it may be a mechanical connection, an electrical connection or a communication connection; it may be a direct connection, an indirect connection through an intermediate medium, an internal communication between two elements, or an interactive relationship between two elements.
The following disclosure provides many different embodiments, or examples, for implementing different aspects of the invention.
Example 1
Referring to fig. 1, a method for comparing and analyzing continuous actions based on deep learning pose estimation includes the following steps:
step 1: acquiring a video A serving as an evaluation standard and a video B to be evaluated;
In practical application, video A is a video of the standard action, and the action in video B is evaluated with video A as the standard; video A may be pre-stored or may be captured together with video B.
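As a non-limiting sketch of step 1, the snippet below reads both videos into frame lists with OpenCV; the file names are placeholders introduced here purely for illustration, and any other acquisition path (camera capture, a pre-stored clip) would serve equally well.

```python
import cv2

def read_frames(path):
    """Read all frames of a video file into a list of BGR images."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

# Hypothetical file names: video A is the evaluation standard, video B is to be evaluated.
frames_a = read_frames("video_a_standard.mp4")
frames_b = read_frames("video_b_to_evaluate.mp4")
```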
Step 2: acquiring key points in each frame of the video A and the video B, and planning a preset graph containing the key points;
the key points, commonly known as human body postures, generally correspond to joints with certain degrees of freedom on a human body, such as neck, shoulder, elbow, wrist, waist, knee, ankle and the like, and all key points of a single frame form a skeleton model;
in this embodiment, the key points are obtained by the human body posture evaluator, the human body posture evaluator can obtain all people in the image and all key points, then the key points in the video a and the video B associated with the motion are selected as the obtained key points according to the motion to be evaluated, and a preset graph is planned for all the obtained key points;
the movement is divided into a whole body movement or a part of the body movement; when the action to be evaluated is the action of whole body activity, the key points related to the action are all key points of the whole body; when the action to be evaluated is an action of a part of the body, the key points associated with the action are key points of the part of the body.
Preferably, the preset graph takes the key point as the center, so that the algorithm design is facilitated; the preset pattern is one of a square, a sector and a circle, which is not limited in this embodiment.
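A minimal sketch of step 2 follows, assuming a circular preset graph centered on each key point; `estimate_keypoints` stands in for whatever human body posture evaluator is used (it is not an API defined by this disclosure), and the radius value is an illustrative choice.

```python
import numpy as np
import cv2

def keypoint_mask(shape_hw, point, radius=15):
    """Rasterize a circular preset graph centered on one key point.

    shape_hw : (height, width) of the video frame
    point    : (x, y) pixel coordinates of the key point
    radius   : radius of the preset circle in pixels (illustrative value)
    """
    mask = np.zeros(shape_hw, dtype=np.uint8)
    cv2.circle(mask, (int(point[0]), int(point[1])), radius, 255, thickness=-1)
    return mask.astype(bool)

# `estimate_keypoints(frame)` is assumed to return a dict {joint_name: (x, y)}
# from any off-the-shelf pose estimator; `selected_joints` lists the key points
# associated with the action to be evaluated, e.g. ["left_wrist", "left_elbow"].
```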
Step 3: between the start frame and the end frame of video A and video B, the movement track of the preset graph forms a first area and a second area;
Video A and video B each show an action; from the start frame to the end frame, the positions of the key points change as the action proceeds, the preset graphs move with them, and their movement tracks form the first area and the second area.
In this embodiment, the start frame and the end frame of video A and video B are obtained by automatic identification, for example by taking the beat point of the video and the positions of the key points as identification marks; of course, this embodiment does not exclude setting the start frame and the end frame of video A and video B by manual operation.
Step 4: comparing the first area with the second area to obtain the matching degree;
Specifically, the matching degree is obtained from the ratio of the overlapping areas of the first area and the second area, i.e. the proportion in which the areas of the first area and the second area coincide;
If the number of frames between the start frame and the end frame differs between video A and video B, the video with fewer frames is taken as the standard, the same number of frames is extracted from the other video, and the matching degree is obtained by comparing the areas of the regions formed over that number of frames;
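The frame-alignment rule above does not prescribe how the frames of the longer video are selected; the sketch below assumes uniform sampling, which is only one possible reading.

```python
import numpy as np

def align_frame_indices(n_a, n_b):
    """Return index arrays so that both videos are compared over the same
    number of frames, taking the video with fewer frames as the standard."""
    n = min(n_a, n_b)
    idx_a = np.linspace(0, n_a - 1, n).round().astype(int)
    idx_b = np.linspace(0, n_b - 1, n).round().astype(int)
    return idx_a, idx_b
```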
Whether the corresponding key points in video A and video B match is judged by analyzing the matching degree, thereby realizing the action comparison between video A and video B;
If the matching degree is within the preset threshold range, the key points in video A and video B match; if not, they do not match.
The preset threshold is an empirical value, which is not specifically limited in this embodiment.
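Putting steps 3 and 4 together, each area can be represented as the pixel-wise union of the per-frame preset-graph masks, and the matching degree as the ratio of the overlapping areas; the sketch below measures the overlap against the union of the two areas and uses a placeholder threshold of 0.7, both of which are assumptions made for illustration rather than requirements of this disclosure.

```python
import numpy as np

def trajectory_region(masks):
    """Union of the per-frame preset-graph masks between the start frame and
    the end frame, i.e. the area swept by the moving preset graph."""
    region = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        region |= m
    return region

def matching_degree(region_a, region_b):
    """Ratio of the overlapping area of the first and second areas, computed
    here as overlap / union; overlap / area of the standard region would be an
    equally plausible reading of the description."""
    overlap = np.logical_and(region_a, region_b).sum()
    union = np.logical_or(region_a, region_b).sum()
    return overlap / union if union else 0.0

def keypoint_matches(degree, threshold=0.7):
    """Judge whether a key point in video B matches the corresponding key point
    in video A; 0.7 is a placeholder empirical threshold."""
    return degree >= threshold
```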
In this embodiment, there are one or more key points, and correspondingly one or more first areas and second areas;
When there is one key point, video A and video B each contain one key point, the key points correspond to each other, and the first area corresponds to the second area; judging whether this key point matches then amounts to comparing and evaluating the action at a single key point;
When there are multiple key points, video A and video B each contain multiple key points in one-to-one correspondence, and there are correspondingly multiple first areas and second areas; judging whether the multiple key points match then compares and evaluates one or more actions.
In this embodiment, the user selects a segment from the original video as video B; video B may cover all or part of the actions, and the user selects the segment according to the action to be evaluated; video A is configured according to the selection of video B, so that video A serving as the evaluation standard matches video B to be evaluated;
When the action to be evaluated consists of several consecutive actions recorded in one video to be evaluated, that video can be decomposed into several videos B each covering part of the actions, the actions of the videos B together making up all the actions in the video to be evaluated. Each video B is evaluated to obtain the matching degree of each of its key points against video A, and all the matching degrees are then counted and analyzed to obtain the evaluation result for the whole video. Evaluating several consecutive actions in this way allows refined measurement and calculation of each action and ensures the accuracy of the comparative analysis.
In this embodiment, video B contains n frames, where n ≥ 1;
Specifically, the value of n is determined by the practical application. If the action to be evaluated consists of several consecutive actions recorded in one video to be evaluated, the total number of frames of that video is not necessarily divisible by n. If it is not, the remaining x frames are handled as actually required: if the remaining x frames are discarded, every video B contains n frames; if they are kept, the last video B contains x frames, where 1 ≤ x ≤ n.
Example 2
Referring to fig. 2, a system for comparing and analyzing continuous actions based on deep learning pose estimation includes the following modules:
The video acquisition module 1: used for acquiring a video A serving as the evaluation standard and a video B to be evaluated;
The posture evaluation module 2: used for acquiring the key points in each frame of video A and video B;
The first processing module 3: used for planning the preset graphs, each preset graph containing one key point;
The second processing module 4: used for forming a first area and a second area from the movement tracks of the preset graphs between the start frame and the end frame of video A and video B;
The analysis module 5: used for comparing the first area with the second area to obtain the matching degree.
The comparison and analysis system works as follows: the video acquisition module 1 acquires a video A serving as the evaluation standard and a video B to be evaluated; after acquisition, the posture evaluation module 2 performs posture evaluation on each frame of video A and video B to obtain the key points of each frame; the first processing module 3 and the second processing module 4 then process video A and video B, planning preset graphs at the key points and obtaining the first area and the second area formed by the movement tracks of the preset graphs; finally, the analysis module 5 compares each corresponding first area with the second area to obtain the matching degree of that key point.
The matching degree is compared with the preset threshold to determine whether the key points in video A and video B match, and whether the actions in video A and video B match is judged from the matching of one or more key points.
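Purely for illustration, the working process above can be wired together as a single function that reuses the helpers sketched in embodiment 1; the module boundaries, parameter names and the injected `estimate_keypoints` callable are all hypothetical.

```python
def compare_videos(frames_a, frames_b, selected_joints, estimate_keypoints,
                   radius=15, threshold=0.7):
    """End-to-end sketch: pose evaluation, preset-graph planning, region
    formation and matching-degree analysis for each selected key point."""
    h, w = frames_a[0].shape[:2]
    idx_a, idx_b = align_frame_indices(len(frames_a), len(frames_b))
    kps_a = [estimate_keypoints(frames_a[i]) for i in idx_a]  # {joint: (x, y)} per frame
    kps_b = [estimate_keypoints(frames_b[i]) for i in idx_b]
    results = {}
    for joint in selected_joints:
        region_a = trajectory_region([keypoint_mask((h, w), kp[joint], radius) for kp in kps_a])
        region_b = trajectory_region([keypoint_mask((h, w), kp[joint], radius) for kp in kps_b])
        degree = matching_degree(region_a, region_b)
        results[joint] = (degree, keypoint_matches(degree, threshold))
    return results
```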
The system further includes a human-computer interaction module 6: used for displaying the matching degree and the threshold; the threshold can be modified through the human-computer interaction module 6; the human-computer interaction module 6 can also interact with the video acquisition module 1 and is used for processing and matching the videos acquired by the video acquisition module 1.
Example 3
Referring to fig. 3, the equipment for comparing and analyzing continuous actions based on deep learning posture assessment comprises a memory 7 and a processor 8, the memory 7 being in communication connection with the processor 8; the memory 7 stores program instructions executable by the processor 8, the processor 8 runs the comparison and analysis system according to embodiment 2, and the program instructions can be invoked to perform the comparison and analysis method according to embodiment 1, for example: acquiring a video A serving as the evaluation standard and a video B to be evaluated; acquiring the key points in each frame of video A and video B, and planning preset graphs containing the key points; forming a first area and a second area from the movement tracks of the preset graphs between the start frame and the end frame of video A and video B; and comparing the first area with the second area to obtain the matching degree.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (9)

1. A method for comparing and analyzing continuous actions based on deep learning posture assessment is characterized by comprising the following steps:
step 1: acquiring a video A serving as an evaluation standard and a video B to be evaluated;
step 2: acquiring key points in each frame of the video A and the video B, and planning a preset graph containing the key points;
Step 3: between the start frame and the end frame of video A and video B, the movement track of the preset graph forms a first area and a second area;
Step 4: comparing the first area with the second area to obtain the matching degree.
2. The method for comparing and analyzing continuous actions based on deep learning posture assessment according to claim 1, wherein there are one or more key points, and correspondingly one or more first areas and second areas;
whether the matching degree is within a preset threshold range is judged: if so, the key points in video A and video B match; if not, they do not match.
3. The method for comparing and analyzing continuous actions based on deep learning posture assessment according to claim 1, wherein in step 2 the key points associated with the action to be evaluated are selected as the acquired key points.
4. The method for comparing and analyzing continuous actions based on deep learning posture assessment according to claim 1, wherein step 4 specifically comprises: comparing the first area with the second area, and obtaining the matching degree according to the ratio of the overlapping areas of the first area and the second area.
5. The method for comparing and analyzing continuous actions based on deep learning posture assessment according to claim 1, wherein a user selects a video segment from an original video as video B;
video B contains n frames, where n ≥ 1.
6. The method for comparing and analyzing continuous actions based on deep learning posture assessment according to claim 1, wherein the key points are human body joint points, and the preset graph is a square, a sector or a circle.
7. A system for comparing and analyzing continuous actions based on deep learning posture assessment is characterized by comprising the following modules:
a video acquisition module: used for acquiring a video A serving as the evaluation standard and a video B to be evaluated;
a posture evaluation module: used for acquiring the key points in each frame of video A and video B;
a first processing module: used for planning a preset graph at each key point;
a second processing module: used for forming a first area and a second area from the movement tracks of the preset graphs between the start frame and the end frame of video A and video B;
an analysis module: used for comparing the first area with the second area to obtain the matching degree.
8. The system for comparing and analyzing continuous actions based on deep learning posture assessment according to claim 7, further comprising a human-computer interaction module: used for displaying the matching degree and the threshold; the threshold can be modified through the human-computer interaction module; the human-computer interaction module can also interact with the video acquisition module and is used for processing and matching the videos acquired by the video acquisition module.
9. A device for comparing and analyzing continuous actions based on deep learning posture assessment, comprising a memory and a processor, the memory being in communication connection with the processor; wherein the memory stores program instructions executable by the processor, the processor runs the comparison and analysis system according to claim 7 or 8, and the program instructions are invoked to perform the comparison and analysis method according to any one of claims 1 to 6.
CN202110862230.9A 2021-07-29 2021-07-29 Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment Pending CN113537128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110862230.9A CN113537128A (en) 2021-07-29 2021-07-29 Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110862230.9A CN113537128A (en) 2021-07-29 2021-07-29 Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment

Publications (1)

Publication Number Publication Date
CN113537128A true CN113537128A (en) 2021-10-22

Family

ID=78121357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110862230.9A Pending CN113537128A (en) 2021-07-29 2021-07-29 Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment

Country Status (1)

Country Link
CN (1) CN113537128A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091963A (en) * 2022-12-22 2023-05-09 广州奥咨达医疗器械技术股份有限公司 Quality evaluation method and device for clinical test institution, electronic equipment and storage medium
CN116091963B (en) * 2022-12-22 2024-05-17 广州奥咨达医疗器械技术股份有限公司 Quality evaluation method and device for clinical test institution, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108846365B (en) Detection method and device for fighting behavior in video, storage medium and processor
US20180211104A1 (en) Method and device for target tracking
US8509484B2 (en) Information processing device and information processing method
US8824802B2 (en) Method and system for gesture recognition
US8879787B2 (en) Information processing device and information processing method
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
CN102831439B (en) Gesture tracking method and system
US20170086712A1 (en) System and Method for Motion Capture
CN111401330A (en) Teaching system and intelligent mirror adopting same
JP5598751B2 (en) Motion recognition device
CN101131609A (en) Interface apparatus and interface method
CN104914989B (en) The control method of gesture recognition device and gesture recognition device
WO2012117392A1 (en) Device, system and method for determining compliance with an instruction by a figure in an image
KR20190099537A (en) Motion learning device, function determining device and function determining system
CN110633004A (en) Interaction method, device and system based on human body posture estimation
CN113537128A (en) Method, system and equipment for comparing and analyzing continuous actions based on deep learning posture assessment
CN116311497A (en) Tunnel worker abnormal behavior detection method and system based on machine vision
CN111223549A (en) Mobile end system and method for disease prevention based on posture correction
CN112785564B (en) Pedestrian detection tracking system and method based on mechanical arm
CN112907635A (en) Method for extracting eye abnormal motion characteristics based on geometric analysis
CN112200126A (en) Method for identifying limb shielding gesture based on artificial intelligence running
CN110458076A (en) A kind of teaching method based on computer vision and system
CN115118536B (en) Sharing method, control device and computer readable storage medium
CN114742090A (en) Cockpit man-machine interaction system based on mental fatigue monitoring
CN114495272A (en) Motion recognition method, motion recognition device, storage medium, and computer apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination