CN116264965A - Physical training method and system based on video - Google Patents

Physical training method and system based on video

Info

Publication number
CN116264965A
CN116264965A
Authority
CN
China
Prior art keywords
training
standard
action video
actual
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111526409.3A
Other languages
Chinese (zh)
Inventor
闫一力
路国华
夏娟娟
祁富贵
郑丽娟
曹育森
景裕
李钊
雷涛
张林媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Medical University of PLA
Original Assignee
Air Force Medical University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Medical University of PLA
Priority to CN202111526409.3A
Publication of CN116264965A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1116: Determining posture transitions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Dentistry (AREA)
  • Physiology (AREA)
  • Veterinary Medicine (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Pathology (AREA)
  • Primary Health Care (AREA)
  • Artificial Intelligence (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose a video-based physical training method and system. The method may include: collecting the actual training actions performed by a training subject while a standard training action video for the subject is currently being played; extracting the subject's actual motion posture from the actual training actions using a configured artificial intelligence algorithm; comparing the actual motion posture with the standard motion posture in the standard training action video to determine the actual degree of fit between the video and the subject; determining the subsequent standard training action video for the subject according to that degree of fit; and, while the standard training action video is being played for the subject, collecting the subject's physiological signals and, when a signal is abnormal, alerting the subject to stop training.

Description

Physical training method and system based on video
Technical Field
Embodiments of the invention relate to the technical field of rehabilitation and physical training devices, and in particular to a video-based physical training method and system.
Background
Physical training is a systematic, scientific training regimen that strengthens a trainee's physical attributes such as muscular strength, explosive power, speed, endurance, agility, and flexibility, with variants developed for different sports and matched to different training periods. It is applicable to both professional athletes and the general public. With rising living standards and growing national health awareness, physical training has become markedly more popular among the general public.
Existing physical training schemes either rely on personal knowledge, so that training lacks regularity and effective feedback, or depend on gym hardware and the experience of a physical trainer, which makes training costly and hard for subjects to sustain over the long term. As a result, the overall training plan lacks scientific rigor, individual actions during training are not supervised or managed, training injuries are likely, and the expected training goals are difficult to achieve.
Disclosure of Invention
In view of this, embodiments of the present invention aim to provide a video-based physical training method and system, so that specialized physical training can be completed in a non-specialized environment, the probability of training injury is reduced, and the resources invested in specialized physical training are reduced.
The technical solutions of the embodiments of the present invention are realized as follows:
In a first aspect, an embodiment of the present invention provides a video-based physical training method, including:
collecting the actual training actions performed by a training subject while a standard training action video for the subject is currently being played;
extracting the subject's actual motion posture from the actual training actions using a configured artificial intelligence algorithm;
comparing the subject's actual motion posture with the standard motion posture in the standard training action video to determine the actual degree of fit between the video and the subject;
determining the subsequent standard training action video for the subject according to that degree of fit;
and, while the standard training action video is being played for the subject, collecting the subject's physiological signals and, when a signal is abnormal, alerting the subject to stop training.
In a second aspect, an embodiment of the present invention provides a video-based physical training system, including a display device, an intelligent training device, and a physiological monitoring device, the intelligent training device being connected to the display device and the physiological monitoring device respectively; wherein
the intelligent training device is configured to: play, through the display device, the standard training action video currently targeted at the training subject; collect the actual training actions performed by the subject during playback; extract the subject's actual motion posture from those actions using a configured artificial intelligence algorithm; compare the actual motion posture with the standard motion posture in the standard training action video to determine the actual degree of fit between the video and the subject; and determine the subsequent standard training action video for the subject according to that degree of fit;
the physiological monitoring device is configured to collect the subject's physiological signals while the standard training action video is playing and transmit them to the intelligent training device, so that when a signal is abnormal the subject is alerted and training is stopped.
Embodiments of the invention provide a video-based physical training method and system. A training subject can perform training actions following the standard training action video currently played for that subject; the actual motion posture is extracted from those actions by an artificial intelligence algorithm and compared with the standard motion posture in the video to obtain the actual degree of fit between the video and the subject, which is then used to adjust the subsequent standard training action video. Systematic, scientific physical training can thus be carried out in a non-professional environment. Moreover, the subject's physiological signals are collected during training and an alarm is raised promptly for any abnormal state, avoiding training injury. The resources invested in professional physical training are also reduced.
Drawings
FIG. 1 is a schematic flow chart of a video-based physical training method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the composition of a video-based physical training system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the composition of another video-based physical training system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the composition of an intelligent training device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
At present, conventional physical training schemes lack regularity and effective feedback, and rely on gym hardware and a physical trainer's experience, so training costs are high. Against this background, and referring to fig. 1, a video-based physical training method according to an embodiment of the present invention is shown; the method includes:
s101: collecting actual training actions made by a training object in the process of playing a standard training action video aiming at the training object currently;
s102: extracting the actual motion gesture of the training object through a set artificial intelligence algorithm according to the actual training action;
s103: comparing the actual motion gesture of the training object with the standard motion gesture in the standard training action video, and determining the actual adaptation degree of the standard training action video and the training object;
s104: determining a standard training action video for a training object according to the actual adaptation degree of the standard training action video and the training object;
s105: and in the process of playing the standard training action video aiming at the training object, acquiring the physiological signal of the training object, and sending an alarm to the training object to stop training when the physiological signal is abnormal.
According to the technical solution shown in fig. 1, a training subject can perform training actions following the standard training action video currently played for the subject; the actual motion posture is extracted from those actions by an artificial intelligence algorithm and compared with the standard motion posture in the video to obtain the actual degree of fit between the video and the subject, which is then used to adjust the subsequent standard training action video. Systematic, scientific physical training can thus be carried out in a non-professional environment; moreover, the subject's physiological signals are collected during training and an alarm is raised promptly for any abnormal state, avoiding training injury.
For the technical solution shown in fig. 1, in some possible implementations, the method further includes:
determining the training items to be performed according to the training subject's training goals, target body regions, and physical-state information;
and correspondingly generating the standard training action video for the subject from the training items to be performed.
In detail, standard training action videos may be recorded for different training goals, such as aerobic or anaerobic training, and for the body regions the subject wishes to train, such as upper limbs, lower limbs, core strength, or coordination, forming a standard training action video database. For example, each training item may select one to three representative actions to form a standard training action video. In some examples, the subject may fill in a training requirement plan before starting a complete training course, so that an optimal set of training items, and from it the standard training action videos for that subject, can be generated according to the subject's requirements.
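As a minimal sketch of the item selection and playlist assembly described above (the item names, goals, body regions, and database layout below are illustrative assumptions, not taken from this disclosure):

```python
# A toy "standard training action video database": each entry is indexed by
# (training goal, target body region) and lists 1-3 representative actions.
# All names here are hypothetical placeholders.
VIDEO_DB = {
    ("aerobic", "lower_limbs"): ["jog_in_place", "high_knees"],
    ("anaerobic", "upper_limbs"): ["push_up", "pike_press"],
    ("anaerobic", "core"): ["plank", "dead_bug", "hollow_hold"],
}

def select_training_items(goal, regions, fitness_level):
    """Pick the training items matching the subject's requirement plan.

    `fitness_level` (1 = weakest) caps how many representative actions
    are included per item, between 1 and 3.
    """
    per_item = max(1, min(3, fitness_level))
    playlist = []
    for region in regions:
        actions = VIDEO_DB.get((goal, region), [])
        playlist.extend(actions[:per_item])
    return playlist

plan = select_training_items("anaerobic", ["upper_limbs", "core"], fitness_level=2)
print(plan)  # ['push_up', 'pike_press', 'plank', 'dead_bug']
```

In a real system the playlist entries would map to recorded video clips; here they are just identifiers so the selection logic is visible.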
For the technical solution shown in fig. 1, in some possible implementations, collecting the actual training actions performed by the training subject includes:
capturing, with a camera, a video of the actual actions the subject performs while the standard training action video for the subject is being played.
In this implementation, while the subject trains along with the currently playing standard training action video, an RGB camera records a video of the subject performing the actual training actions.
Based on the above implementation, in some examples, extracting the subject's actual motion posture from the actual training actions using a configured artificial intelligence algorithm includes:
capturing the training subject in the actual action video;
and extracting the captured subject's motion posture from the actual action video with a trained convolutional neural network, the network comprising 24 convolutional layers and 2 fully connected layers and being trained on existing human-pose training sets; the motion posture comprises preselected joint points of the subject and the lines connecting those joint points.
For the above example, it should be noted that target recognition may first be performed on the camera-captured video of the actual training actions to locate the training subject in the video; a trained convolutional neural network can then be used to extract the subject's training posture from the video. The network structure may comprise 24 convolutional layers and 2 fully connected layers, and an initial convolutional neural network with this structure can be trained on public human-pose training sets (such as MPII, COCO, VGG, CMU) to obtain the trained network.
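The "preselected joint points plus connecting lines" representation of a motion posture can be sketched as follows. The joint names and skeleton are illustrative assumptions, and the convolutional neural network that would supply the 2-D keypoints is replaced here by ready-made coordinates:

```python
# Preselected pairs of joint points whose connecting lines (as 2-D vectors)
# form the motion posture. This skeleton is a hypothetical example, not the
# joint set defined in the disclosure.
SKELETON = [
    ("shoulder", "elbow"),
    ("elbow", "wrist"),
    ("hip", "knee"),
    ("knee", "ankle"),
]

def motion_posture(keypoints):
    """Turn a {joint: (x, y)} dict into a {limb: (dx, dy)} vector dict."""
    posture = {}
    for a, b in SKELETON:
        ax, ay = keypoints[a]
        bx, by = keypoints[b]
        posture[(a, b)] = (bx - ax, by - ay)  # line from joint a to joint b
    return posture

# Keypoints as a pose-estimation network might output them (made up here).
pose = {
    "shoulder": (0.0, 0.0), "elbow": (0.1, -0.3), "wrist": (0.2, -0.6),
    "hip": (0.0, -1.0), "knee": (0.0, -1.5), "ankle": (0.0, -2.0),
}
print(motion_posture(pose)[("hip", "knee")])  # (0.0, -0.5)
```

Representing each limb as a vector rather than raw coordinates makes the comparison in the next step insensitive to where the subject stands in the frame.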
Based on the above example, preferably, comparing the subject's actual motion posture with the standard motion posture in the standard training action video to determine the actual degree of fit includes:
comparing the subject's preselected joint points and the lines between them in the actual action video with the standard motion posture in the standard training action video for similarity, to obtain a similarity evaluation value;
when the similarity evaluation value exceeds a set first threshold, determining that the difficulty of the standard training action video is below the subject's training ability;
when the similarity evaluation value lies between a set second threshold and the first threshold, determining that the difficulty of the standard training action video suits the subject's training ability;
and when the similarity evaluation value is below the second threshold, determining that the difficulty of the standard training action video exceeds the subject's training ability.
For the above preferred example, it should be noted that the subject's actual motion posture can be compared with the standard motion posture in the standard training action video to obtain a similarity evaluation value, which serves as a measure of whether the video's difficulty matches the subject's training ability. The higher the value, the more closely the subject's actual posture tracks the standard posture, the more standard the subject's actions are, and the easier the video evidently is for the subject, so the training difficulty can be raised; conversely, the lower the value, the less the postures match, the less standard the actions are, and the harder the video evidently is for the subject, so the training difficulty can be lowered.
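A minimal sketch of the similarity evaluation and the two-threshold judgement, assuming cosine similarity averaged over the limb vectors as the metric (the disclosure does not fix a specific metric, and the 0.9/0.6 thresholds are placeholders):

```python
import math

def cosine(u, v):
    """Cosine similarity of two 2-D vectors; 0.0 if either is degenerate."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_score(actual, standard):
    """Average cosine similarity over the limb vectors shared by both postures.

    `actual` and `standard` map a limb (joint pair) to its (dx, dy) vector.
    """
    limbs = actual.keys() & standard.keys()
    return sum(cosine(actual[k], standard[k]) for k in limbs) / len(limbs)

def classify_difficulty(score, first_threshold=0.9, second_threshold=0.6):
    """Map the similarity evaluation value to a difficulty judgement."""
    if score > first_threshold:
        return "too_easy"   # video difficulty below the subject's ability
    if score >= second_threshold:
        return "suitable"   # video difficulty matches the subject's ability
    return "too_hard"       # video difficulty above the subject's ability
```

For example, two limb-vector dicts pointing the same way give a score of 1.0 and the judgement "too_easy"; an upright standard arm against a drooping actual arm scores near 0 and yields "too_hard".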
Correspondingly, for each judgement of whether the difficulty of the standard training action video suits the subject's training ability, determining the subsequent standard training action video adapted to the subject according to the actual degree of fit includes:
when the video's difficulty is below the subject's training ability, raising the difficulty and intensity of the training items in the subsequent standard training action video;
when the video's difficulty suits the subject's training ability, keeping the current standard training action video as the subsequent one;
and when the video's difficulty is above the subject's training ability, lowering the difficulty and intensity of the training items in the subsequent standard training action video.
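The three-branch adjustment rule above can be sketched as follows; the numeric difficulty levels are an illustrative assumption:

```python
def next_difficulty(current_level, judgement, min_level=1, max_level=10):
    """Raise, keep, or lower the difficulty of the subsequent video.

    `judgement` is one of "too_easy", "suitable", "too_hard", as produced
    by the similarity comparison step.
    """
    if judgement == "too_easy":
        return min(max_level, current_level + 1)  # raise difficulty/intensity
    if judgement == "too_hard":
        return max(min_level, current_level - 1)  # lower difficulty/intensity
    return current_level  # "suitable": keep the current video's difficulty

print(next_difficulty(5, "too_easy"))  # 6
```

Clamping to `min_level`/`max_level` keeps repeated adjustments from running off the end of the available video set.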
For the technical solution shown in fig. 1, in some possible implementations, the physiological signal includes at least one of the subject's heart rate, blood oxygen saturation, and electromyographic (EMG) signal; accordingly, alerting the subject to stop training when the physiological signal is abnormal includes:
stopping playback of the standard training action video and sending warning information to the subject when the subject's heart rate or blood oxygen saturation exceeds the set exercise-load limit, or when the subject's EMG signal indicates that muscle fatigue has reached the set warning threshold.
In this implementation, specifically, heart rate and blood oxygen saturation are key indicators of the load borne by the heart during physical training; in the range of 110-180 bpm, heart rate correlates significantly with oxygen uptake, energy metabolism, and exercise intensity. According to the existing literature, warning the trainee when sub-maximal exercise intensity is reached effectively reduces training injuries and actively protects the trainee's health; the sub-maximal-load guard value is 150 bpm for heart rate and 93% for blood oxygen saturation. Muscle fatigue is assessed with a frequency-domain power-spectrum method, based on the characteristic that the EMG power spectrum shifts from high to low frequencies under fatigue: a fast Fourier transform is applied to the autocorrelation function of the surface EMG signal, the integrated power-spectrum energy is computed in the 2-125 Hz low-frequency band and the 125-500 Hz high-frequency band respectively, and the fatigue warning level is reached when the low-band energy rises by 15% or the high-band energy falls by 15%. During training, the trainee's physiological signals, including heart rate, blood oxygen, and EMG, can therefore be monitored in real time through a wearable wireless device, and training is stopped when the heart rate or blood oxygen saturation exceeds the trainee's exercise-load limit or muscle fatigue reaches the warning threshold, preventing training injury.
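The alarm logic above can be sketched as follows. The guard values (150 bpm, 93% SpO2, the 2-125 Hz and 125-500 Hz bands, the 15% shift) come from the description; the function signatures and the simple band summation are assumptions for illustration. `spectrum` is a list of (frequency_hz, power) points of the surface-EMG power spectrum, as would be produced by the FFT step:

```python
HR_GUARD_BPM = 150        # sub-maximal heart-rate guard value
SPO2_GUARD_PCT = 93.0     # sub-maximal blood-oxygen guard value
SHIFT_FRACTION = 0.15     # 15% band-energy shift signals fatigue

def band_energy(spectrum, lo_hz, hi_hz):
    """Sum power-spectrum energy over the band [lo_hz, hi_hz)."""
    return sum(p for f, p in spectrum if lo_hz <= f < hi_hz)

def emg_fatigued(baseline, current):
    """Fatigue: low band (2-125 Hz) up 15% or high band (125-500 Hz) down 15%."""
    lo0, hi0 = band_energy(baseline, 2, 125), band_energy(baseline, 125, 500)
    lo1, hi1 = band_energy(current, 2, 125), band_energy(current, 125, 500)
    return lo1 >= lo0 * (1 + SHIFT_FRACTION) or hi1 <= hi0 * (1 - SHIFT_FRACTION)

def should_stop(heart_rate, spo2, baseline_spectrum, current_spectrum):
    """True when any physiological signal crosses its guard value."""
    return (heart_rate > HR_GUARD_BPM
            or spo2 < SPO2_GUARD_PCT
            or emg_fatigued(baseline_spectrum, current_spectrum))

# Made-up spectra: the "tired" one has gained low-band and lost high-band energy.
baseline = [(50, 1.0), (100, 1.0), (200, 1.0), (400, 1.0)]
tired = [(50, 1.3), (100, 1.0), (200, 0.8), (400, 0.8)]
print(should_stop(120, 97.0, baseline, tired))  # True (EMG spectrum shifted)
```

In practice the spectra would come from FFTs over sliding windows of the wearable's EMG stream, with the baseline captured at the start of the session.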
Based on the same inventive concept as the foregoing technical solution, and referring to fig. 2, a video-based physical training system 20 according to an embodiment of the present invention is shown. The system 20 includes a display device 201, an intelligent training device 202, and a physiological monitoring device 203; the intelligent training device 202 is connected to the display device 201 and the physiological monitoring device 203 respectively; wherein
the intelligent training device 202 is configured to: play, through the display device 201, the standard training action video currently targeted at the training subject; collect the actual training actions performed by the subject during playback; extract the subject's actual motion posture from those actions using a configured artificial intelligence algorithm; compare the actual motion posture with the standard motion posture in the standard training action video to determine the actual degree of fit between the video and the subject; and determine the subsequent standard training action video for the subject according to that degree of fit;
the physiological monitoring device 203 is configured to collect the subject's physiological signals while the standard training action video is playing and transmit them to the intelligent training device 202, so that when a signal is abnormal the subject is alerted and training is stopped.
Based on the system 20 described above, and referring to fig. 3, the system 20 further includes a server 204 with a video database 2041. The server 204 is configured to: determine the training items to be performed according to the training subject's training goals, target body regions, and physical-state information;
correspondingly generate the standard training action video for the current subject from those training items and transmit it to the intelligent training device 202;
and receive from the intelligent training device 202 the actual degree of fit between the standard training action video and the subject, determine the subsequent standard training action video for the subject accordingly, and transmit it to the intelligent training device 202.
Based on the above system 20, and referring to fig. 4, the intelligent training device 202 includes a main processor 2021, an embedded neural-network processor 2022, a video acquisition module 2023, a video output module 2024, a storage module 2025, a communication module 2026, and a power supply module 2027 that powers the components of the intelligent training device 202; wherein
the video output module 2024 is configured to output, through the communication module 2026, the standard training action video currently targeted at the training subject from the storage module 2025 to the display device 201 for playback;
the video acquisition module 2023 is configured to capture the actual training actions performed by the subject during playback and transmit them to the embedded neural-network processor 2022;
the embedded neural-network processor 2022 is configured to extract the subject's actual motion posture from the actual training actions using the configured artificial intelligence algorithm;
the main processor 2021 is configured to compare the subject's actual motion posture with the standard motion posture in the standard training action video, determine the actual degree of fit between the video and the subject, transmit that degree of fit to the server 204 through the communication module 2026, and receive the subsequent standard training action video for the subject from the server 204.
It will be understood that the components in this embodiment may be integrated in one processing unit, may each exist physically alone, or two or more of them may be integrated in one unit. The integrated units may be implemented in hardware or as software functional modules.
If the integrated units are implemented as software functional modules and are not sold or used as independent products, they may be stored in a computer-readable storage medium. On this understanding, the technical solution of this embodiment, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied as a software product stored in a storage medium, the product including several instructions that cause a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Accordingly, this embodiment provides a computer storage medium storing a video-based physical training program which, when executed by at least one processor, implements the steps of the video-based physical training method of the above technical solution.
It will be appreciated that the above exemplary description of the video-based physical training system 20 shares the same concept as the description of the video-based physical training method; for details not covered in the description of the system 20, reference may be made to the description of the method, which is not repeated here.
It should be noted that the technical solutions described in the embodiments of the present invention may be combined arbitrarily provided they do not conflict.
The foregoing is merely a specific embodiment of the present invention, to which the scope of the invention is not limited; any variation or substitution readily conceived by a person skilled in the art falls within the scope of the present invention. The scope of protection of the present invention shall therefore be subject to the scope of the claims.

Claims (10)

1. A video-based physical fitness training method, the method comprising:
collecting actual training actions made by a training object in the process of playing a standard training action video aiming at the training object currently;
extracting the actual motion gesture of the training object through a set artificial intelligence algorithm according to the actual training action;
comparing the actual motion gesture of the training object with the standard motion gesture in the standard training action video, and determining the actual adaptation degree of the standard training action video and the training object;
determining a standard training action video for a training object according to the actual adaptation degree of the standard training action video and the training object;
and in the process of playing the standard training action video aiming at the training object, acquiring the physiological signal of the training object, and sending an alarm to the training object to stop training when the physiological signal is abnormal.
2. The method according to claim 1, wherein the method further comprises:
determining training items to be performed according to training targets, expected training positions and physical state information of the training objects;
and generating the standard training action video aiming at the training object according to the training item correspondence to be performed.
3. The method according to claim 1, wherein the collecting of the actual training actions performed by the training object comprises:
capturing, with a camera, an actual action video of the training object while the standard training action video is being played for the training object.
4. The method according to claim 3, wherein the extracting of the actual motion posture of the training object from the actual training actions by means of the set artificial intelligence algorithm comprises:
detecting the training object in the actual action video; and
extracting the motion posture of the detected training object in the actual action video with a trained convolutional neural network, wherein the convolutional neural network comprises 24 convolutional layers and 2 fully connected layers and is trained on an existing human-posture training set, and the motion posture comprises preselected joint points of the training object and the connecting lines between the preselected joint points.
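By way of illustration only (this sketch is not part of the claimed subject matter), the motion-posture representation of claim 4 can be modelled as joint coordinates plus the vectors along the connecting lines between preselected joint points. The joint names and skeleton layout below are hypothetical assumptions; the claim does not fix a specific set of joints.

```python
from typing import Dict, Tuple

# Hypothetical preselected joint points and the connecting lines (limbs)
# between them; the patent does not prescribe a particular skeleton layout.
LIMBS = [("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l"),
         ("hip_l", "knee_l"), ("knee_l", "ankle_l")]

def limb_vectors(keypoints: Dict[str, Tuple[float, float]]):
    """Turn 2-D joint coordinates (e.g. output by a pose-estimation CNN)
    into the limb vectors that represent a motion posture."""
    return {
        (a, b): (keypoints[b][0] - keypoints[a][0],
                 keypoints[b][1] - keypoints[a][1])
        for a, b in LIMBS
    }
```

The dictionary of limb vectors is one convenient, translation-invariant form in which two postures can later be compared.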
5. The method according to claim 4, wherein the comparing of the actual motion posture of the training object with the standard motion posture in the standard training action video to determine the actual adaptation degree of the standard training action video to the training object comprises:
comparing the preselected joint points of the training object in the actual action video, and the connecting lines between them, with the standard motion posture in the standard training action video to obtain a similarity evaluation value;
when the similarity evaluation value exceeds a set first threshold, determining that the difficulty of the standard training action video is below the training ability of the training object;
when the similarity evaluation value lies between a set second threshold and the first threshold, determining that the difficulty of the standard training action video suits the training ability of the training object; and
when the similarity evaluation value is below the second threshold, determining that the difficulty of the standard training action video is above the training ability of the training object.
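A minimal sketch of the two-threshold decision of claim 5, assuming limb vectors as the posture representation and mean cosine similarity as the evaluation value. The similarity measure and the threshold values (0.9 and 0.6) are illustrative assumptions, not values fixed by the patent.

```python
import math

def limb_similarity(v_actual, v_standard):
    """Mean cosine similarity over corresponding limb vectors,
    rescaled to [0, 1]; one of many possible similarity measures."""
    scores = []
    for limb, (ax, ay) in v_actual.items():
        sx, sy = v_standard[limb]
        na, ns = math.hypot(ax, ay), math.hypot(sx, sy)
        if na == 0 or ns == 0:
            continue  # skip degenerate (zero-length) limbs
        cos = (ax * sx + ay * sy) / (na * ns)
        scores.append((cos + 1.0) / 2.0)
    return sum(scores) / len(scores) if scores else 0.0

def rate_difficulty(similarity, first_threshold=0.9, second_threshold=0.6):
    """Three-way decision of claim 5 (threshold values are illustrative)."""
    if similarity > first_threshold:
        return "below ability"   # video difficulty below training ability
    if similarity >= second_threshold:
        return "suited"          # video difficulty suits training ability
    return "above ability"       # video difficulty above training ability
```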
6. The method according to claim 5, wherein the determining of the subsequent standard training action video adapted to the training object according to the actual adaptation degree of the standard training action video to the training object comprises:
when the difficulty of the standard training action video is below the training ability of the training object, raising the difficulty and intensity of the training items in the subsequent standard training action video adapted to the training object;
when the difficulty of the standard training action video suits the training ability of the training object, using the current standard training action video as the subsequent standard training action video adapted to the training object; and
when the difficulty of the standard training action video is above the training ability of the training object, lowering the difficulty and intensity of the training items in the subsequent standard training action video adapted to the training object.
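The adaptation rule of claim 6 amounts to a simple mapping from the difficulty rating to the next video's difficulty level. The integer level scale below is an illustrative assumption; the claim only specifies the direction of adjustment.

```python
def next_video_level(current_level: int, rating: str) -> int:
    """Choose the difficulty level of the subsequent standard training
    action video. `rating` is 'below ability' (video too easy),
    'suited', or 'above ability' (video too hard); levels are assumed
    to be positive integers, which is an illustrative convention."""
    if rating == "below ability":
        return current_level + 1          # raise difficulty and intensity
    if rating == "above ability":
        return max(1, current_level - 1)  # lower difficulty and intensity
    return current_level                  # keep the current video
```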
7. The method according to claim 1, wherein the physiological signals comprise at least one of a heart rate, a blood oxygen saturation, and an electromyographic signal of the training object; and accordingly, the issuing of an alarm to the training object and the stopping of the training when a physiological signal is abnormal comprises:
stopping the playing of the standard training action video and sending warning information to the training object when the heart rate or the blood oxygen saturation of the training object indicates that the set maximum exercise load has been exceeded, or when the electromyographic signal of the training object indicates that the muscle fatigue of the training object has reached a set warning threshold.
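A minimal sketch of the abnormality check of claim 7. All threshold values (maximum heart rate, minimum blood oxygen saturation, EMG fatigue index) are illustrative assumptions; the patent leaves the concrete limits to be set per training object.

```python
def should_stop_training(heart_rate: float, spo2: float, emg_fatigue: float,
                         max_heart_rate: float = 180.0,
                         min_spo2: float = 0.90,
                         fatigue_threshold: float = 0.8) -> bool:
    """Return True when any monitored physiological signal is abnormal,
    i.e. the video should be stopped and a warning sent."""
    if heart_rate > max_heart_rate:      # maximum exercise load exceeded
        return True
    if spo2 < min_spo2:                  # blood oxygen saturation too low
        return True
    if emg_fatigue >= fatigue_threshold:  # EMG indicates muscle fatigue
        return True
    return False
```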
8. A video-based physical fitness training system, the system comprising a display device, an intelligent training device, and a physiological monitoring device, the intelligent training device being connected to the display device and to the physiological monitoring device, wherein:
the intelligent training device is configured to play a standard training action video for a training object through the display device;
collect, during the playing, actual training actions performed by the training object;
extract an actual motion posture of the training object from the actual training actions by means of a set artificial intelligence algorithm;
compare the actual motion posture of the training object with a standard motion posture in the standard training action video to determine an actual adaptation degree of the standard training action video to the training object; and
determine a subsequent standard training action video for the training object according to the actual adaptation degree of the standard training action video to the training object; and
the physiological monitoring device is configured to collect physiological signals of the training object while the standard training action video is being played and transmit them to the intelligent training device, so that an alarm is issued to the training object and the training is stopped when a physiological signal is abnormal.
9. The system according to claim 8, further comprising a server provided with a video database, wherein the server is configured to determine a training program to be performed according to a training goal, an intended training body part, and physical condition information of the training object;
generate the standard training action video for the current training object according to the training program to be performed and transmit it to the intelligent training device; and
receive the actual adaptation degree of the standard training action video to the training object transmitted by the intelligent training device, determine the subsequent standard training action video for the training object according to that actual adaptation degree, and transmit it to the intelligent training device.
10. The system according to claim 8, wherein the intelligent training device comprises a main processor, an embedded neural network processor, a video acquisition module, a video output module, a storage module, a communication module, and a power supply module for powering the components of the intelligent training device, wherein:
the video output module is configured to output, through the communication module, the standard training action video of the training object held in the storage module to the display device for playing;
the video acquisition module is configured to collect the actual training actions performed by the training object during the playing and transmit the collected actual training actions to the embedded neural network processor;
the embedded neural network processor is configured to extract the actual motion posture of the training object from the actual training actions by means of the set artificial intelligence algorithm; and
the main processor is configured to compare the actual motion posture of the training object with the standard motion posture in the standard training action video to determine the actual adaptation degree of the standard training action video to the training object, transmit the actual adaptation degree to the server through the communication module, and receive a subsequent standard training action video for the training object transmitted back by the server.
CN202111526409.3A 2021-12-14 2021-12-14 Physical training method and system based on video Pending CN116264965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111526409.3A CN116264965A (en) 2021-12-14 2021-12-14 Physical training method and system based on video


Publications (1)

Publication Number Publication Date
CN116264965A true CN116264965A (en) 2023-06-20

Family

ID=86742904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111526409.3A Pending CN116264965A (en) 2021-12-14 2021-12-14 Physical training method and system based on video

Country Status (1)

Country Link
CN (1) CN116264965A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256433A (en) * 2017-12-22 2018-07-06 银河水滴科技(北京)有限公司 A kind of athletic posture appraisal procedure and system
CN207821805U (en) * 2017-06-01 2018-09-07 中国人民解放军第三军医大学 Wearable sport monitoring device
CN109376705A (en) * 2018-11-30 2019-02-22 努比亚技术有限公司 Dance training methods of marking, device and computer readable storage medium
CN112365954A (en) * 2020-10-26 2021-02-12 埃欧健身管理(上海)有限公司 Method and equipment for dynamically adjusting fitness scheme
CN113641856A (en) * 2021-08-12 2021-11-12 三星电子(中国)研发中心 Method and apparatus for outputting information
CN113694502A (en) * 2021-09-16 2021-11-26 中国人民解放军海军特色医学中心 Ship member fitness training and evaluation system


Similar Documents

Publication Publication Date Title
CN106984027B (en) Action comparison analysis method and device and display
Giakoumis et al. Automatic recognition of boredom in video games using novel biosignal moment-based features
CN108209902B (en) Athlete competitive state evaluation method and system
WO2018214532A1 (en) Fitness exercise data feedback method and apparatus
DE102015207415A1 (en) Method and apparatus for associating images in a video of a person's activity with an event
Sevil et al. Social and competition stress detection with wristband physiological signals
Huang et al. BreathLive: Liveness detection for heart sound authentication with deep breathing
CN110464357A (en) A kind of rehabilitation course quality monitoring method and system
US20190246921A1 (en) Contactless-Type Sport Training Monitor Method
US20200382286A1 (en) System and method for smart, secure, energy-efficient iot sensors
Shusong et al. EMG-driven computer game for post-stroke rehabilitation
CN110354479A (en) Fistfight sports points-scoring system and method
CN111161833A (en) Fitness plan generation method and related equipment
CN111860418A (en) Intelligent video examination and consultation system, method, medium and terminal for athletic competition
CN114298089A (en) Multi-mode strength training assisting method and system
CN111375174B (en) Intelligent running machine based on knee joint movement information
CN113713333B (en) Dynamic virtual induction method and system for lower limb rehabilitation full training process
CN111329457A (en) Wearable motion index detection equipment and detection method
CN108209913A (en) For the data transmission method and equipment of wearable device
KR101337821B1 (en) Shinguard for soccer player and automatically medical treatment system using the same
CN116264965A (en) Physical training method and system based on video
CN110517751A (en) A kind of athletic rehabilitation management system
CN111311466B (en) Safety control method and device
Setiawan et al. Real-Time Delayed Onset Muscle Soreness (DOMS) Detection in High Intensity Interval Training Using Artificial Neural Network
Lei et al. Feature Extraction‐Based Fitness Characteristics and Kinesiology of Wushu Sanda Athletes in University Analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination