CN113117306A - Yoga auxiliary learning method, system and storage medium - Google Patents

Yoga auxiliary learning method, system and storage medium

Info

Publication number
CN113117306A
Authority
CN
China
Prior art keywords
user
yoga
data
limb
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110293705.7A
Other languages
Chinese (zh)
Inventor
岑柏滋
陈荣斌
崔波
李德豪
潘文婷
彭绮琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangmen Polytechnic
Original Assignee
Jiangmen Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangmen Polytechnic filed Critical Jiangmen Polytechnic
Priority to CN202110293705.7A
Publication of CN113117306A
Legal status: Pending (current)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/0647 Visualisation of executed movements
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/065 Visualisation of specific exercise parameters
    • A63B2071/0652 Visualisation or indication relating to symmetrical exercise, e.g. right-left performance related to spinal column
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00 Measuring of physical parameters relating to sporting activity
    • A63B2220/80 Special sensors, transducers or devices therefor
    • A63B2220/806 Video cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2230/00 Measuring physiological parameters of the user
    • A63B2230/62 Measuring physiological parameters of the user posture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a yoga-assisted learning method, system and storage medium. The method comprises: acquiring physical sign data of a user, the physical sign data including breathing data of the user; acquiring video image information of the user and obtaining limb data of the user from the video image information; comparing the physical sign data and the limb data with a standard database to generate a comparison result; and marking the virtual limb of the user in the video image information with multiple colors according to the comparison result. The method can be applied to a yoga-assisted learning system. By acquiring the user's video image information and physical sign data, including limb data and breathing data, comparing them with a preset standard database, reflecting the comparison result on the user's virtual limb in the video image, and marking that limb with different colors, the method makes it easy for the user to review and learn, improves the user experience, and lets the user learn yoga poses more intuitively and scientifically.

Description

Yoga auxiliary learning method, system and storage medium
Technical Field
The invention relates to the technical field of sports, and in particular to a yoga-assisted learning method, system and storage medium.
Background
Yoga is a popular modern form of exercise. Using long-established and easily mastered techniques, it improves physical, psychological, emotional and mental capacities, and aims at harmony of body, mind and spirit. It includes posture adjustment, breath regulation and meditation, with the goal of unifying body and mind and improving physical and mental health. People often need guidance when practicing yoga in order to complete complex poses. Meanwhile, with the development of smart cities, smart-pole equipment is becoming widespread, and outdoor intelligent applications are urgently needed. In the related art, yoga learning guidance systems and methods cannot guide effectively according to the purpose of the yoga exercise, so users cannot obtain scientific guidance on yoga poses, and the user experience is poor.
Disclosure of Invention
The present invention aims to solve at least one of the problems in the prior art. It therefore provides a yoga-assisted learning method, a yoga-assisted learning system and a storage medium that improve the user experience and allow the user to learn yoga poses more intuitively and scientifically.
According to a first aspect of the present invention, a yoga-assisted learning method is provided, including: acquiring physical sign data of a user, the physical sign data including breathing data of the user; acquiring video image information of the user, and obtaining limb data of the user according to the video image information; comparing the physical sign data and the limb data with a standard database to generate a comparison result; and marking the virtual limb of the user in the video image information with multiple colors according to the comparison result.
The yoga-assisted learning method according to the embodiment of the invention has at least the following beneficial effects. The method can be applied to a yoga-assisted learning system. By acquiring the user's video image information and physical sign data, including limb data and breathing data, and comparing them with a preset standard database, the comparison result is reflected on the user's virtual limb in the video image and marked with different colors. This makes it easy for the user to review and learn, improves the user experience, and lets the user learn yoga poses more intuitively and scientifically.
According to some embodiments of the invention, after the comparison result is generated by comparing the physical sign data and the limb data with the standard database, the method further includes: providing guidance information to the user according to the comparison result, wherein the guidance information comprises at least one of audio guidance information provided through a loudspeaker, video guidance information provided through a display module, and vibration guidance information provided through a yoga mat.
According to some embodiments of the present invention, marking the virtual limb of the user in the video image information with multiple colors according to the comparison result includes: acquiring position information of the joint points of the user's virtual limb in the video image information and marking it to obtain joint point information; marking the user's virtual limb with lines according to the joint point information to obtain limb line information; and marking the limb line information with multiple colors according to the comparison result.
According to some embodiments of the present invention, marking the virtual limb of the user in the video image information with multiple colors according to the comparison result includes: if the user is detected to be in a preparation state, marking the user's virtual limb in the video image information with a first color; if the user is detected to be in a motion state, marking it with a second color; and if the user is detected to be in a completion state, marking it with a third color.
According to some embodiments of the invention, the preparation state, the motion state and the completion state are determined from the breathing data and the limb data, wherein the breathing data includes first breathing data and second breathing data, and the limb data includes first limb data, second limb data and third limb data. The preparation state is obtained by detecting the first breathing data and the first limb data, the motion state by detecting the second breathing data and the second limb data, and the completion state by detecting the first breathing data and the third limb data.
According to some embodiments of the present invention, before acquiring the physical sign data of the user, the method further includes: acquiring equipment state information of the yoga mat and video image information of the user; if the equipment state information indicates that the mat is unfolded and the user in the video image information is at a preset position, determining that the user and the yoga mat are in a preparation state; and acquiring the physical sign data of the user on the basis that the user and the yoga mat are in the preparation state.
According to some embodiments of the present invention, the yoga-assisted learning method is applied to a yoga-assisted learning system and, before acquiring the physical sign data of the user, further includes: acquiring first control information of the user and determining, according to the first control information, that the system is in a first working mode; acquiring second control information of the user and determining a yoga posture according to the second control information; and setting the standard database according to the yoga posture, wherein the standard database includes standard breathing data and standard limb data of the yoga posture.
According to some embodiments of the present invention, the yoga-assisted learning method is applied to a yoga-assisted learning system and, before acquiring the physical sign data of the user, further includes: acquiring third control information of the user and determining, according to the third control information, that the system is in a second working mode; and setting the standard database according to the second working mode of the system, wherein the standard database is used to record the breathing data and/or physical sign data of the user and to generate a new yoga posture.
According to a second aspect of the present invention, a yoga-assisted learning system is provided, including: a camera module for acquiring video image information of the user or the yoga mat; a breath detection module for acquiring breathing data of the user; a processing module for controlling the working state of the yoga-assisted learning system; a display module for displaying the image information processed by the processing module; and a hanging carrier, to which the camera module, the breath detection module and the display module are connected, for providing power and network resources.
The yoga-assisted learning system according to the embodiment of the invention has at least the following beneficial effects. The camera module and the breath detection module acquire the user's physical sign data, including limb data and breathing data, which are compared with the preset standard database; the comparison result is reflected on the user's virtual limb in the video image on the display module and marked with different colors, making it easy for the user to review and learn. The modules are connected to the hanging carrier to obtain power and network resources, so the system is plug-and-play. This improves the user experience and lets the user learn yoga poses more intuitively and scientifically.
According to a third aspect of the present invention, a computer-readable storage medium is provided that stores computer-executable instructions for causing a computer to execute the yoga-assisted learning method according to any embodiment of the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The invention is further described with reference to the following figures and examples, in which:
fig. 1 is a flowchart of a yoga-assisted learning method according to some embodiments of the present invention;
fig. 2 is a flowchart of a yoga-assisted learning method according to another embodiment of the present invention;
fig. 3 is a flowchart of a yoga-assisted learning method according to another embodiment of the present invention;
fig. 4 is a flowchart of a yoga-assisted learning method according to another embodiment of the present invention;
fig. 5 is a flowchart of a yoga-assisted learning method according to another embodiment of the present invention;
fig. 6 is a schematic view of a control device according to some embodiments of the present invention;
fig. 7 is a schematic view of a yoga-assisted learning system according to some embodiments of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding" and the like are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including the stated number. Where "first" and "second" are used only to distinguish technical features, they are not to be understood as indicating or implying relative importance, the number of the indicated features, or their order of precedence.
In the description of the present invention, unless otherwise explicitly limited, terms such as "arranged", "installed" and "connected" should be understood in a broad sense, and those skilled in the art can reasonably determine their specific meanings in the context of the technical solution.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The yoga-assisted learning method in the embodiments of the invention can be applied to a yoga-assisted learning system. By acquiring the user's video image information and physical sign data, including limb data and breathing data, comparing them with a preset standard database, reflecting the comparison result on the user's virtual limb in the video image, and marking that limb with multiple colors, the method makes it easy for the user to review and learn, improves the user experience, and lets the user learn yoga poses more intuitively and scientifically.
The embodiments of the present invention will be further explained with reference to the drawings.
An embodiment of the present invention provides a yoga-assisted learning method applied to a yoga-assisted learning system. As shown in fig. 1, the method includes, but is not limited to, step S110, step S120, step S130 and step S140.
Step S110: acquiring physical sign data of the user.
Step S120: acquiring video image information of the user, and obtaining limb data of the user according to the video image information.
Step S130: comparing the physical sign data and the limb data with a standard database to generate a comparison result.
Step S140: marking the virtual limb of the user in the video image information with multiple colors according to the comparison result.
In some embodiments of the invention, the yoga-assisted learning system is provided with a camera module, a breath detection module, a processing module, a display module and a hanging carrier, and the processing module controls the system and processes its data. The breath detection module acquires the user's physical sign data, in particular the breathing data, from which the system judges whether the user is exhaling or inhaling. The camera module acquires video image information of the user, and the processing module processes this video image information to obtain the user's limb data. The physical sign data and limb data are then compared with a standard database, which stores the standard breathing and limb information of different yoga postures, to generate a comparison result for the current user. According to this comparison result, the system marks the user's virtual limb in the video image information with multiple colors and shows it through the display module. The user can observe the color-marked virtual limb on the display module, which makes it easy to review and learn, improves the user experience, and lets the user learn yoga poses more intuitively and scientifically.
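By way of illustration only, the following Python sketch shows how such a per-pose comparison against one standard-database entry could look. The joint names, the 10-degree angle tolerance and the database layout are assumptions made for this example and are not taken from the embodiments described here.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class PoseStandard:
    """One standard-database entry (layout assumed for illustration)."""
    joint_angles: Dict[str, float]   # e.g. {"left_knee": 175.0} in degrees
    breath_phase: str                # "inhale" or "exhale"
    angle_tolerance: float = 10.0    # assumed tolerance in degrees

def compare_with_standard(user_angles: Dict[str, float],
                          user_breath: str,
                          std: PoseStandard) -> dict:
    """Compare the user's joint angles and breathing phase with the standard
    entry and return a per-joint verdict plus a breathing verdict."""
    result = {"joints": {}, "breath_ok": user_breath == std.breath_phase}
    for joint, target in std.joint_angles.items():
        measured = user_angles.get(joint)
        if measured is None:
            result["joints"][joint] = "missing"     # joint not detected in the frame
        elif abs(measured - target) <= std.angle_tolerance:
            result["joints"][joint] = "ok"
        else:
            result["joints"][joint] = "deviation"   # outside the tolerance band
    return result
```

A caller would typically run such a comparison once per video frame and pass the result on to the color-marking and guidance steps described below.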
It should be noted that, in some embodiments of the present invention, when the user is in different stages of a yoga exercise, the system marks the user's virtual limb with a different color for each stage and displays it through the display module; alternatively, when the user's physical sign data and limb data are inconsistent with the standard database, the system marks correct and incorrect movements with different colors on the virtual limb and displays them through the display module. Provided the requirements of the embodiments of the invention are met, the physical sign data may also include heart rate data and the like, and the invention is not specifically limited in this respect.
It should be noted that, in some embodiments of the present invention, the user needs to log in and enter personal information when entering the system, so that the system can provide personalized services: storing the user's historical exercise data, storing the user's yoga exercise scores and uploading them to the cloud so that they can be viewed on a terminal, and recommending personalized yoga postures based on the user's historical exercise data. After completing the yoga exercise, the user can log out.
In some embodiments of the present invention, after step S130, guidance information is provided to the user according to the comparison result. The guidance information is obtained by comparing the user's physical sign data and limb data with the data in the standard database and then offering a reasonable suggestion depending on whether the user's movement is correct, so that the user can correct the yoga movement accordingly. The guidance information includes at least one of audio guidance information provided through a loudspeaker, video guidance information provided through a display module, and vibration guidance information provided through a yoga mat. For example, the system may be provided with a loudspeaker and play audio guidance according to the comparison result; or the system may play video guidance through the display module, either marking guidance suggestions on the virtual limb in the video image or playing a video of the correct yoga posture; or the system may send control information to the yoga mat so that it vibrates to prompt the user when a movement is not standard.
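A minimal sketch of how such guidance could be routed to the three output channels is given below. It reuses the comparison dictionary from the earlier sketch, and the speaker.play_tts, display.show_overlay and mat.vibrate interfaces are hypothetical stand-ins for the loudspeaker, display module and yoga mat rather than any real device API.

```python
def dispatch_guidance(comparison: dict, speaker=None, display=None, mat=None) -> None:
    """Send a correction hint to whichever output channels are available."""
    wrong_joints = [j for j, v in comparison["joints"].items() if v == "deviation"]
    if not wrong_joints and comparison["breath_ok"]:
        return  # movement and breathing both match the standard: nothing to correct
    message = ("Adjust: " + ", ".join(wrong_joints)) if wrong_joints else "Check your breathing"
    if speaker is not None:
        speaker.play_tts(message)        # audio guidance via the loudspeaker (hypothetical API)
    if display is not None:
        display.show_overlay(message)    # video guidance on the display module (hypothetical API)
    if mat is not None:
        mat.vibrate(duration_ms=300)     # vibration guidance via the yoga mat (hypothetical API)
```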
Referring to fig. 2, in some embodiments of the present invention, step S140 may further include, but is not limited to, step S210, step S220, and step S230.
Step S210: obtaining the position information of the joint points of the user's virtual limb in the video image information and marking it to obtain joint point information.
Step S220: marking the user's virtual limb with lines according to the joint point information to obtain limb line information.
Step S230: marking the limb line information with multiple colors according to the comparison result.
In some embodiments of the invention, the system acquires the user's video image information through the camera and obtains the position information of the joint points of the user's virtual limb in that video image information. The joint points, which correspond to the joints of the user's virtual body in the video image information, are marked to obtain joint point information; the marked joint points are then connected so that the user's virtual limb is traced with lines, yielding limb line information; finally, the limb line information is marked with multiple colors according to the comparison result.
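As one possible rendering of the joint-point and limb-line marking, the sketch below uses OpenCV to draw joint points and then connect them into colored limb lines. The joint names, the connection list and the color dictionary are illustrative assumptions; a real system would take the joint positions from its own pose estimator.

```python
import cv2

# Assumed joint connections forming the limb lines (not an exhaustive skeleton).
LIMB_CONNECTIONS = [
    ("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
    ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("left_hip", "left_knee"), ("left_knee", "left_ankle"),
    ("right_hip", "right_knee"), ("right_knee", "right_ankle"),
]

def draw_virtual_limb(frame, joints, line_colors, default=(255, 255, 255)):
    """Draw joint points, then limb lines colored according to the comparison result.

    `joints` maps a joint name to an (x, y) pixel position; `line_colors` maps a
    (joint_a, joint_b) pair to a BGR color, e.g. (0, 255, 0) for a correct segment.
    """
    for x, y in joints.values():
        cv2.circle(frame, (int(x), int(y)), 4, (255, 255, 255), -1)   # joint point marker
    for a, b in LIMB_CONNECTIONS:
        if a in joints and b in joints:
            pa = tuple(int(v) for v in joints[a])
            pb = tuple(int(v) for v in joints[b])
            cv2.line(frame, pa, pb, line_colors.get((a, b), default), 3)  # limb line
    return frame
```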
In some embodiments of the present invention, the system divides the yoga process into three states: a preparation state, a motion state and a completion state. In step S140, if the user is detected to be in the preparation state, the user's virtual limb in the video image information is marked with a first color; if the user is detected to be in the motion state, it is marked with a second color; and if the user is detected to be in the completion state, it is marked with a third color. Marking the three states of the yoga process with three colors helps the user clearly see which state he or she is in, so that the yoga movement can be adjusted according to the suggestions provided by the system. In one embodiment, the first color is red, the second color is yellow and the third color is green: the virtual limb is marked red in the preparation state, yellow in the motion state and green in the completion state.
In some embodiments of the invention, the preparation state, the motion state and the completion state are determined from the breathing data and the limb data. The breathing data includes first breathing data and second breathing data, and the limb data includes first limb data, second limb data and third limb data; the preparation state is obtained by detecting the first breathing data and the first limb data, the motion state by detecting the second breathing data and the second limb data, and the completion state by detecting the first breathing data and the third limb data. In other words, the stage of the yoga movement is determined by different combinations of breathing and limb actions. In one embodiment, the first breathing data is inhalation, the second breathing data is exhalation, the first limb data includes stretching the spine or opening the chest, the second limb data includes bending the body forward, and the third limb data includes tightening the spine or closing the chest. When the user is detected to inhale and stretch the spine or open the chest, the user is judged to be in the preparation state; when the user is detected to exhale and bend forward, the user is judged to be in the motion state; and when the user is detected to inhale and tighten the spine or close the chest, the user is judged to be in the completion state. The first and second breathing data and the first, second and third limb data differ for different yoga postures.
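The following sketch encodes that rule for the forward-bend example above and maps each detected state to the red/yellow/green marking colors. The breathing-phase and limb-feature labels are assumed names standing in for the first/second breathing data and the first/second/third limb data, which in practice vary with the yoga posture.

```python
from typing import Optional

STATE_COLORS = {                   # BGR values for the first, second and third marking colors
    "preparation": (0, 0, 255),    # red
    "motion":      (0, 255, 255),  # yellow
    "completion":  (0, 255, 0),    # green
}

def classify_state(breath_phase: str, limb_feature: str) -> Optional[str]:
    """Return the yoga state for one breathing/limb combination, or None if the
    combination is not recognized (the previous state is then kept)."""
    if breath_phase == "inhale" and limb_feature == "spine_stretch":
        return "preparation"   # first breathing data + first limb data
    if breath_phase == "exhale" and limb_feature == "forward_bend":
        return "motion"        # second breathing data + second limb data
    if breath_phase == "inhale" and limb_feature == "spine_tighten":
        return "completion"    # first breathing data + third limb data
    return None
```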
Referring to fig. 3, in some embodiments of the present invention, before step S110 the method may further include, but is not limited to, step S310, step S320 and step S330.
Step S310: acquiring equipment state information of the yoga mat and video image information of the user.
Step S320: if the equipment state information indicates that the mat is unfolded and the user in the video image information is at a preset position, determining that the user and the yoga mat are in a preparation state.
Step S330: acquiring physical sign data of the user on the basis that the user and the yoga mat are in the preparation state.
In some embodiments of the present invention, before acquiring the user's physical sign data, the system first determines whether the user and the yoga mat are in the preparation state by acquiring the equipment state information of the yoga mat and the video image information of the user. The equipment state information may be sent to the system by the yoga mat itself, or the system may obtain it through a detection device, for example by having the camera module recognize whether the yoga mat is unfolded. If the equipment state information indicates that the mat is unfolded and the user in the video image information is at the preset position, the user and the yoga mat are determined to be in the preparation state. The preset position is a region set by the system in which the user's video image information and physical sign data can be acquired reliably. Finally, on the basis that the user and the yoga mat are in the preparation state, the system operates normally and acquires the user's physical sign data for the subsequent operations.
It should be noted that, in some embodiments of the present invention, if the system does not detect that the user is at the preset position, or the equipment state information of the yoga mat does not indicate that the mat is unfolded, the system issues a prompt, for example through the loudspeaker or the display module. When the equipment state information indicates that the mat is unfolded and the user in the video image information is at the preset position, the user and the yoga mat are determined to be in the preparation state and the system operates normally.
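A compact sketch of this readiness check, with the loudspeaker or display prompt reduced to a callable, is shown below; the rectangular preset region and the prompt texts are assumptions made for the example.

```python
def check_readiness(mat_deployed: bool, user_center, preset_region, prompt=print) -> bool:
    """Return True when the yoga mat is unfolded and the user's detected body
    center lies inside the preset rectangle (x1, y1, x2, y2); otherwise issue a
    prompt (a stand-in for the loudspeaker or display prompt) and return False."""
    if not mat_deployed:
        prompt("Please unfold the yoga mat.")
        return False
    x, y = user_center
    x1, y1, x2, y2 = preset_region
    if not (x1 <= x <= x2 and y1 <= y <= y2):
        prompt("Please move onto the preset position.")
        return False
    return True
```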
Referring to fig. 4, in some embodiments of the present invention, before step S110 the method may further include, but is not limited to, step S410, step S420 and step S430.
Step S410: acquiring first control information of the user, and determining according to the first control information that the system is in a first working mode.
Step S420: acquiring second control information of the user, and determining the yoga posture according to the second control information.
Step S430: setting a standard database according to the yoga posture.
In some embodiments of the present invention, before acquiring the user's physical sign data, the system also selects its working mode and sets the standard database. The system acquires first control information of the user and determines from it that the system is in a first working mode. The first control information can be acquired by a detection device of the system, for example by having the camera module recognize a gesture of the user. In one embodiment, the first control information is a first gesture of the user: when the camera module recognizes that the user makes the first gesture, for example holding five fingers toward the camera for three seconds, the first control information is obtained and the system is determined to be in the first working mode, which is an exercise mode. The system then acquires second control information of the user and determines the yoga posture from it. In one embodiment, the second control information is a second gesture of the user, recognized by the camera module, which expresses the user's selection of the yoga posture to be practiced. Finally, the standard database is set according to the selected yoga posture, using the limb data and physical sign data of that posture, so that the system can guide the user during yoga practice.
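One simple way to implement the "hold a gesture for three seconds" selection is sketched below. The finger-count detector is assumed to exist elsewhere, and the mapping of five raised fingers to the exercise mode is only the illustrative example used above.

```python
GESTURE_COMMANDS = {5: "exercise_mode"}   # assumed mapping: five fingers -> first working mode
HOLD_SECONDS = 3.0

def detect_mode(finger_count_stream):
    """Consume (timestamp, finger_count) samples from a hand detector and return
    a working mode once a mapped gesture has been held for HOLD_SECONDS."""
    held_count, hold_start = None, None
    for ts, count in finger_count_stream:
        if count in GESTURE_COMMANDS:
            if count != held_count:
                held_count, hold_start = count, ts       # gesture changed: restart the timer
            elif ts - hold_start >= HOLD_SECONDS:
                return GESTURE_COMMANDS[count]           # gesture held long enough
        else:
            held_count, hold_start = None, None          # no mapped gesture in this frame
    return None
```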
Referring to fig. 5, in some embodiments of the present invention, before step S110 the method may further include, but is not limited to, step S510 and step S520.
Step S510: acquiring third control information of the user, and determining according to the third control information that the system is in a second working mode.
Step S520: setting a standard database according to the second working mode of the system.
In some embodiments of the present invention, before acquiring the user's physical sign data, the system also selects its working mode and sets the standard database. The system acquires third control information of the user and determines from it that the system is in a second working mode. The third control information can be acquired by a detection device of the system, for example by having the camera module recognize a gesture of the user. In one embodiment, the third control information is a third gesture of the user: when the camera module recognizes that the user makes the third gesture, the third control information is obtained and the system is determined to be in the second working mode, which is a self-creation mode. The standard database is then set according to the second working mode and is used to receive and record the physical sign data and breathing data of the yoga movements the user performs, forming a new yoga posture from the recorded data. The second working mode thus allows the user to create new yoga postures, meeting the user's practice needs. Provided the requirements of the embodiments of the invention are met, the system may also include other working modes, and the invention is not specifically limited in this respect.
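The sketch below illustrates what recording a self-created posture into the standard database might look like; the sample format and the JSON file layout are assumptions for the example, not the database format of the embodiments.

```python
import json

def record_new_pose(sample_stream, pose_name: str, path: str) -> dict:
    """Collect (timestamp, breath_phase, joint_angles) samples produced while the
    user performs a self-created routine and save them as a new database entry."""
    samples = [{"t": t, "breath": breath, "angles": angles}
               for t, breath, angles in sample_stream]
    entry = {"pose": pose_name, "samples": samples}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entry, f, ensure_ascii=False, indent=2)
    return entry
```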
Referring to fig. 6, fig. 6 is a schematic diagram of a control device 100 according to an embodiment of the present invention. The control device 100 is applied to the yoga-assisted learning system and includes one or more control processors 101 and a memory 102; fig. 6 takes one control processor 101 and one memory 102 as an example.
The control processor 101 and the memory 102 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The memory 102, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory 102 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 102 may optionally include memory 102 located remotely from the control processor 101, and these remote memories 102 may be connected to the control device 100 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Those skilled in the art will appreciate that the device configuration shown in fig. 6 does not constitute a limitation of the control device 100, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The non-transitory software programs and instructions required to implement the control method applied to the control device 100 in the above embodiments are stored in the memory 102; when executed by the control processor 101, they carry out the yoga-assisted learning method applied to the control device 100 in the above embodiments, for example method steps S110 to S140 in fig. 1, method steps S210 to S230 in fig. 2, method steps S310 to S330 in fig. 3, method steps S410 to S430 in fig. 4, and method steps S510 to S520 in fig. 5.
Referring to fig. 7, fig. 7 shows a yoga-assisted learning system according to an embodiment of the present invention, which includes: a camera module 201 for acquiring video image information of the user or the yoga mat; a breath detection module 202 for acquiring breathing data of the user; a processing module 204 for controlling the working state of the yoga-assisted learning system; a display module 203 for displaying the image information processed by the processing module 204; and a hanging carrier 205, to which the camera module 201, the breath detection module 202 and the display module 203 are connected, for providing power and network resources. With the yoga-assisted learning system of this embodiment, the camera module 201 and the breath detection module 202 acquire the user's physical sign data, including limb data and breathing data, which are compared with the preset standard database; the comparison result is reflected on the user's virtual limb in the video image on the display module 203 and marked with different colors, making it easy for the user to review and learn. The modules are connected to the hanging carrier 205 to obtain power and network resources, so the system is plug-and-play, which improves the user experience and lets the user learn yoga poses more intuitively and scientifically.
It should be noted that, in some embodiments of the present invention, the camera module 201 is a camera, the breath detection module 202 is a respiration monitoring device capable of recognizing the breathing state, and the display module 203 may be a display screen or a projection device. The hanging carrier 205 provides interfaces for the devices, so that external devices can be connected to it through the interfaces, share its power supply or network resources, and be plug-and-play. The camera module 201, the breath detection module 202 and the display module 203 can be mounted on the hanging carrier 205 or connected to it through the interfaces; fig. 7 only shows, by way of example, these modules connected to the hanging carrier 205 through the interface of the processing module 204, and the invention is not limited to this.
It should be noted that, in some embodiments of the present invention, the breath detection module 202 is a high-definition camera, and may even be the same high-definition camera as the camera module 201. The high-definition camera, acting as the breath detection module 202, detects features of the user's breathing, for example the contraction and expansion of the user's lungs or chest while breathing, and transmits the signal to the processing module 204 for processing so as to determine the user's breathing state.
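As a rough illustration of inferring the breathing phase from chest expansion in the video, the sketch below looks at the trend of the chest bounding-box width over recent frames. The window size and pixel threshold are assumptions, and a production respiration detector would need to be considerably more robust.

```python
import numpy as np

def estimate_breath_phase(chest_widths, window=5, threshold=0.5):
    """Classify the breathing phase from the chest width (in pixels) measured in
    each recent frame: a rising trend is read as inhaling, a falling trend as
    exhaling, anything in between as holding."""
    if len(chest_widths) < window + 1:
        return None                               # not enough frames yet
    recent = np.asarray(chest_widths[-(window + 1):], dtype=float)
    trend = float(np.mean(np.diff(recent)))       # average change per frame
    if trend > threshold:
        return "inhale"
    if trend < -threshold:
        return "exhale"
    return "hold"
```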
The yoga-assisted learning system in this embodiment may further include the control device 100 of the above embodiment, so that it has the hardware structure of the control device 100, and the control processor 101 in the control device 100 can call the control program of the yoga-assisted learning system stored in the memory 102 to control the control device 100.
The above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions which, when executed by one or more control processors, perform the yoga-assisted learning method in the above method embodiments, for example the above-described method steps S110 to S140 in fig. 1, method steps S210 to S230 in fig. 2, method steps S310 to S330 in fig. 3, method steps S410 to S430 in fig. 4, and method steps S510 to S520 in fig. 5.
One of ordinary skill in the art will appreciate that all or some of the steps and systems in the methods disclosed above may be implemented as software, firmware, hardware and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media, as is known to those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims (10)

1. A yoga-assisted learning method, characterized by comprising the following steps:
acquiring physical sign data of a user, wherein the physical sign data comprises breathing data of the user;
acquiring video image information of the user, and obtaining limb data of the user according to the video image information;
comparing the physical sign data and the limb data with a standard database to generate a comparison result; and
marking the virtual limb of the user in the video image information with multiple colors according to the comparison result.
2. The yoga-assisted learning method of claim 1, wherein after comparing the physical sign data and the limb data with the standard database to generate the comparison result, the method further comprises:
providing guidance information to the user according to the comparison result, wherein the guidance information comprises at least one of audio guidance information provided through a loudspeaker, video guidance information provided through a display module, and vibration guidance information provided through a yoga mat.
3. The yoga-assisted learning method of claim 1, wherein marking the virtual limb of the user in the video image information with multiple colors according to the comparison result comprises:
acquiring position information of joint points of the virtual limb of the user in the video image information and marking the position information to obtain joint point information;
marking the virtual limb of the user with lines according to the joint point information to obtain limb line information; and
marking the limb line information with multiple colors according to the comparison result.
4. The yoga-assisted learning method of claim 1, wherein marking the virtual limb of the user in the video image information with multiple colors according to the comparison result comprises:
if the user is detected to be in a preparation state, marking the virtual limb of the user in the video image information with a first color;
if the user is detected to be in a motion state, marking the virtual limb of the user in the video image information with a second color; and
if the user is detected to be in a completion state, marking the virtual limb of the user in the video image information with a third color.
5. The yoga-assisted learning method of claim 4, wherein the preparation state, the motion state and the completion state are determined according to the breathing data and the limb data, wherein the breathing data comprises first breathing data and second breathing data, the limb data comprises first limb data, second limb data and third limb data, the preparation state is obtained by detecting the first breathing data and the first limb data, the motion state is obtained by detecting the second breathing data and the second limb data, and the completion state is obtained by detecting the first breathing data and the third limb data.
6. The yoga-assisted learning method of claim 1, wherein before acquiring the physical sign data of the user, the method further comprises:
acquiring equipment state information of a yoga mat and video image information of the user;
if the equipment state information indicates that the mat is unfolded and the user in the video image information is at a preset position, determining that the user and the yoga mat are in a preparation state; and
acquiring the physical sign data of the user on the basis that the user and the yoga mat are in the preparation state.
7. The yoga-assisted learning method of claim 1, wherein the yoga-assisted learning method is applied to a yoga-assisted learning system and, before acquiring the physical sign data of the user, further comprises:
acquiring first control information of the user, and determining according to the first control information that the system is in a first working mode;
acquiring second control information of the user, and determining a yoga posture according to the second control information; and
setting the standard database according to the yoga posture, wherein the standard database comprises standard breathing data and standard limb data of the yoga posture.
8. The yoga-assisted learning method of claim 1, wherein the yoga-assisted learning method is applied to a yoga-assisted learning system and, before acquiring the physical sign data of the user, further comprises:
acquiring third control information of the user, and determining according to the third control information that the system is in a second working mode; and
setting the standard database according to the second working mode of the system, wherein the standard database is used for recording the breathing data and/or physical sign data of the user and generating a new yoga posture.
9. A yoga-assisted learning system, characterized by comprising:
a camera module for acquiring video image information of a user or a yoga mat;
a breath detection module for acquiring breathing data of the user;
a processing module for controlling the working state of the yoga-assisted learning system;
a display module for displaying the image information processed by the processing module; and
a hanging carrier, to which the camera module, the breath detection module and the display module are connected, for providing power and network resources.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the yoga-assisted learning method according to any one of claims 1 to 8.
CN202110293705.7A 2021-03-19 2021-03-19 Yoga auxiliary learning method, system and storage medium Pending CN113117306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110293705.7A CN113117306A (en) 2021-03-19 2021-03-19 Yoga auxiliary learning method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110293705.7A CN113117306A (en) 2021-03-19 2021-03-19 Yoga auxiliary learning method, system and storage medium

Publications (1)

Publication Number Publication Date
CN113117306A true CN113117306A (en) 2021-07-16

Family

ID=76773384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110293705.7A Pending CN113117306A (en) 2021-03-19 2021-03-19 Yoga auxiliary learning method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113117306A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150279230A1 (en) * 2014-03-26 2015-10-01 Wai Lana Productions, Llc Method for yoga instruction with media
CN109011508A (en) * 2018-07-30 2018-12-18 三星电子(中国)研发中心 A kind of intelligent coach system and method
CN108985262A (en) * 2018-08-06 2018-12-11 百度在线网络技术(北京)有限公司 Limb motion guidance method, device, server and storage medium
CN112149472A (en) * 2019-06-28 2020-12-29 广州芊泓运动科技有限公司 Artificial intelligence-based limb action recognition and comparison method
CN112001346A (en) * 2020-08-31 2020-11-27 江苏正德厚物联网科技发展有限公司 Vital sign detection method and monitoring system based on multi-algorithm fusion cooperation

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569688A (en) * 2021-07-21 2021-10-29 上海健指树健康管理有限公司 Body fitness testing method and device based on limb recognition technology and storage medium
CN114495262A (en) * 2021-12-24 2022-05-13 北京航空航天大学 Method, system, computer equipment and storage medium for limb evaluation
CN114733161A (en) * 2022-04-26 2022-07-12 广州智康科技开发有限公司 Rehabilitation training system based on breathing and body movement
CN114733161B (en) * 2022-04-26 2023-09-29 广州智康科技开发有限公司 Rehabilitation training system based on respiration and body movement
CN115445170A (en) * 2022-07-30 2022-12-09 华为技术有限公司 Exercise reminding method and related equipment
CN115445170B (en) * 2022-07-30 2024-06-25 华为技术有限公司 Exercise reminding method and related equipment

Similar Documents

Publication Publication Date Title
CN113117306A (en) Yoga auxiliary learning method, system and storage medium
CN110472099B (en) Interactive video generation method and device and storage medium
US20150170546A1 (en) Software application for a portable device for cpr guidance using augmented reality
US9498123B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
US20130265448A1 (en) Analyzing Human Gestural Commands
US8150118B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
JP7136216B2 (en) Class support system, judgment device, class support method and program
KR20200129327A (en) Method of providing personal training service and system thereof
KR102011887B1 (en) Supervisory System of Online Lecture by Eye Tracking
US11837233B2 (en) Information processing device to automatically detect a conversation
CN111083397A (en) Recorded broadcast picture switching method, system, readable storage medium and equipment
CN112734799A (en) Body-building posture guidance system
KR20150009229A (en) Appartus and system for interactive cpr simulator based on augmented reality
WO2022161037A1 (en) User determination method, electronic device, and computer-readable storage medium
US20170116952A1 (en) Automatic parameter adjustment system and method for display device, and display device
JP2016100033A (en) Reproduction control apparatus
CN116114250A (en) Display device, human body posture detection method and application
JP2017120366A (en) Picture display device and picture display method
CN112055257B (en) Video classroom interaction method, device, equipment and storage medium
JPH10263126A (en) Form analyzing system for exercise using personal computer
KR20220069355A (en) Smart mirror for managing a body shape
CN108769593B (en) Teenager body monitoring system and method based on thermal imaging technology
JP6625809B2 (en) Electronic device and control method thereof
US20220246060A1 (en) Electronic device and method for eye-contact training
CN116324700A (en) Display equipment and media asset playing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210716