CN111695777A - Teaching method, teaching device, electronic device and storage medium - Google Patents

Teaching method, teaching device, electronic device and storage medium

Info

Publication number
CN111695777A
Authority
CN
China
Prior art keywords
exercise
target user
teaching
posture
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010393608.0A
Other languages
Chinese (zh)
Inventor
常向月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhuiyi Technology Co Ltd filed Critical Shenzhen Zhuiyi Technology Co Ltd
Priority to CN202010393608.0A
Publication of CN111695777A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of the present application disclose a teaching method, a teaching apparatus, an electronic device, and a storage medium. The method includes acquiring exercise feature data of a target user during practice of current content, generating, based on the posture features, a teaching rule corresponding to the voice features, and then guiding, by a virtual robot, the target user to practice the current content based on the teaching rule. In this way, once exercise feature data containing the target user's voice features and posture features during practice of the current content has been acquired, a teaching rule corresponding to the voice features is generated from the posture features, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and at the same time improves practice efficiency.

Description

Teaching method, teaching device, electronic device and storage medium
Technical Field
The present application relates to the field of human-computer interaction technologies, and in particular, to a teaching method, an apparatus, an electronic device, and a storage medium.
Background
Music is an art that reflects the emotions of real human life; mastery of temperament and performance can evoke emotional resonance in listeners. Today, as quality-oriented education receives greater emphasis, a deeper understanding of and engagement with music also shapes, to a certain extent, the ways of thinking and the values of people, especially children. With the development of musical-instrument teaching technology and the improvement of living standards, artificial-intelligence instrument teaching has become one way in which children and other learners who wish to study an instrument can be taught to practice in a targeted manner; however, the existing artificial-intelligence instrument-teaching technology still leaves room for improvement.
Disclosure of Invention
In view of the above, the present application provides a teaching method, an apparatus, an electronic device, and a storage medium to improve the above problems.
In a first aspect, an embodiment of the present application provides a teaching method, where the method includes: acquiring exercise characteristic data of a target user in a current content exercise process, wherein the exercise characteristic data comprises voice characteristics and posture characteristics of the target user; generating a teaching rule corresponding to the voice feature based on the posture feature; and guiding the target user to practice the current content based on the teaching rule through a virtual robot.
Optionally, the gesture feature may include an expression feature. Further, the generating of the teaching rule corresponding to the voice feature based on the gesture feature may include: acquiring an exercise state corresponding to the expression feature; and if the exercise state conforms to a preset exercise state, generating a training plan matched with the exercise state and taking the training plan as the teaching rule corresponding to the voice feature.
Optionally, the gesture feature may include an expression feature and a posture feature, and further, the generating a teaching rule corresponding to the voice feature based on the gesture feature may include: comparing the expression features and the posture features with reference expression features and reference posture features to obtain expressions to be corrected and postures to be corrected; and generating a teaching rule comprising the expression to be corrected and the posture to be corrected.
Further, the guiding, by the virtual robot, the target user to practice the current content based on the teaching rule includes: displaying, by the virtual robot based on the teaching rule, the position corresponding to the expression to be corrected and the change required to correct the expression feature; and displaying, by the virtual robot based on the teaching rule, the position corresponding to the posture to be corrected and the magnitude of deviation required to correct the posture feature.
Further, the method further comprises: and generating a reference exercise plan matched with the target user according to the voice characteristics, wherein the reference exercise plan comprises contents corresponding to exercise in a specified time period.
Further, the current content includes a plurality of sub-contents corresponding to an exercise sequence, and after the virtual robot guides the target user to exercise the current content based on the teaching rule, the method further includes: acquiring a comprehensive exercise scoring parameter of the target user; and if the scoring parameter does not meet a preset threshold value, adjusting the exercise sequence of the plurality of sub-contents.
Further, the method further comprises: recommending other song contents except the current content to the target user.
In a second aspect, an embodiment of the present application provides a teaching device, including: the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring exercise characteristic data of a target user in the current content exercise process, and the exercise characteristic data comprises voice characteristics and posture characteristics of the target user; the generating module is used for generating teaching rules corresponding to the voice features based on the posture features; and the teaching guidance module is used for guiding the target user to practice the current content through a virtual robot based on the teaching rule.
Optionally, the gesture feature may include an expressive feature. Further, the generating module may be specifically configured to acquire an exercise state corresponding to the expression feature; and if the exercise state accords with a preset exercise state, generating a training plan matched with the exercise state, and taking the training plan as a teaching rule corresponding to the voice feature.
Optionally, the gesture features include expressive features and posture features. Further, the generating module may be specifically configured to compare the expression features and the posture features with reference expression features and reference posture features to obtain an expression to be corrected and a posture to be corrected; and generating a teaching rule comprising the expression to be corrected and the posture to be corrected.
Further, the teaching guidance module may be specifically configured to display, by the virtual robot, a corresponding position of the expression to be corrected and a content that is required to be changed for correcting the expression feature based on the teaching guidance rule; and displaying, by the virtual robot, the corresponding position of the posture to be corrected and the magnitude of deviation required to correct the posture feature based on the teaching guidance rule.
Further, the apparatus may further include an exercise plan generating module configured to generate a reference exercise plan matching the target user according to the voice feature, where the reference exercise plan includes content corresponding to exercise in a specified time period.
Optionally, the current content includes a plurality of sub-contents, and the plurality of sub-contents correspond to an exercise sequence. The device can also comprise a scoring parameter acquisition module and an exercise sequence adjustment module. The scoring parameter acquiring module may be configured to acquire a comprehensive exercise scoring parameter of the target user after the target user is instructed to exercise the current content by the virtual robot based on the teaching rule. The practice sequence adjusting module may be configured to adjust a practice sequence of the plurality of sub-contents if the scoring parameter does not satisfy a preset threshold.
Optionally, the apparatus may further include a content recommending module, configured to recommend other song content besides the current content to the target user.
In a third aspect, an embodiment of the present application provides an electronic device, including one or more processors and a memory; one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of the first aspect described above.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, where the program code executes the method of the first aspect.
The present application provides a teaching method, a teaching apparatus, an electronic device, and a storage medium, and relates to the technical field of human-computer interaction. The method includes acquiring exercise feature data of a target user during practice of current content, generating, based on the posture features, a teaching rule corresponding to the voice features, and then guiding, by a virtual robot, the target user to practice the current content based on the teaching rule. In this way, once exercise feature data containing the target user's voice features and posture features during practice of the current content has been acquired, a teaching rule corresponding to the voice features is generated from the posture features, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and at the same time improves practice efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a schematic diagram of an application environment provided by an embodiment of the present application.
Fig. 2 shows a method flowchart of a teaching method according to an embodiment of the present application.
Fig. 3 shows a method flowchart of a teaching method according to another embodiment of the present application.
Fig. 4 shows a method flowchart of a teaching method according to another embodiment of the present application.
Fig. 5 shows a method flowchart of a teaching method according to yet another embodiment of the present application.
Fig. 6 shows a method flowchart of a teaching method according to yet another embodiment of the present application.
Fig. 7 shows a block diagram of a teaching apparatus according to an embodiment of the present application.
Fig. 8 shows a block diagram of an electronic device for executing the teaching method according to the embodiment of the present application.
Fig. 9 is a storage unit for storing or carrying program codes for implementing the teaching method according to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Music is an art that reflects the emotions of real human life; mastery of temperament and performance can evoke emotional resonance in listeners. Today, as quality-oriented education receives greater emphasis, a deeper understanding of and engagement with music also shapes, to a certain extent, the ways of thinking and the values of people, especially children. With the development of musical-instrument teaching technology and the improvement of living standards, artificial-intelligence instrument teaching has become one way in which children and other learners who wish to study an instrument can be taught to practice in a targeted manner.
At present, artificial-intelligence instrument-teaching products mainly take two forms: systems for teachers and systems for students. The former lets teachers follow students' learning progress and, outside class, communicate with students remotely, assign practice homework, check practice status, and so on. The latter treats the student's practice as a performance heard by an audience and provides score feedback: the underlying technique compares the sound data captured by a microphone with reference data, uses artificial-intelligence technology to quickly identify where the learner can improve in intonation, beat, and melody, and finally presents the evaluation to the learner visually as a comprehensive score. However, the existing artificial-intelligence instrument-teaching technology considers only where the learner's intonation, tempo, and the like can be improved during playing, and still leaves room for improvement.
As a way to improve on the above problem, the inventor proposes in the present application a teaching method, an apparatus, an electronic device, and a storage medium. The method acquires exercise feature data of a target user during practice of current content, generates a teaching rule corresponding to the voice features based on the posture features, and then guides the target user, through a virtual robot, to practice the current content based on the teaching rule. Once exercise feature data containing the target user's voice features and posture features during practice of the current content has been acquired, a teaching rule corresponding to the voice features is generated from the posture features, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and at the same time improves practice efficiency.
In order to better understand the teaching method, apparatus, electronic device, and storage medium provided in the embodiments of the present application, an application environment suitable for the embodiments of the present application will be described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment suitable for the embodiments of the present application. The teaching method provided by the embodiments of the present application can be applied to the polymorphic interaction system 100 shown in fig. 1. The polymorphic interaction system 100 includes an electronic device 110 and a server 120, the server 120 being communicatively coupled to the electronic device 110. The server 120 may be a conventional server or a cloud server, which is not limited herein.
The electronic device 110 may be various electronic devices having a display screen and supporting data input, including but not limited to a smart phone, a tablet computer, a laptop portable computer, a desktop computer, a wearable electronic device, and the like. Specifically, the data input may be based on a voice module provided on the electronic device 110 to input voice, a character input module to input characters, and so on.
The electronic device 110 may have a teaching-type client application installed on it, and the user may communicate with the server 120 through the client application (for example, an application (APP), a WeChat applet, and the like). Specifically, the server 120 runs a corresponding server-side application; the user may register a user account with the server 120 through the client application and communicate with the server 120 based on that account. For example, the user logs in to the user account in the client application and inputs information, such as text or voice, through the client application under that account. After receiving the information input by the user, the client application sends it to the server 120, so that the server 120 can receive, process, and store the information; the server 120 may also return corresponding output information to the electronic device 110 according to the received information.
In some implementations, a client application may be used to provide educational services to a user, to provide educational courses to a user, etc., and the client application may interact with the user based on the virtual robot. In particular, the client application may receive information input by a user and respond to the information based on the virtual robot. The virtual robot is a software program based on visual graphics, and the software program can present robot forms simulating biological behaviors or ideas to a user after being executed. The virtual robot may be a robot simulating a real person, for example, a robot shaped like a real person built according to the shape of the user himself or other people, or a robot having an animation effect, for example, a robot shaped like an animal or a cartoon character, and is not limited herein.
In some embodiments, after acquiring the reply information corresponding to the information input by the user, the electronic device 110 may display a virtual robot image corresponding to the reply information on its display screen or on another connected image output device (where the characteristics of the virtual robot image may include the virtual robot's gender, the reply emotion corresponding to the reply audio, personality characteristics, and so on). As one approach, while the virtual robot image is played, the audio corresponding to it may be played through a speaker of the electronic device 110 or another connected audio output device, and text or graphics corresponding to the reply information may be shown on the display screen of the electronic device 110, thereby realizing polymorphic interaction with the user across image, voice, text, and other modalities.
In some embodiments, the means for processing the information input by the user can also be disposed on the electronic device 110, so that the electronic device 110 can interact with the user without relying on establishing communication with the server 120, in which case the polymorphic interaction system 100 can include only the electronic device 110.
The above application environments are only examples for facilitating understanding, and it is to be understood that the embodiments of the present application are not limited to the above application environments.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
First embodiment
Referring to fig. 2, an embodiment of the present application provides a teaching method applicable to an electronic device, the method including:
step S110: acquiring exercise characteristic data of a target user in the current content exercise process.
The target user is a user currently in an exercise state, the current content may be exercise content related to the body movement of the target user, for example, the current content may include musical instruments, singing, dancing, martial arts, or other performance activities, and the specific content may not be limited.
In this embodiment, the exercise feature data may include voice features and posture features of the target user. The voice features may include characteristics such as the target user's intonation, tone, beat, and timbre, and the posture features may include the target user's facial-expression features (expression, demeanor, and the like) and body-posture features. For example, if the current content the target user is practicing is a dance, the expressions and postures corresponding to the various dance movements must be imitated during practice; in this case, the posture features may include both the facial-expression features and the body-posture features of the target user.
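As an illustration only, the exercise feature data described above might be organized as in the following minimal Python sketch; the class and field names are assumptions introduced here and do not appear in the application.

```python
# Illustrative container for the exercise feature data (voice + posture).
# All names and value ranges are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    intonation: float       # assumed pitch-accuracy score in [0.0, 1.0]
    beat_deviation: float   # assumed average deviation from the reference beat, in seconds
    timbre: str             # e.g. "bright", "dull"

@dataclass
class PostureFeatures:
    expression: str         # e.g. "smiling", "frowning", "confused"
    body_posture: str       # e.g. "nodding", "head lowered"

@dataclass
class ExerciseFeatureData:
    user_id: str
    voice: VoiceFeatures
    posture: PostureFeatures

sample = ExerciseFeatureData(
    user_id="target_user",
    voice=VoiceFeatures(intonation=0.9, beat_deviation=0.05, timbre="bright"),
    posture=PostureFeatures(expression="smiling", body_posture="nodding"),
)
```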
It is understandable that, as practice time increases, the target user may grow bored with or weary of the content currently being practiced, resulting in inefficient practice. As a way of improving on this, the present embodiment may acquire exercise feature data of the target user during practice of the current content. As one implementation, the exercise feature data may be collected by sensing devices. Optionally, (video) image information of the target user during practice may be captured through an image acquisition device such as a camera, and the posture features of the target user obtained from that image information. For example, as one approach, the target user's facial micro-expressions during practice may be captured through face-sensing technology, and the target user's emotion then extracted from those micro-expressions, giving the target user's attitude toward the current content, for example whether the target user is happy or tired.
Optionally, voice data of the target user during practice of the current content may be captured through an audio device such as a microphone, and the voice features of the target user obtained from that voice data. For the specific implementation principles and processes of capturing (video) image information through an image acquisition device such as a camera and capturing voice data through an audio device such as a microphone, reference may be made to the related art; details are not repeated here.
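A hedged sketch of this acquisition step (S110) follows. The capture and recognition functions are placeholders standing in for whatever camera/microphone hardware and face/voice analysis stack is actually used; their names and signatures are assumptions.

```python
# Sketch of acquiring exercise feature data during a short practice window.
# capture_* and extract_* are placeholder stubs, not a real sensing API.
import time

def capture_video_frame():
    """Placeholder: grab one frame from the camera."""
    return None

def capture_audio_chunk():
    """Placeholder: record a short audio chunk from the microphone."""
    return None

def extract_expression(frame):
    """Placeholder: micro-expression recognition on a frame."""
    return "smiling"

def extract_voice_features(audio):
    """Placeholder: intonation/beat/timbre analysis of an audio chunk."""
    return {"intonation": 0.9, "beat_deviation": 0.05, "timbre": "bright"}

def acquire_exercise_features(duration_s=5.0, interval_s=0.5):
    """Collect posture and voice features sampled over a practice window."""
    expressions, voice_samples = [], []
    end_time = time.time() + duration_s
    while time.time() < end_time:
        expressions.append(extract_expression(capture_video_frame()))
        voice_samples.append(extract_voice_features(capture_audio_chunk()))
        time.sleep(interval_s)  # sampling interval is arbitrary here
    return {"expressions": expressions, "voice": voice_samples}
```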
Optionally, acquisition of the exercise feature data may begin when the target user starts practicing, or after the target user activates an exercise-feature-data acquisition button; the specific starting time of acquisition is not limited.
Step S120: and generating a teaching rule corresponding to the voice feature based on the posture feature.
As one approach, if the current content the target user is practicing is an instrumental track, the track may be divided into different paragraphs, and the posture features and voice features of the target user while practicing the different paragraphs may be obtained. Optionally, the voice features corresponding to different paragraphs may differ; for example, some paragraphs may be high-pitched and others low-pitched. To better help the target user practice effectively, the target user's posture during practice of each track may be obtained, for example the expression (such as happy, confused, depressed, surprised, or distressed) and the body posture (such as nodding, fluent movement, or forgetting the movements/lyrics/beat), and the teaching rule corresponding to the voice features may then be generated according to the target user's posture features.
For example, suppose a track is divided into paragraph A, paragraph B, paragraph C, paragraph D, and paragraph E, each corresponding to different voice features. If, while practicing the track, the target user's expression for paragraph A is a smiling face and the corresponding posture is nodding, it can be inferred that the target user has mastered the practice skill for paragraph A and is confident about the paragraph currently being practiced. The practice duration can then be reduced for paragraph content whose voice features are similar to those of paragraph A. If the target user's expression when practicing paragraphs C and E is "depressed", with a "head bowed" posture for paragraph C and a "head lowered" posture for paragraph E, it can be inferred that the target user is not familiar with the contents of paragraphs C and E and that the practice skill needs improvement. During the next training session, the target user can first be guided to review paragraphs C and E, with the review corrected in time; if the practice is still incorrect, the correct way of practicing can be demonstrated promptly, deepening the target user's impression of paragraphs C and E. Optionally, the practice duration, practice frequency, and so on can be increased for paragraph content whose voice features are similar to those of paragraphs C and E, respectively.
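The per-paragraph reasoning above could be expressed roughly as follows; the mapping from observed expression and posture to practice-time adjustments is an illustrative assumption, not the exact rule set of the application.

```python
# Sketch: derive a teaching rule per paragraph from the observed posture features.
def generate_teaching_rules(paragraph_observations):
    """paragraph_observations: {"A": {"expression": "smiling", "posture": "nodding"}, ...}"""
    rules = {}
    for paragraph, obs in paragraph_observations.items():
        if obs["expression"] == "smiling" and obs["posture"] == "nodding":
            # Paragraph appears mastered: shorten its future practice time.
            rules[paragraph] = {"practice_minutes": 5, "priority": "low"}
        elif obs["expression"] == "depressed":
            # Paragraph appears unfamiliar: review it first and practice it more.
            rules[paragraph] = {"practice_minutes": 20, "priority": "high", "review_first": True}
        else:
            rules[paragraph] = {"practice_minutes": 10, "priority": "medium"}
    return rules

rules = generate_teaching_rules({
    "A": {"expression": "smiling", "posture": "nodding"},
    "C": {"expression": "depressed", "posture": "head bowed"},
    "E": {"expression": "depressed", "posture": "head lowered"},
})
```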
It should be noted that the teaching rules in this embodiment may include a training plan, a training duration, a training period, a training frequency, a training focus section (beat), and the like of current content corresponding to the voice feature of the target user, which are generated based on the gesture feature of the target user, but are not limited herein.
As an implementation, if the current content can be divided into multiple paragraphs, the number of times the key paragraphs of the current content are repeated can be planned for the target user, along with which part of the content should be practiced next after the key paragraphs so that practice remains effective; alternatively, the order in which different paragraphs are practiced can be adjusted according to the score, reducing the target user's boredom during practice.
Step S130: and guiding the target user to practice the current content based on the teaching rule through a virtual robot.
As one approach, this embodiment can guide the target user to practice the current content through the virtual robot based on the teaching rule, giving the target user intuitive and vivid guidance on where the expression, posture, and other aspects need improvement. Optionally, the virtual robot can also prompt the target user by voice about where the voice features need correction, so that the target user's practice errors are corrected quickly and in a targeted manner, enabling fast and effective practice and improving the user experience.
In this embodiment, exercise feature data of the target user during practice of the current content is acquired, a teaching rule corresponding to the voice features is generated based on the posture features, and the virtual robot then guides the target user to practice the current content based on the teaching rule. In this way, once exercise feature data containing the target user's voice features and posture features has been acquired, a teaching rule corresponding to the voice features is generated from the posture features, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and at the same time improves practice efficiency.
Second embodiment
Referring to fig. 3, another embodiment of the present application provides a teaching method applicable to an electronic device, the method including:
step S210: acquiring exercise characteristic data of a target user in the current content exercise process.
Step S220: and acquiring an exercise state corresponding to the expression characteristics.
Optionally, the exercise feature data in this embodiment may include posture features, and the posture features may include expression features. It can be understood that if the content being practiced is a musical track, the target user's posture may not change much during practice; for example, the user may hold the same posture throughout. In this case, whether the target user's posture changes within a preset time period can be detected; the specific value of the preset period is not limited. Optionally, if the posture does not change within the preset period, the target user's expression features can be acquired, and the exercise state during practice obtained from the expression features.
The correspondence between different expression features and different exercise states can be configured in advance and stored, so that when the target user's expression features during practice are obtained, the corresponding exercise state can be retrieved quickly. For example, the expression "smiling face" may be stored in correspondence with the exercise state "good", and the expression "frown" in correspondence with the exercise state "poor". Optionally, the expression "smiling face" may cover smiles of different degrees, such as "smiling", "raised eyebrows", "laughing", and the like.
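A pre-configured correspondence of this kind could be as simple as the following lookup table; the entries are assumptions for illustration.

```python
# Sketch: stored correspondence between expression features and exercise states.
EXPRESSION_TO_STATE = {
    "smiling": "good",
    "raised eyebrows": "good",
    "laughing": "good",
    "frown": "poor",
}

def exercise_state_for(expression):
    # Unknown expressions fall back to a neutral state.
    return EXPRESSION_TO_STATE.get(expression, "neutral")
```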
Step S230: and judging whether the exercise state accords with a preset exercise state.
Optionally, in this embodiment, an exercise state that calls for improvement may be used as the preset exercise state; for example, the exercise state "poor" corresponding to the expression "frown" may be used as the preset exercise state. In this way, after the exercise state corresponding to the target user's expression features is obtained, it can be compared with the preset exercise state. If the exercise state corresponding to the target user's current expression features is the same as the preset exercise state, or falls within the range of the preset exercise state, it can be determined that the target user's current exercise state conforms to the preset exercise state; otherwise, it can be determined that it does not.
Step S240: and generating a training plan matched with the exercise state, and taking the training plan as a teaching rule corresponding to the voice feature.
As one approach, if the target user's current exercise state conforms to the preset exercise state, a training plan matched with that exercise state may be generated, and the training plan may then be used as the teaching rule corresponding to the target user's voice features. It can be understood that if the target user's current exercise state does not conform to the preset exercise state, the current exercise state can be considered good, and the target user's exercise state can continue to be monitored, so as to avoid a drop in training efficiency caused by distraction or the like during practice.
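Steps S230 and S240 might look roughly like the sketch below; which states count as "preset" and what the generated plan contains are assumptions.

```python
# Sketch: compare the detected exercise state with the preset state and,
# on a match, generate a training plan to use as the teaching rule.
PRESET_STATES = {"poor"}  # assumed states that call for intervention

def maybe_generate_plan(exercise_state, current_paragraph):
    if exercise_state not in PRESET_STATES:
        return None  # state looks fine; keep monitoring
    return {
        "focus_paragraph": current_paragraph,
        "practice_minutes": 15,
        "break_minutes": 5,   # short rest to relieve fatigue
        "tempo": "slow",      # practice at reduced tempo first
    }
```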
Step S250: and guiding the target user to practice the current content based on the teaching rule through a virtual robot.
In this embodiment, exercise feature data of the target user during practice of the current content is acquired, the exercise state corresponding to the expression features is obtained, and, when that exercise state conforms to a preset exercise state, a training plan matched with the exercise state is generated and used as the teaching rule corresponding to the target user's voice features; the virtual robot then guides the target user to practice the current content based on the teaching rule. In this way, once exercise feature data containing the target user's posture features during practice of the current content has been acquired, a teaching rule matched with the exercise state corresponding to the target user's expression features is generated, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and improves practice efficiency.
Third embodiment
Referring to fig. 4, another embodiment of the present application provides a teaching method applicable to an electronic device, the method including:
step S310: acquiring exercise characteristic data of a target user in the current content exercise process.
Step S320: and comparing the expression characteristics and the posture characteristics with reference expression characteristics and reference posture characteristics to obtain the expression to be corrected and the posture to be corrected.
In this embodiment, the exercise feature data may include a posture feature of the target user, wherein the posture feature may include an expressive feature and a posture feature.
Optionally, for practice content such as dance or violin, in which the target user's limbs move through a large range during practice, the target user's expression features and body-posture features during practice of the current content can be obtained and compared with pre-stored reference expression features and reference posture features, respectively. If there are differences, the expression to be corrected and the posture to be corrected can be obtained, so that the virtual robot can subsequently guide the target user to correct them.
For example, as one implementation, posture detection, including human-skeleton detection, 2D and 3D body-pose recognition, and 2D and 3D hand-pose recognition, can be performed on the target user during training through posture-sensing technology, and information on the target user's facial expression and body posture obtained in real time. The target user's facial expression and body posture are compared with the reference expression features and reference posture features (for example, the expression and posture of a renowned performer while playing), and the target user's training process is then scored, intelligently evaluated, and fed back to the target user, helping the target user better correct problems of posture, expression, and the like during training.
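Conceptually, the comparison with the reference could proceed as in the sketch below; the keypoint names, the deviation metric, and the threshold are assumptions, and real pose estimation would of course replace the plain dictionaries.

```python
# Sketch: compare detected pose keypoints and expression with reference values
# to find the expression to be corrected and the postures to be corrected.
import math

def keypoint_deviation(detected, reference):
    """Per-keypoint Euclidean deviation between detected and reference 2D positions."""
    return {
        name: math.dist(detected[name], reference[name])
        for name in reference
        if name in detected
    }

def find_corrections(detected_pose, reference_pose,
                     detected_expression, reference_expression,
                     pose_threshold=0.1):
    postures_to_correct = {
        name: dev
        for name, dev in keypoint_deviation(detected_pose, reference_pose).items()
        if dev > pose_threshold
    }
    expression_to_correct = (
        None if detected_expression == reference_expression
        else {"detected": detected_expression, "expected": reference_expression}
    )
    return expression_to_correct, postures_to_correct
```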
Step S330: and generating a teaching rule comprising the expression to be corrected and the posture to be corrected.
As one way, a teaching rule including the expression to be corrected and the posture to be corrected may be generated so as to guide the target user to practice with pertinence. Optionally, the specific generation process of the teaching rule may refer to the description in the foregoing embodiment, and is not described herein again.
Step S340: and guiding the target user to practice the current content based on the teaching rule through a virtual robot.
As one approach, the virtual robot can display, based on the teaching rule, the position corresponding to the expression to be corrected and the change required to correct the expression feature, and can display, based on the teaching rule, the position corresponding to the posture to be corrected and the magnitude of deviation required to correct the posture feature.
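The guidance itself could be handed to the virtual robot as a small display payload, as in this sketch; the message format is an assumption.

```python
# Sketch: build the guidance items the virtual robot displays, i.e. where a
# correction is needed and by how much.
def build_guidance(expression_to_correct, postures_to_correct):
    guidance = []
    if expression_to_correct:
        guidance.append({
            "type": "expression",
            "position": "face",
            "change_to": expression_to_correct["expected"],
        })
    for joint, deviation in postures_to_correct.items():
        guidance.append({
            "type": "posture",
            "position": joint,
            "deviation": round(deviation, 3),  # magnitude the user should move by
        })
    return guidance
```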
In this embodiment, the target user's expression features and posture features are compared with the reference expression features and reference posture features to obtain the expression to be corrected and the posture to be corrected, so that the virtual robot can display, based on the teaching rule, the position of the expression to be corrected and the change the expression feature needs, as well as the position of the posture to be corrected and the deviation magnitude the posture feature needs. This provides intuitive and vivid guidance on the target user's practice expression and posture, and makes practice more interesting and user-friendly.
Fourth embodiment
Referring to fig. 5, another embodiment of the present application provides a teaching method applicable to an electronic device, the method including:
step S410: acquiring exercise characteristic data of a target user in the current content exercise process.
Step S420: and generating a teaching rule corresponding to the voice feature based on the posture feature.
Step S430: and guiding the target user to practice the current content based on the teaching rule through a virtual robot.
Step S440: and generating a reference exercise plan matched with the target user according to the voice characteristics.
The reference exercise plan may include the content to be practiced in a specified time period. Optionally, while recognizing the target user's voice features, a reasonable training plan may be laid out for the target user in combination with the intonation, tone quality, and other aspects of those voice features, for example which content is best practiced in which time period or at which time. Optionally, a generation rule for the reference exercise plan may be set according to actual requirements, so that the corresponding reference exercise plan can be generated automatically from the target user's voice features.
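One possible automatic generation rule is sketched below, scheduling the weakest paragraphs first; the per-paragraph scoring input and the time allocation are assumptions.

```python
# Sketch: derive a reference exercise plan from per-paragraph voice scores.
def generate_reference_plan(voice_scores):
    """voice_scores: {"A": intonation score in [0.0, 1.0], ...}"""
    plan = []
    # Practice the weakest paragraphs first and give them longer slots.
    for paragraph, score in sorted(voice_scores.items(), key=lambda kv: kv[1]):
        minutes = 25 if score < 0.6 else (15 if score < 0.8 else 10)
        plan.append({"paragraph": paragraph, "minutes": minutes})
    return plan

print(generate_reference_plan({"A": 0.9, "C": 0.5, "E": 0.55}))
```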
In this embodiment, exercise feature data of the target user during practice of the current content is acquired, a teaching rule corresponding to the voice features is generated based on the posture features, the virtual robot guides the target user to practice the current content based on the teaching rule, and a reference exercise plan matched with the target user is generated from the voice features during practice. In this way, the virtual robot can guide the target user through the current content according to the teaching rule, which makes practice more enjoyable for the user and improves practice efficiency.
Fifth embodiment
Referring to fig. 6, another embodiment of the present application provides a teaching method applicable to an electronic device, the method including:
step S510: acquiring exercise characteristic data of a target user in the current content exercise process.
Step S520: and generating a teaching rule corresponding to the voice feature based on the posture feature.
Step S530: and guiding the target user to practice the current content based on the teaching rule through a virtual robot.
Step S540: and acquiring the comprehensive exercise scoring parameters of the target user.
In this embodiment, the current content that is practiced by the target user may include a plurality of sub-contents, the plurality of sub-contents may correspond to a practice sequence, and the practice sequences corresponding to the plurality of sub-contents may be different. As one way, after the target user is instructed to practice the current content based on the teaching rule through the virtual robot, the comprehensive practice scoring parameter of the target user may be acquired so that the practice effect of the target user may be evaluated according to the comprehensive practice scoring parameter.
The comprehensive exercise scoring parameter can be obtained by scoring the target user's expression features and mental-state features while the virtual robot guides the target user to practice the current content based on the teaching rule. For example, if the target user's expression is closer to the reference expression than before correction, the comprehensive exercise score can be increased; if the gap in similarity between the target user's expression and the reference expression is larger, the comprehensive score can be reduced.
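Such a comprehensive score might be combined as in the following sketch; the particular weighting of expression, posture, and voice is an assumption.

```python
# Sketch: comprehensive exercise score as a weighted combination of how close
# the user's expression, posture, and voice are to the reference.
def comprehensive_score(expression_similarity, posture_similarity, voice_accuracy):
    """All inputs assumed to lie in [0, 1]; returns a score on a 0-100 scale."""
    weights = {"expression": 0.3, "posture": 0.3, "voice": 0.4}
    score = (weights["expression"] * expression_similarity
             + weights["posture"] * posture_similarity
             + weights["voice"] * voice_accuracy)
    return round(100 * score, 1)
```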
Step S550: and judging whether the grading parameters meet a preset threshold value.
Optionally, the preset threshold for the scoring parameter may be set according to actual conditions; for example, it may be set to 80, 85, or 90, and the specific value is not limited. As one way, if the value represented by the scoring parameter is smaller than the preset threshold, it may be determined that the scoring parameter does not meet the preset threshold; if the value represented by the scoring parameter is not smaller than the preset threshold, it may be determined that the scoring parameter meets the preset threshold.
Step S560: adjusting an exercise order of the plurality of sub-contents.
As one way, if the scoring parameter does not meet the preset threshold, the practice order of the plurality of sub-contents may be adjusted to increase the target user's freshness and curiosity during practice and thereby increase interest in practicing. Optionally, if the scoring parameter meets the preset threshold, it may be determined that the target user's current practice effect is good, and the practice process may end.
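Steps S550 and S560 could then reduce to the check below; the reshuffling strategy is an assumption, and any reordering that respects the music could be used instead.

```python
# Sketch: if the comprehensive score does not reach the preset threshold,
# reshuffle the practice order of the sub-contents to keep the session fresh.
import random

def adjust_exercise_order(sub_contents, score, threshold=85.0):
    if score >= threshold:
        return list(sub_contents)  # effect is good enough; keep the current order
    reordered = list(sub_contents)
    random.shuffle(reordered)
    return reordered
```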
Optionally, if the target user's exercise state does not improve continuously, or does not improve over a period of time during practice, other track content besides the current content may be recommended to the target user; optionally, the other track content may be a dance piece, a song, or an instrumental track, and the specific content is not limited.
In this embodiment, exercise feature data of the target user during practice of the current content is acquired, a teaching rule corresponding to the voice features is generated based on the posture features, and the virtual robot guides the target user to practice the current content based on the teaching rule; by acquiring the target user's comprehensive exercise scoring parameter, the target user's practice effect after being guided by the virtual robot can be evaluated in time, which makes practice more enjoyable for the user and at the same time improves practice efficiency.
Sixth embodiment
Referring to fig. 7, an embodiment of the present application provides an instructional apparatus 600, for operating on an electronic device, where the apparatus 600 includes:
an obtaining module 610, configured to obtain exercise feature data of a target user in a current content exercise process, where the exercise feature data includes a voice feature and a posture feature of the target user.
A generating module 620, configured to generate a teaching rule corresponding to the voice feature based on the gesture feature.
Optionally, the gesture feature may include an expression feature. As one manner, the generating module 620 may be specifically configured to acquire an exercise state corresponding to the expression feature; and, if the exercise state conforms to a preset exercise state, to generate a training plan matched with the exercise state and use the training plan as the teaching rule corresponding to the voice feature.
Optionally, the gesture features may include expression features and posture features. As another manner, the generating module 620 may be specifically configured to compare the expression features and the posture features with reference expression features and reference posture features to obtain an expression to be corrected and a posture to be corrected, and to generate a teaching rule including the expression to be corrected and the posture to be corrected.
A teaching guidance module 630, configured to guide the target user to practice the current content based on the teaching rule through a virtual robot.
As a manner, the teaching guidance module 630 may be specifically configured to display, by the virtual robot, the corresponding position of the expression to be corrected and the content that needs to be changed for correcting the expression feature based on the teaching guidance rule; and displaying, by the virtual robot, the corresponding position of the posture to be corrected and the magnitude of deviation required to correct the posture feature based on the teaching guidance rule.
Optionally, the apparatus 600 may further include an exercise plan generating module configured to generate a reference exercise plan matching the target user according to the voice feature, where the reference exercise plan includes content corresponding to exercise in a specified time period.
Optionally, the current content may include a plurality of sub-contents, and the plurality of sub-contents correspond to an exercise sequence. The device 600 may further include a scoring parameter obtaining module and an exercise sequence adjusting module. The scoring parameter acquiring module may be configured to acquire a comprehensive exercise scoring parameter of the target user after the target user is instructed to exercise the current content by the virtual robot based on the teaching rule. The practice sequence adjusting module may be configured to adjust a practice sequence of the plurality of sub-contents if the scoring parameter does not satisfy a preset threshold.
The apparatus 600 may further include a content recommending module configured to recommend song content other than the current content to the target user.
The present application provides a teaching apparatus that acquires exercise feature data of a target user during practice of current content, generates a teaching rule corresponding to the voice features based on the posture features, and then guides the target user, through a virtual robot, to practice the current content based on the teaching rule. In this way, once exercise feature data containing the target user's voice features and posture features during practice of the current content has been acquired, a teaching rule corresponding to the voice features is generated from the posture features, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and at the same time improves practice efficiency.
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
An electronic device provided by the present application will be described with reference to fig. 8.
Referring to fig. 8, based on the above teaching method and apparatus, an embodiment of the present application further provides an electronic device 100 capable of performing the teaching method. The electronic device 100 includes one or more processors 102 (only one is shown) and a memory 104 coupled to each other. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104; the memory 104 includes the apparatus 600 described in the foregoing embodiments.
The processor 102 may include one or more processing cores. The processor 102 connects the various parts of the electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Optionally, the processor 102 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 102 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or a video/image playing function), instructions for implementing the foregoing method embodiments, and so on. The data storage area may store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat records), and the like.
Referring to fig. 9, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 700 has stored therein program code that can be called by a processor to perform the methods described in the above-described method embodiments.
The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 700 includes a non-volatile computer-readable storage medium. The computer readable storage medium 700 has storage space for program code 710 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 710 may be compressed, for example, in a suitable form.
With the teaching method, apparatus, electronic device, and storage medium provided by the present application, exercise feature data of a target user during practice of current content is acquired, a teaching rule corresponding to the voice features is generated based on the posture features, and the target user is then guided, through a virtual robot, to practice the current content based on the teaching rule. In this way, once exercise feature data containing the target user's voice features and posture features during practice of the current content has been acquired, a teaching rule corresponding to the voice features is generated from the posture features, so that the virtual robot can guide the target user through the current content according to that rule, which makes practice more enjoyable for the user and at the same time improves practice efficiency.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of teaching, the method comprising:
acquiring exercise feature data of a target user during exercise of current content, wherein the exercise feature data comprises voice features and posture features of the target user;
generating a teaching rule corresponding to the voice features based on the posture features; and
guiding, through a virtual robot, the target user to practice the current content based on the teaching rule.
2. The method of claim 1, wherein the posture features comprise expression features, and wherein generating the teaching rule corresponding to the voice features based on the posture features comprises:
acquiring an exercise state corresponding to the expression features; and
if the exercise state conforms to a preset exercise state, generating a training plan matched with the exercise state and using the training plan as the teaching rule corresponding to the voice features.
3. The method of claim 1, wherein the posture features comprise expression features and body features, and wherein generating the teaching rule corresponding to the voice features based on the posture features comprises:
comparing the expression features and the body features with reference expression features and reference body features to obtain an expression to be corrected and a posture to be corrected; and
generating a teaching rule comprising the expression to be corrected and the posture to be corrected.
4. The method of claim 3, wherein guiding, through the virtual robot, the target user to practice the current content based on the teaching rule comprises:
displaying, by the virtual robot based on the teaching rule, the position corresponding to the expression to be corrected and the changes required to correct the expression features; and
displaying, by the virtual robot based on the teaching rule, the position corresponding to the posture to be corrected and the deviation amplitude required to correct the posture features.
5. The method according to any one of claims 1-4, further comprising:
generating a reference exercise plan matched with the target user according to the voice features, wherein the reference exercise plan comprises the content to be exercised within a specified time period.
6. The method of claim 5, wherein the current content comprises a plurality of sub-contents with a corresponding exercise sequence, and wherein, after guiding the target user to practice the current content through the virtual robot based on the teaching rule, the method further comprises:
acquiring a comprehensive exercise scoring parameter of the target user; and
if the scoring parameter does not meet a preset threshold, adjusting the exercise sequence of the plurality of sub-contents.
7. The method of claim 5, further comprising:
recommending other song contents except the current content to the target user.
8. A teaching device, wherein the device comprises:
an acquisition module, configured to acquire exercise feature data of a target user during exercise of current content, wherein the exercise feature data comprises voice features and posture features of the target user;
a generation module, configured to generate a teaching rule corresponding to the voice features based on the posture features; and
a teaching guidance module, configured to guide the target user, through a virtual robot, to practice the current content based on the teaching rule.
9. An electronic device, comprising:
a memory;
one or more processors; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the method of any one of claims 1-7.
10. A computer-readable storage medium, having program code stored therein, wherein the program code when executed by a processor performs the method of any of claims 1-7.
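For readers less familiar with claim language, the following Python sketch illustrates, purely by way of example, the kind of decision logic recited in claims 2, 3, and 6; every identifier, threshold, and feature key in it (PRESET_EXERCISE_STATES, SCORE_THRESHOLD, rule_from_expression, and so on) is a hypothetical placeholder and not part of the claimed subject matter.

```python
# Purely illustrative sketch of the decision logic in claims 2, 3 and 6.
# All identifiers, thresholds and feature keys are hypothetical placeholders.
from typing import Dict, List, Optional

PRESET_EXERCISE_STATES = {"tired", "distracted"}  # assumed preset exercise states (claim 2)
SCORE_THRESHOLD = 60.0                            # assumed preset scoring threshold (claim 6)


def rule_from_expression(expression_features: Dict[str, float]) -> Optional[Dict[str, str]]:
    """Claim 2: map expression features to an exercise state and, if it matches a
    preset state, return a matching training plan as the teaching rule."""
    state = "tired" if expression_features.get("eye_openness", 1.0) < 0.4 else "focused"
    if state in PRESET_EXERCISE_STATES:
        return {"training_plan": f"shorter, easier drills for a {state} user"}
    return None


def rule_from_comparison(expression: Dict[str, float], body: Dict[str, float],
                         ref_expression: Dict[str, float], ref_body: Dict[str, float],
                         tolerance: float = 0.2) -> Dict[str, List[str]]:
    """Claim 3: compare observed expression and body features against reference
    features and collect the items that deviate beyond a tolerance."""
    expression_to_correct = [k for k, v in expression.items()
                             if abs(v - ref_expression.get(k, v)) > tolerance]
    posture_to_correct = [k for k, v in body.items()
                          if abs(v - ref_body.get(k, v)) > tolerance]
    return {"expression_to_correct": expression_to_correct,
            "posture_to_correct": posture_to_correct}


def adjust_exercise_sequence(sub_contents: List[str],
                             scores: Dict[str, float]) -> List[str]:
    """Claim 6: if the comprehensive score misses the preset threshold, adjust the
    exercise sequence (here: weakest sub-contents are practised first)."""
    overall = sum(scores.values()) / max(len(scores), 1)
    if overall < SCORE_THRESHOLD:
        return sorted(sub_contents, key=lambda c: scores.get(c, 0.0))
    return list(sub_contents)
```

What counts as "conforming to" a preset state, how large a deviation must be to require correction, and how the exercise sequence is adjusted are all implementation choices that the claims leave open.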
CN202010393608.0A 2020-05-11 2020-05-11 Teaching method, teaching device, electronic device and storage medium Pending CN111695777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010393608.0A CN111695777A (en) 2020-05-11 2020-05-11 Teaching method, teaching device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010393608.0A CN111695777A (en) 2020-05-11 2020-05-11 Teaching method, teaching device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN111695777A true CN111695777A (en) 2020-09-22

Family

ID=72477549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010393608.0A Pending CN111695777A (en) 2020-05-11 2020-05-11 Teaching method, teaching device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN111695777A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361415A (en) * 2021-06-08 2021-09-07 浙江工商大学 Micro-expression data set collection method based on crowdsourcing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909867A (en) * 2017-12-01 2018-04-13 深圳市科迈爱康科技有限公司 English Teaching Method, device and computer-readable recording medium
CN108537321A (en) * 2018-03-20 2018-09-14 北京智能管家科技有限公司 A kind of robot teaching's method, apparatus, server and storage medium


Similar Documents

Publication Publication Date Title
CN109949783B (en) Song synthesis method and system
CN108563780B (en) Course content recommendation method and device
Engwall et al. Designing the user interface of the computer-based speech training system ARTUR based on early user tests
CN103080991A (en) Music-based language-learning method, and learning device using same
CN110808038B (en) Mandarin evaluating method, device, equipment and storage medium
US20200251014A1 (en) Methods and systems for language learning through music
US10978045B2 (en) Foreign language reading and displaying device and a method thereof, motion learning device based on foreign language rhythm detection sensor and motion learning method, electronic recording medium, and learning material
Wang et al. Computer-assisted audiovisual language learning
CN117541444B (en) Interactive virtual reality talent expression training method, device, equipment and medium
KR102225435B1 (en) Language learning-training system based on speech to text technology
CN117522643B (en) Talent training method, device, equipment and storage medium
CN111695777A (en) Teaching method, teaching device, electronic device and storage medium
JPH11237971A (en) Voice responding device
Lezhenin et al. Study intonation: Mobile environment for prosody teaching
Fonteles et al. User experience in a kinect-based conducting system for visualization of musical structure
Nomoto et al. Qilin, a robot-assisted Chinese language learning bilingual chatbot
EP4033487A1 (en) Method and system for measuring the cognitive load of a user
KR102432132B1 (en) Server and method for providing brain development application, keyboard
CN115050344A (en) Method and terminal for generating music according to images
US20130072270A1 (en) Coded vocal beatboxing expression and its use in a beatboxing game
AU2012100262B4 (en) Speech visualisation tool
Salamon et al. Seeing the movement through sound: giving trajectory information to visually impaired people
Sporka Non-speech sounds for user interface control
JP4612329B2 (en) Information processing apparatus and program
KR102260280B1 (en) Method for studying both foreign language and sign language simultaneously

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination