CN106251717A - Intelligent robot speech follow read learning method and device - Google Patents
- Publication number
- CN106251717A CN106251717A CN201610836278.1A CN201610836278A CN106251717A CN 106251717 A CN106251717 A CN 106251717A CN 201610836278 A CN201610836278 A CN 201610836278A CN 106251717 A CN106251717 A CN 106251717A
- Authority
- CN
- China
- Prior art keywords
- user
- output
- result
- data
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/04—Electrically-operated educational appliances with audible presentation of the material to be studied
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
The present invention provides an intelligent robot speech follow-read learning method comprising the following steps: an output step, in which the robot outputs multi-modal teaching data according to the language content to be learnt selected by the user; a receiving step, in which the robot receives and parses the multi-modal learning data produced when the user imitates the teaching data; a comparison step, in which it is judged whether the teaching data output matches the learning data received; and an output-result step, in which, if they match, a result indicating that the user's speech imitation has passed is output in multi-modal form. Although the language teaching of the present invention is performed by a machine, it can still achieve the effect of teaching by a live teacher, saving manpower and material resources. In addition, the present invention provides learners with a diversified learning experience.
Description
Technical field
The present invention relates to the field of intelligent robotics, and in particular to an intelligent robot speech follow-read learning method and device.
Background technology
At present, applications in which robots interact with humans are mostly limited to companionship functions, such as chatting and playing games with the user. Moreover, human-robot interaction remains fairly passive; in many cases the user must guide the robot before any deeper interaction can take place.
In language teaching, learning goals are currently reached mainly through instruction by live teachers. Learners may also teach themselves by imitating audio-visual material, or learn through software on electronic devices such as computers. Each of these approaches has its own problems. Teaching by a live teacher takes up the teacher's time and places high demands on the teacher's professional skill. Self-study through audio-visual material or software demands strong self-discipline from the learner and provides no feedback on learning effect, so students cannot gauge their own level.
Summary of the invention
The object of the present invention is to remedy the deficiencies of the prior art in language teaching by proposing an intelligent robot speech follow-read learning method. The method comprises the following steps:
an output multi-modal teaching data step: the robot outputs multi-modal teaching data according to the language content to be learnt selected by the user;
a receive multi-modal learning data step: receiving and parsing the multi-modal learning data produced when the user imitates the teaching data;
a comparison judgment step: judging whether the teaching data output matches the learning data received;
an output-result step: if they match, outputting in multi-modal form a result indicating that the user's imitation has passed.
According to the intelligent robot speech follow-read learning method of the present invention, the method further comprises: in the output-result step, if they do not match, outputting in multi-modal form a result indicating that the user's imitation has not passed, and waiting for the user to imitate again.
According to the intelligent robot speech follow-read learning method of the present invention, outputting multi-modal teaching data includes: outputting text data, outputting audio data, and outputting pronunciation mouth-shape images.
According to the intelligent robot speech follow-read learning method of the present invention, when pronunciation mouth-shape images are output, the comparison judgment step compares the mouth-shape features of the user in a captured image with the standard mouth-shape features for producing the corresponding speech; if they are consistent the method proceeds to the output-result step, and if they are inconsistent the learning content is played again and a result indicating that the mouth-shape imitation has not passed is output in multi-modal form.
According to the intelligent robot speech follow-read learning method of the present invention, after outputting in multi-modal form a result indicating that the user's speech imitation or mouth-shape imitation has not passed, an output correcting the user's pronunciation and/or mouth shape is also generated by analysis.
According to another aspect of the present invention, an intelligent robot speech follow-read learning device is also provided. The device comprises the following units:
an output multi-modal teaching data unit, by which the robot outputs multi-modal teaching data according to the language content to be learnt selected by the user;
a receive multi-modal learning data unit, which receives and parses the multi-modal learning data produced when the user imitates the teaching data;
a comparison judgment unit, which judges whether the teaching data output matches the learning data received;
an output-result unit, which, if they match, outputs in multi-modal form a result indicating that the user's imitation has passed.
According to the intelligent robot speech follow-read learning device of the present invention, in the output-result unit, if they do not match, a result indicating that the user's imitation has not passed is output in multi-modal form, and the device waits for the user to imitate again.
According to the intelligent robot speech follow-read learning device of the present invention, the device further comprises: a unit for receiving a facial image of the user and performing face recognition so as to obtain the mouth-shape features of the user.
According to the intelligent robot speech follow-read learning device of the present invention, the device further comprises: an image matching unit, which compares the mouth-shape features of the user in the image with the standard mouth-shape features for producing the corresponding speech; if they are consistent the device proceeds to output the result, and if they are inconsistent it plays the learning content again and outputs in multi-modal form a result indicating that the mouth-shape imitation has not passed.
According to the intelligent robot speech follow-read learning device of the present invention, after outputting in multi-modal form a result indicating that the user's speech imitation or mouth-shape imitation has not passed, the device also generates by analysis an output correcting the user's pronunciation and/or mouth shape.
The benefits of the present invention are: by using the automatic follow-read learning method of the robot of the present invention, the effectiveness of language learning can be improved and the learning experience made more engaging. In addition, through multi-modal output, the learner can recognize in time his or her own level in reading the language aloud.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Accompanying drawing explanation
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention they serve to explain the invention and do not limit it. In the drawings:
Fig. 1 shows a flow chart of a method for speech follow-read teaching using a robot according to an embodiment of the present invention;
Fig. 2 shows a structural diagram of a device for speech follow-read teaching using a robot according to an embodiment of the present invention.
Detailed description of the invention
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, which presents the flow chart of a robot's automatic follow-read language teaching method, in step S101 the robot first outputs multi-modal teaching data according to the language content selected by the user. After this language teaching function is started, the user can select the language to be studied, such as English, French, or German, through an interface shown on a display or through another human-machine interface. Next, the user should also select the level corresponding to his or her own language proficiency. If too high a level is selected, the speech rate and vocabulary difficulty will exceed the learner's practical level and the learning effect will be unsatisfactory. Therefore, in one embodiment, the robot can also automatically help the learner select learning content according to the user's history of assessment results.
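This history-based content selection could be sketched as follows; the score scale (0-100), the thresholds, and the level names are illustrative assumptions, not taken from the patent:

```python
def recommend_level(history_scores, levels=("beginner", "intermediate", "advanced")):
    """Pick a learning level from past assessment scores (0-100).

    A hypothetical rule: average the recorded scores and map low
    averages to easier levels, so that speech rate and vocabulary
    difficulty stay within the learner's practical level.
    """
    if not history_scores:
        return levels[0]  # no history yet: start at the easiest level
    avg = sum(history_scores) / len(history_scores)
    if avg < 60:
        return levels[0]
    if avg < 85:
        return levels[1]
    return levels[2]
```

A real implementation would likely weight recent sessions more heavily, but the idea is the same: the robot, not the user, maps history to a suitable level.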
Upon receiving the instruction indicating that the language content to be learnt has been selected, the robot retrieves this language learning content and outputs it in a multi-modal way, for example producing the corresponding speech through an audio interface, demonstrating the mouth shape through mouth movements, and even making corresponding actions or expressions for specific content.
The learner hears the passage spoken by the robot and/or sees the mouth movements made by the robot, and then imitates the passage by speaking, i.e. by reading after the robot. Accordingly, in step S102, the robot receives the multi-modal learning data produced by the learner's imitation of the passage. Corresponding to the teaching content received, the learner not only produces speech but may also make movements in imitation. For the robot, the speech data and video data captured at this moment constitute the multi-modal learning data produced by the learner.
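The captured multi-modal learning data can be modeled as a simple container; the field names below are illustrative assumptions rather than structures defined by the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MultimodalLearningData:
    """Data captured while the learner imitates the teaching output."""
    audio_samples: List[float] = field(default_factory=list)  # microphone capture
    video_frames: List[bytes] = field(default_factory=list)   # camera capture (mouth region)

    def has_audio(self) -> bool:
        return bool(self.audio_samples)

    def has_video(self) -> bool:
        return bool(self.video_frames)
```

Keeping the two modalities in one object lets the later comparison step decide per modality whether enough data was captured to judge.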
Next, in step S103, the multi-modal teaching data output by the robot is compared with the learning data currently received from the learner, and it is judged whether the two match. For the audio data, it is judged whether the pronunciation, intonation, and speech rate of the two match. For the video data, the captured images of the learner's current mouth movements are compared with the pictures of mouth shapes or other actions prestored by the robot, and it is judged whether the similarity of the two is within a preset range.
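A minimal sketch of this comparison step, assuming feature extraction (e.g. pitch contours, MFCCs, mouth landmarks) has already reduced each modality to a numeric vector; the cosine-similarity measure and the threshold values are assumptions, since the patent only specifies matching against a preset range:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def judge_match(teach_audio, learn_audio,
                teach_mouth=None, learn_mouth=None,
                audio_threshold=0.8, mouth_threshold=0.7):
    """Return True when the learner's imitation matches the teaching data.

    Audio features (standing in for pronunciation, intonation, and
    speech rate) must clear audio_threshold; when mouth-shape features
    are available for both sides, they must clear mouth_threshold too.
    """
    if cosine_similarity(teach_audio, learn_audio) < audio_threshold:
        return False
    if teach_mouth is not None and learn_mouth is not None:
        if cosine_similarity(teach_mouth, learn_mouth) < mouth_threshold:
            return False
    return True
```

The mouth-shape check is optional, mirroring the embodiment in which image comparison only happens when pronunciation mouth-shape images were output.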
If the result of the comparison is that the multi-modal teaching data output by the robot matches the learning data produced by the learner's imitation of it, the method proceeds to step S105. In this step, the robot system can grade the user's imitation according to the degree of match and output in multi-modal form the result that the imitation has passed. The multi-modal output of a passing result may take the form of specific words of praise spoken by the robot, automatic promotion of the learner's level as encouragement, or a text display indicating that the learner's imitation has passed.
If the result of the comparison is that the multi-modal teaching data output by the robot does not match the learning data produced by the learner's imitation of it, the method proceeds to step S104. In this step, the robot outputs in a multi-modal way the result that the learner's imitation of the speech has not passed. In one embodiment, the system can stay on the current learning content and use a prompt tone while waiting for the learner to imitate again; the waiting time is set by the system according to circumstances. If the imitation keeps failing for a period of time or several times in a row, the robot can, through some form of output such as text on a display or a voice prompt, suggest that the learner lower the level of the learning content, or it can automatically move the learner down to a suitable learning level.
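The retry-and-demote behaviour described above could be tracked per session as follows; the failure limit of three and the returned action labels are illustrative assumptions, since the patent leaves the exact trigger to the system:

```python
class FollowReadSession:
    """Tracks consecutive failed imitations and lowers the level when needed."""

    def __init__(self, level, max_failures=3):
        self.level = level            # current difficulty level (0 = easiest)
        self.max_failures = max_failures
        self.failures = 0             # consecutive failed imitations

    def record_result(self, passed):
        """Record one imitation attempt; return the robot's next action."""
        if passed:
            self.failures = 0
            return "praise"           # multi-modal passing output
        self.failures += 1
        if self.failures >= self.max_failures and self.level > 0:
            self.level -= 1           # automatically drop to an easier level
            self.failures = 0
            return "lower_level"
        return "retry_prompt"         # prompt tone, wait for another attempt
```

The action string would then be dispatched to whichever output modalities (speech, display text, expression) the robot supports.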
In a specific embodiment, when pronunciation mouth-shape images are output, the above comparison step also compares the mouth-shape features of the user in the image with the standard mouth-shape features for producing the corresponding speech; if they are consistent the method proceeds to the output-result step, and if they are inconsistent the learning content is played again and the result that the mouth-shape imitation has not passed is output in multi-modal form.
In another preferred embodiment, after outputting in multi-modal form the result that the user's speech imitation or mouth-shape imitation has not passed, the system also generates by analysis an output correcting the user's pronunciation and/or mouth shape. The user can then adjust his or her pronunciation according to the corrective content and imitate again.
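One simple way to generate such corrective output is to align the target pronunciation against what was heard and report the differences. The sketch below uses a deliberately naive position-wise comparison of phoneme labels; a real system would use proper sequence alignment (e.g. edit distance) and acoustic scoring, and the phoneme representation here is an assumption:

```python
def correction_hints(target_phonemes, spoken_phonemes):
    """List human-readable hints describing where the imitation diverged."""
    hints = []
    for i, (want, got) in enumerate(zip(target_phonemes, spoken_phonemes)):
        if want != got:
            hints.append(f"position {i}: expected '{want}', heard '{got}'")
    extra = len(spoken_phonemes) - len(target_phonemes)
    if extra > 0:
        hints.append(f"{extra} extra phoneme(s) at the end")
    elif extra < 0:
        hints.append(f"{-extra} missing phoneme(s) at the end")
    return hints
```

The resulting hints could be spoken aloud or shown on the display, alongside the corresponding standard mouth-shape image, so the user knows exactly what to adjust before imitating again.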
The method of the present invention is implemented in a computer system, which may be arranged, for example, in the core control processor of the robot. For instance, the method described herein may be implemented as software carrying control logic, executed by the CPU of the robot control system. The functions described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium. When implemented in this fashion, the computer program comprises a set of instructions which, when run by a computer, cause the computer to perform the method implementing the functions described above. The software may be installed, temporarily or permanently, in a non-transitory tangible computer-readable medium such as a ROM chip, computer memory, a disk, or another storage medium. In addition to being realized in software, the logic described herein may be embodied using discrete components, an integrated circuit, programmable logic used in combination with a programmable logic device (for example, a field-programmable gate array (FPGA) or a microprocessor), or any other device comprising any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
Therefore, according to another aspect of the present invention, an intelligent robot speech follow-read learning device 200 is also provided. The device 200 comprises the following units:
an output multi-modal teaching data unit 201, by which the robot outputs multi-modal teaching data according to the language content to be learnt selected by the user;
a receive multi-modal learning data unit 202, which receives and parses the multi-modal learning data produced when the user imitates the teaching data;
a comparison judgment unit 203, which judges whether the teaching data output matches the learning data received;
an output-result unit 204, which, if they match, outputs in multi-modal form a result indicating that the user's imitation has passed.
According to the intelligent robot speech follow-read learning device 200 of the present invention, in the output-result unit 204, if they do not match, a result indicating that the user's imitation has not passed is output in multi-modal form, and the device waits for the user to imitate again.
According to the intelligent robot speech follow-read learning device of the present invention, the device further comprises a unit for receiving a facial image of the user and performing face recognition so as to obtain the mouth-shape features of the user.
According to the intelligent robot speech follow-read learning device 200 of the present invention, the device further comprises an image matching unit, which compares the mouth-shape features of the user in the image with the standard mouth-shape features for producing the corresponding speech; if they are consistent the device proceeds to output the result, and if they are inconsistent it plays the learning content again and outputs in multi-modal form a result indicating that the mouth-shape imitation has not passed.
According to the intelligent robot speech follow-read learning device of the present invention, after outputting in multi-modal form a result indicating that the user's speech imitation or mouth-shape imitation has not passed, the device also generates by analysis an output correcting the user's pronunciation and/or mouth shape.
It should be understood that the disclosed embodiments of the invention are not limited to the particular structures, process steps, or materials disclosed herein, but extend to equivalents thereof as would be understood by those of ordinary skill in the relevant arts. It should also be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
Reference throughout the description to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "one embodiment" or "an embodiment" in various places throughout the description do not necessarily all refer to the same embodiment.
Although the embodiments are disclosed above, the content described is only an embodiment adopted to facilitate understanding of the present invention and does not limit it. Any person skilled in the technical field of the present invention may make modifications and changes in the form and details of the implementation without departing from the spirit and scope disclosed by the present invention; however, the scope of patent protection of the present invention is still defined by the appended claims.
Claims (10)
1. An intelligent robot speech follow-read learning method, characterized in that the method comprises the following steps:
an output multi-modal teaching data step: the robot outputs multi-modal teaching data according to the language content to be learnt selected by the user;
a receive multi-modal learning data step: receiving and parsing the multi-modal learning data produced when the user imitates the teaching data;
a comparison judgment step: judging whether the teaching data output matches the learning data received;
an output-result step: if they match, outputting in multi-modal form a result indicating that the user's imitation has passed.
2. The intelligent robot speech follow-read learning method as claimed in claim 1, characterized in that the method comprises the following step:
in the output-result step, if they do not match, outputting in multi-modal form a result indicating that the user's imitation has not passed, and waiting for the user to imitate again.
3. The intelligent robot speech follow-read learning method as claimed in claim 1, characterized in that outputting multi-modal teaching data comprises:
outputting text data, outputting audio data, and outputting pronunciation mouth-shape images.
4. The intelligent robot speech follow-read learning method as claimed in claim 3, characterized in that, when pronunciation mouth-shape images are output, the comparison judgment step compares the mouth-shape features of the user in a captured image with the standard mouth-shape features for producing the corresponding speech; if they are consistent, the method proceeds to the output-result step; if they are inconsistent, the learning content is played again and a result indicating that the mouth-shape imitation has not passed is output in multi-modal form.
5. The intelligent robot speech follow-read learning method as claimed in any one of claims 1-4, characterized in that, after outputting in multi-modal form a result indicating that the user's speech imitation or mouth-shape imitation has not passed, an output correcting the user's pronunciation and/or mouth shape is also generated by analysis.
6. An intelligent robot speech follow-read learning device, characterized in that the device comprises the following units:
an output multi-modal teaching data unit, by which the robot outputs multi-modal teaching data according to the language content to be learnt selected by the user;
a receive multi-modal learning data unit, which receives and parses the multi-modal learning data produced when the user imitates the teaching data;
a comparison judgment unit, which judges whether the teaching data output matches the learning data received;
an output-result unit, which, if they match, outputs in multi-modal form a result indicating that the user's imitation has passed.
7. The intelligent robot speech follow-read learning device as claimed in claim 6, characterized in that, in the output-result unit, if they do not match, a result indicating that the user's imitation has not passed is output in multi-modal form, and the device waits for the user to imitate again.
8. The intelligent robot speech follow-read learning device as claimed in claim 6, characterized in that the device further comprises:
a unit for receiving a facial image of the user and performing face recognition so as to obtain the mouth-shape features of the user.
9. The intelligent robot speech follow-read learning device as claimed in claim 8, characterized in that the device further comprises:
an image matching unit, which compares the mouth-shape features of the user in the image with the standard mouth-shape features for producing the corresponding speech; if they are consistent, the device proceeds to output the result; if they are inconsistent, it plays the learning content again and outputs in multi-modal form a result indicating that the mouth-shape imitation has not passed.
10. The intelligent robot speech follow-read learning device as claimed in any one of claims 6-9, characterized in that, after outputting in multi-modal form a result indicating that the user's speech imitation or mouth-shape imitation has not passed, the device also generates by analysis an output correcting the user's pronunciation and/or mouth shape.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610836278.1A CN106251717A (en) | 2016-09-21 | 2016-09-21 | Intelligent robot speech follow read learning method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106251717A true CN106251717A (en) | 2016-12-21 |
Family
ID=57599194
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610836278.1A Pending CN106251717A (en) | 2016-09-21 | 2016-09-21 | Intelligent robot speech follow read learning method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106251717A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106781721A (en) * | 2017-03-24 | 2017-05-31 | 北京光年无限科技有限公司 | A kind of children English exchange method and robot based on robot |
CN109830132A (en) * | 2019-03-22 | 2019-05-31 | 邱洵 | A kind of foreign language language teaching system and teaching application method |
CN111753604A (en) * | 2019-06-03 | 2020-10-09 | 广东小天才科技有限公司 | Learning equipment-based point reading method and learning equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009157733A1 (en) * | 2008-06-27 | 2009-12-30 | Yujin Robot Co., Ltd. | Interactive learning system using robot and method of operating the same in child education |
CN101739852A (en) * | 2008-11-13 | 2010-06-16 | 许罗迈 | Speech recognition-based method and device for realizing automatic oral interpretation training |
CN101807356A (en) * | 2009-02-13 | 2010-08-18 | 高荣冠 | English level positioning method |
CN203149873U (en) * | 2012-12-26 | 2013-08-21 | 陈修志 | Push-button type study and entertainment device |
CN105070118A (en) * | 2015-07-30 | 2015-11-18 | 广东小天才科技有限公司 | Pronunciation correcting method and device for language learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108000526B (en) | Dialogue interaction method and system for intelligent robot | |
CN109101545A (en) | Natural language processing method, apparatus, equipment and medium based on human-computer interaction | |
US10579900B2 (en) | Simple programming method and device based on image recognition | |
TWI713000B (en) | Online learning assistance method, system, equipment and computer readable recording medium | |
US20050246063A1 (en) | Robot for participating in a joint performance with a human partner | |
CN111290568A (en) | Interaction method and device and computer equipment | |
CN108733209A (en) | Man-machine interaction method, device, robot and storage medium | |
CN104021326B (en) | A kind of Teaching Methods and foreign language teaching aid | |
CN108470188A (en) | Exchange method based on image analysis and electronic equipment | |
CN107020632A (en) | A kind of control system of teaching robot | |
CN116543082B (en) | Digital person generation method and device and digital person generation system | |
CN106251717A (en) | Intelligent robot speech follow read learning method and device | |
KR102367862B1 (en) | Device and method for supporting daily tasks for ADHD childrendaily task performing support apparatus for ADHD children and method therefor | |
CN109015647A (en) | Mutual education robot system and its terminal | |
CN104252287A (en) | Interaction device and method for improving expression capability based on interaction device | |
CN110444087A (en) | A kind of intelligent language teaching machine device people | |
KR102410110B1 (en) | How to provide Korean language learning service | |
CN111339881A (en) | Baby growth monitoring method and system based on emotion recognition | |
CN107818783A (en) | A kind of mutual method and device of man-machine multi-modal on-vehicle safety sexual intercourse based on vocal print technology | |
CN106445153A (en) | Man-machine interaction method and device for intelligent robot | |
CN110099295A (en) | Voice control method for television set, device, equipment and storage medium | |
CN117635383A (en) | Virtual teacher and multi-person cooperative talent training system, method and equipment | |
CN117615182A (en) | Live broadcast and interaction dynamic switching method, system and terminal based on number of participants | |
WO2024103637A9 (en) | Dance movement generation method, computer device, and storage medium | |
CN111984161A (en) | Control method and device of intelligent robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20161221 |