CN106205237A - Training method and device for a second mother tongue based on movement response and drawing response - Google Patents

Training method and device for a second mother tongue based on movement response and drawing response

Info

Publication number
CN106205237A
CN106205237A (application CN201610792343.5A)
Authority
CN
China
Prior art keywords
information, current, audio, handwriting, characteristic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610792343.5A
Other languages
Chinese (zh)
Inventor
律世刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201610792343.5A
Publication of CN106205237A
Legal status: Pending

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention provides a training method and device for a second mother tongue based on movement response and drawing response. The method includes: collecting the user's current body-movement information and current drawing-handwriting information; extracting current body-movement feature information and current drawing-handwriting feature information; and, after judging that the extracted feature information matches preset standard feature information, playing the next audio-video item. By playing the audio-video information corresponding to a word or phrase, having the learner imitate the body movement or drawing action shown, collecting the learner's body-movement feature information and drawing-handwriting feature information, and matching it against standard feature information, the learner internalizes the speech signal into the subconscious through muscle-movement response, skin tactile response, and the emotional response of the drawing process, thereby simulating the natural acquisition process of a second mother tongue.

Description

Training method and device for a second mother tongue based on movement response and drawing response
Technical field
The present invention relates to the field of educational training technology, and in particular to a training method and device for a second mother tongue based on movement response and drawing response.
Background technology
Existing second-language learning methods rarely achieve the mother-tongue-level conditioned reflexes between speech signals and movement, touch, and emotion, because the acquisition process of a mother tongue is entirely different from a conventional foreign-language learning process.
At a particular stage before speaking (usually before age one), a mother-tongue learner, although still mute, can understand some words and phrases by observing and imitating the movements of its parents; and, on hearing an adult's instruction, although unable to answer aloud, it can respond with a movement. The present invention simulates this natural mother-tongue acquisition process on the basis of movement response and drawing response (drawing is also a form of fine-muscle movement): words are linked with human movement, touch, and emotional expression, so that the speech signal is internalized into the learner's subconscious and the language reaches the level of a second mother tongue.
Summary of the invention
The present invention provides a training method and device for a second mother tongue based on movement response and drawing response, to solve the problem that language teaching in the prior art cannot achieve a mother-tongue learning effect.
In a first aspect, the present invention provides a training method for a second mother tongue based on movement response and drawing response, including:
obtaining word-and-phrase information;
selecting and playing, according to a preset correspondence between audio-video information and word-and-phrase information, the audio-video information corresponding to the word-and-phrase information, the audio-video information being recorded, per word or phrase, as the pronunciation in the target language together with footage of the movement and of the drawing process;
after the current audio-video information has been played, collecting the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information as the user draws;
obtaining current body-movement feature information and/or current drawing-handwriting feature information from the current body-movement information and/or the current drawing-handwriting information;
according to a preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information, playing the next audio-video information.
Preferably, the audio-video information is played either in its stored order or shuffled.
Preferably, collecting the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information after the current audio-video information has been played includes:
collecting, via a motion-tracking device arranged on the body, a handwriting-tracking and handwriting-recognition device, and a camera, the user's current body-movement information during imitation and/or the user's current drawing-handwriting information while drawing.
Preferably, the method further includes: according to the preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and/or current drawing-handwriting feature information does not match the standard feature information, continuing to play, replaying, or pausing the current audio-video information.
Preferably, judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information includes: if the matching difference between the current body-movement feature information and/or current drawing-handwriting feature information and the standard feature information is within a preset threshold range, judging that the match is successful.
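The threshold comparison described in this preferred embodiment can be sketched as follows. The patent leaves the "matching difference" measure unspecified, so the Euclidean distance, the flat feature-vector representation, and the function name are illustrative assumptions only:

```python
import math

def match_successful(current_features, standard_features, threshold):
    """Return True when the matching difference between the collected
    feature vector and the preset standard feature vector lies within
    the preset threshold range (here: Euclidean distance <= threshold)."""
    if len(current_features) != len(standard_features):
        return False
    difference = math.sqrt(sum(
        (c - s) ** 2 for c, s in zip(current_features, standard_features)
    ))
    return difference <= threshold

# A learner's imitation close to the standard passes the threshold.
print(match_successful([0.9, 1.1, 2.0], [1.0, 1.0, 2.0], threshold=0.5))  # True
```

Any other distance or similarity measure (dynamic time warping for trajectories, for instance) could stand in for the Euclidean distance; the claim only requires that the difference fall within a preset threshold range.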
In a second aspect, the present invention provides a training device for a second mother tongue based on movement response and drawing response, including:
an acquisition module, for obtaining word-and-phrase information;
a playing module, for selecting and playing, according to a preset correspondence between audio-video information and word-and-phrase information, the audio-video information corresponding to the word-and-phrase information, the audio-video information being recorded, per word or phrase, as the pronunciation in the target language together with footage of the movement and of the drawing process;
a collection module, for collecting, after the current audio-video information has been played, the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information as the user draws;
a feature-extraction module, for obtaining current body-movement feature information and/or current drawing-handwriting feature information from the current body-movement information and/or the current drawing-handwriting information;
a processing module, for playing, according to the preset correspondence between audio-video information and standard feature information, the next audio-video information after judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information.
Preferably, the processing module is further configured to: according to the preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and/or current drawing-handwriting feature information does not match the standard feature information, continue to play, replay, or pause the current audio-video information.
Preferably, the playing module includes one or more of a display screen, a projector, a virtual-reality device, an augmented-reality device, and a holographic-imaging device.
Preferably, the collection module includes a voice-collection module and an image-collection module; the voice-collection module includes a recorder, and the image-collection module includes a motion-tracking device, a handwriting-tracking and handwriting-recognition device, and a camera.
Preferably, the processing module is specifically configured to: if the matching difference between the current body-movement feature information and/or current drawing-handwriting feature information and the standard feature information is within a preset threshold range, judge that the match is successful.
As can be seen from the above technical solution, in the training method and device for a second mother tongue based on movement response and drawing response provided by the present invention, the footage corresponding to a word or phrase is played; after the learner imitates the body movement or the drawing action, the learner's body-movement feature information and drawing-handwriting feature information are collected and matched against preset standard feature information. The learner thus internalizes the speech signal into the subconscious through muscle-movement response, skin tactile response, and the emotional response of the drawing process, simulating the acquisition process of a second mother tongue.
Brief description of the drawings
Fig. 1 is a flow diagram of the training method for a second mother tongue based on movement response and drawing response provided by one embodiment of the present invention;
Fig. 2 is a structural diagram of the training device for a second mother tongue based on movement response and drawing response provided by one embodiment of the present invention.
Detailed description of the embodiments
The detailed embodiments of the present invention are described below with reference to the accompanying drawings. The following embodiments are used to illustrate the present invention but do not limit its scope.
Fig. 1 shows a training method for a second mother tongue based on movement response and drawing response provided by one embodiment of the present invention, including:
S11: obtain word-and-phrase information.
In this step, it should be noted that the words and phrases of the target language to be learned, together with the audio-video information corresponding to each, are first stored in a database. Before playback, the system must first determine which word or phrase is to be played, i.e. obtain the word-and-phrase information.
S12: select and play, according to the preset correspondence between audio-video information and word-and-phrase information, the audio-video information corresponding to the word-and-phrase information; the audio-video information is recorded, per word or phrase, as the pronunciation in the target language together with footage of the movement and of the drawing process.
In this step, it should be noted that learning a second language (such as English, French, Russian, or German) begins with its words and phrases. For each word or phrase, the corresponding pronunciation is recorded, and footage of a movement associated with the word or phrase and footage of a drawing process are configured.
The database can hold many audio-video items, so a correspondence between audio-video information and word-and-phrase information must be preset.
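The preset correspondences held in the database can be sketched as a pair of lookup tables. The dictionary layout, the file paths, and the feature values are assumptions made purely for illustration; the patent does not specify a storage format:

```python
# Hypothetical database records: each word/phrase maps to its audio-video
# item, and each audio-video item maps to its standard feature information.
PHRASE_TO_AV = {
    "stand up": "av/stand_up.mp4",
    "draw a house": "av/draw_a_house.mp4",
}

AV_TO_STANDARD_FEATURES = {
    "av/stand_up.mp4": [1.0, 0.0, 2.0],      # standard body-movement features
    "av/draw_a_house.mp4": [0.2, 0.8, 0.5],  # standard handwriting features
}

def select_audio_video(phrase):
    """Step S12: look up the audio-video item for a word or phrase."""
    return PHRASE_TO_AV[phrase]

print(select_audio_video("stand up"))  # av/stand_up.mp4
```

In a real system both tables would live in the preset database; keeping them keyed by the audio-video item mirrors the patent's "correspondence between audio-video information and standard feature information".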
For example, for the phrase "stand up", the learner hears its pronunciation and responds with the action of standing up; the entry for "stand up" therefore contains both the speech and the movement footage.
In addition, to emphasize the visual side of teaching, audio-video information is configured in the database for every word and phrase.
During playback, one or more of a display screen, a virtual-reality device, a projector, an augmented-reality device, and a holographic-imaging device can be used; the corresponding audio-video information is selected from the database according to the preset correspondence and played.
S13: after the current audio-video information has been played, collect the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information as the user draws.
In this step, it should be noted that during learning, a learner typically proceeds to the next word or phrase only after demonstrating, through a conditioned-reflex response, an understanding of the current one. Therefore, after the current audio-video information has been played, the learner (user) imitates the movement shown in the footage, or imitates the drawing process shown (for example, imitating a teacher drawing a house). A motion-tracking device, a handwriting-tracking and handwriting-recognition device, and a camera track and capture the learner's live actions, yielding the current body-movement information and current drawing-handwriting information. Motion-capture and handwriting-capture-and-recognition are relatively mature technologies at present and are not described in detail.
It should also be noted that during learning the system may play only voice information (such as the pronunciation of a word or sentence); the learner can then make movement or drawing responses based on the voice alone, reproduced from memory rather than imitated from footage. In that case, the motion-tracking device, the drawing-handwriting tracking device, and the camera, arranged on or around the learner in advance, capture the learner's live actions, yielding the current body-movement and drawing-handwriting information.
Likewise, the system may play only text information (possibly footage containing text); the learner can make movements based on the text, again reproduced from memory rather than imitated from footage. The motion-tracking device, the drawing-handwriting tracking device, and the camera, arranged in advance, capture the learner's live actions and drawing handwriting, yielding the current body-movement information.
S14: obtain current body-movement feature information and/or current drawing-handwriting feature information from the current body-movement information and/or the current drawing-handwriting information.
In this step, it should be noted that after the system receives the current drawing-handwriting information and current body-movement information, it extracts their feature information. Movement-feature extraction, handwriting-feature extraction, and 3-D image feature extraction are relatively mature technologies and are not repeated here.
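As the passage notes, the extraction itself relies on mature techniques, so the patent gives no algorithm. Purely for illustration, one very simple feature could be a length-normalized trajectory of a single tracked point; this choice is our assumption, not the patent's:

```python
def trajectory_features(points, n=4):
    """Resample a tracked (x, y) trajectory to n roughly evenly spaced
    points, a toy stand-in for real motion/handwriting feature extraction."""
    if len(points) < 2:
        return list(points) * n
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

print(trajectory_features([(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)], n=3))
# [(0, 0), (2, 0), (4, 0)]
```

Resampling to a fixed length makes trajectories of different durations comparable, which is what the later threshold comparison against fixed-size standard features requires.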
S15: according to the preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information, play the next audio-video information.
In this step, it should be noted that standard feature information must be configured in the database for each audio-video item, for use in subsequent matching. When matching the current drawing-handwriting feature information and current body-movement feature information against standard feature information, the corresponding standard feature information is first retrieved according to the preset correspondence between audio-video information and standard feature information, and then matched.
Matching the current drawing-handwriting feature information against the standard handwriting feature information judges whether the learner has drawn correctly.
Matching the current body-movement feature information against the standard movement feature information deepens the learner's understanding of the word or phrase.
Many feature-matching approaches exist and are not enumerated here. One approach is: if the matching difference between the current body-movement feature information and current drawing-handwriting feature information and the standard feature information is within a preset threshold range, the match is judged successful; otherwise, it is judged unsuccessful.
In addition, according to the preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and current drawing-handwriting feature information do not match the standard feature information, the system continues to play, replays, or pauses the current audio-video information.
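The overall play/collect/match/advance cycle of steps S11 through S15, together with the replay-on-failure behaviour just described, might look like the following sketch. All callables are hypothetical placeholders for the modules the patent describes, and the retry cap is our addition:

```python
def training_loop(phrases, play, collect, extract, matches, max_retries=3):
    """Play each phrase's audio-video item; advance only after the
    learner's imitation matches the standard features, else replay."""
    for phrase in phrases:
        for attempt in range(max_retries):
            play(phrase)                   # S12: play the audio-video item
            raw = collect()                # S13: capture movement/handwriting
            features = extract(raw)        # S14: feature extraction
            if matches(phrase, features):  # S15: compare with the standard
                break                      # match succeeded: next phrase
            # no match: the loop replays the current item

# Minimal demonstration with stub callables standing in for real modules.
log = []
training_loop(
    ["stand up"],
    play=lambda p: log.append(("play", p)),
    collect=lambda: "raw-data",
    extract=lambda raw: [1.0],
    matches=lambda p, f: True,
)
print(log)  # [('play', 'stand up')]
```

Pausing instead of replaying, or continuing playback, would be alternative branches at the "no match" point; the claim allows any of the three.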
At a particular stage before speaking (usually before age one), a mother-tongue learner, although still mute, can understand some words and phrases by observing and imitating the movements of its parents; and, on hearing an adult's instruction, although unable to answer aloud, it can respond with a movement. The present invention simulates this natural mother-tongue acquisition process on the basis of movement response and drawing response (drawing is also a form of fine-muscle movement), achieving the conversion of the speech signal into a second mother tongue.
In the training method and device for a second mother tongue based on movement response and drawing response provided by the embodiment of the present invention, the audio-video information corresponding to a word or phrase is played; after the learner imitates the body movement or the drawing action, the learner's body-movement feature information and drawing-handwriting feature information are collected and matched against preset standard feature information, so that the learner internalizes the speech signal into the subconscious through muscle-movement response, skin tactile response, and the emotional response of the drawing process, simulating the acquisition process of a second mother tongue.
Fig. 2 shows a training device for a second mother tongue based on movement response and drawing response provided by one embodiment of the present invention, including an acquisition module 21, a playing module 22, a collection module 23, a feature-extraction module 24, and a processing module 25, wherein:
the acquisition module 21 is used to obtain word-and-phrase information;
the playing module 22 is used to select and play, according to the preset correspondence between audio-video information and word-and-phrase information, the audio-video information corresponding to the word-and-phrase information; the audio-video information is recorded, per word or phrase, as the pronunciation in the target language together with the movement footage. The playing module includes one or more of a display screen, a virtual-reality device, a projector, an augmented-reality device, and a holographic-imaging device;
the collection module 23 is used to collect, after the current audio-video information has been played, the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information as the user draws. The collection module includes a voice-collection module and an image-collection module; the voice-collection module includes a recorder, and the image-collection module includes a motion-tracking device, a handwriting-tracking and handwriting-recognition device, and a camera;
the feature-extraction module 24 is used to obtain current body-movement feature information and/or current drawing-handwriting feature information from the current body-movement information and/or the current drawing-handwriting information;
the processing module 25 is used to play, according to the preset correspondence between audio-video information and standard feature information, the next audio-video information after judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information; and, after an unsuccessful match, to continue to play, replay, or pause the current audio-video information.
In use, the acquisition module 21 obtains the word-and-phrase information and sends it to the playing module 22. The playing module 22 plays the audio-video information from the database on a display screen, virtual-reality device, projector, augmented-reality device, or holographic-imaging device. After watching the playback, the learner imitates the movement and drawing shown in the footage; a motion-tracking device can be arranged on the learner's body in advance. The collection module 23 records the learner's imitated movements and drawing handwriting and sends the handwriting information and movement information to the feature-extraction module 24, which applies the corresponding feature-extraction techniques and sends the extracted drawing-handwriting feature information and body-movement feature information to the processing module 25. The processing module 25 receives them, retrieves the corresponding standard feature information from the database, performs the match, and finally takes the corresponding action according to the matching result.
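The five-module flow of Fig. 2 can be sketched as a small class whose methods mirror the acquisition, playback, collection, feature-extraction, and processing stages. Every name, the stubbed sensor values, and the database layout here are illustrative assumptions, not the patent's implementation:

```python
class SecondMotherTongueTrainer:
    """Illustrative skeleton of the device of Fig. 2 (modules 21-25)."""

    def __init__(self, database):
        self.database = database  # preset correspondences and standard features

    def acquire_phrase(self, phrase):        # acquisition module 21
        return phrase if phrase in self.database else None

    def play(self, phrase):                  # playing module 22
        return self.database[phrase]["av"]

    def collect(self):                       # collection module 23
        return {"movement": [1.0], "handwriting": [0.5]}  # stubbed sensors

    def extract_features(self, raw):         # feature-extraction module 24
        return raw["movement"] + raw["handwriting"]

    def process(self, phrase, features):     # processing module 25
        return features == self.database[phrase]["standard"]

db = {"stand up": {"av": "av/stand_up.mp4", "standard": [1.0, 0.5]}}
trainer = SecondMotherTongueTrainer(db)
raw = trainer.collect()
print(trainer.process("stand up", trainer.extract_features(raw)))  # True
```

Keeping each stage behind its own method matches the patent's module boundaries, so a real motion-capture backend or matcher could replace `collect` or `process` without touching the rest.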
Since the device described in the embodiment of the present invention operates on the same principle as the method described in the above embodiment, the more detailed explanation is not repeated here.
It should be noted that the embodiments of the present invention can implement the relevant functional modules by means of a hardware processor.
In the training method and device for a second mother tongue based on movement response and drawing response provided by the embodiment of the present invention, the footage corresponding to a word or phrase is played; after the learner imitates the body movement or the drawing action, the learner's body-movement feature information and drawing-handwriting feature information are collected and matched against preset standard feature information, so that the learner internalizes the speech signal into the subconscious through muscle-movement response, skin tactile response, and the emotional response of the drawing process, simulating the acquisition process of a second mother tongue.
In addition, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
Those of ordinary skill in the art will understand that the above embodiments are merely intended to illustrate the technical solution of the present invention and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some or all of the technical features therein, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope defined by the claims of the present invention.

Claims (10)

1. A training method for a second mother tongue based on movement response and drawing response, characterized in that it includes:
obtaining word-and-phrase information;
selecting and playing, according to a preset correspondence between audio-video information and word-and-phrase information, the audio-video information corresponding to the word-and-phrase information, the audio-video information being recorded, per word or phrase, as the pronunciation in the target language together with footage of the movement and of the drawing process;
after the current audio-video information has been played, collecting the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information as the user draws;
obtaining current body-movement feature information and/or current drawing-handwriting feature information from the current body-movement information and/or the current drawing-handwriting information;
according to a preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information, playing the next audio-video information.
2. The method according to claim 1, characterized in that the audio-video information is played either in its stored order or shuffled.
3. The method according to claim 1, characterized in that collecting the user's current body-movement information as the user imitates the movement in the footage and/or the user's current drawing-handwriting information after the current audio-video information has been played includes:
collecting, via a motion-tracking device arranged on the body, a handwriting-tracking and handwriting-recognition device, and a camera, the user's current body-movement information during imitation and/or the user's current drawing-handwriting information while drawing.
4. The method according to claim 1, characterized in that it further includes: according to the preset correspondence between audio-video information and standard feature information, after judging that the current body-movement feature information and/or current drawing-handwriting feature information does not match the standard feature information, continuing to play, replaying, or pausing the current audio-video information.
5. The method according to claim 1, characterized in that judging that the current body-movement feature information and/or current drawing-handwriting feature information matches the standard feature information includes: if the matching difference between the current body-movement feature information and/or current drawing-handwriting feature information and the standard feature information is within a preset threshold range, judging that the match is successful.
6. the training devices of second mother tongue reacted based on movement response and drawing, it is characterised in that including:
Acquisition module, is used for obtaining words and phrases information;
Playing module, selects to play and described words and phrases information pair for the default corresponding relation according to audio and video information and words and phrases information The audio and video information answered, described audio and video information is according to the pronunciation of words and phrases typing in object language, action, painting process image;
Acquisition module, after playing at current audio and video information, gathers current body action when user imitates image action and believes Breath and/or the current drawing handwriting information of user's drawing;
Characteristic extracting module, works as obtaining according to described current body action message and/or described current drawing handwriting information Front body action characteristic information and/or current drawing handwriting characteristic information;
Processing module, for the corresponding relation according to audio and video information and the standard feature information preset, it is judged that determine described currently After body action characteristic information and/or current drawing handwriting characteristic information and the success of standard feature information matches, play the next one Audio and video information.
Device the most according to claim 6, it is characterised in that described processing module, is additionally operable to:
Corresponding relation according to default audio and video information Yu standard feature information, it is judged that determine that described current body motion characteristic is believed Breath and/or current drawing handwriting characteristic information and standard feature information matches unsuccessful after, continue to play, or again play, or Suspend and play current audio and video information.
Device the most according to claim 6, it is characterised in that described playing module includes display screen, projector, virtual existing Real equipment, augmented reality equipment, holographic imaging equipment one or more.
Device the most according to claim 6, it is characterised in that described acquisition module includes that voice acquisition module and image are adopted Collection module, described voice acquisition module include phonographic recorder, described image collecting module include motion tracking device, drawing person's handwriting with Track and person's handwriting identification device and camera head.
10. according to the device described in claim 6 or 7, it is characterised in that described processing module specifically for: if described currently Difference is mated at default threshold between body action characteristic information and/or current drawing handwriting characteristic information and standard feature information Within the scope of value, then judge to determine current body motion characteristic information and/or current drawing handwriting characteristic information and standard feature Information matches success.
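Claims 5 and 10 describe the same matching rule: a match succeeds when the difference between the captured feature information and the standard feature information falls within a preset threshold range, and claims 1 and 4 make playback advance or repeat based on that outcome. A minimal sketch of this logic follows; the function names, the Euclidean distance metric, and the threshold value are illustrative assumptions, not disclosed in the patent:

```python
import math

def feature_distance(current, standard):
    """Difference between captured and standard feature vectors.

    Euclidean distance is used here purely as an example metric;
    the patent only speaks of a "matching difference".
    """
    return math.sqrt(sum((c - s) ** 2 for c, s in zip(current, standard)))

def matches(current, standard, threshold=0.5):
    """Claims 5/10: the match succeeds when the difference is within a preset threshold range."""
    return feature_distance(current, standard) <= threshold

def next_action(current, standard, threshold=0.5):
    """Claims 1/4: advance to the next audio-visual item on success;
    otherwise repeat the current one (one of the listed fallback options)."""
    if matches(current, standard, threshold):
        return "play_next"
    return "replay_current"
```

The same skeleton would apply to either input channel (body motion features or drawing handwriting features), since the claims treat them interchangeably via "and/or".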
CN201610792343.5A 2016-08-31 2016-08-31 Training method and device for a second mother tongue based on motion response and drawing response Pending CN106205237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610792343.5A CN106205237A (en) 2016-08-31 2016-08-31 Training method and device for a second mother tongue based on motion response and drawing response

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610792343.5A CN106205237A (en) 2016-08-31 2016-08-31 Training method and device for a second mother tongue based on motion response and drawing response

Publications (1)

Publication Number Publication Date
CN106205237A true CN106205237A (en) 2016-12-07

Family

ID=58085486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610792343.5A Pending CN106205237A (en) 2016-08-31 2016-08-31 Based on movement response and the training method of the second mother tongue of drawing reaction and device

Country Status (1)

Country Link
CN (1) CN106205237A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971647A (en) * 2017-02-07 2017-07-21 广东小天才科技有限公司 Spoken-language training method and system combining body language
CN107256389A (en) * 2017-05-26 2017-10-17 山西农业大学 Authentication and encryption method based on handwriting feature recognition
WO2017206861A1 (en) * 2016-05-29 2017-12-07 陈勇 Human-machine conversation platform
CN108040289A (en) * 2017-12-12 2018-05-15 天脉聚源(北京)传媒科技有限公司 Video playing method and device
CN109712480A (en) * 2018-11-29 2019-05-03 郑昕匀 Word reading and memory training device, data processing method and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007323268A (en) * 2006-05-31 2007-12-13 Oki Electric Ind Co Ltd Video providing device
CN201638406U (en) * 2009-06-23 2010-11-17 方凤 Infant language evaluation and expansion trainer
CN102077260A (en) * 2008-06-27 2011-05-25 悠进机器人股份公司 Interactive learning system using robot and method of operating the same in child education
CN202093690U (en) * 2011-05-04 2011-12-28 李慧 Electronic music education table for infants
CN102567626A (en) * 2011-12-09 2012-07-11 江苏矽岸信息技术有限公司 Interactive language learning system using a mother-tongue-acquisition teaching mode
CN102737352A (en) * 2011-04-12 2012-10-17 施章祖 Personalized early education system
CN103576840A (en) * 2012-07-24 2014-02-12 上海辰戌信息科技有限公司 Gesture and motion-sensing control system based on stereoscopic vision
CN103745423A (en) * 2013-12-27 2014-04-23 浙江大学 Mouth-shape teaching system and mouth-shape teaching method
CN103794094A (en) * 2012-10-29 2014-05-14 无敌科技(西安)有限公司 System and method for memorizing words according to subjective forms

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHAO Ting: "Exploring the rational application of Total Physical Response (TPR) in preschool English teaching", Journal of Mudanjiang College of Education *
GONG Xiaoli: "Research on the application of Total Physical Response (TPR) in children's English teaching", China Masters' Theses Full-text Database *

Similar Documents

Publication Publication Date Title
CN106205237A (en) Training method and device for a second mother tongue based on motion response and drawing response
Fothergill et al. Instructing people for training gestural interactive systems
Yu et al. The role of embodied intention in early lexical acquisition
Brashear et al. American sign language recognition in game development for deaf children
CN106409030A (en) Customized foreign spoken language learning system
CN109101879B (en) Posture interaction system for VR virtual classroom teaching and implementation method
CN107633719A (en) Anthropomorphic representation artificial intelligence tutoring system and method based on multilingual man-machine interaction
CN104537925B Language training auxiliary system and method for children with language impairments
Zarrieß et al. Pentoref: A corpus of spoken references in task-oriented dialogues
CN102063903A (en) Speech interactive training system and speech interactive training method
US20180144651A1 (en) Teaching method using pupil's own likeness as a virtual teacher
CN109817244A (en) Oral evaluation method, apparatus, equipment and storage medium
Tan et al. Can you copy me? An expression-mimicking serious game
Busso et al. Recording audio-visual emotional databases from actors: a closer look
CN109712449A Intelligent education learning system for improving children's learning initiative
CN108937969A Method and device for evaluating cognitive state
CN110930780A (en) Virtual autism teaching system, method and equipment based on virtual reality technology
CN105872828A (en) Television interactive learning method and device
Querol-Julián et al. PechaKucha presentations to develop multimodal communicative competence in ESP and EMI live online lectures: A team-teaching proposal
CN111477055A (en) Virtual reality technology-based teacher training system and method
Naert et al. Lsf-animal: A motion capture corpus in french sign language designed for the animation of signing avatars
CN108815845B Information processing method and device for human-computer interaction, computer equipment and readable medium
WO2017028272A1 (en) Early education system
Rathinavelu et al. Three dimensional articulator model for speech acquisition by children with hearing loss
CN108877836A Method and device for evaluating speech state

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161207