CN106558252A - Computer-implemented spoken language practice method and device - Google Patents
- Publication number
- CN106558252A (application number CN201510629522.2A)
- Authority
- CN
- China
- Prior art keywords
- information
- user
- speech input
- session
- input information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
Abstract
It is an object of the present invention to provide a spoken language practice method and device implemented by a computer device. The computer device obtains speech input information from a user, the speech input information corresponding to a specific spoken-dialogue scenario; within that scenario, it provides the user with response information corresponding to the speech input; and when the session ends, it provides the user with feedback information on the session, the feedback information including session evaluation information for the user and/or session suggestion information. Compared with the prior art, the present invention centers on a specific spoken-dialogue scenario and supplies response information corresponding to the user's speech input, thereby realizing a one-on-one conversation between human and machine.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a spoken language practice method and device implemented by a computer device.
Background art
With the development of speech processing and natural language processing technology, computer devices can understand human language ever more accurately, and speech input has become an important input mode in human-computer interaction. With the development of society, foreign language learning has become a pressing need, and oral ability is one of the most important components of foreign language competence: pronouncing purely and expressing oneself accurately are key indicators by which that competence is measured.
At present, traditional spoken foreign language study relies on imitation, reading aloud and simulated dialogues with other learners, or on one-on-one tutoring with a hired teacher.
However, when practicing through imitation, reading aloud or simulated dialogues with other learners, it is difficult for a learner to discover his or her own problems and deficiencies in oral ability, and therefore hard to improve that ability in a targeted way. One-on-one tutoring does allow targeted spoken practice, but it is relatively costly, and its effectiveness depends on the ability of the tutor.
Summary of the invention
It is an object of the present invention to provide a spoken language practice method and device implemented by a computer device.
According to one aspect of the present invention, a spoken language practice method implemented by a computer device is provided, the method comprising the following steps:
- obtaining speech input information from a user, the speech input information corresponding to a specific spoken-dialogue scenario;
- in the spoken-dialogue scenario, providing the user with response information corresponding to the speech input information;
- when the session ends, providing the user with feedback information on the session, the feedback information comprising session evaluation information for the user and/or session suggestion information.
According to another aspect of the present invention, a device for spoken language practice in a computer device is also provided, the device comprising:
- means for obtaining speech input information from a user, the speech input information corresponding to a specific spoken-dialogue scenario;
- means for providing the user, within the spoken-dialogue scenario, with response information corresponding to the speech input information;
- means for providing the user, when the session ends, with feedback information on the session, the feedback information comprising session evaluation information for the user and/or session suggestion information.
According to a further aspect of the invention, a spoken language practice system is also provided. The system comprises a user device and a network device, wherein the network device includes the aforementioned device for spoken language practice in a computer device according to the other aspect of the invention, and the user device comprises:
- means for receiving speech input information from the user, the speech input information corresponding to a specific spoken-dialogue scenario;
- means for sending information to and receiving information from the network device, the means being configured to:
  - send the speech input information to the network device;
  - receive from the network device the response information corresponding to the speech input information;
  - receive from the network device the feedback information on the session, the feedback information comprising session evaluation information for the user and/or session suggestion information;
- means for providing feedback to the user, the means being configured to:
  - provide the response information to the user;
  - provide the feedback information on the session to the user.
Compared with the prior art, the present invention centers on a specific spoken-dialogue scenario and provides response information corresponding to the user's speech input, thereby realizing a one-on-one conversation between human and machine. Moreover, in the course of the dialogue, the present invention can also record and analyze the user's various problems in pronunciation and semantic expression and, after the session is completed, comment on the session and help the user gradually develop better speaking habits. This enables users to train their spoken foreign language by themselves, and to understand and correct their own deficiencies in speaking without one-on-one tutoring.
Description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 is a flow chart of a method for spoken language practice according to one embodiment of the invention;
Fig. 2 is a schematic diagram of a device for spoken language practice according to another embodiment of the invention.
In the drawings, identical or similar reference numerals denote identical or similar components.
Detailed description of the embodiments
Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flow charts. Although a flow chart describes the operations as a sequential process, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
The term "computer device" (also referred to as "computer") as used in this context refers to an intelligent electronic device that can execute predetermined processes such as numerical computation and/or logical computation by running predetermined programs or instructions. It may comprise a processor and a memory, with the processor executing program instructions prestored in the memory to carry out the predetermined processes; alternatively, the predetermined processes may be carried out by hardware such as an ASIC, FPGA or DSP, or by a combination of the two. Computer devices include, but are not limited to, servers, personal computers, notebook computers, tablet computers, smart phones and the like.
Computer devices include user devices and network devices. User devices include, but are not limited to, computers, smart phones, PDAs and the like; network devices include, but are not limited to, a single network server, a server group composed of multiple network servers, or a cloud based on cloud computing and composed of a large number of computers or network servers. Cloud computing is a form of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The computer device may operate alone to realize the present invention, or may access a network and realize the present invention through interaction with other computer devices in the network. The network in which the computer device resides includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPNs and the like.
It should be noted that the user devices, network devices and networks above are only examples; other existing or future computer devices or networks, where applicable to the present invention, should also be included within the scope of the present invention and are incorporated herein by reference.
The methods discussed below (some of which are illustrated by flow charts) may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments for carrying out the necessary tasks may be stored in a machine- or computer-readable medium (such as a storage medium). One or more processors may carry out the necessary tasks.
The specific structural and functional details disclosed herein are merely representative and serve the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms, and should not be construed as limited only to the embodiments set forth herein.
It should be understood that although the terms "first", "second" and so on may be used herein to describe various devices, these devices should not be limited by these terms. These terms are used only to distinguish one device from another. For example, without departing from the scope of the exemplary embodiments, a first device could be termed a second device, and similarly a second device could be termed a first device. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It should be understood that when a device is referred to as being "connected" or "coupled" to another device, it can be directly connected or coupled to the other device, or intervening devices may be present. In contrast, when a device is referred to as being "directly connected" or "directly coupled" to another device, there are no intervening devices. Other words used to describe the relationship between devices should be interpreted in a like fashion (e.g. "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. As used herein, the singular forms "a" and "an" are intended to include the plural as well, unless the context clearly indicates otherwise. It should further be understood that the terms "including" and/or "comprising", as used herein, specify the presence of the stated features, integers, steps, operations, devices and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, devices, components and/or combinations thereof.
It should also be mentioned that in some alternative implementations, the functions/actions mentioned may occur in an order different from that indicated in the figures. For example, depending on the functions/actions involved, two figures shown in succession may in fact be executed substantially simultaneously, or may sometimes be executed in the reverse order.
In a typical scenario, the solution of the present invention is realized by a network device. Specifically, the network device obtains the speech input information entered by the user, the speech input information corresponding to a specific spoken-dialogue scenario; then, within that scenario, the network device provides the user with the response information corresponding to the speech input; finally, when the session ends, the network device provides the user with feedback information on the session, the feedback information including session evaluation information for the user and/or session suggestion information.
When a user device cooperates with the network device, the actual interaction with the user is realized by the user device: for example, the user device receives the user's speech input and sends it to the network device, receives the response information returned by the network device and presents it to the user, and receives the feedback information returned by the network device and presents it to the user. The user device performing these functions and the network device together constitute a spoken language practice system.
However, those skilled in the art will understand that, with the development of computer technology and in particular the increasing computing/processing power of user devices such as smart phones, the present invention can equally be realized by a user device; at the very least, the spoken language practice method of the present invention can be realized by a user device in some specific spoken-dialogue scenarios. For example, the user device may store data related to one or more specific spoken-dialogue scenarios, so that in these scenarios the user device can respond to and give feedback on the user's speech input entirely locally.
For ease of description, the spoken language practice method of the invention is illustrated herein as being realized by a network device. Those skilled in the art will understand, however, that such examples serve only to illustrate the present invention and should not be construed as limiting it in any way.
In addition, those skilled in the art should also understand that the languages to which the spoken practice of the present invention can be applied are not restricted: it is applicable to spoken practice in any language, including but not limited to English, Japanese, French, German, Chinese and so on.
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a method according to one embodiment of the invention, which specifically illustrates the process of a spoken language practice method.
As shown in Fig. 1, in step S1 the network device obtains the speech input information entered by the user, the speech input information corresponding to a specific spoken-dialogue scenario.
For example, on the user side, the user device first receives the user's speech input, the speech input corresponding to a specific spoken-dialogue scenario: specifically, the user may first select a specific spoken-dialogue scenario and then enter his or her speech input. The user device then sends the user's speech input to the network device, so that the network device receives from the user device the user's speech input and its corresponding spoken-dialogue scenario.
Preferably, the network device may also preprocess the speech input information to obtain the character string information and pronunciation information contained in it.
For example, after receiving the user's speech input, the network device preprocesses it, e.g. by filtering noise, to generate secondary data of the speech input, such as the character string information and pronunciation information it contains.
The environment in which spoken practice takes place is often not very quiet, and even in a comparatively quiet environment the user's speech input may still be disturbed by other external electrical signals; noise filtering therefore facilitates effective capture of the speech input. It also helps the user practice better and protects the effectiveness of the practice, avoiding situations such as erroneous evaluation.
After preprocessing the user's speech input, the network device can obtain the character string information and pronunciation information contained in it; this information can be regarded as secondary data of the speech input. While filtering noise, the network device can convert the speech input into a form of data that is easy to recognize, which at the same time provides data support for the information processing in the subsequent steps.
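The preprocessing described for step S1 (noise filtering, then deriving "secondary data": the character string and the pronunciation information) can be sketched roughly as follows. This is only an illustrative sketch under invented assumptions: the noise gate is a naive amplitude threshold, and `recognize` is a stand-in lookup for a real speech-recognition engine, not an API the patent names.

```python
def noise_gate(samples, threshold=0.05):
    """Suppress low-amplitude samples: a crude stand-in for noise filtering."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

def recognize(samples):
    """Hypothetical recognizer returning (text, phonemes) secondary data.
    A real system would call a speech-recognition engine here."""
    energy = sum(abs(s) for s in samples)
    if energy == 0:
        return "", []
    # Fixed illustrative result standing in for actual recognition output.
    return "hello", ["HH", "AH", "L", "OW"]

def preprocess(samples):
    """Step S1 preprocessing: filter noise, then extract secondary data."""
    cleaned = noise_gate(samples)
    text, phonemes = recognize(cleaned)
    return {"text": text, "phonemes": phonemes, "samples": cleaned}
```

The cleaned samples, character string and phoneme sequence can then feed the response and evaluation steps that follow.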
In step S2, the network device provides the user, within the spoken-dialogue scenario, with the response information corresponding to the speech input. Specifically, within a specific spoken-dialogue scenario, the network device can provide the user with the response information corresponding to the speech input by performing semantic analysis, keyword matching or similar processing on the user's speech input.
For example, the network device may perform semantic analysis on the user's speech input, e.g. through keyword and context analysis, and generate the corresponding response information. As another example, the network device may match a reply database according to keywords in the user's speech input, and generate the corresponding response information according to predetermined phrase templates.
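As one hedged illustration of the keyword-matching variant, the sketch below matches an utterance against a tiny reply database for a single hypothetical restaurant scenario and fills a predetermined phrase template. The keywords, templates and parameters are all invented for illustration; the patent does not specify a database schema.

```python
# Reply database for one spoken-dialogue scenario: keyword -> phrase template.
REPLY_DB = {
    "menu":  "Here is our menu. Today's special is {special}.",
    "bill":  "Of course, your bill comes to {total}.",
    "order": "Great choice! Anything to drink with that?",
}

def respond(utterance, special="grilled salmon", total="$18"):
    """Return the first template whose keyword occurs in the utterance."""
    words = utterance.lower().split()
    for keyword, template in REPLY_DB.items():
        if keyword in words:
            return template.format(special=special, total=total)
    # A fallback response keeps the dialogue going when nothing matches.
    return "Sorry, could you say that again?"
```

In a real system the semantic-analysis variant would replace the bare keyword test with parsing and context tracking, but the control flow (analyze input, select response, fill template) would be the same.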
Here, those skilled in the art will understand that the above ways of obtaining the response information corresponding to the user's speech input are only examples, given merely to illustrate the present invention, and should not be construed as limiting it in any way; any other existing or future way of obtaining the response information corresponding to a user's speech input, where applicable to the present invention, may also be cited and is incorporated herein.
In step S3, when the session ends, the network device provides the user with feedback information on the session, the feedback information including session evaluation information for the user and/or session suggestion information.
For example, the network device may generate the session evaluation information and/or session suggestion information for the user according to the number of sentences of the user's speech input in the session, together with an analysis of the semantics of the speech input. The session evaluation information is, for example, an evaluation of the user's ability to express himself or herself in a given scenario, such as a concrete score for the user's performance in this session; the session suggestion information is, for example, common example sentences for the scenario of this session. This feedback can indirectly reflect the user's mastery of the scenario: if the user can quickly express the intended content in fewer sentences, the user's mastery of the scenario is relatively good. The user can then, according to his or her own needs, spend more time on the scenarios that are poorly mastered, which saves the user time and helps the user master each scenario better.
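The sentence-count criterion above can be sketched as a simple score: expressing the intended content within an expected number of sentences earns full marks, and each extra sentence lowers the score. The scale, the penalty per extra sentence and the expected count are invented here for illustration; the patent leaves the concrete rule open.

```python
def session_score(sentence_count, expected=4):
    """Score 0-100; at or below the expected sentence count earns full marks."""
    if sentence_count <= 0:
        return 0
    excess = max(0, sentence_count - expected)
    return max(0, 100 - 10 * excess)  # subtract 10 points per extra sentence

def session_feedback(sentence_count, example_sentences):
    """Bundle evaluation and suggestion information, as step S3 describes."""
    return {
        "evaluation": session_score(sentence_count),
        "suggestions": example_sentences,  # common sentences for the scenario
    }
```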
Further, when the network device sends the feedback information to the user side, so that the feedback information is present on the user device, the user device can provide the feedback information to the user in a variety of forms. For example, the user device can play to the user, by voice, the score for this session's performance and the common example sentences for this session's scenario. As another example, the user device can present the feedback information to the user visually; the visual forms of presentation can be various, e.g. text display, graphic display and so on. Furthermore, the user device can also display and play the feedback information to the user at the same time.
After a session is completed, the present invention can provide the user with the feedback information on the session, such as session evaluation information and/or session suggestion information. By checking the feedback information, the user can conveniently and efficiently learn his or her strengths and where the deficiencies lie, so that the present invention can help the user reinforce the strengths and improve on the deficiencies, which helps the user express the intended content quickly and accurately.
According to a preferred example of the invention, after the user's speech input is obtained in step S1, the network device may also analyze the pronunciation of the speech input to obtain corresponding pronunciation evaluation information and/or pronunciation suggestion information.
Here, the network device may analyze the pronunciation of the speech input with reference to a standard pronunciation, and generate corresponding pronunciation evaluation information and/or pronunciation suggestion information. The pronunciation evaluation information is, for example, whether the pronunciation conforms to the standard; the pronunciation suggestion information is, for example, suggestions for improvement, or the standard pronunciation of the words whose pronunciation is non-standard.
In spoken language practice, improving the user's pronunciation is also necessary. By analyzing the pronunciation of the user's speech input and reporting both the good and the poor aspects of the user's pronunciation, while offering suggestions for improving it, the present invention can better help the user reinforce the strengths of his or her own pronunciation while improving its deficiencies.
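One plausible way to analyze pronunciation against a standard reference, as described above, is to compare the user's phoneme sequence with the reference sequence. The sketch below uses Levenshtein edit distance as the similarity measure; the patent does not prescribe any particular algorithm, and the scoring scale is invented.

```python
def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            # deletion, insertion, substitution (or match when pa == pb)
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (pa != pb))
    return dp[-1]

def pronunciation_evaluation(user_phonemes, standard_phonemes):
    """Evaluation info: conformance flag plus a 0-100 score (invented scale)."""
    dist = edit_distance(user_phonemes, standard_phonemes)
    score = max(0, 100 - 100 * dist // max(1, len(standard_phonemes)))
    return {"conforms": dist == 0, "score": score}
```

The mismatched positions found along the way could also drive the suggestion information, e.g. selecting the non-standard words whose standard pronunciation should be played back.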
There is no strict ordering relationship between the pronunciation analysis step and the response step S2 above; the two can be performed side by side, or in any order.
Subsequently, in step S3, the feedback information provided by the network device may also include the pronunciation evaluation information and/or pronunciation suggestion information.
Further, the feedback information may also include overall evaluation information, the overall evaluation information being determined based on the session evaluation information and the pronunciation evaluation information. For example, the overall evaluation information may be the sum and/or average of the session evaluation information and the pronunciation evaluation information. As another example, different weights may be set for the session evaluation information and the pronunciation evaluation information respectively, so that the overall evaluation information may be a weighted sum and/or weighted average of the two.
As for the pronunciation suggestion information, the user side can play it to the user. For example, if the pronunciation suggestion information is the standard pronunciation of all sentences in the user's speech input, the user device plays that pronunciation suggestion information, which can help the user perceive comprehensively and intuitively the deficiencies of his or her own pronunciation, intonation and speaking speed, and helps the user quickly improve his or her pronunciation level.
Alternatively, the pronunciation suggestion information may include only the standard pronunciation of those sentences in the user's speech input whose pronunciation is non-standard. By selecting the non-standard sentences in the user's speech input and playing them back in the standard pronunciation, the present invention can help the user improve the deficiencies of his or her pronunciation in a targeted way.
Preferably, the pronunciation suggestion information can also be presented to the user at the same time. For example, the user device may present a read-aloud video of the pronunciation suggestion information, or present the corresponding standard mouth-shape figures as the pronunciation of a sentence is played. Displaying standard mouth-shape figures or videos while a sentence is played back can help the user pronounce correctly, rather than merely helping the user improve the similarity of the pronunciation. It should be understood that the pronunciations of many words or letters are similar; without the aid of mouth-shape figures and the like, the user can hardly learn the correct pronunciation.
For example, taking English, the pronunciations of /i:/ and /i/ are very hard to distinguish by listening alone, whether in isolation or within words; but if mouth-shape figures can be provided, accompanied by a brief textual pronunciation tip (e.g. the former is a long, drawn-out "ee" sound, while the latter is the same sound cut short), the user will find it much easier to master the pronunciation correctly.
According to another preferred example of the invention, after the user's speech input is obtained in step S1, the network device may also analyze the grammar and syntax of the speech input to obtain corresponding grammar evaluation information and/or grammar suggestion information.
Here, the network device analyzes the grammar and syntax of the speech input and, if it does not conform to grammatical rules, or if the wording is inaccurate, generates corresponding grammar evaluation information and/or grammar suggestion information. The grammar evaluation information is, for example, whether there are grammatical errors or inaccurate wording; the grammar suggestion information is, for example, suggestions for improvement, or example sentences for the relevant grammar or wording.
In spoken language practice, improving the user's grammar is also necessary. By analyzing the grammar of the user's speech input and reporting both the good and the poor aspects of the user's grammar, while offering suggestions for improving it, the present invention can better help the user reinforce the strengths of his or her own grammar while improving its deficiencies.
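A toy, rule-based sketch of such a grammar check is shown below. A real system would use a parser; this single rule (a third-person singular subject followed by a base-form verb) and its tiny lexicon are invented purely for illustration.

```python
THIRD_PERSON = {"he", "she", "it"}
BASE_VERBS = {"like", "watch", "go", "want"}  # tiny illustrative lexicon

def grammar_check(sentence):
    """Flag 'he like'-style agreement errors and suggest corrections."""
    words = sentence.lower().rstrip(".?!").split()
    issues = []
    for subj, verb in zip(words, words[1:]):
        if subj in THIRD_PERSON and verb in BASE_VERBS:
            issues.append(f"'{subj} {verb}' -> '{subj} {verb}s'")
    return {"correct": not issues, "suggestions": issues}
```

The returned flag corresponds to the grammar evaluation information, and the suggested corrections (which could be paired with example sentences) to the grammar suggestion information.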
There is no strict ordering relationship between the grammar analysis step and the response step S2 above; the two can be performed side by side, or in any order.
Subsequently, in step S3, the feedback information provided by the network device may also include the grammar evaluation information and/or grammar suggestion information.
Further, the feedback information may also include overall evaluation information, the overall evaluation information being determined based on the session evaluation information and the grammar evaluation information. For example, the overall evaluation information may be the sum and/or average of the session evaluation information and the grammar evaluation information. As another example, different weights may be set for the session evaluation information and the grammar evaluation information respectively, so that the overall evaluation information may be a weighted sum and/or weighted average of the two. As for the grammar suggestion information, the user side can present it to the user, for example by presenting to the user example sentences related to the grammar or wording at fault. Providing example sentences can deepen the user's understanding of the session scenario and teach more session sentences with proper grammar.
For example, taking English, consider the disjunctive (tag) question "I don't think your uncle really likes drama series, does he?". When agreeing that the uncle does not like watching TV dramas, many people answer "yes" when "no" should be said here. Specifically, a corresponding example sentence can be provided with the following answer: "No, he doesn't, but he still watches the programme.".
The two preferred examples above can also be further combined.
For example, according to yet another preferred example of the invention, after the user's speech input is obtained in step S1, the network device, besides providing the user, within the spoken-dialogue scenario, with the response information corresponding to the speech input according to step S2 above, may also analyze the pronunciation of the speech input to obtain corresponding pronunciation evaluation information and/or pronunciation suggestion information, and analyze the grammar and syntax of the speech input to obtain corresponding grammar evaluation information and/or grammar suggestion information. Subsequently, in step S3, the feedback information provided by the network device includes the session evaluation information and/or session suggestion information, the pronunciation evaluation information and/or pronunciation suggestion information, and the grammar evaluation information and/or grammar suggestion information.
There is no strict ordering among the above pronunciation analysis step, the grammar analysis step, and the response step of the aforementioned step S2; the three may be performed in parallel or in any order. Moreover, the pronunciation analysis step and the grammar analysis step may be carried out during the conversation, or after the conversation is completed.
Further, the feedback information may also include an item of overall evaluation information, which is determined based on the session evaluation information, the pronunciation evaluation information and the grammar evaluation information. For example, the overall evaluation information may be the sum and/or the average of the three. As another example, different weights may be set for the session evaluation information, the pronunciation evaluation information and the grammar evaluation information respectively, so that the overall evaluation information may be a weighted sum and/or weighted average of the three.
As a concrete example, the network device gives a score to each of the session evaluation information, the pronunciation evaluation information and the grammar evaluation information according to respective preset rules, then calculates the average of the three scores to obtain an overall score, and displays the three scores together with the overall score as feedback. Scoring the three aspects of pronunciation, grammar and scenario dialogue separately lets the user know his or her level in each respect, helping the user spend more time on the weaker aspects or deliberately reinforce the stronger ones; giving an overall score, in turn, helps the user judge whether further spoken-language practice of this conversation scenario is needed.
For example, suppose the user obtains three scores of 50, 70 and 90 for the pronunciation evaluation information, the grammar evaluation information and the session evaluation information respectively, and the overall evaluation information gives an overall score of 70. The user can then clearly see that, for this conversation scenario, his or her weakest aspect is pronunciation, and can work on improving it in a targeted way, while relaxing appropriately about the other aspects that are already above the passing line (60 points).
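As a rough illustration only, the averaging and weighting schemes described above can be sketched in Python; the aspect names and the example weights are assumptions made for this sketch, not values prescribed by the invention.

```python
def overall_score(scores, weights=None):
    """Combine per-aspect scores into one overall score.

    With no weights this is the plain average mentioned in the text;
    with weights it is the weighted average.
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

scores = {"pronunciation": 50, "grammar": 70, "session": 90}
print(overall_score(scores))  # plain average: 70.0, as in the example above
# Illustrative weights emphasizing session performance:
print(overall_score(scores, {"pronunciation": 1, "grammar": 1, "session": 2}))  # 75.0
```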
After aggregating the individual items of evaluation information, the user equipment may present them to the user in graphical form. A graph is more intuitive than text and helps the user quickly discern his or her practice result, without having to spend time reading through the text word by word to judge the outcome, thereby saving the user's time and offering convenience.
In addition, the process shown in Fig. 1 may also include an early termination step: if the user's speech input information deviates from the spoken conversation scenario, this conversation is terminated. Subsequently, in step S3, the network device provides the user with the feedback information about this conversation upon its termination.
The network device may determine that the user's speech input information deviates from the spoken conversation scenario of this conversation when at least any one of the following is met:
1) The speech input information cannot be recognized.
Here, the network device may judge whether the speech input information cannot be recognized; if so, it judges that the speech input information does not fit the conversation scenario and terminates the conversation. If the speech input information cannot be recognized, it may be considered that the user's pronunciation has serious problems or is unrelated to the current conversation scenario, in which case the user could not conduct an effective conversation in a real setting at all. Therefore, terminating the conversation in time when the user's speech input cannot be recognized, and informing the user of the reason, helps the user practice effectively and discover room for improvement in aspects such as pronunciation.
2) The semantics of the speech input information do not fit the spoken conversation scenario.
Here, the network device may judge whether the semantics obtained by analyzing the speech input information fit the conversation scenario; if not, it terminates the conversation. The semantic judgment may specifically be made by, for example, determining whether the keywords in the speech input information are related to the conversation scenario. Through semantic judgment, the user can be informed whether the ongoing conversation fits the invoked conversation scenario; if it does not, the conversation is terminated, avoiding a meaningless waste of the user's practice time. Only when the semantics fit is the response information corresponding to the speech input information matched and fed back. In this way, the user can better practice conversation in the scenario he or she actually wants to practice, which helps the user adapt to that scenario quickly and improves the user's language competence in it.
3) The sentence count of the speech input information exceeds its corresponding preset threshold.
Here, the network device may judge whether the sentence count of the speech input information exceeds a preset threshold; if so, it judges that the speech input information does not fit the conversation scenario and terminates the conversation. The point of this check is that in many scenarios we need to express ourselves with a limited number of sentences, and too many sentences bore the listener; moreover, if the sentence count is too high, the user evidently cannot express the intended content quickly. In that case the practice session may be considered unqualified, and terminating it in time gives the user a chance to summarize, improve and refine.
4) The number of repetitions of words and phrases in the speech input information exceeds its corresponding preset threshold.
Here, the network device may judge whether the number of repetitions of words and phrases in the speech input information exceeds a preset threshold; if so, it judges that the speech input information does not fit the conversation scenario and terminates the conversation. The point of this check is that when a certain sentence is repeated many times, it may be considered that the user cannot sustain the conversation and therefore keeps repeating it; continuing such a meaningless conversation only wastes time. Terminating the conversation at that moment not only saves the user's time but also lets the user reflect on the shortcoming while it is still fresh; if the conversation were terminated only much later, the user might already have forgotten it. The present invention thus helps the user reflect on and remedy his or her weak points in a targeted way.
Those skilled in the art will understand that the preset threshold corresponding to the sentence count of the speech input information and the preset threshold corresponding to the repetition count of words and phrases may be set to be identical or different, depending on the concrete application.
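The four deviation checks listed above can be sketched as a single function; the keyword test, the naive sentence splitting and the default thresholds below are simplified stand-ins for whatever recognizer and rules a real implementation would use.

```python
def deviates_from_scene(transcript, scene_keywords,
                        max_sentences=5, max_word_repeats=3):
    """Return the reason the input deviates from the scene, or None if it fits."""
    if not transcript:                               # 1) speech not recognizable
        return "unrecognizable"
    words = transcript.lower().replace(".", " ").replace("?", " ").split()
    if not any(k in words for k in scene_keywords):  # 2) semantics off-scene
        return "off-topic"
    sentences = [s for s in transcript.replace("?", ".").split(".") if s.strip()]
    if len(sentences) > max_sentences:               # 3) too many sentences
        return "too-long"
    if any(words.count(w) > max_word_repeats for w in set(words)):  # 4) repetition
        return "repetitive"
    return None

print(deviates_from_scene(None, ["hotel"]))                            # unrecognizable
print(deviates_from_scene("I want to book a hotel room.", ["hotel"]))  # None (fits)
```

Whether both thresholds share one value or differ, as the text notes, is simply a matter of the arguments passed in.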
By analyzing whether the speech input information deviates from the current spoken conversation scenario, the present invention can quickly judge whether the user's conversation content fits the scenario the user wishes to practice. If a deviation is judged, this conversation is terminated, so that the user knows the conversation content needs adjusting, which helps the user practice spoken language quickly and in a targeted way.
Fig. 2 is a schematic diagram of an apparatus according to another embodiment of the present invention, which specifically illustrates an apparatus for spoken-language practice (hereinafter referred to as the "spoken-language practice apparatus"). As shown in Fig. 2, the spoken-language practice apparatus 20 is installed in a network device and further comprises a speech input device 21, a scenario analysis device 22 and a session feedback device 23.
Specifically, the speech input device 21 obtains the speech input information entered by the user, the speech input information corresponding to a specific spoken conversation scenario. For example, on the user side, the user equipment first receives the speech input information entered by the user, the speech input information corresponding to a specific spoken conversation scenario; for instance, the user may first select a specific spoken conversation scenario and then enter his or her speech input. Subsequently, the user equipment sends the user's speech input information to the network device, so that the speech input device 21 receives from the user equipment the user's speech input information and its corresponding spoken conversation scenario.
Preferably, the speech input device 21 or another specific device (not shown) in the spoken-language practice apparatus 20 may also preprocess the speech input information to obtain the character-string information and pronunciation information contained in it. For example, after receiving the user's speech input information, the speech input device 21 preprocesses it, e.g. by noise filtering, to generate the secondary data of the speech input information, such as the character-string information and pronunciation information contained therein.
In many cases the environment in which we practice spoken language is not very quiet, and even in a relatively quiet environment the user's speech input information may still be disturbed by other external electrical signals. Noise filtering therefore benefits the effective input of the speech input information; it also helps the spoken-language practice proceed smoothly and protects the effectiveness of the user's practice, avoiding situations such as erroneous evaluation.
After preprocessing the user's speech input information, the speech input device 21 can obtain the character-string information and pronunciation information contained in it; these items may be regarded as the secondary data of the speech input information. Along with the noise filtering, the network device can convert the speech input information into a data form that is easy to recognize, and at the same time provide data support for the information processing in subsequent steps.
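A toy sketch of this preprocessing step, assuming a moving-average filter in place of real noise suppression and a caller-supplied stub in place of an actual speech recognizer; the uppercase "pronunciation" string is likewise only a placeholder for genuine phonetic data.

```python
def denoise(samples, window=3):
    """Moving-average filter standing in for real noise suppression."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def preprocess(samples, recognize):
    """Produce the 'secondary data': character-string and pronunciation info."""
    clean = denoise(samples)
    text = recognize(clean)                                    # character-string info
    pronunciation = " ".join(w.upper() for w in text.split())  # toy pronunciation info
    return {"text": text, "pronunciation": pronunciation}

# A lambda stands in for the recognizer; real ASR is outside this sketch.
result = preprocess([0.1, 0.9, 0.1, 0.9], lambda s: "good morning")
print(result["pronunciation"])  # GOOD MORNING
```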
Subsequently, the scenario analysis device 22 provides the user, within the spoken conversation scenario, with the response information corresponding to his or her speech input information. Specifically, within the specific spoken conversation scenario, the scenario analysis device 22 may provide the user with the response information corresponding to the speech input information by means such as semantic analysis of the user's speech input or keyword matching. For example, the scenario analysis device 22 may perform semantic analysis on the user's speech input information, e.g. through keywords and context analysis, and generate the corresponding response information. As another example, the network device matches a reply database according to the keywords in the user's speech input information and generates the corresponding response information according to predetermined phrase templates.
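The keyword-plus-template matching just described might be sketched as follows; the reply database, its entries and the hotel-scenario wording are invented for illustration only.

```python
# Invented reply database: keyword -> phrase template.
REPLY_DATABASE = {
    "reservation": "Certainly. For how many nights would you like the {keyword}?",
    "breakfast": "Breakfast is served from 7 to 10 in the dining room.",
}

def respond(utterance):
    """Match keywords against the reply database and fill a phrase template."""
    words = utterance.lower().split()
    for keyword, template in REPLY_DATABASE.items():
        if keyword in words:
            return template.format(keyword=keyword)
    return "Could you say that again, please?"

print(respond("I would like to make a reservation"))
```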
Here, those skilled in the art will understand that the above ways of obtaining the response information corresponding to the user's speech input information are merely examples; such examples serve only to illustrate the present invention and should not be construed as any limitation of it. Any other existing or future way of obtaining the response information corresponding to the user's speech input information, if applicable to the present invention, may also be cited and incorporated herein.
Then, upon the termination of this conversation, the session feedback device 23 provides the user with the feedback information about this conversation, the feedback information including the session evaluation information and/or session suggestion information for the user. For example, the session feedback device 23 may generate the session evaluation information and/or session suggestion information for the user according to the sentence count of the user's speech input information in this conversation and an analysis of its semantics. The session evaluation information is, for example, an evaluation of the user's expressive ability in the specific scenario dialogue, such as a concrete score for the user's performance in this conversation; the session suggestion information is, for example, example sentences commonly used in this conversation context. Such feedback information can reflect the user's mastery of the conversation scenario: if the user can express the intended content quickly with fewer sentences, the user has a relatively good grasp of the scenario. According to his or her own needs, the user can then spend more time on the scenarios with a poorer grasp, which saves the user's time and helps the user master each conversation scenario better.
Alternatively, the session evaluation information and/or session suggestion information may also be generated by the scenario analysis device 22, with the session feedback device 23 responsible only for transmitting the information to the user side.
Further, when the session feedback device 23 sends the feedback information to the user side, so that the feedback information resides on the user equipment, the user equipment may provide the feedback information to the user in a variety of forms. For example, the user equipment may play to the user by voice the score of this conversation and the example sentences commonly used in this conversation context. As another example, the user equipment may present the feedback information to the user in a visualized form; the visualized form may take many shapes, such as text display or graphical display. Moreover, the user equipment may also display and play the feedback information to the user at the same time.
After the conversation is completed, the present invention can provide the user with the feedback information of this conversation, such as the session evaluation information and/or session suggestion information. By checking the feedback information, the user can conveniently and efficiently learn his or her strengths and where the shortcomings lie, so the present invention can help the user reinforce the strengths and remedy the shortcomings, which is conducive to helping the user express the intended content quickly and accurately.
According to a preferred example of the present invention, the spoken-language practice apparatus 20 may also include a pronunciation analysis device (not shown). Specifically, after the speech input device 21 obtains the speech input information entered by the user, the pronunciation analysis device may analyze the pronunciation of the speech input information to obtain corresponding pronunciation evaluation information and/or pronunciation suggestion information. Here, the pronunciation analysis device may analyze the pronunciation of the speech input information with reference to standard pronunciation and generate the corresponding pronunciation evaluation information and/or pronunciation suggestion information. The pronunciation evaluation information is, for example, whether the pronunciation conforms to the standard; the pronunciation suggestion information is, for example, improvement suggestions or the standard pronunciation of the words whose pronunciation is non-standard.
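A minimal sketch of such a comparison against standard pronunciation, using string similarity over invented ARPAbet-style phoneme transcriptions; a real analyzer would work on acoustic features rather than text, and the 0.9 threshold is an arbitrary choice for this sketch.

```python
import difflib

def pronunciation_feedback(said, reference, threshold=0.9):
    """Score each word's phonemes against the reference; return the weak ones."""
    weak = {}
    for word, ref_phonemes in reference.items():
        score = difflib.SequenceMatcher(
            None, said.get(word, ""), ref_phonemes).ratio()
        if score < threshold:
            weak[word] = round(score, 2)
    return weak

reference = {"sheep": "SH IY P", "ship": "SH IH P"}
said = {"sheep": "SH IH P", "ship": "SH IH P"}   # user said "ship" both times
print(pronunciation_feedback(said, reference))    # {'sheep': 0.86}
```

The words that fall below the threshold are exactly the "non-standard" ones whose standard pronunciation the text suggests playing back.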
In spoken-language practice, improving the user's pronunciation is also necessary. By analyzing the pronunciation of the user's speech input information and reporting the aspects in which the user's pronunciation is good and those in which it is poor, while providing improvement suggestions for the user's pronunciation, the present invention can better help the user reinforce the strengths of his or her pronunciation while remedying its shortcomings.
There is no strict ordering between the pronunciation analysis operation performed by the pronunciation analysis device and the response operation performed by the aforementioned scenario analysis device 22; the two may be performed in parallel or in any order.
Subsequently, the feedback information provided by the session feedback device 23 may also include the pronunciation evaluation information and/or pronunciation suggestion information.
Further, the feedback information may also include an item of overall evaluation information, which is determined based on the session evaluation information and the pronunciation evaluation information. For example, the overall evaluation information may be the sum and/or the average of the session evaluation information and the pronunciation evaluation information. As another example, different weights may be set for the session evaluation information and the pronunciation evaluation information respectively, so that the overall evaluation information may be their weighted sum and/or weighted average.
As for the pronunciation suggestion information, the user side may play it to the user. For example, if the pronunciation suggestion information is the standard pronunciation of all the sentences in the user's speech input information, the user equipment plays the pronunciation suggestion information, which can help the user perceive comprehensively and intuitively the shortcomings of his or her pronunciation, as well as of intonation and speaking speed, and helps the user quickly improve his or her pronunciation level. Alternatively, the pronunciation suggestion information may include only the standard pronunciation of the sentences in the user's speech input information whose pronunciation is non-standard. By selecting the non-standard sentences from the user's speech input and playing them back with the standard pronunciation, the present invention can help the user remedy the deficiencies in his or her pronunciation in a targeted way.
Preferably, the pronunciation suggestion information may also be presented to the user at the same time. For example, the user equipment may present a read-aloud video of the pronunciation suggestion information, or present the corresponding standard mouth-shape figures as the pronunciation of a sentence is played. Showing the standard mouth-shape figures or video while playing the sentence back can help the user pronounce correctly, rather than merely helping the user improve the similarity of the pronunciation. It should be understood that the pronunciations of many words or letters are similar, and without the aid of mouth-shape figures and the like it is hard for the user to learn the correct pronunciation. Take English as an example: the pronunciations of "i:" and "i" are very hard to distinguish by ear alone, whether heard in isolation or within a word; but if mouth-shape figures are provided, supplemented with a small amount of textual pronunciation tips (e.g. the former is drawn out long, while the latter is the same sound pronounced short and clipped), the user can grasp the correct pronunciation much more easily.
According to another preferred example of the present invention, the spoken-language practice apparatus 20 may also include a grammar analysis device (not shown). Specifically, after the speech input device 21 obtains the speech input information entered by the user, the grammar analysis device may analyze the grammar and syntax of the speech input information to obtain corresponding grammar evaluation information and/or grammar suggestion information. Here, the grammar analysis device analyzes the grammar and syntax of the speech input information and, if the input does not conform to the grammar rules or the wording is inaccurate, generates the corresponding grammar evaluation information and/or grammar suggestion information. The grammar evaluation information is, for example, whether there are grammatical errors or inaccurate wording; the grammar suggestion information is, for example, improvement suggestions or example sentences of the relevant grammar and words.
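In the spirit of this analysis, a toy rule-based checker might pair hand-written patterns with suggestions; the two rules below are invented examples for illustration, not the invention's actual grammar analysis, which could equally use a full parser.

```python
import re

# Invented rules: (pattern that signals an error, attached suggestion).
GRAMMAR_RULES = [
    (re.compile(r"\b(he|she|it) don't\b", re.IGNORECASE),
     'Use "doesn\'t" after he/she/it.'),
    (re.compile(r"\ba [aeiou]\w*", re.IGNORECASE),
     'Use "an" before a vowel sound.'),
]

def grammar_suggestions(sentence):
    """Return the suggestion attached to every rule the sentence violates."""
    return [hint for pattern, hint in GRAMMAR_RULES if pattern.search(sentence)]

print(grammar_suggestions("He don't want a apple."))  # both hints fire
```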
In spoken-language practice, improving the user's grammar is also necessary. By analyzing the grammar of the user's speech input information and reporting the aspects in which the user's grammar is good and those in which it is poor, while providing improvement suggestions for the user's grammar, the present invention can better help the user reinforce the strengths of his or her grammar while remedying its shortcomings.
There is no strict ordering between the grammar analysis operation performed by the grammar analysis device and the response operation performed by the aforementioned scenario analysis device 22; the two may be performed in parallel or in any order.
Subsequently, the feedback information provided by the session feedback device 23 may also include the grammar evaluation information and/or grammar suggestion information.
Further, the feedback information may also include an item of overall evaluation information, which is determined based on the session evaluation information and the grammar evaluation information. For example, the overall evaluation information may be the sum and/or the average of the session evaluation information and the grammar evaluation information. As another example, different weights may be set for the session evaluation information and the grammar evaluation information respectively, so that the overall evaluation information may be their weighted sum and/or weighted average. As for the grammar suggestion information, the user side may present it to the user, for example by displaying example sentences that illustrate the grammar or wording at the point of error. Providing example sentences helps the user deepen his or her understanding of the conversation scenario and learn session sentences with more proper grammar.
For example, take English: for the disjunctive question "I don't think your uncle really likes drama series, does he?", many people answer "yes" when they agree that the uncle does not like watching TV series, whereas in this case agreement should be expressed with "no". Specifically, a corresponding example sentence may be provided showing the proper answer: "No, he doesn't, but he still watches the programme."
The two preferred examples described above may also be combined.
For example, according to yet another preferred example of the present invention, the spoken-language practice apparatus 20 may also include a pronunciation analysis device (not shown) and a grammar analysis device (not shown). Specifically, after the speech input device 21 obtains the speech input information entered by the user, the scenario analysis device 22 provides the user, within the spoken conversation scenario, with the response information corresponding to his or her speech input information; the pronunciation analysis device may analyze the pronunciation of the speech input information to obtain corresponding pronunciation evaluation information and/or pronunciation suggestion information; and the grammar analysis device may analyze the grammar and syntax of the speech input information to obtain corresponding grammar evaluation information and/or grammar suggestion information. Subsequently, the feedback information provided by the session feedback device 23 includes the session evaluation information and/or session suggestion information, the pronunciation evaluation information and/or pronunciation suggestion information, and the grammar evaluation information and/or grammar suggestion information.
There is no strict ordering among the pronunciation analysis operation performed by the above pronunciation analysis device, the grammar analysis operation performed by the grammar analysis device, and the response operation performed by the aforementioned scenario analysis device 22; the three may be performed in parallel or in any order. Moreover, the pronunciation analysis operation performed by the pronunciation analysis device and the grammar analysis operation performed by the grammar analysis device may be carried out during the conversation, or after the conversation is completed.
Further, the feedback information may also include an item of overall evaluation information, which is determined based on the session evaluation information, the pronunciation evaluation information and the grammar evaluation information. For example, the overall evaluation information may be the sum and/or the average of the three. As another example, different weights may be set for the session evaluation information, the pronunciation evaluation information and the grammar evaluation information respectively, so that the overall evaluation information may be a weighted sum and/or weighted average of the three.
As a concrete example, the session feedback device 23 gives a score to each of the session evaluation information, the pronunciation evaluation information and the grammar evaluation information according to respective preset rules, then calculates the average of the three scores to obtain an overall score, and displays the three scores together with the overall score as feedback. Scoring the three aspects of pronunciation, grammar and scenario dialogue separately lets the user know his or her level in each respect, helping the user spend more time on the weaker aspects or deliberately reinforce the stronger ones; giving an overall score, in turn, helps the user judge whether further spoken-language practice of this conversation scenario is needed.
For example, suppose the user obtains three scores of 50, 70 and 90 for the pronunciation evaluation information, the grammar evaluation information and the session evaluation information respectively, and the overall evaluation information gives an overall score of 70. The user can then clearly see that, for this conversation scenario, his or her weakest aspect is pronunciation, and can work on improving it in a targeted way, while relaxing appropriately about the other aspects that are already above the passing line (60 points).
After aggregating the individual items of evaluation information, the user equipment may present them to the user in graphical form. A graph is more intuitive than text and helps the user quickly discern his or her practice result, without having to spend time reading through the text word by word to judge the outcome, thereby saving the user's time and offering convenience.
In addition, the spoken-language practice apparatus 20 shown in Fig. 2 may also include an early termination device (not shown). If the user's speech input information deviates from the spoken conversation scenario, the early termination device terminates this conversation. Subsequently, upon the termination of this conversation, the session feedback device 23 provides the user with the feedback information about this conversation.
The early termination device may determine that the user's speech input information deviates from the spoken conversation scenario of this conversation when at least any one of the following is met:
1) The speech input information cannot be recognized.
Here, the early termination device may judge whether the speech input information cannot be recognized; if so, it judges that the speech input information does not fit the conversation scenario and terminates the conversation. If the speech input information cannot be recognized, it may be considered that the user's pronunciation has serious problems or is unrelated to the current conversation scenario, in which case the user could not conduct an effective conversation in a real setting at all. Therefore, terminating the conversation in time when the user's speech input cannot be recognized, and informing the user of the reason, helps the user practice effectively and discover room for improvement in aspects such as pronunciation.
2) The semantics of the speech input information do not fit the spoken conversation scenario.
Here, the early termination device may judge whether the semantics obtained by analyzing the speech input information fit the conversation scenario; if not, it terminates the conversation. The semantic judgment may specifically be made by, for example, determining whether the keywords in the speech input information are related to the conversation scenario. Through semantic judgment, the user can be informed whether the ongoing conversation fits the invoked conversation scenario; if it does not, the conversation is terminated, avoiding a meaningless waste of the user's practice time. Only when the semantics fit is the response information corresponding to the speech input information matched and fed back. In this way, the user can better practice conversation in the scenario he or she actually wants to practice, which helps the user adapt to that scenario quickly and improves the user's language competence in it.
3) The sentence count of the speech input information exceeds its corresponding preset threshold.
Here, the early termination device may judge whether the sentence count of the speech input information exceeds a preset threshold; if so, it judges that the speech input information does not fit the conversation scenario and terminates the conversation. The point of this check is that in many scenarios we need to express ourselves with a limited number of sentences, and too many sentences bore the listener; moreover, if the sentence count is too high, the user evidently cannot express the intended content quickly. In that case the practice session may be considered unqualified, and terminating it in time gives the user a chance to summarize, improve and refine.
4) The number of repetitions of words and phrases in the speech input information exceeds its corresponding preset threshold.
Here, the early termination device may judge whether the number of repetitions of words and phrases in the speech input information exceeds a preset threshold; if so, it judges that the speech input information does not fit the conversation scenario and terminates the conversation. The point of this check is that when a certain sentence is repeated many times, it may be considered that the user cannot sustain the conversation and therefore keeps repeating it; continuing such a meaningless conversation only wastes time. Terminating the conversation at that moment not only saves the user's time but also lets the user reflect on the shortcoming while it is still fresh; if the conversation were terminated only much later, the user might already have forgotten it. The present invention thus helps the user reflect on and remedy his or her weak points in a targeted way.
Those skilled in the art will understand that the predetermined threshold corresponding to the sentence count of the speech input information and the predetermined threshold corresponding to the word/phrase repetition count may be set to be identical or different, depending on the specific application.
By analyzing whether the speech input information deviates from the current spoken conversation scene, the present invention can quickly determine whether the user's conversation content matches the scene the user wishes to practice. If a deviation from the current spoken conversation scene is detected, this session is ended, so that the user knows the session content needs to be adjusted; this helps the user practice speaking quickly and in a targeted manner.
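As an illustrative sketch only (the patent prescribes no implementation; the function name, the keyword-overlap heuristic, and the threshold values below are all assumptions for demonstration), the four deviation conditions described above might be combined as follows:

```python
from collections import Counter

def deviates_from_scene(recognized_text, scene_keywords,
                        max_sentences=8, max_repeats=3):
    """Return True if the speech input should end the session.

    recognized_text: transcript of the user's speech input (empty
        string if recognition failed).
    scene_keywords: set of words characterizing the current
        conversation scene. Thresholds are illustrative defaults.
    """
    # 1) The speech input could not be recognized.
    if not recognized_text.strip():
        return True

    words = recognized_text.lower().replace('.', ' ').replace('?', ' ').split()

    # 2) The semantics do not match the scene
    #    (a crude keyword-overlap stand-in for semantic analysis).
    if not any(w in scene_keywords for w in words):
        return True

    # 3) The sentence count exceeds its predetermined threshold.
    sentences = [s for s in recognized_text.replace('?', '.').split('.')
                 if s.strip()]
    if len(sentences) > max_sentences:
        return True

    # 4) A word/phrase is repeated more than its predetermined threshold.
    most_common = Counter(words).most_common(1)
    if most_common and most_common[0][1] > max_repeats:
        return True

    return False
```

A production system would replace the keyword overlap in step 2 with a real semantic model, but the control flow (check each condition, terminate on the first violation) follows the description above.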
It should be noted that the present invention may be implemented in software and/or a combination of software and hardware; for example, each device of the present invention may be realized using an application-specific integrated circuit (ASIC) or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example a RAM memory, a magnetic or optical drive, a floppy disk, or similar devices. In addition, some steps or functions of the present invention may be implemented in hardware, for example as circuitry that cooperates with a processor to perform the respective steps or functions.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention may be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced by the present invention. No reference sign in a claim should be construed as limiting the claim concerned. Moreover, the word "comprising" does not exclude other devices or steps, and the singular does not exclude the plural. Multiple devices or means recited in a system claim may also be implemented by a single device or means through software or hardware. Words such as "first" and "second" denote names only and do not indicate any particular order.
Although exemplary embodiments have been specifically shown and described above, those skilled in the art will understand that variations in form and detail may be made without departing from the spirit and scope of the claims. The protection sought herein is set forth in the appended claims.
Claims (18)
1. A spoken language practice method implemented by a computer device, wherein the method comprises the following steps:
- acquiring speech input information of a user, the speech input information corresponding to a specific spoken conversation scene;
- in the spoken conversation scene, providing the user with response information corresponding to the speech input information;
- when the current session ends, providing the user with feedback information on the session, the feedback information comprising session evaluation information and/or session suggestion information for the user.
2. The method of claim 1, wherein the feedback information further comprises pronunciation evaluation information and/or pronunciation suggestion information for the user;
wherein the method further comprises:
- analyzing the pronunciation of the speech input information to obtain the corresponding pronunciation evaluation information and/or pronunciation suggestion information.
3. The method of claim 2, wherein the feedback information further comprises overall evaluation information for the user, the overall evaluation information being determined from the session evaluation information in combination with the pronunciation evaluation information.
4. The method of any one of claims 1 to 3, wherein the feedback information further comprises grammar evaluation information and/or grammar suggestion information for the user;
wherein the method further comprises:
- analyzing the grammar and syntax of the speech input information to obtain the corresponding grammar evaluation information and/or grammar suggestion information.
5. The method of claim 4, wherein the feedback information further comprises overall evaluation information for the user, the overall evaluation information being determined from the session evaluation information in combination with the grammar evaluation information.
6. The method of any one of claims 1 to 5, wherein the method further comprises:
- ending the current session if the speech input information deviates from the spoken conversation scene.
7. The method of claim 6, wherein the speech input information is determined to deviate from the spoken conversation scene when at least any one of the following is satisfied:
- the speech input information cannot be recognized;
- the semantics of the speech input information do not match the spoken conversation scene;
- the sentence count of the speech input information exceeds its corresponding predetermined threshold;
- the number of word/phrase repetitions in the speech input information exceeds its corresponding predetermined threshold.
8. The method of any one of claims 1 to 7, wherein the step of acquiring the speech input information of the user further comprises:
- preprocessing the speech input information to obtain the character string information and pronunciation information contained therein.
9. The method of any one of claims 1 to 8, wherein the feedback information on the current session is presented to the user in a visual form.
10. An apparatus for spoken language practice in a computer device, wherein the apparatus comprises:
- a device for acquiring speech input information of a user, the speech input information corresponding to a specific spoken conversation scene;
- a device for providing the user, in the spoken conversation scene, with response information corresponding to the speech input information;
- a device for providing the user, when the current session ends, with feedback information on the session, the feedback information comprising session evaluation information and/or session suggestion information for the user.
11. The apparatus of claim 10, wherein the feedback information further comprises pronunciation evaluation information and/or pronunciation suggestion information for the user;
wherein the apparatus further comprises:
- a device for analyzing the pronunciation of the speech input information to obtain the corresponding pronunciation evaluation information and/or pronunciation suggestion information.
12. The apparatus of claim 11, wherein the feedback information further comprises overall evaluation information for the user, the overall evaluation information being determined from the session evaluation information in combination with the pronunciation evaluation information.
13. The apparatus of any one of claims 10 to 12, wherein the feedback information further comprises grammar evaluation information and/or grammar suggestion information for the user;
wherein the apparatus further comprises:
- a device for analyzing the grammar and syntax of the speech input information to obtain the corresponding grammar evaluation information and/or grammar suggestion information.
14. The apparatus of claim 13, wherein the feedback information further comprises overall evaluation information for the user, the overall evaluation information being determined from the session evaluation information in combination with the grammar evaluation information.
15. The apparatus of any one of claims 10 to 14, wherein the apparatus further comprises:
- a device for ending the current session if the speech input information deviates from the spoken conversation scene.
16. The apparatus of claim 15, wherein the speech input information is determined to deviate from the spoken conversation scene when at least any one of the following is satisfied:
- the speech input information cannot be recognized;
- the semantics of the speech input information do not match the spoken conversation scene;
- the sentence count of the speech input information exceeds its corresponding predetermined threshold;
- the number of word/phrase repetitions in the speech input information exceeds its corresponding predetermined threshold.
17. The apparatus of any one of claims 10 to 16, wherein the device for acquiring the speech input information of the user is further configured to:
- preprocess the speech input information to obtain the character string information and pronunciation information contained therein.
18. The apparatus of any one of claims 10 to 17, wherein the feedback information on the current session is presented to the user in a visual form.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510629522.2A CN106558252B (en) | 2015-09-28 | 2015-09-28 | Spoken language practice method and device realized by computer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106558252A true CN106558252A (en) | 2017-04-05 |
CN106558252B CN106558252B (en) | 2020-08-21 |
Family
ID=58416638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510629522.2A Active CN106558252B (en) | 2015-09-28 | 2015-09-28 | Spoken language practice method and device realized by computer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106558252B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030028378A1 (en) * | 1999-09-09 | 2003-02-06 | Katherine Grace August | Method and apparatus for interactive language instruction |
CN101042716A (en) * | 2006-07-13 | 2007-09-26 | 东莞市步步高教育电子产品有限公司 | Electric pet entertainment learning system and method thereof |
CN101366065A (en) * | 2005-11-30 | 2009-02-11 | 语文交流企业公司 | Interactive language education system and method |
CN103065626A (en) * | 2012-12-20 | 2013-04-24 | 中国科学院声学研究所 | Automatic grading method and automatic grading equipment for read questions in test of spoken English |
CN103714727A (en) * | 2012-10-06 | 2014-04-09 | 南京大五教育科技有限公司 | Man-machine interaction-based foreign language learning system and method thereof |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107133303A (en) * | 2017-04-28 | 2017-09-05 | 百度在线网络技术(北京)有限公司 | Method and apparatus for output information |
CN107818795A (en) * | 2017-11-15 | 2018-03-20 | 苏州驰声信息科技有限公司 | The assessment method and device of a kind of Oral English Practice |
CN107818795B (en) * | 2017-11-15 | 2020-11-17 | 苏州驰声信息科技有限公司 | Method and device for evaluating oral English |
CN110136719A (en) * | 2018-02-02 | 2019-08-16 | 上海流利说信息技术有限公司 | A kind of method, apparatus and system for realizing Intelligent voice dialog |
CN110136719B (en) * | 2018-02-02 | 2022-01-28 | 上海流利说信息技术有限公司 | Method, device and system for realizing intelligent voice conversation |
CN109166594A (en) * | 2018-07-24 | 2019-01-08 | 北京搜狗科技发展有限公司 | A kind of data processing method, device and the device for data processing |
CN109035896A (en) * | 2018-08-13 | 2018-12-18 | 广东小天才科技有限公司 | A kind of Oral Training method and facility for study |
CN109035896B (en) * | 2018-08-13 | 2021-11-05 | 广东小天才科技有限公司 | Oral training method and learning equipment |
CN109493658A (en) * | 2019-01-08 | 2019-03-19 | 上海健坤教育科技有限公司 | Situated human-computer dialogue formula spoken language interactive learning method |
CN110008328A (en) * | 2019-04-04 | 2019-07-12 | 福建奇点时空数字科技有限公司 | A kind of method and apparatus that automatic customer service is realized by human-computer interaction technology |
CN110489756A (en) * | 2019-08-23 | 2019-11-22 | 上海乂学教育科技有限公司 | Conversational human-computer interaction spoken language evaluation system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106558252A (en) | By computer implemented spoken language exercise method and device | |
Bibauw et al. | Discussing with a computer to practice a foreign language: Research synthesis and conceptual framework of dialogue-based CALL | |
CN111027486B (en) | Auxiliary analysis and evaluation system and method for classroom teaching effect big data of middle and primary schools | |
Macken-Horarik et al. | A grammatics ‘good enough’for school English in the 21st century: Four challenges in realising the potential | |
Abdallah et al. | Assistive technology for deaf people based on android platform | |
Long | Inside the “black box”: Methodological issues in classroom research on language learning | |
CN110489756B (en) | Conversational human-computer interactive spoken language evaluation system | |
CN108536672A (en) | Intelligent robot Training Methodology, device, computer equipment and storage medium | |
CN108563780A (en) | Course content recommends method and apparatus | |
CN108335543A (en) | A kind of English dialogue training learning system | |
US20080027731A1 (en) | Comprehensive Spoken Language Learning System | |
US9536439B1 (en) | Conveying questions with content | |
CN111833853A (en) | Voice processing method and device, electronic equipment and computer readable storage medium | |
Ashrafi et al. | Correct characteristics of the newly involved artificial intelligence methods in science and technology using statistical data sets | |
CN110808038B (en) | Mandarin evaluating method, device, equipment and storage medium | |
CN105590632B (en) | A kind of S-T teaching process analysis method based on phonetic similarity identification | |
CN110245253B (en) | Semantic interaction method and system based on environmental information | |
Naning et al. | The correlation between learning style and listening achievement of English Education Study Program students of Sriwijaya University | |
US6990476B2 (en) | Story interactive grammar teaching system and method | |
CN113849627B (en) | Training task generation method and device and computer storage medium | |
Baloian et al. | Modeling educational software for people with disabilities: theory and practice | |
US7203649B1 (en) | Aphasia therapy system | |
Yan et al. | A method for personalized C programming learning contents recommendation to enhance traditional instruction | |
JP2020086075A (en) | Learning support system and program | |
Balayan et al. | On evaluating skillville: An educational mobile game on visual perception skills |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||