CN110211577A - Terminal device and its voice interactive method - Google Patents

Terminal device and its voice interactive method

Info

Publication number
CN110211577A
Authority
CN
China
Prior art keywords
terminal device
parsing result
local
semantic
semantic parsing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910655031.3A
Other languages
Chinese (zh)
Other versions
CN110211577B (en)
Inventor
陈斌德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Fotile Kitchen Ware Co Ltd
Original Assignee
Ningbo Fotile Kitchen Ware Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Fotile Kitchen Ware Co Ltd
Priority to CN201910655031.3A
Publication of CN110211577A
Application granted
Publication of CN110211577B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/1815: Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/1822: Parsing for meaning understanding
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/28: Constructional details of speech recognition systems
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/28: Constructional details of speech recognition systems
    • G10L15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/28: Constructional details of speech recognition systems
    • G10L15/34: Adaptation of a single recogniser for parallel processing, e.g. by use of multiple processors or cloud computing
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225: Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Machine Translation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a terminal device and a voice interaction method for the same. The method includes: receiving a voice input; uploading the voice input to a parsing server for online semantic parsing; receiving the online semantic parsing result fed back by the parsing server; performing semantic error correction on the online semantic parsing result locally on the terminal device, according to semantic error-correction rules pre-stored on the device, to obtain a corrected semantic parsing result; and providing feedback based on the corrected semantic parsing result. By applying the locally pre-stored semantic error-correction rules to the online semantic parsing result, the invention adapts the semantic parsing result to the terminal device, improves the accuracy of speech parsing, and helps the terminal device give effective feedback.

Description

Terminal device and its voice interactive method
Technical field
The invention relates to the field of voice interaction, and in particular to a terminal device and a voice interaction method for the same.
Background art
With the gradual spread of intelligence in household appliances, how to realize interaction between devices and people has become a hot topic. Voice interaction has become a common interaction mode; because it requires no manual operation by the user, it is widely used and helps improve the user experience.
When existing smart devices perform voice interaction, the device itself often has no speech parsing capability: it must upload the voice to a server for natural language parsing, and the server then returns the parsing result to the device. Under this scheme, the accuracy of speech parsing is determined entirely by the parsing capability of the server, and the device can only passively accept the parsing result. In general, the server's parsing service is provided by a third party or by an aggregator in each vertical field, which is likely to be unfamiliar with the device's functional characteristics and application scenarios. The parsing result is therefore prone to errors or unsuitable for the device, so the device cannot give the user effective feedback.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art that, during voice interaction, speech semantics can only be parsed on the server side and parsing accuracy depends entirely on the server, so that the parsing result may not suit the device. To this end, the invention provides a terminal device and a voice interaction method for the same.
The present invention solves the above technical problem through the following technical solutions:
A voice interaction method for a terminal device, comprising:
receiving a voice input;
uploading the voice input to a parsing server for online semantic parsing;
receiving the online semantic parsing result fed back by the parsing server;
performing semantic error correction on the online semantic parsing result locally on the terminal device, according to semantic error-correction rules pre-stored on the terminal device, to obtain a corrected semantic parsing result;
providing feedback based on the corrected semantic parsing result.
Preferably, the voice interaction method further includes:
after receiving the voice input and before uploading it to the parsing server for online semantic parsing, judging whether the voice input satisfies a local parsing condition of the terminal device:
if so, performing local semantic parsing of the voice input on the terminal device to obtain a local semantic parsing result; then performing semantic error correction on the local semantic parsing result locally on the terminal device according to the semantic error-correction rules to obtain the corrected semantic parsing result; and then providing feedback based on the corrected semantic parsing result;
if not, uploading the voice input to the parsing server for online semantic parsing.
Preferably, the semantic error-correction rules include common entity object information of the terminal device in different voice interaction scenarios;
when the online semantic parsing result or the local semantic parsing result belongs to a content-class intent, the step of performing semantic error correction on the online semantic parsing result or the local semantic parsing result locally on the terminal device includes:
verifying whether the original entity object information in the online semantic parsing result or the local semantic parsing result is contained in the common entity object information of the current voice interaction scenario; if not, querying the semantic error-correction rules for the common entity object information of the current voice interaction scenario that is closest to the original entity object information, and substituting it for the original entity object information to form the corrected semantic parsing result.
Preferably, the common entity object information is considered closest to the original entity object information when the relationship between them is any one of the following:
a synonym;
a near-synonym;
a homophone;
a fuzzy-match word whose similarity exceeds a preset similarity threshold.
Preferably, the local parsing condition includes the voice input hitting a local command word pre-stored on the terminal device, and a user model pre-stored on the terminal device includes different expression forms of the local command word;
the voice interaction method further includes:
when the voice input hits any expression form in the user model, determining that the voice input hits the corresponding local command word;
and updating the user model according to the online semantic parsing result, the update including at least one of adding, deleting, or modifying a local command word, and adding, deleting, or modifying a language expression of the local command word.
Preferably, the voice interaction method further includes:
when the online semantic parsing result or the local semantic parsing result belongs to a control-class intent, controlling the terminal device to execute the control command.
Preferably, when the online semantic parsing result or the local semantic parsing result belongs to a content-class intent, the step of providing feedback based on the corrected semantic parsing result specifically includes:
judging whether the corrected semantic parsing result hits scene state data pre-stored locally on the terminal device, the scene state data including historical dialogue streams distinguished by voice interaction scenario and content data associated with those dialogue streams:
if so, extracting the hit content data from the terminal device as feedback for the corrected semantic parsing result.
Preferably, the scene state data are cached in the form of a knowledge graph; keywords extracted from the dialogue streams serve as nodes of the knowledge graph, and associated nodes are connected to one another.
A terminal device, comprising:
a local storage module for pre-storing semantic error-correction rules;
a voice input module for receiving a voice input;
a voice transmission module for uploading the voice input to a parsing server for online semantic parsing, and for receiving the online semantic parsing result fed back by the parsing server;
a semantic correction module for performing semantic error correction on the online semantic parsing result locally on the terminal device according to the semantic error-correction rules, to obtain a corrected semantic parsing result;
a result feedback module for providing feedback based on the corrected semantic parsing result.
Preferably, the terminal device further includes:
a parsing judgment module for judging whether the voice input satisfies a local parsing condition of the terminal device, calling the voice transmission module if not, and, if so, calling:
a local parsing module for performing local semantic parsing of the voice input on the terminal device when the voice input satisfies the local parsing condition, to obtain a local semantic parsing result;
the semantic correction module being further configured to perform semantic error correction on the local semantic parsing result locally on the terminal device according to the semantic error-correction rules, to obtain the corrected semantic parsing result.
Preferably, the semantic error-correction rules include common entity object information of the terminal device in different voice interaction scenarios;
when the online semantic parsing result or the local semantic parsing result belongs to a content-class intent, the semantic correction module is specifically configured to verify whether the original entity object information in the online semantic parsing result or the local semantic parsing result is contained in the common entity object information of the current voice interaction scenario and, if not, to query the semantic error-correction rules for the common entity object information of the current voice interaction scenario that is closest to the original entity object information and substitute it for the original entity object information, forming the corrected semantic parsing result.
Preferably, the common entity object information is considered closest to the original entity object information when the relationship between them is any one of the following:
a synonym;
a near-synonym;
a homophone;
a fuzzy-match word whose similarity exceeds a preset similarity threshold.
Preferably, the local storage module is further configured to pre-store local command words and a user model, the user model including different expression forms of the local command words, and the local parsing condition including the voice input hitting a local command word;
the parsing judgment module is specifically configured to determine that the voice input hits the corresponding local command word when the voice input hits any expression form in the user model;
the terminal device further includes:
a model update module for updating the user model according to the online semantic parsing result, the update including at least one of adding, deleting, or modifying a local command word, and adding, deleting, or modifying a language expression of the local command word.
Preferably, the terminal device further includes:
a device control module for controlling the device to execute a control command when the online semantic parsing result or the local semantic parsing result belongs to a control-class intent.
Preferably, the local storage module is further configured to pre-store scene state data, the scene state data including historical dialogue streams distinguished by voice interaction scenario and content data associated with those dialogue streams;
when the online semantic parsing result or the local semantic parsing result belongs to a content-class intent, the result feedback module is specifically configured to judge whether the corrected semantic parsing result hits the scene state data and, if so, to extract the hit content data from the local storage module as feedback for the corrected semantic parsing result.
Preferably, the scene state data are cached in the form of a knowledge graph; keywords extracted from the dialogue streams serve as nodes of the knowledge graph, and associated nodes are connected to one another.
On the basis of common knowledge in the art, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the present invention.
The positive effects of the present invention are as follows: the invention performs semantic error correction on the online semantic parsing result using semantic error-correction rules pre-stored locally on the terminal device, so that the semantic parsing result suits the terminal device, which improves the accuracy of speech parsing and helps the terminal device give effective feedback. In addition, the invention can further add a local semantic parsing function to the terminal device, so that online semantic parsing and local semantic parsing are combined, which speeds up speech parsing and shortens the feedback time.
Brief description of the drawings
Fig. 1 is a flowchart of a voice interaction method for a terminal device according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a voice interaction method for a terminal device according to Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of the knowledge graph in a voice interaction method for a terminal device according to Embodiment 2 of the present invention;
Fig. 4 is a flowchart of a voice interaction method for a terminal device according to Embodiment 3 of the present invention;
Fig. 5 is a schematic block diagram of a terminal device according to Embodiment 4 of the present invention;
Fig. 6 is a schematic block diagram of a terminal device according to Embodiment 5 of the present invention;
Fig. 7 is a schematic block diagram of a terminal device according to Embodiment 6 of the present invention.
Detailed description of the embodiments
The present invention is further illustrated below by way of embodiments, but is not thereby limited to the scope of the described embodiments.
Embodiment 1
This embodiment provides a voice interaction method for a terminal device. The voice interaction method runs on a terminal device and enables human-computer interaction between a user and the terminal device. The terminal device can be any device, including but not limited to a smart home device or a smart household appliance, and in particular a smart kitchen appliance (such as a range hood or a cooktop). Besides the hardware and software needed to realize its original function, the terminal device may also have a speech reception module (such as a microphone array), a processor, a memory, and components for networking. The memory may include volatile memory, such as random access memory and/or cache memory, and may further include read-only memory.
As shown in Fig. 1, the voice interaction method of the terminal device may include the following steps:
Step 101: receive a voice input. This embodiment places no limit on the language of the voice input; it may be Chinese, English, Japanese, German, French, or another language.
Step 102: upload the voice input to a parsing server for online semantic parsing. The terminal device and the parsing server can be connected by a network to transmit data between them. The parsing server may be a cloud server or any other server with speech parsing capability; it can apply various known speech recognition and speech parsing techniques to the voice input to perform semantic parsing, generate an online semantic parsing result, and feed that result back to the terminal device.
Step 103: receive the online semantic parsing result fed back by the parsing server. The online semantic parsing result expresses an understanding of the user's intent.
Step 104: perform semantic error correction on the online semantic parsing result locally on the terminal device, according to semantic error-correction rules pre-stored on the terminal device, to obtain a corrected semantic parsing result. The semantic error-correction rules can be formulated according to the functional characteristics and application scenarios of the terminal device, help the user's intent to be understood correctly, and can provide sets of common parsing results and everyday words suitable for the terminal device, in multiple languages. The semantic error correction may include correcting the parts of the online semantic parsing result that do not conform to the semantic error-correction rules, yielding a semantic parsing result that matches the device's functional characteristics and application scenarios.
Step 105: provide feedback based on the corrected semantic parsing result. The feedback in this embodiment can take many forms, such as operating the terminal device according to the corrected semantic parsing result, changing the state of the terminal device, or replying to the voice input. In addition, directly outputting the corrected semantic parsing result, either played as speech or shown as text (when the terminal device has a display screen), also counts as a form of feedback in this embodiment.
In this embodiment, the semantic error-correction rules can be built into the memory of the terminal device, the cache of its processor, or its native system before step 104, or even before the terminal device leaves the factory. The semantic error-correction rules can also be updated regularly from the backend while the terminal device is networked, or updated on user demand.
The voice interaction method of this embodiment can apply the semantic error-correction rules to the online semantic parsing result to perform semantic error correction locally on the terminal device, so that the semantic parsing result suits the terminal device, which improves the accuracy of speech parsing and helps the terminal device give effective feedback. The voice interaction method of this embodiment also enables multi-turn human-computer dialogue: such dialogue is not limited to linguistic exchange between the terminal device and the user, and may also include the terminal device giving the user feedback of the forms above in response to the user's speech. After the terminal device gives feedback, the user can continue with another voice input; the voice interaction method of this embodiment then parses its semantics and gives feedback again, and so on repeatedly.
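The steps above can be sketched as a minimal pipeline. Everything here is an assumption for illustration, not the patent's implementation: the parsing server is stubbed as a callable, the pre-stored error-correction rules as a flat entity map, and all names are hypothetical.

```python
def correct(parse_result: dict, rules: dict) -> dict:
    """Step 104: locally replace any entity that the pre-stored rules remap."""
    fixed = dict(parse_result)
    fixed["entity"] = rules.get(fixed.get("entity"), fixed.get("entity"))
    return fixed

def voice_interaction(voice_input: str, parse_online, rules: dict) -> dict:
    """Steps 101-105: upload the input, receive the online result,
    correct it locally, and return it as the feedback payload."""
    online_result = parse_online(voice_input)   # steps 102-103
    return correct(online_result, rules)        # steps 104-105

# The server mishears a homophone; the local rule base repairs it.
rules = {"be conducive to": "carp"}
result = voice_interaction(
    "how do I braise it?",
    lambda v: {"intent": "recipe_query", "entity": "be conducive to"},
    rules,
)
```

After the pipeline runs, `result["entity"]` holds the scenario-appropriate entity while the rest of the online result is kept unchanged.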
Embodiment 2
This embodiment is a further improvement on Embodiment 1. In this embodiment, the user intent expressed by the online semantic parsing result falls broadly into two classes: content-class intents and control-class intents. A control-class intent indicates that the user wants to control the terminal device, for example to make it execute some operation (such as powering on or off, or other operations determined by the device type) or to switch it into some state (such as a sleep state, a running state, or other states determined by the device type). A content-class intent indicates that the user wants the terminal device to answer some kind of query or provide some specific content; taking a range hood as the terminal device, a content-class intent might be querying a recipe or holding a voice dialogue with the user.
Considering that the parsing of control-class intents is usually quite accurate, to speed up feedback, the voice interaction method of this embodiment can optionally perform semantic error correction on the online semantic parsing result only for content-class intents. As shown in Fig. 2, on the basis of Embodiment 1 the voice interaction method further includes:
Step 1031, inserted between step 103 and step 104: judge whether the online semantic parsing result belongs to a content-class intent or a control-class intent; if a content-class intent, execute step 104; if a control-class intent, execute step 106.
Step 106: control the terminal device to execute the control command. The specific control command is determined directly by the online semantic parsing result, for example a command to power the device on or off.
In this embodiment, to perform semantic error correction for content-class intents and improve the accuracy of online semantic parsing so that the semantic parsing result suits the terminal device well, the semantic error-correction rules may include the terminal device's common entity object information for different voice interaction scenarios. Each voice interaction scenario can correspond to a specific user intent, and the common entity object information may include the entity objects used most frequently within that intent.
Correspondingly, step 104 can specifically include:
Step 1041: verify whether the original entity object information of the online semantic parsing result is contained in the common entity object information of the current voice interaction scenario. If so, the online semantic parsing result fits the current voice interaction scenario and can be fed back directly as the corrected semantic parsing result; if not, execute step 1042.
Step 1042: query the semantic error-correction rules for the common entity object information of the current voice interaction scenario that is closest to the original entity object information, and substitute it for the original entity object information to form the corrected semantic parsing result. In this embodiment, the common entity object information can be considered closest when its relationship to the original entity object information is any one of the following:
a synonym;
a near-synonym;
a homophone;
a fuzzy-match word whose similarity exceeds a preset similarity threshold.
Taking a range hood as the terminal device, the semantic error correction of step 104 in this embodiment is illustrated below:
Since a range hood is associated with cooking, one voice interaction scenario in the semantic error-correction rules can be a recipe query scenario, whose common entity object information may include various vegetable names such as Chinese cabbage, cabbage, and potato; various fish names such as carp, crucian carp, and perch; and various seafood names such as crab, shrimp, and clam.
The user makes a voice input, and the online semantic parsing result obtained after online semantic parsing is: the user wants to query how to cook braised "be conducive to".
In the recipe query scenario, the phrase "be conducive to" does not exist in the common entity object information; the closest entry is "carp", whose Chinese name is a homophone of "be conducive to". After substitution, the corrected semantic parsing result is therefore: the user wants to query the recipe for braised carp.
The range hood can then broadcast the recipe for braised carp to the user by speech, or show it on a display screen.
Similarly, two names for cabbage (rendered identically in this translation) are synonyms: in the recipe query scenario, if the online semantic parsing result contains the non-standard name, the closest common entity object information is the standard name "cabbage", which can be substituted;
two names for pineapple (likewise identical in translation) are near-synonyms: in the recipe query scenario, if the online semantic parsing result contains the non-standard name, the closest common entity object information is "pineapple", which can be substituted;
in the recipe query scenario, if the online semantic parsing result contains "one spoon of salt", the fuzzy-match word with the highest similarity is "5 grams of salt", which can be substituted.
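The four "closest" relations of step 1042 can be approximated as below: synonym, near-synonym, and homophone pairs live in an explicit table, while the similarity-threshold case falls back to fuzzy string matching (here `difflib`, an assumption; the patent does not name a similarity measure). All entity names and the rule layout are illustrative.

```python
import difflib

# Common entity objects for the hypothetical recipe-query scenario (step 1041).
COMMON_ENTITIES = {"carp", "crucian carp", "cabbage", "pineapple",
                   "5 grams of salt"}
# Explicit synonym / near-synonym / homophone mappings (step 1042).
EXPLICIT_MAP = {"be conducive to": "carp"}  # homophone in the source language

def correct_entity(entity: str, threshold: float = 0.6) -> str:
    if entity in COMMON_ENTITIES:        # already fits the current scenario
        return entity
    if entity in EXPLICIT_MAP:           # synonym / near-synonym / homophone
        return EXPLICIT_MAP[entity]
    # fuzzy-match word whose similarity exceeds the preset threshold
    close = difflib.get_close_matches(entity, COMMON_ENTITIES,
                                      n=1, cutoff=threshold)
    return close[0] if close else entity

corrected = correct_entity("be conducive to")
```

A misspelled or mis-recognized entity that clears the similarity cutoff is snapped to the nearest common entity; anything below the cutoff is left untouched rather than guessed.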
In this embodiment, to speed up content feedback, scene state data can be pre-stored locally on the device. The scene state data may include historical dialogue streams distinguished by voice interaction scenario and content data associated with those dialogue streams. In this embodiment, the scene state data can be built into the memory of the terminal device, the cache of its processor, or its native system before step 105, or even before the terminal device leaves the factory. The scene state data can also be updated regularly from the backend while the terminal device is networked, or updated on user demand.
Step 105 can specifically include:
Step 1051: judge whether the corrected semantic parsing result hits the scene state data; if so, execute step 1052; if not, execute step 1053. A hit on the scene state data can mean hitting some word in some dialogue stream of some voice interaction scenario in the scene state data.
Step 1052: extract the hit content data from the terminal device as feedback for the corrected semantic parsing result. The extracted content data can be the content data associated with the hit dialogue stream, or the content data associated with the hit words.
Step 1053: give feedback via a network search.
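Steps 1051 to 1053 amount to a cache-or-search branch. A minimal sketch, under the assumption that the local scene state data can be probed as a keyword-to-content dict and with the network search stubbed as a callable (both hypothetical):

```python
def give_feedback(corrected_result: str, scene_cache: dict,
                  web_search) -> str:
    """Step 1051: check the locally pre-stored scene state data for a hit;
    step 1052 extracts the cached content, step 1053 falls back to the web."""
    for keyword, content in scene_cache.items():
        if keyword in corrected_result:    # hit on a dialogue-stream word
            return content                 # step 1052: local extraction
    return web_search(corrected_result)    # step 1053: network search

cache = {"braised carp": "braised carp recipe: ..."}
hit = give_feedback("how to cook braised carp", cache, lambda q: "web result")
miss = give_feedback("weather tomorrow", cache, lambda q: "web result")
```

The local path avoids a network round trip entirely, which is the feedback-speed gain the embodiment describes.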
In this embodiment, the scene state data can specifically be cached in the form of a knowledge graph: keywords extracted from the dialogue streams serve as nodes of the knowledge graph, and associated nodes are connected to one another.
Several sections of dialog history streams are shown below:
A. The user wants to eat braised crucian carp; the corresponding dialogue stream is as follows:
"How can crucian carp be cooked?" => Return: braised, fried, stewed
"Braised, then" => Return: the recipe for braised crucian carp (including the corresponding main ingredients, auxiliary ingredients, steps, and other information)
B. The user wants to eat crucian carp tofu soup; the corresponding dialogue stream is as follows:
"How is crucian carp tofu soup made?" => Return: the recipe for crucian carp tofu soup (including the corresponding main ingredients, auxiliary ingredients, steps, and other information)
C. The user wants a dish combining scallions and tofu; the corresponding dialogue stream is as follows:
"I have scallions and tofu; what dish can I make?" => Return: the recipe for tofu with scallions (including the corresponding main ingredients, auxiliary ingredients, steps, and other information)
Based on the above three dialogue streams, the generated knowledge graph can be as shown in Fig. 3: keywords such as crucian carp, braised, fried, stewed, and tofu are interconnected as nodes, and the recipes for braised crucian carp, crucian carp tofu soup, and tofu with scallions are cached as the associated content data.
Afterwards, suppose the user inputs the voice "What dishes can be made with tofu?" at the terminal device. The two recipes "tofu with scallions" and "crucian carp tofu soup" can then be matched locally on the terminal device, and their preparation steps can be extracted directly from the cache and fed back to the user.
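The knowledge-graph cache in this example can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the `SceneCache` class, its method names, and the dish names are all hypothetical stand-ins for the cached scene state data.

```python
from collections import defaultdict


class SceneCache:
    """Minimal knowledge-graph cache: keywords from past dialogue
    streams are nodes; recipes are the associated content data."""

    def __init__(self):
        self.edges = defaultdict(set)    # keyword -> associated keywords
        self.content = defaultdict(set)  # keyword -> cached recipes

    def add_dialogue(self, keywords, recipe):
        # Interconnect every pair of keywords and attach the recipe
        # as content data associated with each keyword node.
        for kw in keywords:
            self.content[kw].add(recipe)
            for other in keywords:
                if other != kw:
                    self.edges[kw].add(other)

    def lookup(self, keyword):
        # A "hit" on the scene state data returns cached recipes
        # directly, with no online search.
        return self.content.get(keyword, set())


cache = SceneCache()
cache.add_dialogue(["crucian carp", "braised"], "braised crucian carp")
cache.add_dialogue(["crucian carp", "tofu"], "crucian carp tofu soup")
cache.add_dialogue(["scallion", "tofu"], "tofu with scallions")

# "What dishes can be made with tofu?" hits the "tofu" node and
# matches both tofu recipes locally, as in the example above.
print(cache.lookup("tofu"))
```

A query that misses every node would fall through to step 1053 (web search) in the flow above.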
The voice interaction method of this embodiment further refines the specific flow of performing semantic error correction and content feedback at the terminal device: the pre-stored semantic error-correction rules realize local semantic correction suited to the terminal device's voice interaction scenarios, and extracting content feedback locally speeds up the feedback.
Embodiment 3
The voice interaction method of this embodiment can further realize local semantic parsing on the terminal device. To speed up voice interaction, voice inputs that the terminal device can parse are preferentially parsed locally; if a voice input exceeds the range the terminal device can parse, online semantic parsing is used instead. As shown in Fig. 4, the voice interaction method can specifically further include:
Step 1011, inserted between step 101 and step 102: judge whether the voice input meets the local parsing condition of the terminal device; if so, execute step 107, otherwise execute step 102. In this embodiment, the local parsing condition may include that the voice input hits a local command word pre-stored on the terminal device. Each local command word can represent one kind of user intention. A user model pre-stored on the terminal device contains the different expression forms of each local command word; to cover the phrasings users commonly use, the model can be extended incrementally. When certain phrasings are identified as being used at high frequency in certain regions, they can be added to the user model through local training followed by cloud publication. Taking the local command word "start the range hood" as an example, its expression forms can also include "open the range hood", "turn on the hood", "start the hood", and so on, possibly in different languages. When the voice input hits any expression form in the user model, the voice input is determined to hit the corresponding local command word. The user model can of course be updated periodically or on demand; specifically, it can be updated according to the online semantic parsing results, where updating includes at least one of adding, deleting, or modifying a local command word, and adding, deleting, or modifying the expression forms of a local command word.
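Matching a voice input against the user model's expression forms, and extending the model incrementally, can be sketched as below. The command words, the `USER_MODEL` table, and both helper functions are illustrative assumptions (the sketch assumes the recognized text of the voice input is already available).

```python
# Hypothetical user model: local command word -> expression forms.
USER_MODEL = {
    "start_range_hood": {"start the range hood", "open the range hood",
                         "turn on the hood", "start the hood"},
    "stop_range_hood": {"stop the range hood", "turn off the hood"},
}


def match_command(recognized_text):
    """Return the local command word whose expression form the input
    hits, or None if the input exceeds the locally parsable range
    (in which case online semantic parsing would be used)."""
    text = recognized_text.strip().lower()
    for command, forms in USER_MODEL.items():
        if text in forms:
            return command
    return None


def add_expression(command, form):
    # Incremental model update, e.g. after cloud publication of a
    # high-frequency regional phrasing, as described above.
    USER_MODEL.setdefault(command, set()).add(form.strip().lower())


print(match_command("Turn on the hood"))    # hits "start_range_hood"
print(match_command("How do I cook fish"))  # no hit -> online parsing
```

The same lookup serves as the local parsing condition of step 1011: a hit routes the input to step 107, a miss routes it to online parsing.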
Step 107: perform local semantic parsing of the voice input on the terminal device to obtain a local semantic parsing result. It should be understood that the local semantic parsing result expresses a user intention. Similarly to the online semantic parsing results in embodiment 2, the user intentions expressed by local semantic parsing results can also be broadly divided into two classes, namely content-class intentions and control-class intentions, which are not repeated here.
To guarantee the accuracy of local semantic parsing, local error correction identical to that applied to online semantic parsing results can likewise be applied: a local semantic parsing result belonging to a content-class intention is locally corrected according to the semantic error-correction rules to obtain a corrected semantic parsing result, and then step 105 is executed. For the semantic error-correction rules and the local error-correction process, refer to embodiment 2; they are not repeated here.
The voice interaction method of this embodiment adds a local semantic parsing function. Compared with online semantic parsing, local semantic parsing is faster, and it can also be used on its own when the device is not connected to a network.
Embodiment 4
This embodiment provides a terminal device. The terminal device can realize human-computer interaction with a user. The terminal device can be any kind of equipment, including but not limited to smart home devices and intelligent household appliances; in particular it can be an intelligent kitchen appliance (such as a range hood or a cooktop). In addition to the software and hardware structures that realize its original functions, the terminal device can also, as shown in Fig. 5, comprise: a local storage module 201, a voice input module 202, a voice transmission module 203, a semantic correction module 204, and a result feedback module 205.
The local storage module 201 is used to pre-store semantic error-correction rules. The semantic error-correction rules can be formulated according to the functional characteristics and application scenarios of the terminal device, and can help to correctly understand user intentions. They can provide sets of common parsing results and common words suitable for the terminal device, and can cover multiple languages. The local storage module 201 can be the terminal device's memory, a processor cache, or a storage space in the terminal device's native system. The semantic error-correction rules can also be updated periodically by the backend when the terminal device is networked, or updated on demand.
The voice input module 202 is used to receive voice input, and may include a microphone array. This embodiment does not limit the language of the voice input, which can be Chinese, English, Japanese, German, French, or other languages.
The voice transmission module 203 is used to upload the voice input to a parsing server for online semantic parsing and to receive the online semantic parsing result fed back by the parsing server. The terminal device and the parsing server can be connected through a network to transmit data between them. The parsing server can be a cloud server with speech parsing functions or any other server; it can apply various known speech recognition and speech parsing techniques to semantically parse the voice input, generate an online semantic parsing result, and feed the online semantic parsing result back to the voice transmission module 203. It should be understood that the online semantic parsing result expresses a user intention.
The semantic correction module 204 is used to perform semantic error correction on the online semantic parsing result locally on the terminal device according to the semantic error-correction rules, obtaining a corrected semantic parsing result. The semantic error correction may include correcting the parts of the online semantic parsing result that do not conform to the semantic error-correction rules, obtaining a semantic parsing result that fits the terminal device's functional characteristics and application scenarios.
The result feedback module 205 is used to feed back the corrected semantic parsing result. In this embodiment, the feedback can take many forms, such as operating the terminal device according to the corrected semantic parsing result, changing the state of the terminal device, or replying to the voice input. In addition, directly outputting the corrected semantic parsing result as played-back speech or as displayed text (where the terminal device has a display screen) can also serve as a form of feedback in this embodiment.
The terminal device of this embodiment can use the semantic error-correction rules to perform local semantic error correction on the online semantic parsing result, improving the accuracy of speech parsing so that the semantic parsing result suits the terminal device and helps it realize effective feedback. Multi-turn human-machine dialogue can be realized with the terminal device of this embodiment; such dialogue is not limited to verbal exchanges between the terminal device and the user, and can also include the terminal device's above-described feedback to the user's speech. After the terminal device gives feedback, the user can continue with further voice input, and the terminal device of this embodiment again parses the semantics and gives feedback, over and over.
Embodiment 5
This embodiment is a further improvement on embodiment 4. In this embodiment, the user intentions expressed by online semantic parsing results are broadly divided into two classes: content-class intentions and control-class intentions. A control-class intention indicates that the user wants to control the terminal device, for example making the terminal device execute a certain operation (such as powering on, powering off, or other operations determined by the terminal device type) or change to a certain state (such as a sleep state, a running state, or other states determined by the terminal device type). A content-class intention indicates that the user wants the terminal device to answer a query for some class of information or to feed back some specific content; taking a range hood as the terminal device, a content-class intention can be querying a recipe or holding a voice dialogue with the user.
Considering that the parsing of control-class intentions is usually relatively accurate, to speed up feedback, the terminal device of this embodiment can selectively apply semantic error correction to online semantic parsing results only for content-class intentions. As shown in Fig. 6, the terminal device can further include a device control module 206. When the online semantic parsing result belongs to a content-class intention, the semantic correction module 204 is called; when the online semantic parsing result belongs to a control-class intention, the device control module 206 is called. The device control module 206 is used to control the device to execute a control command when the online semantic parsing result belongs to a control-class intention.
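The intention-class dispatch just described can be sketched as follows. The intent labels, dictionary keys, and function stubs are hypothetical stand-ins for modules 204, 205, and 206; the sketch only illustrates that error correction is applied on the content-class path and skipped on the control-class path.

```python
def handle_parsing_result(result):
    """Dispatch an online semantic parsing result by intention class:
    control-class results go straight to device control (parsing is
    assumed accurate enough); content-class results are corrected
    locally before feedback."""
    if result["intent_class"] == "control":
        return execute_control(result["command"])
    corrected = semantic_correction(result)  # stub for module 204
    return feed_back(corrected)              # stub for module 205


def execute_control(command):
    # Stub for device control module 206.
    return f"executing: {command}"


def semantic_correction(result):
    # Stub: semantic error correction is applied only on this path.
    return result


def feed_back(result):
    # Stub for result feedback module 205.
    return f"feedback: {result['query']}"


print(handle_parsing_result({"intent_class": "control", "command": "power on"}))
print(handle_parsing_result({"intent_class": "content", "query": "tofu recipes"}))
```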
In this embodiment, to perform semantic error correction on content-class intentions and improve the accuracy of online semantic parsing so that the semantic parsing result is well suited to the terminal device, the semantic error-correction rules may include the entity object information commonly used by the terminal device in different voice interaction scenarios. Each voice interaction scenario can correspond to one specific user intention, and the common entity object information may include the entity objects used with higher frequency under that specific user intention.
When the online semantic parsing result belongs to a content-class intention, the semantic correction module 204 is specifically used to verify whether the original entity object information in the online semantic parsing result or the local semantic parsing result is contained in the common entity object information of the current voice interaction scenario. If not, the current voice interaction scenario is queried from the semantic error-correction rules, and the common entity object information closest to the original entity object information is substituted for the original entity object information, forming the corrected semantic parsing result. In this embodiment, the relationship between the original entity object information and the common entity object information can be considered closest when it includes any one of the following cases:
Synonym;
Near synonym;
Homonym;
A fuzzy-match word whose similarity exceeds a preset similarity threshold.
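One way to realize the "closest common entity object" substitution is sketched below, using a simple character-level ratio as the fuzzy match. The scenario table, the threshold value, and the `correct_entity` helper are all illustrative assumptions, not the patent's implementation; a real system would also consult synonym, near-synonym, and homonym lists.

```python
from difflib import SequenceMatcher

# Hypothetical common entity objects per voice interaction scenario.
COMMON_ENTITIES = {
    "recipe_query": ["crucian carp", "tofu", "scallion", "range hood"],
}

SIMILARITY_THRESHOLD = 0.6  # illustrative preset threshold


def correct_entity(scenario, original):
    """If the original entity is not a common entity of the current
    scenario, substitute the closest common entity whose similarity
    exceeds the threshold; otherwise keep the original unchanged."""
    candidates = COMMON_ENTITIES.get(scenario, [])
    if original in candidates:
        return original
    best, best_score = original, SIMILARITY_THRESHOLD
    for cand in candidates:
        score = SequenceMatcher(None, original, cand).ratio()
        if score > best_score:
            best, best_score = cand, score
    return best


# A slightly misrecognized entity is snapped to the common entity;
# an unrelated word stays as-is rather than being forced to match.
print(correct_entity("recipe_query", "tofus"))
print(correct_entity("recipe_query", "xyz"))
```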
In this embodiment, to speed up content feedback, the local storage module 201 is also used to pre-store scene state data, which includes historical dialogue streams distinguished by voice interaction scenario and the content data associated with those historical dialogue streams. The scene state data can likewise be updated periodically by the backend when the terminal device is networked, or updated on demand.
When the online semantic parsing result belongs to a content-class intention, the result feedback module 205 is specifically used to judge whether the corrected semantic parsing result hits the scene state data: if so, the hit content data is extracted from the local storage module 201 as the feedback to the corrected semantic parsing result; if not, feedback is given via a web search. A hit on the scene state data can be a hit on certain words of some dialogue stream of some voice interaction scenario within the scene state data. The extracted content data can be the content data associated with the hit dialogue stream or with the hit words.
In this embodiment, the scene state data is cached in the form of a knowledge graph: keywords extracted from the dialogue streams serve as nodes of the knowledge graph, and associated nodes are connected to each other.
The terminal device of this embodiment further refines the specific functions of performing semantic error correction and content feedback on the terminal device: the pre-stored semantic error-correction rules realize local semantic correction suited to the terminal device's voice interaction scenarios, and extracting content feedback locally speeds up the feedback.
Embodiment 6
The terminal device of this embodiment can further realize local semantic parsing. To speed up voice interaction, voice inputs that the terminal device can parse are preferentially parsed locally; if a voice input exceeds the range the terminal device can parse, online semantic parsing is used. As shown in Fig. 7, the terminal device can specifically further include: a parsing judgment module 207, a local parsing module 208, and a model update module 209.
The parsing judgment module 207 is used to judge whether the voice input meets the local parsing condition of the terminal device; if not, the voice transmission module 203 is called, and if so, the local parsing module 208 is called.
The local parsing module 208 is used to perform local semantic parsing of the voice input on the terminal device when the voice input meets the local parsing condition, obtaining a local semantic parsing result.
The semantic correction module 204 is also used to perform semantic error correction on the local semantic parsing result locally on the terminal device according to the semantic error-correction rules, obtaining the corrected semantic parsing result.
Specifically, the local storage module 201 can also be used to pre-store local command words and a user model, where the user model includes the different expression forms of the local command words, and the local parsing condition includes that the voice input hits a local command word. The parsing judgment module 207 is specifically used to determine that the voice input hits the corresponding local command word when the voice input hits any expression form in the user model.
The model update module 209 is used to update the user model according to the online semantic parsing results, where updating includes at least one of adding, deleting, or modifying a local command word, and adding, deleting, or modifying the expression forms of a local command word.
The terminal device of this embodiment adds a local semantic parsing function. Compared with online semantic parsing, local semantic parsing is faster, and it can also be used on its own when the device is not connected to a network.
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that these are merely illustrative, and that the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make many changes and modifications to these embodiments without departing from the principle and substance of the present invention, but all such changes and modifications fall within the protection scope of the present invention.

Claims (16)

1. A voice interaction method of a terminal device, characterized by comprising:
receiving a voice input;
uploading the voice input to a parsing server for online semantic parsing;
receiving an online semantic parsing result fed back by the parsing server;
performing, locally on the terminal device and according to semantic error-correction rules pre-stored on the terminal device, semantic error correction on the online semantic parsing result to obtain a corrected semantic parsing result;
feeding back the corrected semantic parsing result.
2. The voice interaction method of a terminal device as claimed in claim 1, characterized in that the voice interaction method further comprises:
after receiving the voice input and before uploading the voice input to the parsing server for online semantic parsing, judging whether the voice input meets a local parsing condition of the terminal device:
if so, performing local semantic parsing of the voice input on the terminal device to obtain a local semantic parsing result; then performing, locally on the terminal device and according to the semantic error-correction rules, semantic error correction on the local semantic parsing result to obtain the corrected semantic parsing result; and then feeding back the corrected semantic parsing result;
if not, uploading the voice input to the parsing server for online semantic parsing.
3. The voice interaction method of a terminal device as claimed in claim 1 or 2, characterized in that the semantic error-correction rules include entity object information commonly used by the terminal device in different voice interaction scenarios;
when the online semantic parsing result or the local semantic parsing result belongs to a content-class intention, the step of performing semantic error correction on the online semantic parsing result or the local semantic parsing result locally on the terminal device comprises:
verifying whether original entity object information in the online semantic parsing result or the local semantic parsing result is contained in the common entity object information of the current voice interaction scenario; if not, querying the current voice interaction scenario from the semantic error-correction rules, and substituting the common entity object information closest to the original entity object information for the original entity object information, forming the corrected semantic parsing result.
4. The voice interaction method of a terminal device as claimed in claim 3, characterized in that the relationship between the original entity object information and the common entity object information is considered closest when it includes any one of the following cases:
synonyms;
near-synonyms;
homonyms;
fuzzy-match words whose similarity exceeds a preset similarity threshold.
5. The voice interaction method of a terminal device as claimed in claim 2, characterized in that the local parsing condition includes that the voice input hits a local command word pre-stored on the terminal device, and a user model pre-stored on the terminal device includes the different expression forms of the local command word;
the voice interaction method further comprises:
when the voice input hits any expression form in the user model, determining that the voice input hits the corresponding local command word;
and updating the user model according to the online semantic parsing result, the updating including at least one of adding, deleting, or modifying a local command word, and adding, deleting, or modifying the expression forms of the local command word.
6. The voice interaction method of a terminal device as claimed in claim 1 or 2, characterized in that the voice interaction method further comprises:
when the online semantic parsing result or the local semantic parsing result belongs to a control-class intention, controlling the terminal device to execute a control command.
7. The voice interaction method of a terminal device as claimed in claim 1 or 2, characterized in that, when the online semantic parsing result or the local semantic parsing result belongs to a content-class intention, the step of feeding back the corrected semantic parsing result specifically comprises:
judging whether the corrected semantic parsing result hits scene state data locally pre-stored on the terminal device, the scene state data including historical dialogue streams distinguished by voice interaction scenario and content data associated with the historical dialogue streams:
if so, extracting the hit content data from the terminal device as the feedback to the corrected semantic parsing result.
8. The voice interaction method of a terminal device as claimed in claim 7, characterized in that the scene state data is cached in the form of a knowledge graph, keywords extracted from the dialogue streams serve as nodes of the knowledge graph, and associated nodes are connected to each other.
9. A terminal device, characterized by comprising:
a local storage module, for pre-storing semantic error-correction rules;
a voice input module, for receiving a voice input;
a voice transmission module, for uploading the voice input to a parsing server for online semantic parsing, and receiving an online semantic parsing result fed back by the parsing server;
a semantic correction module, for performing, locally on the terminal device and according to the semantic error-correction rules, semantic error correction on the online semantic parsing result to obtain a corrected semantic parsing result;
a result feedback module, for feeding back the corrected semantic parsing result.
10. The terminal device as claimed in claim 9, characterized in that the terminal device further comprises:
a parsing judgment module, for judging whether the voice input meets a local parsing condition of the terminal device, calling the voice transmission module if not, and if so calling:
a local parsing module, for performing local semantic parsing of the voice input on the terminal device when the voice input meets the local parsing condition, obtaining a local semantic parsing result;
the semantic correction module being also used to perform, locally on the terminal device and according to the semantic error-correction rules, semantic error correction on the local semantic parsing result to obtain the corrected semantic parsing result.
11. The terminal device as claimed in claim 9 or 10, characterized in that the semantic error-correction rules include entity object information commonly used by the terminal device in different voice interaction scenarios;
when the online semantic parsing result or the local semantic parsing result belongs to a content-class intention, the semantic correction module is specifically used to verify whether original entity object information in the online semantic parsing result or the local semantic parsing result is contained in the common entity object information of the current voice interaction scenario; if not, the current voice interaction scenario is queried from the semantic error-correction rules, and the common entity object information closest to the original entity object information is substituted for the original entity object information, forming the corrected semantic parsing result.
12. The terminal device as claimed in claim 11, characterized in that the relationship between the original entity object information and the common entity object information is considered closest when it includes any one of the following cases:
synonyms;
near-synonyms;
homonyms;
fuzzy-match words whose similarity exceeds a preset similarity threshold.
13. The terminal device as claimed in claim 10, characterized in that the local storage module is also used to pre-store local command words and a user model, the user model including the different expression forms of the local command words, and the local parsing condition including that the voice input hits a local command word;
the parsing judgment module is specifically used to determine that the voice input hits the corresponding local command word when the voice input hits any expression form in the user model;
the terminal device further comprises:
a model update module, for updating the user model according to the online semantic parsing result, the updating including at least one of adding, deleting, or modifying a local command word, and adding, deleting, or modifying the expression forms of the local command word.
14. The terminal device as claimed in claim 9 or 10, characterized in that the terminal device further comprises:
a device control module, for controlling the device to execute a control command when the online semantic parsing result or the local semantic parsing result belongs to a control-class intention.
15. The terminal device as claimed in claim 9 or 10, characterized in that the local storage module is also used to pre-store scene state data, the scene state data including historical dialogue streams distinguished by voice interaction scenario and content data associated with the historical dialogue streams;
when the online semantic parsing result or the local semantic parsing result belongs to a content-class intention, the result feedback module is specifically used to judge whether the corrected semantic parsing result hits the scene state data, and if so, to extract the hit content data from the local storage module as the feedback to the corrected semantic parsing result.
16. The terminal device as claimed in claim 15, characterized in that the scene state data is cached in the form of a knowledge graph, keywords extracted from the dialogue streams serve as nodes of the knowledge graph, and associated nodes are connected to each other.
CN201910655031.3A 2019-07-19 2019-07-19 Terminal equipment and voice interaction method thereof Active CN110211577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910655031.3A CN110211577B (en) 2019-07-19 2019-07-19 Terminal equipment and voice interaction method thereof


Publications (2)

Publication Number Publication Date
CN110211577A true CN110211577A (en) 2019-09-06
CN110211577B CN110211577B (en) 2021-06-04

Family

ID=67797917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910655031.3A Active CN110211577B (en) 2019-07-19 2019-07-19 Terminal equipment and voice interaction method thereof

Country Status (1)

Country Link
CN (1) CN110211577B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956958A (en) * 2019-12-04 2020-04-03 深圳追一科技有限公司 Searching method, searching device, terminal equipment and storage medium
CN111145757A (en) * 2020-02-18 2020-05-12 上海华镇电子科技有限公司 Vehicle-mounted voice intelligent Bluetooth integration device and method
CN111222322A (en) * 2019-12-31 2020-06-02 联想(北京)有限公司 Information processing method and electronic device
CN111554281A (en) * 2020-03-12 2020-08-18 厦门中云创电子科技有限公司 Vehicle-mounted man-machine interaction method for automatically identifying languages, vehicle-mounted terminal and storage medium
CN113190663A (en) * 2021-04-22 2021-07-30 宁波弘泰水利信息科技有限公司 Intelligent interaction method and device applied to water conservancy scene, storage medium and computer equipment
CN113763944A (en) * 2020-09-29 2021-12-07 浙江思考者科技有限公司 AI video cloud interactive system based on simulation person logic knowledge base
CN113768387A (en) * 2020-06-09 2021-12-10 珠海优特智厨科技有限公司 Batching method, batching device, storage medium and computing equipment

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1559104A1 (en) * 2002-11-01 2005-08-03 Synchro Arts Limited Methods and apparatus for use in sound replacement with automatic synchronization to images
CN103413549A (en) * 2013-07-31 2013-11-27 深圳创维-Rgb电子有限公司 Voice interaction method and system and interaction terminal
CN103594085A (en) * 2012-08-16 2014-02-19 百度在线网络技术(北京)有限公司 Method and system providing speech recognition result
CN103944983A (en) * 2014-04-14 2014-07-23 美的集团股份有限公司 Error correction method and system for voice control instruction
US8935166B2 (en) * 2011-08-19 2015-01-13 Dolbey & Company, Inc. Systems and methods for providing an electronic dictation interface
CN104978964A (en) * 2014-04-14 2015-10-14 美的集团股份有限公司 Voice control instruction error correction method and system
CN106057205A (en) * 2016-05-06 2016-10-26 北京云迹科技有限公司 Intelligent robot automatic voice interaction method
CN106534548A (en) * 2016-11-17 2017-03-22 科大讯飞股份有限公司 Voice error correction method and device
CN106992009A (en) * 2017-05-03 2017-07-28 深圳车盒子科技有限公司 Vehicle-mounted voice exchange method, system and computer-readable recording medium
CN107195303A (en) * 2017-06-16 2017-09-22 北京云知声信息技术有限公司 Method of speech processing and device
CN107688614A (en) * 2017-08-04 2018-02-13 平安科技(深圳)有限公司 It is intended to acquisition methods, electronic installation and computer-readable recording medium
CN109065054A (en) * 2018-08-31 2018-12-21 出门问问信息科技有限公司 Speech recognition error correction method, device, electronic equipment and readable storage medium storing program for executing
CN109410927A (en) * 2018-11-29 2019-03-01 北京蓦然认知科技有限公司 Offline order word parses the audio recognition method combined, device and system with cloud

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1559104A1 (en) * 2002-11-01 2005-08-03 Synchro Arts Limited Methods and apparatus for use in sound replacement with automatic synchronization to images
US8935166B2 (en) * 2011-08-19 2015-01-13 Dolbey & Company, Inc. Systems and methods for providing an electronic dictation interface
CN103594085A (en) * 2012-08-16 2014-02-19 百度在线网络技术(北京)有限公司 Method and system providing speech recognition result
CN103413549A (en) * 2013-07-31 2013-11-27 深圳创维-Rgb电子有限公司 Voice interaction method and system and interaction terminal
CN103944983A (en) * 2014-04-14 2014-07-23 美的集团股份有限公司 Error correction method and system for voice control instruction
CN104978964A (en) * 2014-04-14 2015-10-14 美的集团股份有限公司 Voice control instruction error correction method and system
CN106057205A (en) * 2016-05-06 2016-10-26 北京云迹科技有限公司 Intelligent robot automatic voice interaction method
CN106534548A (en) * 2016-11-17 2017-03-22 科大讯飞股份有限公司 Voice error correction method and device
CN106992009A (en) * 2017-05-03 2017-07-28 深圳车盒子科技有限公司 Vehicle-mounted voice interaction method, system and computer-readable storage medium
CN107195303A (en) * 2017-06-16 2017-09-22 北京云知声信息技术有限公司 Speech processing method and device
CN107688614A (en) * 2017-08-04 2018-02-13 平安科技(深圳)有限公司 Intention acquisition method, electronic device and computer-readable storage medium
CN109065054A (en) * 2018-08-31 2018-12-21 出门问问信息科技有限公司 Speech recognition error correction method, device, electronic equipment and readable storage medium
CN109410927A (en) * 2018-11-29 2019-03-01 北京蓦然认知科技有限公司 Speech recognition method, device and system combining offline command word parsing with cloud parsing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Baoxiang Li: "Speech recognition error correction by using combinational measures", IEEE International Conference on Network Infrastructure and Digital Content *
Wei Xiangfeng (韦向峰): "A Chinese speech recognition error correction method based on semantic analysis", Computer Science (《计算机科学》) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956958A (en) * 2019-12-04 2020-04-03 深圳追一科技有限公司 Searching method, searching device, terminal equipment and storage medium
CN111222322A (en) * 2019-12-31 2020-06-02 联想(北京)有限公司 Information processing method and electronic device
CN111145757A (en) * 2020-02-18 2020-05-12 上海华镇电子科技有限公司 Vehicle-mounted voice intelligent Bluetooth integration device and method
CN111554281A (en) * 2020-03-12 2020-08-18 厦门中云创电子科技有限公司 Vehicle-mounted man-machine interaction method for automatically identifying languages, vehicle-mounted terminal and storage medium
CN111554281B (en) * 2020-03-12 2023-11-07 厦门中云创电子科技有限公司 Vehicle-mounted man-machine interaction method for automatically identifying languages, vehicle-mounted terminal and storage medium
CN113768387A (en) * 2020-06-09 2021-12-10 珠海优特智厨科技有限公司 Batching method, batching device, storage medium and computing equipment
CN113763944A (en) * 2020-09-29 2021-12-07 浙江思考者科技有限公司 AI video cloud interaction system based on a simulated-person logic knowledge base
CN113763944B (en) * 2020-09-29 2024-06-04 浙江思考者科技有限公司 AI video cloud interaction system based on a simulated-person logic knowledge base
CN113190663A (en) * 2021-04-22 2021-07-30 宁波弘泰水利信息科技有限公司 Intelligent interaction method and device applied to water conservancy scene, storage medium and computer equipment

Also Published As

Publication number Publication date
CN110211577B (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN110211577A (en) Terminal device and its voice interactive method
CN104462262B (en) Method, device and browser client for realizing voice search
JP2020102234A (en) Method for adaptive conversation state management with filtering operator applied dynamically as part of conversational interface
EP3520335B1 (en) Control system using scoped search and conversational interface
CN106021463B (en) Method, intelligent service system and intelligent terminal for providing intelligent services based on artificial intelligence
CN104050219B (en) Method and apparatus for managing conversation message
US11682393B2 (en) Method and system for context association and personalization using a wake-word in virtual personal assistants
CN106874441A (en) Intelligent question answering method and apparatus
CN107146610A (en) Method and device for determining user intention
CN103577548B (en) Method and device for matching characters with close pronunciation
CN106601250A (en) Voice control method, device and equipment
CN110045638B (en) Cooking information recommendation method and device and storage medium
CN107507616A (en) Setting method and device for gateway scenes
TW202025139A (en) Voice interaction method, device and system
CN107995249A (en) Voice broadcast method and apparatus
US20220022289A1 (en) Method and electronic device for providing audio recipe and cooking configuration
CN107870581A (en) Cooking control method and cooking equipment
CN109660858A (en) Transmission method, device, terminal and server for live-streaming room interaction data
CN110428829A (en) Method and device for cloud voice control of a cooking robot
CN110021299A (en) Voice interactive method, device, system and storage medium
CN105808660A (en) Robot menu system based on speech recognition
CN104166455B (en) Method and apparatus for determining an input model corresponding to a target user
CN105900038B (en) Electromagnetic wave generation method and smart bracelet
CN205072656U (en) Intelligent voice steam oven
CN112800195B (en) Configuration method and system of conversation robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant