CN106598241A - Interactive data processing method and device for intelligent robot - Google Patents
- Publication number
- CN106598241A CN106598241A CN201611109807.4A CN201611109807A CN106598241A CN 106598241 A CN106598241 A CN 106598241A CN 201611109807 A CN201611109807 A CN 201611109807A CN 106598241 A CN106598241 A CN 106598241A
- Authority
- CN
- China
- Prior art keywords
- story
- interactive information
- modal
- stories
- modes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Information Transfer Between Computers (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to an interactive data processing method and device for an intelligent robot. The method comprises: an interactive information acquiring step of acquiring multi-modal interactive information related to a user; a story mode opening step of opening a corresponding IP story mode according to the multi-modal interactive information; and a feedback output step of generating and outputting corresponding multi-modal feedback information in the IP story mode according to the story parameters of an IP story. The method introduces IP stories into everyday human-computer interaction; its implementation is simple, convenient and fast, and in the early stage of the industry's development it can greatly reduce the time and cost invested by all parties.
Description
Technical field
The present invention relates to the field of robotics, and in particular to an interactive data processing method and device for an intelligent robot.
Background technology
With the continuous development of science and technology and the introduction of information technology, computer technology and artificial intelligence technology, robotics research has gradually moved beyond the industrial sector and expanded into fields such as medical care, health care, the home, entertainment and the service industry. Accordingly, people's expectations of robots have risen from simple repeated mechanical actions to intelligent robots capable of human-like question answering, autonomy and interaction with other robots, and human-computer interaction has thus become the key factor determining the development of intelligent robots.
Content of the invention
To solve the above problems, the invention provides an interactive data processing method for an intelligent robot, which comprises:
an interactive information acquiring step of acquiring multi-modal interactive information related to a user;
a story mode opening step of opening a corresponding IP story mode according to the multi-modal interactive information; and
a feedback output step of generating and outputting, in the IP story mode, corresponding multi-modal feedback information according to the story parameters of an IP story.
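The three claimed steps can be pictured as a small pipeline. The sketch below is purely illustrative: all function names, the story catalog, and the story-parameter dictionary are invented here, since the patent does not prescribe any concrete implementation.

```python
# Illustrative sketch of the three claimed steps; every name is hypothetical.

def acquire_interaction(raw_inputs):
    # Interactive information acquiring step: keep whichever modalities arrived.
    return {k: v for k, v in raw_inputs.items() if v is not None}

def open_story_mode(interaction, story_catalog):
    # Story mode opening step: open the IP story named in the user's utterance.
    text = interaction.get("voice", "").lower()
    for story in story_catalog:
        if story.lower() in text:
            return story
    return None

def feedback_output(story, story_params):
    # Feedback output step: emit feedback drawn from the story parameters.
    return story_params.get(story, [])

catalog = ["Little Bear Lele"]
params = {"Little Bear Lele": ["Hello, dear little friend, I am Little Bear Lele."]}

interaction = acquire_interaction({"voice": "Tell me a Little Bear Lele story",
                                   "image": None})
mode = open_story_mode(interaction, catalog)
feedback = feedback_output(mode, params)
```

A real robot would replace the keyword lookup with a speech-understanding component; the point here is only the ordering of the three steps.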
According to one embodiment of the present invention, in the story mode opening step,
the multi-modal interactive information is parsed to obtain a user opening intention, and the corresponding IP story mode is opened in response to the user opening intention; or
the multi-modal interactive information is parsed to obtain interaction scenario data, and the corresponding IP story mode is proactively opened according to the interaction scenario data, the interaction scenario data including current user mood data.
According to one embodiment of the present invention, in the story mode opening step, the multi-modal interactive information is parsed to obtain a trigger command for opening an APP corresponding to an IP story, and the trigger command is responded to so as to open the corresponding IP story mode.
According to one embodiment of the present invention, in the feedback output step, the multi-modal feedback information is generated from the story parameters of the IP story on the basis of a dialog model, wherein in the dialog model:
corresponding multi-modal feedback information is continuously generated and output according to the story parameters of the IP story; or
first feedback information is generated and output according to the story parameters of the IP story, and second feedback information is generated and output after the interactive information input by the user has been acquired; or
after first feedback information is generated and output according to the story parameters of the IP story, the acquired interactive information input by the user is parsed, and the second feedback information is generated in combination with the parsed user intention.
According to one embodiment of the present invention, in the feedback output step,
the IP story mode is closed after the IP story ends; or
input interactive information directed at the multi-modal feedback information is acquired, and the IP story mode is closed or kept open according to the input interactive information.
The present invention also provides an interactive data processing device for an intelligent robot, which comprises:
an interactive information acquisition module configured to acquire multi-modal interactive information related to a user;
a story mode opening module configured to open a corresponding IP story mode according to the multi-modal interactive information; and
a feedback output module configured to generate and output, in the IP story mode, corresponding multi-modal feedback information according to the story parameters of an IP story.
According to one embodiment of the present invention, the story mode opening module is configured to:
parse the multi-modal interactive information to obtain a user opening intention and open the corresponding IP story mode in response to the user opening intention; or
parse the multi-modal interactive information to obtain interaction scenario data and proactively open the corresponding IP story mode according to the interaction scenario data, the interaction scenario data including current user mood data.
According to one embodiment of the present invention, the story mode opening module is configured to parse the multi-modal interactive information to obtain a trigger command for opening an APP corresponding to an IP story, and to respond to the trigger command so as to open the corresponding IP story mode.
According to one embodiment of the present invention, the feedback output module is configured to generate the multi-modal feedback information from the story parameters of the IP story on the basis of a dialog model, wherein in the dialog model:
the feedback output module is configured to continuously generate and output corresponding multi-modal feedback information according to the story parameters of the IP story; or
the feedback output module is configured to generate and output first feedback information according to the story parameters of the IP story, and to generate and output second feedback information after the interactive information input by the user has been acquired; or
the feedback output module is configured to, after generating and outputting first feedback information according to the story parameters of the IP story, parse the acquired interactive information input by the user and generate the second feedback information in combination with the parsed user intention.
According to one embodiment of the present invention, the feedback output module is configured to:
close the IP story mode after the IP story ends; or
acquire input interactive information directed at the multi-modal feedback information, and close or keep open the IP story mode according to the input interactive information.
The interactive data processing method for an intelligent robot provided by the present invention can introduce IP stories into everyday human-computer interaction. Its implementation is simple, convenient and fast, and in the early stage of the industry's development it can greatly reduce the time and cost invested by all parties.
At the same time, the method lets the user take part in the performance of the intelligent robot's story plot while interacting with the robot, which helps raise the user's enthusiasm for human-computer interaction and thereby also improves the user experience of, and user stickiness to, the intelligent robot.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the invention can be realized and obtained through the structure particularly pointed out in the description, the claims and the accompanying drawings.
Description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, a brief introduction is given below to the drawings needed in the description of the embodiments or the prior art:
Fig. 1 is a schematic flowchart of an interactive data processing method for an intelligent robot according to one embodiment of the present invention;
Fig. 2 is a schematic flowchart of an interactive data processing method for an intelligent robot according to another embodiment of the present invention;
Fig. 3 is a schematic flowchart of an interactive data processing method for an intelligent robot according to yet another embodiment of the present invention;
Fig. 4 is a schematic flowchart of an interactive data processing method for an intelligent robot according to a further embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an interactive data processing device for an intelligent robot according to one embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that the reader may fully understand and implement the process by which the invention applies technical means to solve technical problems and achieve technical effects. It should be noted that, as long as no conflict arises, the embodiments of the present invention and the features within them may be combined with one another, and the resulting technical solutions all fall within the scope of protection of the present invention.
Meanwhile, many specific details are set forth in the following description for illustrative purposes, to provide a thorough understanding of the embodiments of the present invention. It will be apparent to those skilled in the art, however, that the present invention may be practiced without some of these specific details or in ways other than those described here.
In addition, the steps shown in the flowcharts of the drawings may be performed in a computer system executing, for example, a set of computer-executable instructions, and although a logical order is shown in the flowcharts, the steps shown or described may in some cases be performed in an order different from the one given here.
Existing interactive data processing methods for intelligent robots merely process the interactive data input by the user to generate corresponding feedback information and output it to the user, thereby realizing a conversational interaction between the intelligent robot and the user. However, combining IP story content with robot products is the trend of the times, and the typical problem it faces is how to quickly introduce existing IP stories into the human-computer conversation experience of a robot product.
In view of the above problems in the prior art, the present invention provides a new interactive data processing method for an intelligent robot, which can quickly introduce existing IP stories into the human-computer conversation experience of a robot product.
It should be pointed out that in different embodiments of the invention, the IP stories mentioned herein may refer to specific text content, to script content, to situational context, or to other reasonable content; the present invention is not limited in this respect.
Fig. 1 shows a schematic flowchart of the interactive data processing method for an intelligent robot provided by the present invention.
As shown in Fig. 1, the interactive data processing method provided by the present invention first acquires multi-modal interactive information related to the user in step S101. It should be pointed out that in different embodiments of the invention the multi-modal interactive information acquired in step S101 may be interactive information input by the user, interactive information proactively acquired by the intelligent robot through relevant data acquisition devices, a trigger command input by a third party (such as a parent) through a relevant APP client, or information obtained in other reasonable ways; the present invention is not limited in this respect.
After the multi-modal interactive information related to the user has been acquired, the method opens the corresponding IP story mode in step S102 according to that information. After the corresponding IP story mode has been opened, the method generates, in step S103 and in that IP story mode, corresponding multi-modal feedback information according to the story parameters of the IP story and outputs the generated feedback information to the user.
In the present embodiment, the method may close the IP story mode after the IP story ends. Of course, according to actual needs, the method may instead acquire, after outputting the multi-modal feedback information, the interactive information the user inputs in response to that feedback information, and close or keep open the IP story mode according to that input.
In order to explain more clearly the principle, implementation flow and advantages of the interactive data processing method for an intelligent robot provided by the present invention, the method is further described below in connection with different embodiments.
Embodiment one:
Fig. 2 shows a schematic flowchart of the interactive data processing method for an intelligent robot provided by the present embodiment.
As shown in Fig. 2, the interactive data processing method provided by the present embodiment first acquires multi-modal interactive information related to the user in step S201. It should be pointed out that in the present embodiment the implementation principle and process of step S201 are identical to those described for step S101 in Fig. 1 above, so the related content of step S201 is not repeated here.
In the present embodiment, the method parses, in step S202, the multi-modal interactive information acquired in step S201, so as to obtain the user opening intention. For example, if the multi-modal interactive information acquired in step S201 is the voice message "Tell me a Little Bear Lele story" input by the user, then by parsing this voice message in step S202 the method can determine that the user currently wishes to hear a Little Bear Lele story, and can therefore generate the current user's opening intention "listen to a Little Bear Lele story".
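Step S202 can be sketched as a small intent parser. Plain keyword matching below stands in for whatever speech-understanding component a real robot would use; the cue words, the intent labels and the story list are all invented for illustration.

```python
# Hypothetical parser for the user opening intention of step S202.
import re

def parse_open_intent(voice_text, known_stories):
    """Return the opening intention carried by an utterance, if any."""
    text = voice_text.lower()
    # Crude cue detection: does the utterance ask for a story at all?
    wants_story = any(cue in text for cue in ("tell", "story", "listen"))
    for story in known_stories:
        if wants_story and story.lower() in text:
            return {"intent": "listen_to_story", "story": story}
    return {"intent": "none", "story": None}

intent = parse_open_intent("Tell me a Little Bear Lele story", ["Little Bear Lele"])
```

In practice the parsed intention would then be handed to the story mode opening step (S203) rather than returned as a dictionary.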
After the user opening intention has been obtained, as shown in Fig. 2, the method opens the corresponding IP story mode in step S203 in response to the user opening intention obtained in step S202. Subsequently, in step S204 and in the IP story mode opened in step S203, the method generates corresponding multi-modal feedback information according to the story parameters of the corresponding IP story and outputs it to the user.
For example, if the user opening intention obtained by the method in step S202 is "listen to a Little Bear Lele story", then in step S203 the method opens the Little Bear Lele IP story mode in response to that intention. In this IP story mode, the method generates corresponding voice/text information according to the Little Bear Lele story parameters (such as the story script) and outputs it to the user.
In different embodiments of the invention, according to actual needs, the story parameters in each IP story mode may be obtained by reading a relevant memory, or obtained by query or learning; the present invention is not limited in this respect.
It also needs to be pointed out that in different embodiments of the invention, according to actual needs, the method may in step S204 generate, on the basis of different dialog models, corresponding multi-modal feedback information according to the determined story parameters of the IP story and output it to the user.
In one embodiment of the invention, the method may, in step S204, continuously generate corresponding multi-modal feedback information according to the story parameters of the IP story and output it to the user. Specifically, after outputting a piece of feedback information A1, the method does not wait for the user's input in response to A1 but proceeds directly to the next dialog turn and outputs feedback information A2.
For example, in step S204 the method first outputs, according to the story parameters, a voice message such as "Hello, dear little friend, I am Little Bear Lele". After this voice message has been output, the method then outputs, according to the subsequent story parameters, a voice message such as "It is now X o'clock X minutes X seconds on day X of month X. Nice to meet you, and I will remember this special moment. Let us hug!" and controls the intelligent robot to perform a hugging action. Throughout this process, the method does not attempt to acquire any feedback the user inputs in response to the voice messages output by the intelligent robot.
In another embodiment of the invention, the method may, in step S204, first generate and output first feedback information according to the story parameters of the IP story, and then generate and output second feedback information after the interactive information input by the user has been acquired. In this process, the method does not parse the acquired interactive information input by the user; that is, the user's input has no influence on the subsequent execution of the method.
For example, in step S204 the method generates and outputs, according to the corresponding story parameters, a voice message such as "My companions and I live in the Tuling Music Garden. Do you know my companions?" After this voice message has been output, the method waits for the user to respond and acquires the interactive information the user inputs in response. After acquiring the user's input, the method does not parse that information but instead generates and outputs, according to the subsequent story parameters, a voice message such as "Aha, they are little dinosaur Tutu, rabbit CC, monkey Lingling and squirrel Huanhuan. Each of them is a star in their own right, and together we have been through many especially joyful things."
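The second dialog model paces the script on the user's replies but discards them unparsed. A sketch under that assumption, with an invented script and a stubbed input source:

```python
# Sketch of the paced dialog model: the robot pauses for a reply after each
# line but ignores its content, so the reply never alters the script.

def paced_dialog(story_script, get_user_input):
    outputs = []
    for i, line in enumerate(story_script):
        outputs.append(line)
        if i < len(story_script) - 1:
            get_user_input()  # wait for the user's reply, then discard it
    return outputs

script = [
    "My companions and I live in the music garden. Do you know them?",
    "Aha, they are little dinosaur Tutu, rabbit CC and squirrel Huanhuan.",
]
replies = iter(["No idea!"])
outputs = paced_dialog(script, lambda: next(replies))
```

Whatever the child answers, the second line is emitted unchanged, which is exactly the behavior this dialog model specifies.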
In yet another embodiment of the invention, the method may, in step S204, generate and output first feedback information according to the story parameters of the IP story, then parse the interactive information input by the user once it has been acquired, and generate corresponding second feedback information according to the subsequent story parameters in combination with the user intention obtained by parsing, and output it to the user.
For example, in step S204 the method generates and outputs, according to the corresponding story parameters, a voice message such as "Once, I saw a lovely bat-eared fox in the desert. Child, have you ever seen a bat-eared fox?" If the acquired interactive information input by the user is "I have seen one" or "I know it", the method outputs a voice message such as "Amazing! You know so much." If the acquired interactive information input by the user is "I have not seen one" or "I do not know it", the method outputs a voice message such as "That is all right. There are things I do not know either; for now, let me tell you about it first." And if the acquired interactive information input by the user is "I am not sure", the method outputs a voice message such as "All right. When you are curious about the bat-eared fox, ask me and I will tell you."
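The third dialog model branches the second feedback on the parsed user intention. The sketch below uses word-boundary keyword matching as a stand-in for real intent parsing; the keyword lists and reply strings are invented, and negative patterns are checked first so "have not seen" is not mistaken for "seen".

```python
# Sketch of the branching dialog model: the user's reply is parsed and the
# second feedback is chosen per recognized intent.
import re

def branching_reply(user_text, branches, default):
    text = user_text.lower()
    for keywords, reply in branches:
        # Word boundaries keep "no" from matching inside "know".
        if any(re.search(rf"\b{re.escape(k)}\b", text) for k in keywords):
            return reply
    return default

branches = [
    (("not", "no", "don't"), "That is all right. Let me tell you first."),
    (("seen", "know"), "Amazing! You know so much."),
]
default = "All right. Ask me again when you are curious."

reply_yes = branching_reply("I have seen one", branches, default)
reply_no = branching_reply("I have not seen one", branches, default)
reply_unsure = branching_reply("I am unsure", branches, default)
```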
In the present embodiment, the method may close the IP story mode after the IP story ends. Of course, according to actual needs, the method may instead acquire, after outputting the multi-modal feedback information, the interactive information the user inputs in response to that feedback information, and close or keep open the IP story mode accordingly. For example, if the user inputs a voice message such as "I do not want to listen", the method may close the IP story mode at that point.
Embodiment two:
Fig. 3 shows a schematic flowchart of the data processing method for an intelligent robot provided by the present embodiment.
As shown in Fig. 3, the data processing method provided by the present embodiment first acquires multi-modal interactive information related to the user in step S301. It should be pointed out that in the present embodiment the implementation principle and process of step S301 are identical to those described for step S101 in Fig. 1 above, so the related content of step S301 is not repeated here.
In the present embodiment, the method parses, in step S302, the multi-modal interactive information acquired in step S301, so as to obtain the corresponding interaction scenario data. After the interaction scenario data have been obtained, the method opens, in step S303, the corresponding IP story mode according to the interaction scenario data obtained in step S302, and in step S304, in the above IP story mode, generates and outputs corresponding multi-modal feedback information according to the story parameters of the IP story.
It should be pointed out that in the present embodiment the implementation principle and process of step S304 are similar to the content described for step S204 in embodiment one above, so the related content of step S304 is not repeated here.
For example, suppose the multi-modal interactive information acquired by the method in step S301 includes image information. When the method determines in step S302, by parsing the image information, that the current mood of the user (such as a child) is rather low or that the user is listless, it proactively triggers the corresponding IP story mode according to that scenario data, and thus generates and outputs in step S304 a voice message such as "Let me tell you a Little Bear Lele story."
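The proactive trigger of this embodiment can be sketched as a threshold on an estimated mood score. The score scale, the threshold value, and the prompt wording below are all assumptions made for illustration; a real system would derive the mood from image analysis.

```python
# Hypothetical proactive trigger for embodiment two: a mood score estimated
# from image analysis decides whether the robot offers a story unprompted.

def proactive_story_prompt(scene_data, story="Little Bear Lele", threshold=0.4):
    # Open story mode only when the detected mood falls below the threshold.
    if scene_data.get("mood_score", 1.0) < threshold:
        return f"Let me tell you a {story} story."
    return None

prompt_low = proactive_story_prompt({"mood_score": 0.2})   # listless child
prompt_ok = proactive_story_prompt({"mood_score": 0.9})    # cheerful child
```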
Embodiment three:
Fig. 4 shows a schematic flowchart of the data processing method for an intelligent robot provided by the present embodiment.
As shown in Fig. 4, the data processing method provided by the present embodiment first acquires multi-modal interactive information related to the user in step S401. It should be pointed out that in the present embodiment the implementation principle and process of step S401 are identical to those described for step S101 in Fig. 1 above, so the related content of step S401 is not repeated here.
In step S402, the method parses the multi-modal interactive information acquired in step S401 so as to obtain a trigger command for opening the APP corresponding to an IP story. In step S403 the method responds to the trigger command so as to open the corresponding IP story mode. Subsequently, in step S404 and in the above IP story mode, the method generates and outputs corresponding multi-modal feedback information according to the story parameters of the IP story.
It should be pointed out that in the present embodiment the implementation principle and process of step S404 are similar to the content described for step S204 in embodiment one above, so the related content of step S404 is not repeated here.
For example, if a parent wishes the intelligent robot to tell a child a Little Bear Lele story, the parent can send corresponding command information to the intelligent robot through an appropriate network device (such as a smartphone). In step S401 the method acquires the command information sent by the parent through the network device, and in step S402 parses that command information to obtain the trigger command for opening the APP corresponding to the Little Bear Lele IP story. After the trigger command has been obtained by parsing, the method opens the corresponding APP and thereby begins to tell the Little Bear Lele story.
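A plausible shape for the parent's command is a small structured message sent from the companion APP. The JSON schema below (`action`, `story` fields) is invented for illustration; the patent only requires that the parsed command trigger the corresponding story mode.

```python
# Hypothetical handler for embodiment three: the parent's APP sends a JSON
# command over the network, which is parsed into a story mode trigger.
import json

def handle_app_command(message):
    cmd = json.loads(message)
    if cmd.get("action") == "open_story":
        return {"mode": "ip_story", "story": cmd.get("story")}
    # Unrecognized commands leave the robot in its normal interaction mode.
    return {"mode": "idle", "story": None}

result = handle_app_command('{"action": "open_story", "story": "Little Bear Lele"}')
```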
As can be seen from the foregoing description, the interactive data processing method for an intelligent robot provided by the present invention can introduce IP stories into everyday human-computer interaction. Its implementation is simple, convenient and fast, and in the early stage of the industry's development it can greatly reduce the time and cost invested by all parties.
At the same time, the method lets the user take part in the performance of the intelligent robot's story plot while interacting with the robot, which helps raise the user's enthusiasm for human-computer interaction and thereby also improves the user experience of, and user stickiness to, the intelligent robot.
The present invention also provides a data processing device for an intelligent robot; Fig. 5 shows a schematic structural diagram of the data processing device in the present embodiment.
As shown in Fig. 5, the data processing device provided by the present embodiment preferably comprises: an interactive information acquisition module 501, a story mode opening module 502 and a feedback output module 503. The interactive information acquisition module 501 is configured to acquire multi-modal interactive information related to the user. It should be noted that in different embodiments of the invention the interactive information acquired by the interactive information acquisition module 501 may be voice information, image information, relevant command information sent by another network device, or other appropriate information; the present invention is not limited in this respect.
The interactive information acquisition module 501 transmits the acquired multi-modal interactive information to the story mode opening module 502. After receiving the multi-modal interactive information, the story mode opening module 502 opens the corresponding IP story mode according to that information, so that the feedback output module 503 can, in the above IP story mode, generate and output corresponding multi-modal feedback information according to the story parameters of the corresponding IP story.
It should be pointed out that in different embodiments of the invention the specific principles and processes by which the story mode opening module 502 and the feedback output module 503 realize their respective functions may be similar to the content explained for steps S202 to S204 in embodiment one above, to the content explained for steps S302 to S304 in embodiment two above, or to the content explained for steps S402 to S404 in embodiment three above, so the related content of the story mode opening module 502 and the feedback output module 503 is not repeated here.
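The module wiring of Fig. 5 can be sketched as three pluggable callables composed in sequence. The class name and interfaces below are hypothetical; they simply mirror the data flow from module 501 through 502 to 503.

```python
# Minimal sketch of the device of Fig. 5: three modules wired in sequence.

class InteractionDevice:
    def __init__(self, acquire_module, opener_module, feedback_module):
        self.acquire = acquire_module    # interactive information acquisition (501)
        self.opener = opener_module      # story mode opening (502)
        self.feedback = feedback_module  # feedback output (503)

    def run_once(self, raw_input):
        info = self.acquire(raw_input)
        mode = self.opener(info)
        # No story mode opened means no story feedback is produced.
        return self.feedback(mode) if mode else None

device = InteractionDevice(
    acquire_module=lambda raw: raw.strip().lower(),
    opener_module=lambda info: "little bear lele" if "lele" in info else None,
    feedback_module=lambda mode: f"Opening story mode: {mode}",
)
out = device.run_once("  Tell me a Little Bear Lele story ")
```

Passing the modules in as callables keeps each of the three responsibilities independently replaceable, which matches the modular structure the embodiment describes.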
It should be understood that the disclosed embodiments of the present invention are not limited to the specific structures or process steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used herein is for the purpose of describing specific embodiments only and is not intended to be limiting.
References in the description to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic described in connection with that embodiment is included in at least one embodiment of the present invention. Therefore, occurrences of the phrases "one embodiment" or "an embodiment" in various places throughout the description do not necessarily all refer to the same embodiment.
Although the above examples illustrate the principle of the present invention in one or more applications, it will be evident to those skilled in the art that various modifications in form, detail of usage and implementation may be made without creative effort and without departing from the principle and spirit of the invention. Accordingly, the scope of the present invention is defined by the appended claims.
Claims (10)
1. An interactive data processing method for an intelligent robot, characterized by comprising:
an interactive information acquiring step of acquiring multi-modal interactive information related to a user;
a story mode opening step of opening a corresponding IP story mode according to the multi-modal interactive information; and
a feedback output step of generating and outputting, in the IP story mode, corresponding multi-modal feedback information according to the story parameters of an IP story.
2. The method according to claim 1, characterized in that, in the story mode opening step:
the multi-modal interactive information is parsed to obtain the user's opening intent, and the corresponding IP story mode is opened in response to that intent; or
the multi-modal interactive information is parsed to obtain interaction scene data, and the corresponding IP story mode is proactively opened according to the interaction scene data, the interaction scene data including current user emotion data.
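The two claimed opening paths, explicit user intent versus a proactive decision from interaction-scene data, might look like the following. The trigger word and emotion labels are invented placeholders, not taken from the patent.

```python
# Hypothetical sketch of the two claimed opening paths.

def decide_opening(interaction):
    # Path 1: parse an explicit user intent to open the story mode.
    if "story" in interaction.get("speech", ""):
        return ("user_intent", True)
    # Path 2: open proactively from interaction-scene data, which per
    # the claim includes the current user's emotion data.
    if interaction.get("emotion") in ("bored", "sad"):
        return ("proactive", True)
    return (None, False)
```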
3. The method according to claim 1 or 2, characterized in that, in the story mode opening step, the multi-modal interactive information is parsed to obtain a trigger instruction for opening an APP corresponding to the IP story, and the corresponding IP story mode is opened in response to the trigger instruction.
4. The method according to claim 1, characterized in that, in the feedback output step, the multi-modal feedback information is generated from the story parameters of the IP story on the basis of a dialogue model, wherein, in the dialogue model:
corresponding multi-modal feedback information is continuously generated and output according to the story parameters of the IP story; or
first feedback information is generated and output according to the story parameters of the IP story, and second feedback information is then generated and output after interactive information input by the user is acquired; or
after first feedback information is generated and output according to the story parameters of the IP story, the acquired interactive information input by the user is parsed, and second feedback information is generated in combination with the parsed user intent.
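The three feedback strategies inside the claimed dialogue model can be sketched as branches of one narration routine. The strategy names and helper callbacks below are illustrative only.

```python
# Sketch of the three claimed feedback strategies; names are invented.

def narrate(story_params, strategy, get_user_input=None, parse_intent=None):
    segments = story_params["segments"]
    if strategy == "continuous":
        # Continuously generate and output feedback for every segment.
        return list(segments)
    first = segments[0]
    reply = get_user_input()  # first feedback is out; wait for user input
    if strategy == "turn_based":
        # Second feedback is produced once any user input arrives.
        return [first, segments[1]]
    # "intent_aware": parse the reply and shape the second feedback
    # around the parsed user intent.
    intent = parse_intent(reply)
    second = segments[1] if intent == "continue" else "The end."
    return [first, second]
```

The difference between the last two branches is exactly the claim's distinction: the turn-based variant only waits for input, while the intent-aware variant also interprets it before generating the second feedback.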
5. The method according to any one of claims 1 to 4, characterized in that, in the feedback output step:
the IP story mode is closed after the IP story ends; or
input interactive information directed at the multi-modal feedback information is acquired, and the IP story mode is closed or kept open according to that input interactive information.
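Claim 5's two exit behaviours reduce to a small state update. The reply keyword used to keep the mode open is an invented placeholder.

```python
# Hypothetical sketch of claim 5's mode-closing logic.

def update_mode(story_finished, user_reply=None):
    if story_finished:
        # Close the IP story mode once the IP story ends.
        return "closed"
    if user_reply is not None:
        # Otherwise decide from the user's reply to the feedback
        # whether to close the mode or keep it open.
        return "open" if "more" in user_reply else "closed"
    return "open"
```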
6. An interactive data processing device for an intelligent robot, characterized by comprising:
an interactive information acquiring module configured to acquire multi-modal interactive information about a user;
a story mode opening module configured to open a corresponding IP story mode according to the multi-modal interactive information; and
a feedback output module configured to, in the IP story mode, generate and output corresponding multi-modal feedback information according to story parameters of an IP story.
7. The device according to claim 6, characterized in that the story mode opening module is configured to:
parse the multi-modal interactive information to obtain the user's opening intent, and open the corresponding IP story mode in response to that intent; or
parse the multi-modal interactive information to obtain interaction scene data, and proactively open the corresponding IP story mode according to the interaction scene data, the interaction scene data including current user emotion data.
8. The device according to claim 6 or 7, characterized in that the story mode opening module is configured to parse the multi-modal interactive information to obtain a trigger instruction for opening an APP corresponding to the IP story, and to open the corresponding IP story mode in response to the trigger instruction.
9. The device according to claim 6, characterized in that the feedback output module is configured to generate the multi-modal feedback information from the story parameters of the IP story on the basis of a dialogue model, wherein, in the dialogue model, the feedback output module is configured to:
continuously generate and output corresponding multi-modal feedback information according to the story parameters of the IP story; or
generate and output first feedback information according to the story parameters of the IP story, and then generate and output second feedback information after interactive information input by the user is acquired; or
after generating and outputting first feedback information according to the story parameters of the IP story, parse the acquired interactive information input by the user and generate second feedback information in combination with the parsed user intent.
10. The device according to any one of claims 6 to 9, characterized in that the feedback output module is configured to:
close the IP story mode after the IP story ends; or
acquire input interactive information directed at the multi-modal feedback information, and close or keep open the IP story mode according to that input interactive information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611109807.4A CN106598241A (en) | 2016-12-06 | 2016-12-06 | Interactive data processing method and device for intelligent robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106598241A true CN106598241A (en) | 2017-04-26 |
Family
ID=58596056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611109807.4A Pending CN106598241A (en) | 2016-12-06 | 2016-12-06 | Interactive data processing method and device for intelligent robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106598241A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
CN105975622A (en) * | 2016-05-28 | 2016-09-28 | 蔡宏铭 | Multi-role intelligent chatting method and system |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108877803A (en) * | 2018-06-08 | 2018-11-23 | 百度在线网络技术(北京)有限公司 | Method and apparatus for presenting information
CN108877803B (en) * | 2018-06-08 | 2020-03-27 | 百度在线网络技术(北京)有限公司 | Method and apparatus for presenting information
CN109065018A (en) * | 2018-08-22 | 2018-12-21 | 北京光年无限科技有限公司 | Intelligent robot-oriented story data processing method and system
CN109065018B (en) * | 2018-08-22 | 2021-09-10 | 北京光年无限科技有限公司 | Intelligent robot-oriented story data processing method and system
CN109166572A (en) * | 2018-09-11 | 2019-01-08 | 深圳市沃特沃德股份有限公司 | Robot reading method and reading robot
CN109857929A (en) * | 2018-12-29 | 2019-06-07 | 北京光年无限科技有限公司 | Intelligent robot-oriented man-machine interaction method and device
CN109857929B (en) * | 2018-12-29 | 2021-06-15 | 北京光年无限科技有限公司 | Intelligent robot-oriented man-machine interaction method and device
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107728780B | Human-computer interaction method and device based on a virtual robot | |
Glas et al. | Erica: The ERATO intelligent conversational android | |
CN105740948B | Interaction method and device for an intelligent robot | |
CN106598241A (en) | Interactive data processing method and device for intelligent robot | |
CN110400251A | Video processing method, device, terminal device and storage medium | |
CN106020488A | Man-machine interaction method and device for a conversation system | |
CN105807933A | Man-machine interaction method and apparatus for an intelligent robot | |
CN106203344A | Emotion recognition method and system for an intelligent robot | |
CN105511608A | Intelligent-robot-based interaction method and device, and intelligent robot | |
CN106294726A | Processing method and device based on robot role interaction | |
CN108009573B | Robot emotion model generation method, emotion model, and interaction method | |
CN107273477A | Man-machine interaction method and device for a robot | |
CN105798918A | Interaction method and device for an intelligent robot | |
CN106531162A | Man-machine interaction method and device for an intelligent robot | |
CN106503786B | Multi-modal interaction method and device for an intelligent robot | |
US20180158458A1 | Conversational voice interface of connected devices, including toys, cars, avionics, mobile, IoT and home appliances | |
CN106847274B | Man-machine interaction method and device for an intelligent robot | |
CN106372850A | Information reminding method and device based on an intelligent robot | |
CN106991123A | Man-machine interaction method and device for an intelligent robot | |
CN106372195A | Human-computer interaction method and device for an intelligent robot | |
CN106462255A | Method, system and robot for generating interactive content of a robot | |
Van Oijen et al. | Agent communication for believable human-like interactions between virtual characters | |
CN106354255A | Man-machine interaction method and device for robot products | |
JP2008107673A | Conversation robot | |
CN107066288A | Multi-modal interaction method and device for an intelligent robot | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20170426 |