CN107807734A - An interaction output method and system for an intelligent robot - Google Patents

An interaction output method and system for an intelligent robot

Info

Publication number
CN107807734A
Authority
CN
China
Prior art keywords
facial expression
expression image
text
output
intelligent robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710891490.2A
Other languages
Chinese (zh)
Other versions
CN107807734B (en)
Inventor
赵媛媛 (Zhao Yuanyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201710891490.2A
Publication of CN107807734A
Application granted
Publication of CN107807734B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/903: Querying
    • G06F 16/9032: Query formulation
    • G06F 16/90332: Natural language query formulation or dialogue systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an interaction output method and system for an intelligent robot. The method includes: obtaining the personality parameters of the intelligent robot; obtaining the interaction input data of the current interaction object; performing semantic parsing and emotion computation on the interaction input data to obtain a semantic parsing result and an emotion parsing result; generating a corresponding response text according to the semantic parsing result and the emotion parsing result; and generating and outputting interaction response data comprising an expression image and/or the response text, wherein the type of the expression image matches the personality parameters, and the meaning of the expression image matches the response text.

Description

An interaction output method and system for an intelligent robot
Technical field
The present invention relates to the field of computer technology, and in particular to an interaction output method and system for an intelligent robot.
Background
With the continuous development of robotics, more and more intelligent robots with autonomous interaction capabilities are being applied in human production and daily life.
In the prior art, a common interaction mode is for the intelligent robot to obtain and parse human input information, and then generate and output a corresponding interaction response. The most common such robot is the text interaction robot, which interacts in textual form.
However, everyday human communication mixes many modes of expression, and communication in social software is even more varied. A text interaction robot that communicates only in plain text therefore easily becomes tiresome, which greatly degrades the user experience of the intelligent robot.
Summary of the invention
The invention provides an interaction output method for an intelligent robot, comprising:
obtaining the personality parameters of the intelligent robot;
obtaining the interaction input data of the current interaction object;
performing semantic parsing and emotion computation on the interaction input data to obtain a semantic parsing result and an emotion parsing result;
generating a corresponding response text according to the semantic parsing result and the emotion parsing result;
generating and outputting interaction response data comprising an expression image and/or the response text, wherein the type of the expression image matches the personality parameters, and the meaning of the expression image matches the response text.
In one embodiment, obtaining the personality parameters of the intelligent robot comprises:
obtaining the identity information of the current interaction object;
invoking the personality parameters that match the identity information.
In one embodiment, obtaining the personality parameters of the intelligent robot comprises:
obtaining current interaction environment description information and/or user interaction demand information;
invoking the personality parameters that match the current interaction environment description information and/or the user interaction demand information.
In one embodiment, generating interaction response data comprising an expression image and/or the response text comprises:
judging whether the expression image needs to be output;
generating interaction response data comprising the expression image when the expression image needs to be output.
In one embodiment, judging whether the expression image needs to be output comprises:
judging that the expression image needs to be output when the interaction input data contains an expression image.
In one embodiment, judging whether the expression image needs to be output comprises:
determining an expression image response policy according to the personality parameters, the expression image response policy comprising an expression image response frequency, an expression image response topic area and/or an expression image emotion response trigger policy;
judging whether the expression image needs to be output based on the expression image response policy.
In one embodiment, judging whether the expression image needs to be output comprises:
determining the current topic area and/or an emotion response parameter according to the semantic parsing result and/or the emotion parsing result;
judging whether the expression image needs to be output according to the topic area and/or the emotion response parameter.
In one embodiment, generating interaction response data comprising the expression image when the expression image needs to be output comprises:
extracting the text information corresponding to the expression image;
comparing the text information corresponding to the expression image with the response text;
outputting only the expression image when the matching degree between the text information corresponding to the expression image and the response text reaches a set threshold.
The invention also provides a storage medium storing program code capable of implementing the method of any one of claims 1-8.
The invention also provides an intelligent robot system, comprising:
an input acquisition module configured to obtain the interaction input data of the current interaction object;
an output module configured to output interaction response data to the current interaction object;
an interaction parsing module configured to:
obtain the personality parameters of the intelligent robot;
perform semantic parsing and emotion computation on the interaction input data to obtain a semantic parsing result and an emotion parsing result;
generate a corresponding response text according to the semantic parsing result and the emotion parsing result;
generate interaction response data comprising an expression image and/or the response text, wherein the type of the expression image matches the personality parameters of the intelligent robot, and the meaning of the expression image matches the response text.
According to the method of the invention, outputting images greatly increases the diversity of the intelligent robot's output when interacting with a user, makes the interaction more engaging, expresses dialogue information more precisely, and substantially raises the robot's degree of human-likeness, thereby improving the user experience of the intelligent robot.
Further features and advantages of the invention will be set forth in the following description, and in part will be apparent from the description or may be learned by practice of the invention. The objects and other advantages of the invention may be realized and attained by the steps particularly pointed out in the description, claims and accompanying drawings.
Brief description of the drawings
The accompanying drawings provide a further understanding of the invention and constitute a part of the specification. Together with the embodiments, they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flow chart of a method according to an embodiment of the invention;
Figs. 2-5 are partial flow charts of methods according to embodiments of the invention;
Figs. 6 and 7 are schematic diagrams of robot system architectures according to different embodiments of the invention;
Fig. 8 is a schematic diagram of a robot application scenario according to an embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below with reference to the drawings and examples, so that practitioners of the invention can fully understand how the invention applies technical means to solve technical problems, achieve its technical effects, and implement it accordingly. It should be noted that, as long as no conflict arises, the embodiments of the invention and the features of each embodiment may be combined with one another, and the resulting technical solutions all fall within the scope of protection of the invention.
In the prior art, a common interaction mode is for the intelligent robot to obtain and parse human input information, and then generate and output a corresponding interaction response. The most common such robot is the text interaction robot, which interacts in textual form.
However, everyday human communication mixes many modes of expression, and communication in social software is even more varied. A text interaction robot that communicates only in plain text therefore easily becomes tiresome, which greatly degrades the user experience of the intelligent robot.
In view of the above problems, the invention proposes an interaction output method for an intelligent robot. In the method of the invention, the robot's interaction with the user is not limited to text but also includes expression images.
Further, expression images can convey more than specific semantics: in many application scenarios an expression image conveys a specific mood. Therefore, in the method of the invention, the interaction process performs not only semantic parsing but also sentiment analysis, and uses the combined results of both to determine which expression image to output.
Further, in normal human interaction, because people differ in their preferences and habits, different people often choose different types of expression images when expressing the same meaning. For example, children tend to choose expression images of the children's-cartoon type, while elderly people tend to choose expression images of a more sedate type.
Therefore, to further raise the robot's degree of human-likeness, the invention gives the robot human-like character traits by formulating personality parameters corresponding to specific traits. When choosing an expression image, the selection criteria are split into two aspects. The first is selection of the expression image's type: an expression image whose type matches the intelligent robot's personality parameters is chosen, so that the image type embodies the robot's human-like character traits. The second is selection of the expression image's meaning: an expression image whose meaning matches the semantic output and mood output required in response to the current interaction object is chosen, so that the image's concrete meaning embodies the semantics and mood to be expressed.
According to the method of the invention, outputting images greatly increases the diversity of the intelligent robot's output when interacting with a user, makes the interaction more engaging, expresses dialogue information more precisely, and substantially raises the robot's degree of human-likeness, thereby improving the user experience of the intelligent robot.
The detailed flow of methods according to embodiments of the invention is described next with reference to the drawings. The steps shown in the flow charts may be executed in a computer system containing, for example, a set of computer-executable instructions. Although the flow charts show a logical order of steps, in some cases the steps shown or described may be executed in a different order.
As shown in Fig. 1, in one embodiment the interaction output method of the intelligent robot comprises:
obtaining the personality parameters of the intelligent robot (S110);
obtaining the interaction input data of the current interaction object (S120);
performing semantic parsing and emotion computation on the interaction input data (S130) to obtain a semantic parsing result and an emotion parsing result;
generating a corresponding response text according to the semantic parsing result and the emotion parsing result (S140);
generating interaction response data comprising an expression image and/or the response text (S150), wherein the type of the expression image matches the personality parameters of the intelligent robot and the meaning of the expression image matches the response text;
outputting the interaction response data generated in step S150 (S160).
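The flow S110-S160 can be sketched as a minimal Python pipeline. Everything here (the function names, the toy parsing rules, and the image library contents) is an illustrative assumption, not something specified by the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-ins for the patent's steps; every name and rule below is assumed.

def semantic_parse(text: str) -> str:
    # S130: derive a crude "topic" (semantic parsing result) from the input.
    return "greeting" if "hello" in text.lower() else "other"

def emotion_compute(text: str) -> str:
    # S130: crude emotion parsing result based on punctuation.
    return "excited" if "!" in text else "neutral"

def generate_response_text(topic: str, emotion: str) -> str:
    # S140: response text generated from both parsing results.
    base = "Hello there" if topic == "greeting" else "I see"
    return base + ("!" if emotion == "excited" else ".")

# S150: image type must match the personality; meaning must match the response.
IMAGE_LIBRARY = {
    ("child", "greeting"): "cartoon_wave.png",
    ("steady", "greeting"): "plain_nod.png",
}

@dataclass
class InteractionResponse:
    text: str
    image: Optional[str]

def interact(personality: str, user_input: str) -> InteractionResponse:
    topic = semantic_parse(user_input)               # S130
    emotion = emotion_compute(user_input)            # S130
    text = generate_response_text(topic, emotion)    # S140
    image = IMAGE_LIBRARY.get((personality, topic))  # S150
    return InteractionResponse(text, image)          # S160

print(interact("child", "Hello!"))
# InteractionResponse(text='Hello there!', image='cartoon_wave.png')
```

Note that S110 (obtaining the personality parameters) is represented only by the `personality` argument; the next sections of the description discuss where that value comes from.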
Further, in one embodiment, in generating interaction response data comprising an expression image, the type of the expression image is first determined according to the personality parameters of the intelligent robot; then, from all expression images of that type, an expression image whose meaning matches the response text is selected.
In the above process, assigning specific personality parameters to the intelligent robot makes its output embody human character traits, thereby raising its degree of human-likeness.
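The two-stage selection described above (type first, then meaning) can be sketched as follows; the library entries and the tag names are made up for illustration:

```python
# A toy expression-image library; each entry carries a style type
# and a meaning tag (an assumed metadata schema).
LIBRARY = [
    {"file": "cartoon_laugh.gif", "type": "child",  "meaning": "happy"},
    {"file": "subtle_smile.png",  "type": "steady", "meaning": "happy"},
    {"file": "cartoon_cry.gif",   "type": "child",  "meaning": "sad"},
]

def select_image(personality_type, response_meaning):
    # Stage 1: keep only images whose type matches the robot's personality.
    by_type = [img for img in LIBRARY if img["type"] == personality_type]
    # Stage 2: among those, keep images whose meaning matches the response text.
    matches = [img for img in by_type if img["meaning"] == response_meaning]
    return matches[0]["file"] if matches else None
```

Filtering by type before meaning guarantees the chosen image always fits the robot's character, even when several images share the same meaning.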
Further, in one embodiment, the human character traits the robot is to simulate are preset according to its concrete application scenario. For example, an intelligent robot used in a children's interaction venue is set to simulate a child's character traits.
However, because different people have different interaction habits and preferences, not every displayed human character trait improves the interaction experience. A key factor in the robot's final interaction experience is assigning it suitable character traits, i.e. making the simulated human character traits fit the current interaction scenario and the current interaction object. Yet in many application scenarios, the robot's interaction scenarios and interaction objects are not fixed. If the robot maintains a single character trait, that trait is likely to degrade the interaction experience when facing certain interaction objects or application scenarios.
To address this, in one embodiment the intelligent robot switches the human character traits it simulates according to different interaction scenarios and different interaction demands, so that the simulated traits fit the current interaction scenario.
Specifically, in one embodiment, the robot determines the human character traits it will simulate according to the identity of the current interaction object.
Specifically, as shown in Fig. 2, the identity information of the current interaction object is obtained first (S210); then the personality parameters matching that identity information are invoked (S220). In this way, the human character traits embodied by the intelligent robot can be made to fit the interaction habits and demands of the current interaction object. For example, when the current interaction object is a child user, the intelligent robot invokes child-appropriate personality parameters and simulates the character traits of a child, so that the child user feels like they are interacting with another child, which increases the child user's enjoyment of and experience in the interaction.
Specifically, in one embodiment, the robot determines the human character traits it will simulate according to the current interaction environment and/or the interaction demands of the interaction object.
Specifically, as shown in Fig. 3, the current interaction environment description information and/or user interaction demand information is obtained first (S310); then the personality parameters matching that information are invoked (S320).
Further, in other embodiments, the two approaches above may also be combined to determine the personality parameters.
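A minimal sketch of combining the two lookups (the identity match of Fig. 2 and the environment match of Fig. 3). The table contents and the identity-before-environment precedence are assumptions, since the patent only says the two approaches may be combined:

```python
# Illustrative lookup tables; the patent does not specify how the
# personality parameters are stored.
IDENTITY_PERSONALITY = {"child": "playful", "elder": "steady"}
ENVIRONMENT_PERSONALITY = {"classroom": "playful", "office": "steady"}

def get_personality(identity=None, environment=None, default="neutral"):
    # Try the identity match first (S210/S220), then fall back to the
    # environment/demand match (S310/S320).
    if identity in IDENTITY_PERSONALITY:
        return IDENTITY_PERSONALITY[identity]
    if environment in ENVIRONMENT_PERSONALITY:
        return ENVIRONMENT_PERSONALITY[environment]
    return default
```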
Further, in one embodiment, the interaction response data output by the intelligent robot comprises an expression image and/or a response text. Specifically, the composition of the interaction response data falls roughly into two cases: without an expression image (a plain response text) and with an expression image.
For these two cases, in one embodiment, when generating the interaction response data, it is first judged whether an expression image needs to be output; interaction response data containing an expression image is generated when one is needed, and interaction response data containing only the response text is generated when one is not.
In normal human interaction, when one party uses an expression image, the other party generally also tends to respond with an expression image. Therefore, in one embodiment, when judging whether an expression image needs to be output, the judgment is made according to the type of the interaction input data; specifically:
when the interaction input data contains an expression image, it is directly judged that an expression image needs to be output.
Further, in normal human interaction, people generally do not use an expression image in every utterance. To simulate this behavior, an expression image output frequency can be preset according to how often humans typically use expression images, and the probability that the intelligent robot outputs an expression image is made to match this preset frequency, thereby simulating human interaction behavior.
Therefore, in one embodiment, when judging whether an expression image needs to be output, the judgment is made according to the type of the interaction input data; specifically:
when the interaction input data is plain text data, whether an expression image needs to be output is determined according to the preset expression image response frequency.
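The frequency rule can be sketched as a probabilistic gate. The injectable `rng` parameter is an assumption added only to keep the sketch deterministic and testable:

```python
import random

def should_output_image(input_has_image, image_frequency, rng=random.random):
    # Mirror the user: an incoming expression image always triggers one.
    if input_has_image:
        return True
    # Plain text: output an image with the preset probability, so the
    # long-run output rate matches the configured human-like frequency.
    return rng() < image_frequency
```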
Further, in normal human interaction, people of different characters have different interaction habits and so use expression images at different frequencies. For example, an outgoing person uses expression images far more often than a reserved person does. Therefore, in one embodiment, the frequency with which the intelligent robot uses expression images in interaction is determined according to the human character traits the robot is to simulate.
Specifically, in one embodiment, an expression image response policy is determined according to the personality parameters of the intelligent robot, the policy comprising an expression image response frequency; when judging whether an expression image needs to be output, the judgment is based on the expression image response policy.
Further, in normal human interaction, different people are interested in different topic areas, and people of different characters react differently to the same topic area; whether an expression image is used thus also depends on the correspondence between the current topic area and the current interactor's character. Therefore, in one embodiment, the topic areas for which the intelligent robot uses expression images are determined according to the human character traits the robot is to simulate.
Specifically, in one embodiment, an expression image response policy is determined according to the personality parameters of the intelligent robot, the policy comprising an expression image response topic area; when judging whether an expression image needs to be output, the judgment is based on the expression image response policy.
Further, in normal human interaction, expression images can reflect a person's mood, and people of different characters express emotion to different degrees. For example, an outgoing person shows emotion more intensely and tends to use expression images to express excitement, while for the same matter a reserved person would not express emotion so openly and may still produce a flat semantic output in plain words. Therefore, in one embodiment, the emotion expression scenarios in which the intelligent robot needs to use expression images are determined according to the human character traits the robot is to simulate.
Specifically, in one embodiment, an expression image response policy is determined according to the personality parameters of the intelligent robot, the policy comprising an expression image emotion response trigger policy (i.e. which emotion response scenarios trigger an expression image response); when judging whether an expression image needs to be output, the judgment is based on the expression image response policy.
Further, in other embodiments, any two or all three of the above ways of constituting an expression image response policy may be combined into a new constitution of the policy. Specifically, in one embodiment, an expression image response policy is determined according to the personality parameters of the intelligent robot, the policy comprising an expression image response frequency, an expression image response topic area and/or an expression image emotion response trigger policy; when judging whether an expression image needs to be output, the judgment is based on the expression image response policy.
Further, since the expression image response policy defines an expression image response topic area and/or an expression image emotion response trigger policy, in actual interaction it is necessary to determine whether an expression image needs to be output according to the current interaction's topic area and/or emotion output demand.
Specifically, in one embodiment, when judging whether an expression image needs to be output, the current topic area and/or emotion response parameter is first determined according to the semantic parsing result and/or the emotion parsing result; then whether an expression image needs to be output is judged according to the topic area and/or the emotion response parameter.
Specifically, as shown in Fig. 4, in one embodiment, before interaction starts, the personality parameters of the intelligent robot are determined (S410); then the expression image response policy is determined according to the personality parameters, fixing the expression image response frequency, the expression image response topic area and the expression image emotion response trigger policy (S420).
During interaction, the type of the interaction input data is judged first (S430). If the interaction input data contains an expression image, it is directly judged that the current interaction output also needs to contain an expression image (S440). If the interaction input data contains no expression image and is plain text, the current topic area and emotion response parameter are determined according to the semantic parsing result and the emotion parsing result (S450); then, based on the expression image response frequency, topic area and emotion response trigger policy determined in step S420, and according to the topic area and emotion response parameter determined in step S450, it is decided whether the current interaction output needs to contain an expression image (S460).
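The decision flow S410-S460 can be sketched as below. How the three policy components combine for plain-text input is not fixed by the patent; here the topic area and emotion parameter act as a gate and the response frequency as a final probabilistic check, which is one plausible reading:

```python
from dataclasses import dataclass, field

@dataclass
class ExpressionPolicy:
    # Determined from the personality parameters in S420 (values illustrative).
    frequency: float = 0.5                      # expression image response frequency
    topics: set = field(default_factory=set)    # topic areas that respond
    emotions: set = field(default_factory=set)  # emotions that trigger a response

def needs_image(policy, input_has_image, topic, emotion, draw):
    if input_has_image:
        return True  # S440: mirror the user's expression image
    # S450/S460: plain-text input -- consult topic area and emotion parameter,
    # then apply the response frequency via the externally supplied draw.
    if topic not in policy.topics and emotion not in policy.emotions:
        return False
    return draw < policy.frequency
```

Passing the random draw in as an argument, rather than drawing inside the function, keeps the sketch deterministic; a real implementation would presumably draw internally.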
Further, some expression images inherently state specific text information (that is, the expression image itself already conveys a specific text message); in that case there is no need to output both the expression image and the text it states or contains. Therefore, in one embodiment, when an expression image needs to be output, the output is further subdivided into outputting the expression image alone and outputting the expression image mixed with the response text. When the expression image to be output can fully replace the response text, the response text need not be output and only the expression image is output; when it cannot, both the response text and the expression image are output.
Specifically, as shown in Fig. 5, in one embodiment, during interaction the intelligent robot judges whether an expression image needs to be output (S510); if not, only the response text is output (S520).
If an expression image needs to be output, the expression image to be output is extracted from the expression image library (S530). Specifically, in one embodiment, the type of the expression image is first determined according to the personality parameters of the intelligent robot; then, from all expression images of that type, an expression image whose meaning matches the response text is selected.
Next, the text information corresponding to the expression image extracted in step S530 is extracted (S540); the matching degree between the text information corresponding to the expression image and the response text is computed (S550). When the matching degree reaches a set threshold, only the expression image is output (S560); when it does not, both the expression image and the response text are output (S570).
Further, in accordance with the method of the invention, the invention also provides a storage medium storing program code capable of implementing the method described herein.
Further, in accordance with the method of the invention, the invention also provides an intelligent robot system. Specifically, as shown in Fig. 6, the system comprises:
an input acquisition module 610 configured to obtain the interaction input data of the current interaction object;
an output module 620 configured to output interaction response data to the current interaction object;
an interaction parsing module 630 configured to:
obtain the personality parameters of the intelligent robot;
perform semantic parsing and emotion computation on the interaction input data to obtain a semantic parsing result and an emotion parsing result;
generate a corresponding response text according to the semantic parsing result and the emotion parsing result;
generate interaction response data comprising an expression image and/or the response text, wherein the type of the expression image in the interaction response data matches the personality parameters of the intelligent robot, and the meaning of the expression image matches the response text.
Further, in one embodiment, the intelligent robot system is a children's story machine interaction system. A children's story machine is a smart device with a cartoon or animal appearance, or with intellectual property (IP) features: an educational robot that uses the robot's AI capabilities to carry out human-machine interaction built around storytelling.
Further, in one embodiment, the intelligent robot system relies on a cloud server to perform complex data processing operations. Specifically, as shown in Fig. 7, the interaction parsing module 730 includes a networked interaction unit 731, through which it exchanges data with the robot cloud server 700, thereby handing complex data processing operations over to the robot cloud server 700 for processing.
Specifically, as shown in Fig. 7, in one application scenario, the interacting individual 202 is a person (user); the device 201 may be a children's story machine, the user's smartphone, a tablet computer, a wearable device, etc.; and the robot cloud server 203 provides data processing support services (for example, cloud storage and cloud computing) to the device 201.
The intelligent robot system is installed in the device 201. During interaction, the device 201 acquires the user interaction input and sends it to the server 203; the server 203 performs semantic understanding and emotion computation on the user interaction input, generates interaction response data (containing a response text and/or a facial expression image) responding to the user interaction input, and returns the interaction response data to the device 201. The device 201 then outputs the interaction response data to the user 202.
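The device-cloud division of labor described above (the device forwards raw input, the cloud server performs semantic understanding and emotion computation, and the device outputs the returned response) might be sketched as follows. The class names and the server-side analysis are illustrative assumptions, not the patent's implementation.

```python
class RobotCloudServer:
    """Stands in for robot cloud server 203: semantic understanding,
    emotion computation, and response generation happen here."""
    def handle(self, user_input):
        # Stubbed emotion computation: an exclamation mark counts as "happy".
        emotion = "happy" if "!" in user_input else "neutral"
        return {
            "response_text": f"I heard: {user_input}",
            "expression_image": f"{emotion}.png",
        }


class Device:
    """Stands in for device 201 (e.g. a children's story machine)."""
    def __init__(self, server):
        self.server = server  # networked link to the cloud server

    def interact(self, user_input):
        # Forward the raw interaction input to the cloud for processing,
        # then output the returned interaction response data to the user.
        return self.server.handle(user_input)


device = Device(RobotCloudServer())
print(device.interact("hello!"))
```

Keeping the heavy processing server-side lets a low-cost story machine offer the same interaction quality as a more capable terminal.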
While the embodiments of the present invention are disclosed above, the content described is only intended to facilitate understanding of the present invention and is not intended to limit it. The method of the present invention may also have various other embodiments. Without departing from the essence of the present invention, those skilled in the art may make various corresponding changes or modifications according to the present invention, but such corresponding changes or modifications shall all fall within the scope of the claims of the present invention.

Claims (10)

  1. An interaction output method for an intelligent robot, characterized in that the method comprises:
    obtaining the personality of the intelligent robot;
    obtaining interaction input data of a current interactive object;
    performing semantic parsing and emotion computation on the interaction input data to obtain a semantic parsing result and an emotion parsing result;
    generating a corresponding response text according to the semantic parsing result and the emotion parsing result;
    generating and outputting interaction response data containing a facial expression image and/or the response text, wherein the type of the facial expression image matches the personality, and the meaning of the facial expression image matches the response text.
  2. The method according to claim 1, characterized in that obtaining the personality of the intelligent robot comprises:
    obtaining identity information of the current interactive object;
    invoking the personality matching the identity information.
  3. The method according to claim 1, characterized in that obtaining the personality of the intelligent robot comprises:
    obtaining current interaction environment description information and/or user interaction demand information;
    invoking the personality matching the current interaction environment description information and/or the user interaction demand information.
  4. The method according to claim 1, characterized in that generating the interaction response data containing the facial expression image and/or the response text comprises:
    judging whether the facial expression image needs to be output;
    generating the interaction response data containing the facial expression image when the facial expression image needs to be output.
  5. The method according to claim 4, characterized in that judging whether the facial expression image needs to be output comprises:
    judging that the facial expression image needs to be output when the interaction input data contains a facial expression image.
  6. The method according to claim 4, characterized in that judging whether the facial expression image needs to be output comprises:
    determining a facial expression image response strategy according to the personality, the facial expression image response strategy comprising a facial expression image response frequency, a facial expression image response topic area, and/or a facial expression image emotion response trigger strategy;
    judging whether the facial expression image needs to be output based on the facial expression image response strategy.
  7. The method according to claim 4, characterized in that judging whether the facial expression image needs to be output comprises:
    determining a current topic area and/or an emotion response parameter according to the semantic parsing result and/or the emotion parsing result;
    judging whether the facial expression image needs to be output according to the topic area and/or the emotion response parameter.
  8. The method according to claim 4, characterized in that, when the facial expression image needs to be output, generating the interaction response data containing the facial expression image comprises:
    extracting the text message corresponding to the facial expression image;
    comparing the text message corresponding to the facial expression image with the response text;
    outputting only the facial expression image when the matching degree between the text message corresponding to the facial expression image and the response text reaches a set threshold.
  9. A storage medium, characterized in that program code capable of implementing the method according to any one of claims 1-8 is stored in the storage medium.
  10. An intelligent robot system, characterized in that the system comprises:
    an input acquisition module, configured to acquire interaction input data of a current interactive object;
    an output module, configured to output interaction response data to the current interactive object;
    an interaction parsing module, configured to:
    obtain the personality of the intelligent robot;
    perform semantic parsing and emotion computation on the interaction input data to obtain a semantic parsing result and an emotion parsing result;
    generate a corresponding response text according to the semantic parsing result and the emotion parsing result;
    generate the interaction response data containing a facial expression image and/or the response text, wherein the type of the facial expression image matches the personality of the intelligent robot, and the meaning of the facial expression image matches the response text.
CN201710891490.2A 2017-09-27 2017-09-27 Interactive output method and system for intelligent robot Active CN107807734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710891490.2A CN107807734B (en) 2017-09-27 2017-09-27 Interactive output method and system for intelligent robot

Publications (2)

Publication Number Publication Date
CN107807734A true CN107807734A (en) 2018-03-16
CN107807734B CN107807734B (en) 2021-06-15

Family

ID=61592521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710891490.2A Active CN107807734B (en) 2017-09-27 2017-09-27 Interactive output method and system for intelligent robot

Country Status (1)

Country Link
CN (1) CN107807734B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843381A (en) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 Data processing method for realizing multi-modal interaction and multi-modal interaction system
CN106200959A (en) * 2016-07-08 2016-12-07 北京光年无限科技有限公司 Information processing method and system towards intelligent robot
CN106297789A (en) * 2016-08-19 2017-01-04 北京光年无限科技有限公司 The personalized interaction method of intelligent robot and interactive system
CN106863300A (en) * 2017-02-20 2017-06-20 北京光年无限科技有限公司 A kind of data processing method and device for intelligent robot
CN106873773A (en) * 2017-01-09 2017-06-20 北京奇虎科技有限公司 Robot interactive control method, server and robot
CN106909896A (en) * 2017-02-17 2017-06-30 竹间智能科技(上海)有限公司 Man-machine interactive system and method for work based on character personality and interpersonal relationships identification
CN206311916U (en) * 2016-05-31 2017-07-07 北京光年无限科技有限公司 A kind of intelligent robot of exportable expression

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109065018A (en) * 2018-08-22 2018-12-21 北京光年无限科技有限公司 A kind of narration data processing method and system towards intelligent robot
CN109460548A (en) * 2018-09-30 2019-03-12 北京光年无限科技有限公司 A kind of narration data processing method and system towards intelligent robot
CN109460548B (en) * 2018-09-30 2022-03-15 北京光年无限科技有限公司 Intelligent robot-oriented story data processing method and system
CN109815463A (en) * 2018-12-13 2019-05-28 深圳壹账通智能科技有限公司 Control method, device, computer equipment and storage medium are chosen in text editing
CN110209784A (en) * 2019-04-26 2019-09-06 腾讯科技(深圳)有限公司 Method for message interaction, computer equipment and storage medium
CN110209784B (en) * 2019-04-26 2024-03-12 腾讯科技(深圳)有限公司 Message interaction method, computer device and storage medium
CN111984767A (en) * 2019-05-23 2020-11-24 北京搜狗科技发展有限公司 Information recommendation method and device and electronic equipment
CN110633361A (en) * 2019-09-26 2019-12-31 联想(北京)有限公司 Input control method and device and intelligent session server
CN110633361B (en) * 2019-09-26 2023-05-02 联想(北京)有限公司 Input control method and device and intelligent session server
CN111309862A (en) * 2020-02-10 2020-06-19 贝壳技术有限公司 User interaction method and device with emotion, storage medium and equipment
CN113658467A (en) * 2021-08-11 2021-11-16 岳阳天赋文化旅游有限公司 Interactive system and method for optimizing user behavior

Also Published As

Publication number Publication date
CN107807734B (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN107807734A (en) A kind of interaction output intent and system for intelligent robot
CN106503156B (en) Man-machine interaction method and device based on artificial intelligence
KR101925440B1 (en) Method for providing vr based live video chat service using conversational ai
CN105931638B (en) Intelligent robot-oriented dialogue system data processing method and device
Ning et al. General cyberspace: Cyberspace and cyber-enabled spaces
CN107632706B (en) Application data processing method and system of multi-modal virtual human
Mascarenhas et al. Social importance dynamics: A model for culturally-adaptive agents
CN107329990A (en) A kind of mood output intent and dialogue interactive system for virtual robot
CN106020488A (en) Man-machine interaction method and device for conversation system
CN108804698A (en) Man-machine interaction method, system, medium based on personage IP and equipment
CN107894831A (en) A kind of interaction output intent and system for intelligent robot
CN111385594B (en) Virtual character interaction method, device and storage medium
CN109086860B (en) Interaction method and system based on virtual human
CN106599998A (en) Method and system for adjusting response of robot based on emotion feature
CN105204631B (en) A kind of VGE Role Modeling method and many role's cooperating methods
Shank Technology and emotions
Chen et al. How interaction experience enhances customer engagement in smart speaker devices? The moderation of gendered voice and product smartness
CN107704169A (en) The method of state management and system of visual human
CN105701211A (en) Question-answering system-oriented active interaction data processing method and system
Johnson et al. Understanding aesthetics and fitness measures in evolutionary art systems
Bonito et al. The role of expectations in human-computer interaction
Chen et al. Modeling, simulation, and case analysis of COVID‐19 over network public opinion formation with individual internal factors and external information characteristics
CN110636362A (en) Image processing method, device and system and electronic equipment
CN107704448B (en) Method and system for acquiring children education resource content
Vande Berg The critical sense: Three decades of critical media studies in the wake of Samuel L. Becker's “rhetorical studies for the contemporary world”

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant