CN109670030B - Question-answer interaction method and device - Google Patents

Question-answer interaction method and device

Info

Publication number
CN109670030B
Authority
CN
China
Prior art keywords
information
user
input information
user input
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811644570.9A
Other languages
Chinese (zh)
Other versions
CN109670030A (en
Inventor
邢运
范正洁
胡长建
史欣然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201811644570.9A priority Critical patent/CN109670030B/en
Publication of CN109670030A publication Critical patent/CN109670030A/en
Application granted granted Critical
Publication of CN109670030B publication Critical patent/CN109670030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the application disclose a question-answer interaction method and device. Each time the user inputs information, whether the reduction of the user's emotional state meets a preset condition is judged according to the user input information and the historical interaction information. If the judgment result is yes, the user's emotion has declined, and multiple pieces of feedback information are provided for the user to select from, which improves the hit rate of answers, prolongs the life of the conversation, and increases the probability that the intelligent conversation system solves the user's problem.

Description

Question-answer interaction method and device
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a question-answer interaction method and device.
Background
In an intelligent customer service system, in order to solve the problem raised by the user, the user's intention is inferred from the question the user asks, and a solution is provided according to that intention. However, the inventors have found that in the prior art a single fixed answer, that is, a templated answer, is prone to disappoint the user in both the choice and the phrasing of the answer, so that the conversation ends quickly and the problem-solving rate of the customer service system is low.
Disclosure of Invention
The present application aims to provide a question-answer interaction method and device to at least partially overcome the technical problems in the prior art.
To achieve the above purpose, the application provides the following technical solutions:
a question-answer interaction method is applied to an intelligent conversation system, the intelligent conversation system can respond to received input information and provide feedback information, and the method comprises the following steps:
acquiring user input information, wherein the user input information represents that a user expects the intelligent session system to feed back the feedback information;
judging whether the reduction of the user emotional state represented by the user input information meets a first preset condition or not according to the user input information and the historical interaction information;
and if the judgment result indicates that the reduction of the user's emotional state meets a first preset condition, outputting M of the N pieces of feedback information acquired for the user input information, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
The above method, preferably, further comprises:
and if the judgment result represents that the emotional state of the user is promoted to meet a second preset condition, outputting 1 of N feedback information acquired aiming at the input information of the user.
Preferably, the outputting M pieces of feedback information obtained for the user input information includes:
Outputting information for expressing an apology and M of the N pieces of feedback information acquired for the user input information;
the information for expressing the apology precedes the M pieces of feedback information.
Preferably, the determining, according to the user input information and the historical interaction information, whether the reduction of the user emotional state represented by the user input information meets a first preset condition includes:
analyzing the user input information to obtain a first emotion characteristic of the user input information, wherein the first emotion characteristic at least comprises user emotion represented by the user input information;
determining a third scoring increment corresponding to the user input information according to a first scoring increment corresponding to the first emotional feature and a second scoring increment corresponding to a second emotional feature of each piece of user input information in the historical interaction information; the scoring increment is used for representing the change direction and the change degree of the user emotion;
and if the third scoring increment represents that the user emotion changes to negative emotion and the absolute value of the third scoring increment is larger than a preset value, determining that the reduction of the user emotion state represented by the user input information meets a first preset condition.
In the above method, preferably, the user input information includes: first information and second information;
the first information represents whether the feedback information provided by the intelligent session system is accurate or not before the user input information is acquired; the second information represents that the user expects the intelligent conversation system to feed back the feedback information;
the first emotional characteristic comprises the following steps: the first information.
Preferably, the method for analyzing the user input information to obtain the user emotion represented by the user input information includes:
processing the user input information by using a pre-trained information extraction model to extract entity information in the user input information, wherein the entity information is a predefined word representing the user emotion;
and processing the user input information and the entity information by using a pre-trained text classification model to obtain the user emotion represented by the user input information.
Preferably, the determining, according to the first scoring increment corresponding to the first emotional feature and the second scoring increment corresponding to the second emotional feature of each piece of user input information in the historical interactive information, a third scoring increment corresponding to the user input information includes:
Acquiring a user emotion key determined according to portrait information of a user and/or first user input information in the historical interaction information;
summing a first scoring increment corresponding to the first emotion characteristic, a fourth scoring increment corresponding to the user emotion key and second scoring increments corresponding to second emotion characteristics of each piece of user input information except the first piece of user input information in the historical interactive information to obtain a third scoring increment; or,
and summing the first scoring increment corresponding to the first emotion characteristic, the fourth scoring increment corresponding to the user emotion key and the second scoring increment corresponding to the second emotion characteristic of each user input information in the historical interactive information to obtain a third scoring increment.
Preferably, the outputting M pieces of feedback information obtained for the user input information includes:
if the first information representation is before the user input information is obtained, the feedback information provided by the intelligent conversation system is accurate, and a target sentence pattern is determined;
and outputting M feedback information in the N feedback information acquired aiming at the user input information according to the target sentence pattern.
A question-answer interaction device is applied to an intelligent conversation system, the intelligent conversation system can respond to received input information and provide feedback information, and the question-answer interaction device comprises:
the acquisition module is used for acquiring user input information, and the user input information represents that a user expects the intelligent session system to feed back the feedback information;
the judging module is used for judging whether the reduction of the user emotional state represented by the user input information meets a first preset condition or not according to the user input information and the historical interaction information;
and the output module is used for outputting M of the N pieces of feedback information acquired for the user input information if the judgment result represents that the reduction of the user's emotional state meets a first preset condition, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
A question-answer interaction device is applied to an intelligent conversation system, the intelligent conversation system can respond to received input information and provide feedback information, and the question-answer interaction device comprises:
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
Acquiring user input information, wherein the user input information represents that a user expects the intelligent session system to feed back the feedback information;
judging whether the reduction of the user emotional state represented by the user input information meets a first preset condition or not according to the user input information and the historical interaction information;
and if the judgment result indicates that the reduction of the user's emotional state meets a first preset condition, outputting M of the N pieces of feedback information acquired for the user input information, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
According to the above scheme, with the question-answer interaction method and device provided by the application, each time the user inputs information, whether the reduction of the user's emotional state meets the preset condition is judged according to the user input information and the historical interaction information. If the judgment result is yes, the user's emotion has declined, and multiple pieces of feedback information are provided for the user to select from, which improves the answer hit rate, prolongs the life of the conversation, and increases the probability that the intelligent conversation system solves the user's problem.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of an implementation of a question-answer interaction method provided in an embodiment of the present application;
fig. 2 is a flowchart illustrating an implementation of determining whether a decrease in a user emotional state represented by user input information satisfies a first preset condition according to the user input information and historical interaction information according to the embodiment of the present application;
fig. 3 is a flowchart of an implementation of analyzing user input information to obtain a user emotion represented by the user input information according to the embodiment of the present application;
fig. 4 is a schematic structural diagram of a question-answer interaction device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another question-answering interaction device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in other sequences than those illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
The question-answer interaction method and device are applied to an intelligent conversation system, and the intelligent conversation system can respond to received input information and provide feedback information. The intelligent conversation system can be an after-sale customer service system, or can be a pre-sale customer service system, or other systems for providing services for users, and the like.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a question-answer interaction method according to an embodiment of the present application, which may specifically include:
step S11: and acquiring user input information, wherein the user input information represents that the user expects the intelligent conversation system to feed back the feedback information.
The user can input information in a text mode or a voice mode, namely, the information input by the user can be text or voice.
Step S12: and judging whether the reduction of the user emotional state represented by the user input information meets a first preset condition or not according to the user input information and the historical interaction information.
After the intelligent conversation system provides feedback information for the user's input information, the feedback information may not meet the user's expectations, and the user is likely to develop a negative emotion. For example, in a customer service system, if the intelligent conversation system gives an irrelevant answer to the user's question, the user is likely to develop a negative emotion.
In the embodiment of the application, the user emotion can be divided into positive emotion, neutral emotion or negative emotion. Positive emotions can be subdivided into different degrees, such as low positive (e.g., less satisfied), high positive (e.g., very satisfied), etc., and negative emotions can be subdivided into different degrees, such as light negative (e.g., disappointed), severe negative (e.g., angry), etc.
In the embodiment of the application, the reduction of the user emotion state can mean that the user emotion develops towards a negative emotion direction.
Step S13: if the judgment result indicates that the reduction of the user's emotional state meets the first preset condition, outputting M of the N pieces of feedback information acquired for the user input information, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
After the intelligent conversation system acquires the user's input information, it can identify the user's intention and screen out candidate solutions according to that intention. In the embodiment of the application, when the reduction of the user's emotional state meets the first preset condition, the N candidate feedback items most relevant to the user input information are screened out, and at least two of them (denoted M) are selected for output, in descending order of relevance to the user input information.
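The application does not give a concrete implementation of this screening step. As a minimal sketch under assumed names (the relevance scores and the select_feedback function are illustrative, not part of the disclosure), the selection could look like the following Python fragment:

```python
from typing import List, Tuple

def select_feedback(candidates: List[Tuple[str, float]],
                    emotion_declined: bool,
                    m: int = 3) -> List[str]:
    """Pick feedback from (text, relevance) candidates.

    If the user's emotional state has declined, return the top M
    candidates (M >= 2) so the user can choose; otherwise return
    only the single most relevant answer.
    """
    # Sort candidates from highest to lowest relevance to the user input.
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if emotion_declined:
        return [text for text, _ in ranked[:max(2, m)]]
    return [ranked[0][0]] if ranked else []

# Example usage with made-up candidates.
answers = [("Repair your phone", 0.92),
           ("Check repair status", 0.87),
           ("Buy a new phone", 0.40)]
print(select_feedback(answers, emotion_declined=True, m=2))
```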
According to the question-answer interaction method provided by the embodiment of the application, in the question-answer interaction process, each time the user inputs information, whether the reduction of the user's emotional state meets the preset condition is judged according to the user input information and the historical interaction information. If the judgment result is yes, the user's emotion has declined, that is, it is developing toward a negative emotion, and multiple pieces of feedback information are provided for the user to select from, which improves the hit rate of answers, prolongs the life of the conversation, and increases the probability that the intelligent conversation system solves the user's problem.
In an alternative embodiment, when M of the N pieces of feedback information acquired for the user input information are output, information expressing an apology may be output together with the M pieces of feedback information in order to soothe the user's emotion. The apology message is located before the M pieces of feedback information.
For example: "Sorry, my mistake. Do you mean XXX, or is it **?"
In an optional embodiment, after the user input information is acquired, if it is determined that the emotional state promotion of the user represented by the user input information meets the second preset condition according to the user input information and the historical interaction information, 1 of N feedback information acquired for the user input information may be output, where the 1 feedback information is the feedback information most relevant to the user input information.
The promotion of the user's emotional state means that the user's emotion develops in the positive direction. When the promotion of the user's emotional state meets the second preset condition, it indicates that the user's intention has been correctly identified and that the feedback information provided for the user input information meets the user's expectations.
In an optional embodiment, as shown in fig. 2, the flow chart for determining whether the reduction of the user emotional state represented by the user input information meets the first preset condition according to the user input information and the historical interaction information may include:
step S21: analyzing the user input information to obtain a first emotion characteristic of the user input information, wherein the first emotion characteristic at least comprises user emotion represented by the user input information.
Step S22: determining a third scoring increment corresponding to the user input information according to a first scoring increment corresponding to the first emotional feature and a second scoring increment corresponding to the second emotional feature of each piece of user input information in the historical interaction information; the scoring increment is used to represent the direction and degree of change of the user's emotion. The scoring increment can be positive or negative: a positive value indicates that the user's emotion is developing in the positive direction, a negative value indicates that it is developing in the negative direction, and the absolute value of the scoring increment indicates the degree of that development; the larger the absolute value, the greater the degree.
When an emotional feature contains only one item of information, for example only the user emotion, the scoring increment corresponding to that emotional feature is the scoring increment corresponding to that item; if the emotional feature contains several items of information, its scoring increment is the sum or the weighted sum of the scoring increments corresponding to those items. The scoring increments corresponding to different items of information may be the same or different.
The first scoring increment represents the direction and degree of change of the user emotion represented by the information currently input by the user. The second scoring increment corresponding to the second emotional feature of a certain piece of user input information in the historical interaction information (for convenience of description, recorded as historical input information) represents the direction and degree of change of the user emotion represented by that historical input information. The third scoring increment represents the overall direction and degree of change of the user's emotion throughout the dialog so far.
The correspondence between emotional features and scoring increments can be stored in the intelligent conversation system in advance. Generally, the scoring increment corresponding to a negative emotion is a negative value, the scoring increment corresponding to a positive emotion is a positive value, and the scoring increment corresponding to a neutral emotion is 0.
Step S23: and if the third scoring increment represents that the user emotion changes towards the negative emotion direction and the absolute value of the third scoring increment is larger than the preset value, determining that the user emotion state reduction represented by the user input information meets a first preset condition.
For convenience of calculation, the emotional features corresponding to the information input by the user each time can be added into the feature pool, and the emotional features are sorted according to the sequence of the information input by the user. When the score increment needs to be calculated by utilizing the emotional characteristics, the emotional characteristics are directly extracted from the characteristic pool.
Optionally, if the third score increment represents that the user emotion changes to the positive emotion direction, or although the third score increment represents that the user emotion changes to the negative emotion direction, the absolute value of the third score increment is less than or equal to the preset value, it may be determined that the user emotion state represented by the user input information is promoted to meet a second preset condition.
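As a minimal illustration of this decision step (the threshold value and the function name are assumptions, not values given by the application), the third scoring increment could be mapped to the two conditions as follows:

```python
def classify_emotion_trend(third_increment: float, threshold: float = 2.0) -> str:
    """Map the third scoring increment to a trend decision.

    A negative increment means the user's emotion is moving toward
    negative emotion; a positive increment, or a negative increment whose
    absolute value does not exceed the threshold, is treated as promotion
    of the emotional state.
    """
    if third_increment < 0 and abs(third_increment) > threshold:
        # Reduction of the user's emotional state meets the first preset condition.
        return "first_condition_met"
    # Otherwise the promotion of the emotional state meets the second preset condition.
    return "second_condition_met"

print(classify_emotion_trend(-3.5))  # -> first_condition_met
print(classify_emotion_trend(1.0))   # -> second_condition_met
```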
In an alternative embodiment, the user input information includes two parts, denoted as first information and second information, wherein,
the first information represents whether the feedback information provided by the intelligent conversation system is accurate or not before the user input information is acquired; the second information represents feedback information expected by the user to the intelligent session system;
Correspondingly, the first emotional characteristic also comprises: the first information. Based on this, when the score increment of the emotional feature needs to be calculated, if the first information represents that the feedback information provided by the intelligent conversation system is accurate, the score increment corresponding to the first information is a positive value, and if the first information represents that the feedback information provided by the intelligent conversation system is inaccurate, the score increment corresponding to the first information is a negative value.
In the embodiment of the application, each time the intelligent conversation system provides feedback information for the information input by the user, it also provides an option indicating whether that feedback information is correct, and the user selects whether the feedback information provided by the intelligent conversation system is correct or not. This provides a basis for selecting the next answer.
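For illustration only, a possible scoring rule for the first information could be the following small Python helper; the numeric values +1 and -1 are assumptions, not values disclosed in the application:

```python
def first_info_increment(feedback_was_accurate: bool) -> int:
    """Scoring increment contributed by the first information,
    i.e. the user's choice of whether the previous feedback was accurate."""
    # Hypothetical values: +1 when the previous feedback was marked accurate,
    # -1 when it was marked inaccurate.
    return 1 if feedback_was_accurate else -1

print(first_info_increment(True))   # 1
print(first_info_increment(False))  # -1
```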
In an optional embodiment, an implementation flowchart of analyzing the user input information to obtain a user emotion represented by the user input information, which is shown in fig. 3, may include:
step S31: and processing the user input information by using a pre-trained information extraction model to extract entity information in the user input information, wherein the entity information is a predefined word representing the user emotion.
Entity information refers to information with a specific meaning, for example a person name, a place name, an institution name, a proper noun, an expletive, or the like. In the embodiments of the present application, the entity information may at least include expletives.
The information extraction model can be obtained by training with samples annotated with entity information. The information extraction model can be a CRF (Conditional Random Field) model, an LSTM (Long Short-Term Memory) + CRF model, or the like.
Step S32: and processing the user input information and the entity information by using a pre-trained text classification model to obtain the user emotion represented by the user input information. The text classification model can adopt a convolutional neural network or a long-term and short-term memory network and the like.
In the embodiment of the application, the text classification model carries out word segmentation processing on the user input information, extracts key words from word segmentation processing results, combines the extracted key words and entity information into the characteristics of the user input information, and identifies the user emotion represented by the user input information by using the characteristics.
It should be noted that if there is no entity information in the information input by the user, the entity information input to the text classification model is null.
According to the above method for analyzing the emotion of the user input information, entity information with specific emotional significance is incorporated into the emotion analysis process, so that the emotion analysis result is closer to the user's real emotion.
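The application does not disclose the internals of the two models, so the following Python sketch only mimics the shape of the pipeline: a lexicon lookup stands in for the pre-trained information extraction model (e.g. CRF or LSTM+CRF), and a simple polarity rule stands in for the pre-trained text classification model; the word lists and labels are hypothetical.

```python
from typing import List

# Hypothetical lexicon of predefined emotion-bearing words ("entity information").
EMOTION_LEXICON = {"great": "positive", "thanks": "positive",
                   "useless": "negative", "waste": "negative"}

def extract_entities(user_input: str) -> List[str]:
    """Stand-in for the information extraction model: return predefined
    emotion words found in the user input (an empty list means 'null')."""
    tokens = user_input.lower().split()
    return [t for t in tokens if t in EMOTION_LEXICON]

def classify_emotion(user_input: str, entities: List[str]) -> str:
    """Stand-in for the text classification model: combine keywords from
    the (trivially 'segmented') input with the entity information, then
    map the combined features to a coarse emotion label."""
    keywords = user_input.lower().split()      # trivial word segmentation
    features = set(keywords) | set(entities)   # combine keywords and entities
    polarity = sum(1 if EMOTION_LEXICON[f] == "positive" else -1
                   for f in features if f in EMOTION_LEXICON)
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

text = "Don't waste my time"
entities = extract_entities(text)
print(classify_emotion(text, entities))  # -> negative
```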
In an optional embodiment, the determining, according to the first score increment corresponding to the first emotional feature and the second score increment corresponding to the second emotional feature of each piece of user input information in the historical interactive information, a third score increment corresponding to the user input information includes:
and acquiring a user emotion key determined according to the portrait information and/or the first user input information in the historical interaction information of the user.
The portrait information of a user refers to behavioral feature information of the user, such as call frequency, conversation duration, number of conversation turns, problem resolution rate, whether expletives or other emotion-laden words occur during the conversation, the frequency of such words, and the like. The user portrait may be obtained by analyzing, with a pre-trained portrait model, data of multiple sessions between the user and the intelligent conversation system (the multiple sessions are generally the several sessions before the current session, where one session runs from the intelligent conversation system starting a conversation with the user to ending that conversation).
In the application, the emotion key of the user can be determined only by the portrait of the user, can also be determined only by the first input information in the historical interactive information, or can be determined by combining the portrait of the user and the first input information in the historical interactive information.
After the user portrait is established, the user portrait can be analyzed according to certain rules to determine the user emotion corresponding to the portrait, and that user emotion is the user's emotion key. For example, a portrait in which polite and modest words appear frequently and the expression is courteous can be regarded as a positive emotion key, while the occurrence of expletives can be regarded as a negative emotion key. When the emotion key needs to be determined through the user portrait, the user emotion corresponding to the user portrait can be read directly.
The user emotion represented by the first input information in the historical interaction information can also be used as a user emotion key.
If the emotion key of the user is determined by combining the user portrait and the first input information in the historical interaction information, a first emotion key can be determined based on the user portrait, a second emotion key can be determined according to the first input information in the historical interaction information, a third emotion key can be determined according to the first emotion key and the second emotion key, and the third emotion key is used as the emotion key of the user.
The third emotion key can be whichever of the first emotion key and the second emotion key expresses the more negative emotion. For example, if the first emotion key is a positive emotion and the second emotion key is a negative emotion, the third emotion key may be the negative emotion. As another example, if the first emotion key is a mild negative emotion and the second emotion key is a severe negative emotion, the third emotion key may be the severe negative emotion.
Or,
the third emotion key can be obtained by adjusting the first emotion key in the emotional direction characterized by the second emotion key. For example, if the first emotion key is a positive emotion and the second emotion key is a negative emotion, the third emotion key may be a neutral emotion. As another example, if the first emotion key is a high positive emotion and the second emotion key is a negative emotion, the third emotion key may be a low positive emotion.
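A minimal sketch of the two ways of combining the emotion keys, assuming an ordered scale of emotion labels (the scale and both helper functions are illustrative readings of the above text, not a disclosed implementation):

```python
# Ordered emotion scale from most negative to most positive (assumed labels).
SCALE = ["severe_negative", "light_negative", "neutral", "low_positive", "high_positive"]

def combine_more_negative(first_key: str, second_key: str) -> str:
    """Variant 1: the third key is whichever key expresses the more negative emotion."""
    return min(first_key, second_key, key=SCALE.index)

def combine_shift(first_key: str, second_key: str) -> str:
    """Variant 2: shift the first key one step toward the direction of the second key."""
    i, j = SCALE.index(first_key), SCALE.index(second_key)
    if j < i:
        return SCALE[i - 1]   # second key is more negative: move down one step
    if j > i:
        return SCALE[i + 1]   # second key is more positive: move up one step
    return first_key

print(combine_more_negative("low_positive", "light_negative"))  # light_negative
print(combine_shift("high_positive", "light_negative"))         # low_positive
```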
After the emotion key of the user is obtained, a first scoring increment corresponding to the first emotional feature, a fourth scoring increment corresponding to the user's emotion key, and the second scoring increments corresponding to the second emotional features of each piece of user input information except the first piece of user input information in the historical interaction information are summed to obtain the third scoring increment. Since the user emotion key may be determined based on the first piece of user input information in the historical interaction information, the emotion of that first piece of user input information may be left out when calculating the third scoring increment. Of course, the emotion of the first piece of user input information in the historical interaction information may also still be taken into account when calculating the third scoring increment.
Or,
after the emotion key of the user is obtained, a first scoring increment corresponding to the first emotion feature, a fourth scoring increment corresponding to the emotion key of the user and a second scoring increment corresponding to a second emotion feature of each user input information in the historical interactive information are summed to obtain a third scoring increment.
Optionally, if the user emotion key is a positive emotion key, the scoring increment corresponding to the user emotion key may be greater than the scoring increment corresponding to a single item of information in an emotional feature; for example, if the scoring increment corresponding to a positive user emotion is +1, the scoring increment corresponding to a positive user emotion key may be +5. The scoring increment corresponding to a negative user emotion key is smaller than that corresponding to a positive user emotion key; for example, the scoring increment corresponding to a negative user emotion key may be 0.
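Putting these pieces together, the summation could be sketched as follows; the concrete increment values are assumptions, and the skip_first_history flag corresponds to the variant that excludes the first piece of user input information:

```python
from typing import List

def third_increment(first_increment: float,
                    keynote_increment: float,
                    history_increments: List[float],
                    skip_first_history: bool = True) -> float:
    """Sum the increment of the current input, the increment of the user
    emotion key, and the increments of the historical inputs.

    If skip_first_history is True, the first user input of the history is
    excluded, since the emotion key may already have been derived from it.
    """
    history = history_increments[1:] if skip_first_history else history_increments
    return first_increment + keynote_increment + sum(history)

# Example: positive emotion key (+5, hypothetical), then two negative turns.
print(third_increment(first_increment=-1, keynote_increment=5,
                      history_increments=[1, -1, -1]))  # -1 + 5 - 1 - 1 = 2
```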
In an optional embodiment, in the process of calculating the third scoring increment, if a continuous decrease in score occurs, for example the scoring increments corresponding to the emotional features of two consecutive pieces of user input information are both negative, an extra deduction is applied: after the third scoring increment corresponding to the user input information is determined according to the first scoring increment corresponding to the first emotional feature and the second scoring increments corresponding to the second emotional features of the pieces of user input information in the historical interaction information, a preset score value is subtracted from the third scoring increment to obtain the final third scoring increment.
If scores increase consecutively, an extra bonus is not necessarily added, because during a conversation (especially a conversation with a non-human agent) the user subconsciously tends toward a doubtful attitude, negative emotions arise more easily than positive ones, and several correct answers may merely make up for an earlier wrong answer. However, if the emotional state has been rising ever since the beginning of the conversation (a satisfactory answer has been given several times in a row), a bonus point can be added.
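A sketch of this adjustment, with assumed penalty and bonus values:

```python
from typing import List

def adjust_third_increment(third: float, history_increments: List[float],
                           penalty: float = 1.0, bonus: float = 1.0) -> float:
    """Apply the consecutive-trend adjustment described above."""
    last_two = history_increments[-2:]
    if len(last_two) == 2 and all(x < 0 for x in last_two):
        # Two consecutive negative increments: subtract an extra preset value.
        return third - penalty
    if history_increments and all(x > 0 for x in history_increments):
        # Emotional state has risen since the start of the conversation: add a bonus.
        return third + bonus
    return third

print(adjust_third_increment(-3.0, [-1, -2]))  # -4.0
print(adjust_third_increment(4.0, [1, 2, 1]))  # 5.0
```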
In an alternative embodiment, a target sentence pattern is determined when the user input information contains first information indicating that the feedback information provided by the intelligent conversation system before the user input information was acquired is accurate. The target sentence pattern may be a relatively gentle and mild sentence pattern.
When M of the N pieces of feedback information acquired for the user input information are output, the M pieces of feedback information are output according to the target sentence pattern.
In the embodiment of the application, when the intelligent conversation system gives correct answers but the user's emotion shows a negative trend, the problem lies in the way the intelligent conversation system presents the feedback information.
For example, the user inputs: "Do the chargers you sell support mini USB ports?" The intelligent conversation system detects that the user intends to buy a mobile phone accessory and directly pushes a link answer telling the user where to find Motorola products, together with the options YES or NO. Here the intelligent conversation system has no problem understanding the user's intention, but the way the answer is presented is too blunt. If the user selects YES but the subsequent emotion analysis finds that the user's emotion shows a negative trend, then in the following dialog the sentence pattern is optimized when the answer is pushed, for example by using a gentler pattern such as "Do you ...?", in order to improve the user's mood.
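A sketch of such sentence-pattern optimization; the template text and function name are illustrative assumptions:

```python
def render_answer(answer: str, feedback_was_accurate: bool,
                  emotion_declined: bool) -> str:
    """Choose a sentence pattern for the answer.

    When the previous feedback was marked accurate but the user's emotion
    still trends negative, a softer "Do you want to ...?" pattern is used.
    """
    if feedback_was_accurate and emotion_declined:
        return f"Do you want to {answer}?"
    return answer

print(render_answer("check the link for Motorola products", True, True))
# -> Do you want to check the link for Motorola products?
```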
In an optional embodiment, the question-answer interaction method provided in the present application may further include:
when K consecutive pieces of user input information all contain first information indicating that the feedback information provided by the intelligent conversation system before the user input information was acquired is wrong, or when the feedback information provided by the intelligent conversation system is irrelevant to the information input by the user, switching to a human agent. K is a positive integer greater than 1; for example, K may be 2 or 3, or another value.
In certain cases, for example at the initial stage of the conversation, if the intelligent conversation system fails to recognize the user's intention after the user inputs information for the first time, the information fed back to the user may be "Sorry, I don't understand", which can quickly frustrate the user. If the next round of dialog still fails to provide a satisfactory answer, then in order to ease the user's mood and solve the problem, the system may switch to a human agent proactively, without waiting for the user to ask for one.
The value of K can vary with the stage of the session. For example, at the beginning of the session the value of K may be smaller, for example 2; if the session has already lasted for some time or gone through multiple rounds, the value of K may be larger, for example 3.
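A sketch of the hand-over rule with a session-stage-dependent K; the turn threshold and the K values of 2 and 3 follow the example above, while the function name and parameters are assumptions:

```python
def should_switch_to_human(consecutive_failures: int, turn_index: int,
                           early_turns: int = 4) -> bool:
    """Decide whether to switch the conversation to a human agent.

    K is smaller at the start of the session (the user is frustrated more
    easily) and larger once the conversation has been going on for a while.
    """
    k = 2 if turn_index <= early_turns else 3
    return consecutive_failures >= k

print(should_switch_to_human(consecutive_failures=2, turn_index=2))  # True
print(should_switch_to_human(consecutive_failures=2, turn_index=8))  # False
```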
In another alternative embodiment, if the number of dialog turns exceeds a certain number but the user's problem has still not been solved, the user should be proactively switched to a human agent, because the user's dialog with the intelligent conversation system is highly purposeful rather than casual chat; even if the user's current emotional state value (i.e., scoring increment) is high, that may only mean the user has a good temper, while with respect to the dialog itself the user is already in a negative emotional state.
The following illustrates some of the differences of the present application from the prior art.
In the prior art, a session scenario between a user and a traditional intelligent session system (hereinafter referred to as a traditional customer service) may be as follows:
and (3) user input: excuse me, wuold you mind helium me to repair my phone?
Traditional customer service: the plug check outer website to find the latest models.
And (3) user input: what is needed is about you saying I need to get my phone replayed.
Traditional customer service: please check outer wet to find the latest models.
And (3) user input: don't water my time, I water to talk to a live agent.
Obviously, the answers fed back twice by the customer service are wrong, and the answers are given in the same manner (the same sentence pattern, tone, content, etc.) regardless of what information the user enters.
Based on the scheme provided by the present application, the session scenario between the user and the intelligent session system (hereinafter referred to as "emotional customer service") of the present application may be as follows:
and (3) user input: excuse me, wuold you mind hellping me to repair my phone?
Here, emotion analysis is performed on the information input by the user, the user emotion is determined to be a positive emotion, and a relatively high user emotion key can be given. Based on this, the emotional customer service gives the following feedback information:
emotion customer service: i'm sorry to heel that, Angela. great kinase chemical bed position to find the latest models.
Based on the embodiment of the application, the customer service replies in the normal manner and, at the same time, gives some information related to the user (prestored in the intelligent conversation system), such as a name or title. The name "Angela" is used in this example, improving the user experience. However, the intelligent conversation system misunderstands the user's intention and deduces a wrong answer.
And (3) user input: don't you understand what I mean needed to get my phone replayed.
The intelligent conversation system analyzes the user's input and finds that the user's emotion is a negative emotion, which can be marked with a label, such as a negative label. That is, the user has developed a negative emotion because the user's intention was previously misunderstood.
The intelligent conversation system calculates, according to the emotion key and the user input, that the reduction of the user's emotional state meets the first preset condition. When pushing the answer, it first expresses an apology, then abandons the previous answer and tries to push several related candidate answers, which increases the hit rate and retains the customer. Based on this, the emotional customer service gives the following answer:
emotion customer service: sorry, it's my fault, do you mean your phone or check you repeat status?
And (3) user input: repair my phone.
The intelligent conversation system analyzes the user's input and finds that the user's emotion is neutral, and gives a normal label; the user's emotion has recovered to some extent because the user's intention has now been correctly understood.
The intelligent conversation system calculates, according to the emotion key and the two user inputs, that the promotion of the user's emotional state meets the second preset condition, which means the answer has probably hit; the normal tone is restored, and a single answer with the relevant information is provided. Based on this, the emotional customer service gives the following answer:
emotion customer service: ok, please find this site www.XXXX.com to file a repeat form.
The problem is solved and the session ends.
Based on the embodiment of the application, the process of the conversation between the user and the emotion customer service may also be as follows:
And (3) user input: excuse me, wuold you mind hellping me to repair my phone?
Emotion customer service: i'm sorry to heel that, Angela. great kinase shift check outer position to find the latest models. Yes or no?
And (3) user input: no, Don't you understand what I mean needed to get my phone replayed.
Emotion customer service: sorry, it's my fault, do you mean your phone or check you repeat houses Yes or no?
And (3) user input: yes, repair my phone.
Emotion customer service: ok, please find this site www.XXXX.com to file a repeat form.
Corresponding to the method embodiment, the present application further provides a question-answer interaction device, and a schematic structural diagram of the question-answer interaction device provided in the embodiment of the present application is shown in fig. 4, and may include:
an acquisition module 41, a judgment module 42 and an output module 43; wherein,
the obtaining module 41 is configured to obtain user input information, where the user input information indicates that the user expects the intelligent session system to feed back the feedback information.
The judging module 42 is configured to judge whether the reduction of the user emotional state represented by the user input information meets a first preset condition according to the user input information and the historical interaction information.
The output module 43 is configured to output M of the N pieces of feedback information obtained for the user input information if the judgment result indicates that the reduction of the user's emotional state meets the first preset condition, where M and N are positive integers greater than 1, and M is less than or equal to N.
According to the question-answer interaction device, in the question-answer interaction process, each time the user inputs information, whether the reduction of the user's emotional state meets the preset condition is judged according to the user input information and the historical interaction information. If the judgment result is yes, the user's emotion has declined, that is, it is developing toward a negative emotion, and multiple pieces of feedback information are provided for the user to select from, which improves the answer hit rate, prolongs the life of the conversation, and increases the probability that the intelligent conversation system solves the user's problem.
In an alternative embodiment, the output module 43 may further be configured to:
and if the judgment result indicates that the emotional state of the user is improved to meet a second preset condition, outputting 1 of the N feedback information acquired aiming at the input information of the user.
In an optional embodiment, when outputting M feedback information of the N feedback information acquired for the user input information, the output module 43 is specifically configured to:
outputting information for expressing apology and M of the N pieces of feedback information acquired aiming at the input information of the user;
The information for apology is located before the M feedback information.
In an alternative embodiment, the determining module 42 may specifically be configured to:
analyzing the user input information to obtain a first emotion characteristic of the user input information, wherein the first emotion characteristic at least comprises user emotion represented by the user input information;
determining a third grading increment corresponding to the user input information according to a first grading increment corresponding to the first emotion characteristic and a second grading increment corresponding to a second emotion characteristic of each user input information in the historical interactive information; the scoring increment is used for representing the change direction and the change degree of the user emotion;
and if the third scoring increment represents that the user emotion changes to the negative emotion, and the absolute value of the third scoring increment is larger than the preset value, determining that the user emotion state reduction represented by the user input information meets a first preset condition.
In an alternative embodiment, the user input information includes: first information and second information;
the first information represents whether the feedback information provided by the intelligent conversation system is accurate or not before the user input information is acquired; the second information represents that the user expects the intelligent conversation system to feed back the feedback information;
The first emotional characteristic also comprises: the first information.
In an optional embodiment, the determining module 42 analyzes the user input information, and when obtaining the user emotion, may specifically be configured to:
processing user input information by using a pre-trained information extraction model to extract entity information in the user input information, wherein the entity information is a predefined word representing user emotion;
and processing the user input information and the entity information by using a pre-trained text classification model to obtain the user emotion represented by the user input information.
In an optional embodiment, when determining the third score increment, the determining module 42 may specifically be configured to:
acquiring a user emotion key determined according to first user input information in portrait information and/or historical interaction information of a user;
summing a first scoring increment corresponding to the first emotion characteristic, a fourth scoring increment corresponding to the user emotion key and a second scoring increment corresponding to a second emotion characteristic of each piece of user input information except the first piece of user input information in the historical interactive information to obtain a third scoring increment; or,
and summing a first scoring increment corresponding to the first emotion characteristic, a fourth scoring increment corresponding to the user emotion key and a second scoring increment corresponding to a second emotion characteristic of each user input information in the historical interactive information to obtain a third scoring increment.
In an optional embodiment, when the output module 43 outputs M feedback information of the N feedback information acquired for the user input information, the output module may specifically be configured to:
if the first information representation is before the user input information is acquired, the feedback information provided by the intelligent conversation system is accurate, and a target sentence pattern is determined;
and outputting M feedback information in the N feedback information acquired aiming at the user input information according to the target sentence pattern.
Another schematic structural diagram of the question-answering interaction device provided in the embodiment of the present application is shown in fig. 5, and may include:
a memory 51 and a processor 52; wherein,
the memory 51 is used for storing at least one set of instructions;
the processor 52 is configured to call and execute a set of instructions in the memory 51, and by executing the set of instructions, performs the following operations:
acquiring user input information, wherein the user input information represents that a user expects the intelligent conversation system to feed back the feedback information;
judging whether the reduction of the user emotional state represented by the user input information meets a first preset condition or not according to the user input information and the historical interaction information;
and if the judgment result indicates that the reduction of the user's emotional state meets a first preset condition, outputting M of the N pieces of feedback information acquired for the user input information, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
According to the question-answer interaction device, in the question-answer interaction process, each time the user inputs information, whether the reduction of the user's emotional state meets the preset condition is judged according to the user input information and the historical interaction information. If the judgment result is yes, the user's emotion has declined, that is, it is developing toward a negative emotion, and multiple pieces of feedback information are provided for the user to select from, which improves the answer hit rate, prolongs the life of the conversation, and increases the probability that the intelligent conversation system solves the user's problem.
In an alternative embodiment, processor 52 may be further configured to:
and if the judgment result indicates that the emotional state of the user is improved to meet a second preset condition, outputting 1 of the N pieces of feedback information acquired aiming at the input information of the user.
In an optional embodiment, when outputting M of the N feedback information obtained for the user input information, the processor 52 is specifically configured to:
outputting information for expressing apology and M of the N pieces of feedback information acquired aiming at the input information of the user;
the information for apology is located before the M feedback information.
In an optional embodiment, when determining whether the decrease in the user emotional state represented by the user input information satisfies the first preset condition, the processor 52 may specifically be configured to:
Analyzing the user input information to obtain a first emotion characteristic of the user input information, wherein the first emotion characteristic at least comprises user emotion represented by the user input information;
determining a third grading increment corresponding to the user input information according to a first grading increment corresponding to the first emotion characteristic and a second grading increment corresponding to a second emotion characteristic of each user input information in the historical interactive information; the scoring increment is used for representing the change direction and the change degree of the user emotion;
and if the third scoring increment represents that the user emotion changes to the negative emotion, and the absolute value of the third scoring increment is larger than the preset value, determining that the user emotion state reduction represented by the user input information meets a first preset condition.
In an alternative embodiment, the user input information includes: first information and second information;
the first information represents whether the feedback information provided by the intelligent conversation system is accurate or not before the user input information is acquired; the second information represents that the user expects the intelligent conversation system to feed back the feedback information;
the first emotional characteristics further comprise: the first information.
In an alternative embodiment, the processor 52 analyzes the user input information, and when obtaining the user emotion, may specifically be configured to:
Processing user input information by using a pre-trained information extraction model to extract entity information in the user input information, wherein the entity information is a predefined word representing user emotion;
and processing the user input information and the entity information by using a pre-trained text classification model to obtain the user emotion represented by the user input information.
In an optional embodiment, when determining the third score increment, processor 52 may be specifically configured to:
acquiring a user emotion key determined according to first user input information in portrait information and/or historical interaction information of a user;
summing a first scoring increment corresponding to the first emotion characteristic, a fourth scoring increment corresponding to the user emotion key and a second scoring increment corresponding to a second emotion characteristic of each piece of user input information except the first piece of user input information in the historical interactive information to obtain a third scoring increment; or,
and summing a first scoring increment corresponding to the first emotion characteristic, a fourth scoring increment corresponding to the user emotion key and a second scoring increment corresponding to a second emotion characteristic of each user input information in the historical interactive information to obtain a third scoring increment.
In an optional embodiment, when outputting M feedback information of the N feedback information acquired for the user input information, the processor 52 may specifically be configured to:
if the first information representation is before the user input information is acquired, the feedback information provided by the intelligent conversation system is accurate, and a target sentence pattern is determined;
and outputting M feedback information in the N feedback information acquired aiming at the user input information according to the target sentence pattern.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that the features of the embodiments and of the claims may be combined with one another to solve the technical problems described above.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A question-answer interaction method applied to an intelligent conversation system, the intelligent conversation system being capable of responding to received input information and providing feedback information, the method being characterized by comprising the following steps:
acquiring user input information, wherein the user input information represents that a user expects the intelligent conversation system to feed back the feedback information;
judging, according to the user input information and historical interaction information, whether a reduction in the user emotional state represented by the user input information meets a first preset condition;
and if the judgment result indicates that the reduction in the user emotional state meets the first preset condition, outputting M feedback information of N feedback information acquired for the user input information, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
2. The method of claim 1, further comprising:
and if the judgment result indicates that an improvement in the user emotional state meets a second preset condition, outputting 1 of the N feedback information acquired for the user input information.
3. The method of claim 1, wherein outputting M of the N feedback information obtained for the user input information comprises:
outputting information expressing an apology and M of the N feedback information acquired for the user input information;
wherein the information expressing the apology is located before the M feedback information.
4. The method according to claim 1, wherein judging, according to the user input information and the historical interaction information, whether the reduction in the user emotional state represented by the user input information meets the first preset condition comprises:
analyzing the user input information to obtain a first emotion characteristic of the user input information, wherein the first emotion characteristic at least comprises the user emotion represented by the user input information;
determining a third scoring increment corresponding to the user input information according to a first scoring increment corresponding to the first emotion characteristic and second scoring increments corresponding to second emotion characteristics of each piece of user input information in the historical interaction information, wherein a scoring increment is used for representing the direction and degree of change of the user emotion;
and if the third scoring increment represents that the user emotion changes toward a negative emotion and the absolute value of the third scoring increment is greater than a preset value, determining that the reduction in the user emotional state represented by the user input information meets the first preset condition.
5. The method of claim 4, wherein the user input information comprises: first information and second information;
the first information represents whether the feedback information provided by the intelligent conversation system before the user input information was acquired is accurate; the second information represents that the user expects the intelligent conversation system to feed back the feedback information;
and the first emotion characteristic further comprises the first information.
6. The method of claim 4 or 5, wherein analyzing the user input information to obtain the user emotion represented by the user input information comprises:
processing the user input information by using a pre-trained information extraction model to extract entity information from the user input information, wherein the entity information is a predefined word representing the user emotion;
and processing the user input information and the entity information by using a pre-trained text classification model to obtain the user emotion represented by the user input information.
7. The method according to claim 4 or 5, wherein determining the third scoring increment corresponding to the user input information according to the first scoring increment corresponding to the first emotion characteristic and the second scoring increments corresponding to the second emotion characteristics of each piece of user input information in the historical interaction information comprises:
acquiring a user emotion key determined according to portrait information of the user and/or the first user input information in the historical interaction information;
summing the first scoring increment corresponding to the first emotion characteristic, a fourth scoring increment corresponding to the user emotion key, and the second scoring increments corresponding to the second emotion characteristics of each piece of user input information except the first piece of user input information in the historical interaction information to obtain the third scoring increment; or
summing the first scoring increment corresponding to the first emotion characteristic, the fourth scoring increment corresponding to the user emotion key, and the second scoring increments corresponding to the second emotion characteristics of each piece of user input information in the historical interaction information to obtain the third scoring increment.
8. The method of claim 5, wherein outputting M of the N feedback information obtained for the user input information comprises:
if the first information represents that the feedback information provided by the intelligent conversation system before the user input information was acquired is accurate, determining a target sentence pattern;
and outputting, according to the target sentence pattern, M feedback information of the N feedback information acquired for the user input information.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the question-answer interaction method according to any one of claims 1 to 8.
10. A question-answer interaction device for use in an intelligent conversation system, the intelligent conversation system being capable of responding to received input information and providing feedback information, the device comprising:
a memory for storing at least one set of instructions; and
a processor for invoking and executing the set of instructions in the memory, and by executing the set of instructions, performing:
acquiring user input information, wherein the user input information represents that a user expects the intelligent conversation system to feed back the feedback information;
judging, according to the user input information and historical interaction information, whether a reduction in the user emotional state represented by the user input information meets a first preset condition;
and if the judgment result indicates that the reduction in the user emotional state meets the first preset condition, outputting M feedback information of N feedback information acquired for the user input information, wherein M and N are positive integers greater than 1, and M is less than or equal to N.
CN201811644570.9A 2018-12-30 2018-12-30 Question-answer interaction method and device Active CN109670030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811644570.9A CN109670030B (en) 2018-12-30 2018-12-30 Question-answer interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811644570.9A CN109670030B (en) 2018-12-30 2018-12-30 Question-answer interaction method and device

Publications (2)

Publication Number Publication Date
CN109670030A CN109670030A (en) 2019-04-23
CN109670030B true CN109670030B (en) 2022-06-28

Family

ID=66146978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811644570.9A Active CN109670030B (en) 2018-12-30 2018-12-30 Question-answer interaction method and device

Country Status (1)

Country Link
CN (1) CN109670030B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021225550A1 (en) * 2020-05-06 2021-11-11 Iren Yaser Deniz Emotion recognition as feedback for reinforcement learning and as an indicator of the explanation need of users
CN111985248A (en) * 2020-06-30 2020-11-24 联想(北京)有限公司 Information interaction method and device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678361A (en) * 2012-09-13 2014-03-26 腾讯科技(深圳)有限公司 Method and device for switching search results
CN104156359A (en) * 2013-05-13 2014-11-19 腾讯科技(深圳)有限公司 Linking information recommendation method and device
JP2016071394A (en) * 2014-09-26 2016-05-09 日本電信電話株式会社 Emotional information providing device, emotional information providing method, and emotional information providing program
CN105183848A (en) * 2015-09-07 2015-12-23 百度在线网络技术(北京)有限公司 Human-computer chatting method and device based on artificial intelligence
CN106126636A (en) * 2016-06-23 2016-11-16 北京光年无限科技有限公司 A kind of man-machine interaction method towards intelligent robot and device
WO2018147193A1 (en) * 2017-02-08 2018-08-16 日本電信電話株式会社 Model learning device, estimation device, method therefor, and program
CN108153169A (en) * 2017-12-07 2018-06-12 北京康力优蓝机器人科技有限公司 Guide to visitors mode switching method, system and guide to visitors robot
CN108491519A (en) * 2018-03-26 2018-09-04 上海智臻智能网络科技股份有限公司 Man-machine interaction method and device, storage medium, terminal
CN108553905A (en) * 2018-03-30 2018-09-21 努比亚技术有限公司 Data feedback method, terminal and computer storage media based on game application
CN109036405A (en) * 2018-07-27 2018-12-18 百度在线网络技术(北京)有限公司 Voice interactive method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
The influence of question attributes on the answer domain in social Q&A platforms; Yao Dan et al.; 《图书情报知识》 (Documentation, Information & Knowledge); 2016-05-14 (No. 171); 103-109 *

Also Published As

Publication number Publication date
CN109670030A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN112365894B (en) AI-based composite voice interaction method and device and computer equipment
CN107230475B (en) Voice keyword recognition method and device, terminal and server
CN106599196B (en) Artificial intelligence dialogue method and system
CN107609092B (en) Intelligent response method and device
CN106847305B (en) Method and device for processing recording data of customer service telephone
CN111078856B (en) Group chat conversation processing method and device and electronic equipment
CN109670030B (en) Question-answer interaction method and device
CN111063370B (en) Voice processing method and device
CN110347817B (en) Intelligent response method and device, storage medium and electronic equipment
CN110569344B (en) Method and device for determining standard question corresponding to dialogue text
CN112183098B (en) Session processing method and device, storage medium and electronic device
JP6952663B2 (en) Response support device and response support method
CN112632242A (en) Intelligent conversation method and device and electronic equipment
CN110489519B (en) Session method based on session prediction model and related products
CN110390109B (en) Method and device for analyzing association relation among multiple group chat messages
CN114490955A (en) Intelligent dialogue method, device, equipment and computer storage medium
CN110209792B (en) Method and system for generating dialogue color eggs
CN112182189A (en) Conversation processing method and device, electronic equipment and storage medium
CN114443821A (en) Robot conversation method and device, electronic equipment and storage medium
CN113539275B (en) Method, device and storage medium for determining speech technology
CN110516043B (en) Answer generation method and device for question-answering system
CN114003699A (en) Method and device for matching dialect, electronic equipment and storage medium
CN112243061A (en) Communication method of mobile terminal and mobile terminal
US20220188364A1 (en) Chat system and chat program
CN117556026B (en) Data generation method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant