CN117725163A - Intelligent question-answering method, device, equipment and storage medium - Google Patents

Intelligent question-answering method, device, equipment and storage medium

Info

Publication number
CN117725163A
CN117725163A (application CN202310755262.8A)
Authority
CN
China
Prior art keywords
answer
emotion
information
target
question
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310755262.8A
Other languages
Chinese (zh)
Inventor
刘喜凯
高龑
汤旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaohongshu Technology Co ltd
Original Assignee
Xiaohongshu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Xiaohongshu Technology Co ltd filed Critical Xiaohongshu Technology Co ltd
Priority to CN202310755262.8A
Publication of CN117725163A
Legal status: Pending

Landscapes

  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses an intelligent question-answering method, device, equipment and storage medium. The method comprises the following steps: carrying out emotion recognition processing on text information of a target object to obtain emotion information of the target object; calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script; and searching a reply text library for a target answer script matching the candidate answer script, and outputting the target answer script, wherein the reply text library comprises answer scripts for at least one piece of question information under each emotion category. By adopting the embodiment of the application, the output target answer script can be made more human-like and more empathetic, thereby improving the soothing effect of the answer script.

Description

Intelligent question-answering method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer applications, and in particular, to an intelligent question-answering method, apparatus, device, and storage medium.
Background
In the chat-robot scenario, the robot needs to give a suitable reply to the query input by the target object. Traditional intelligent question-answering methods can only ensure semantic consistency between the input sentence of the target object and the robot's reply sentence; the emotion information hidden in the input sentence is often ignored, and a rather generic reply sentence is generated, so that the replies received by the target object tend to be cold and impersonal and contain no soothing information.
Disclosure of Invention
The embodiment of the application provides an intelligent question-answering method, device, equipment and storage medium, which can ensure that the output target answer script is more human-like and more empathetic, thereby improving the soothing effect of the answer script.
In one aspect, an embodiment of the present application provides an intelligent question-answering method, which includes:
carrying out emotion recognition processing on text information of a target object to obtain emotion information of the target object;
calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script;
searching a reply text library for a target answer script matching the candidate answer script, and outputting the target answer script, wherein the reply text library comprises answer scripts for at least one piece of question information under each emotion category.
In one embodiment, the method further comprises:
carrying out emotion recognition processing on the target answer script to obtain emotion information of the target answer script;
acquiring an emoji element matching the emotion information of the target answer script;
splicing the emoji element and the target answer script to obtain a spliced target answer script;
the outputting of the target answer script includes:
outputting the spliced target answer script.
In one embodiment, the method further comprises:
obtaining a training sample, wherein the training sample comprises a question-answer pair, and the question-answer pair comprises training text information and a reference answer script that includes an emotion-soothing script;
carrying out emotion recognition processing on the training text information to obtain predicted emotion information;
calling an initial generation model to process the training text information and the predicted emotion information to obtain a predicted answer script corresponding to the training text information;
and training the initial generation model in the direction of reducing the difference between the predicted answer script and the reference answer script to obtain the generation model.
In one embodiment, the method further comprises:
acquiring a question-answer library, wherein the question-answer library comprises at least one question-answer pair, and each question-answer pair comprises one piece of question information and the reply information corresponding to that question information;
for any piece of question information, obtaining answer scripts for that question information under at least one emotion category, wherein the semantic features of each answer script match the semantic features of the reply information, each answer script includes an emotion-soothing script, and the emotion-soothing script contained in each answer script matches the emotion category of that answer script;
and storing the answer scripts for that question information under the at least one emotion category into the reply text library.
In one embodiment, the method further comprises:
and when the emotion information of the target object indicates that the emotion category of the target object is a negative emotion category, triggering execution of the step of calling the generation model to process the text information and the emotion information of the target object to obtain the candidate answer script corresponding to the text information.
In one embodiment, the searching of the reply text library for the target answer script matching the candidate answer script includes:
obtaining the degree of association between each answer script in the reply text library and the candidate answer script;
and determining the target answer script matching the candidate answer script based on the degree of association between each answer script and the candidate answer script.
In one embodiment, the obtaining of the degree of association between each answer script in the reply text library and the candidate answer script includes:
searching the reply text library for at least one answer script under the emotion category indicated by the emotion information of the target object;
and obtaining the degree of association between each of the at least one answer script and the candidate answer script.
On the other hand, the embodiment of the application provides an intelligent question-answering device, which comprises:
an emotion recognition unit, used for carrying out emotion recognition processing on the text information of the target object to obtain emotion information of the target object;
a script generation unit, used for calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script;
a script retrieval unit, used for searching a reply text library for a target answer script matching the candidate answer script, wherein the reply text library comprises answer scripts for at least one piece of question information under each emotion category;
and a script output unit, used for outputting the target answer script.
In another aspect, an embodiment of the present application provides a computer device including a processor, a storage device, and a communication interface that are connected to one another, where the storage device is configured to store a computer program that enables the computer device to perform the above method, the computer program comprises program instructions, and the processor is configured to invoke the program instructions to perform the following steps:
carrying out emotion recognition processing on text information of a target object to obtain emotion information of the target object;
calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script;
searching a reply text library for a target answer script matching the candidate answer script, and outputting the target answer script, wherein the reply text library comprises answer scripts for at least one piece of question information under each emotion category.
In another aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the above-described intelligent question-answering method.
In another aspect, embodiments of the present application provide a computer program product comprising a computer program adapted to be loaded by a processor and to perform the above-described intelligent question-answering method.
According to the embodiment of the application, emotion recognition processing is performed on the text information of a target object to obtain the emotion information of the target object; a generation model is then called to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information. Because the candidate answer script makes full use of the emotion information of the target object, the generated candidate answer script contains soothing information and can empathize with the target object. Furthermore, because an answer script produced by a generation model tends to be mechanical, and is neither as emotionally rich nor as semantically fluent as a sentence written by a real person, a target answer script matching the candidate answer script is searched for in the reply text library and used as the final answer script. This ensures that the output target answer script is more human-like and more empathetic, thereby improving the soothing effect of the answer script.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of an intelligent question-answering scenario provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of another intelligent question-answering scenario provided by embodiments of the present application;
fig. 3 is a schematic architecture diagram of an intelligent question-answering system according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an intelligent question-answering method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of another intelligent question-answering method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an intelligent question-answering device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Intelligent question answering is widely used in everyday life. At present, most intelligent question-answering systems simply give a corresponding answer to the user's input sentence, which the user may enter by voice input, touch input, or other means; changes in the user's intonation or in the emotion of the text are not taken into account, so the generated replies are cold and impersonal and contain no soothing information, giving the user a poor experience.
In view of this, the embodiment of the application provides an intelligent question-answering method. Emotion recognition processing is performed on the text information of a target object to obtain the emotion information of the target object; a generation model is then called to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information. Because the candidate answer script makes full use of the emotion information of the target object, the generated candidate answer script contains soothing information and can empathize with the target object. Furthermore, because an answer script produced by a generation model tends to be mechanical, and is neither as emotionally rich nor as fluent as a sentence written by a real person, a target answer script matching the candidate answer script is searched for in the reply text library and used as the final answer script, which ensures that the output target answer script is more human-like and more empathetic, thereby improving the soothing effect of the answer script.
The target object may refer to any object or a specific object, and is not specifically limited by the embodiment of the present application.
In the specific embodiments of the present application, the objects involved may refer to users, and related data about the users, such as text information, may be involved. When the embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use, and processing of the related data must comply with local laws, regulations, and standards.
As an example, as shown in the schematic diagram of the intelligent question-answering scenario in fig. 1, the intelligent question-answering method provided in the embodiment of the present application may be applied to an intelligent question-answering system, where the intelligent question-answering system may include at least one intelligent customer service 101 and at least one client 102. Any client 102 may obtain an input sentence (e.g., text information of a target object) submitted by a user through a session interface, and then send the input sentence to an intelligent customer service 101. The intelligent customer service 101 may perform emotion recognition processing on the input sentence to obtain the emotion information of the user, then call the generation model to process the input sentence and the emotion information of the user to obtain a candidate answer script corresponding to the input sentence, search the reply text library for a target answer script matching the candidate answer script, and send the target answer script to the client 102. The client 102 may then display the target answer script in the session interface.
As an example, as shown in the schematic diagram of the intelligent question-answering scenario in fig. 2, the intelligent question-answering method may be applied to a chat robot, which may obtain an input sentence (e.g., text information of a target object) submitted by a user through a user interface. The chat robot may then perform emotion recognition processing on the input sentence to obtain the emotion information of the user, call the generation model to process the input sentence and the emotion information of the user to obtain a candidate answer script corresponding to the input sentence, search the reply text library for a target answer script matching the candidate answer script, and display the target answer script in the user interface.
The execution subject of the intelligent question-answering method provided in the embodiment of the present application may be a computer device, where the computer device includes, but is not limited to, at least one of a server (such as the intelligent customer service shown in fig. 1), a terminal (such as the chat robot shown in fig. 2), and the like that can be configured to execute the method provided in the embodiment of the present application. In other words, the intelligent question-answering method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform or a content distribution platform, etc. The server side includes, but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
Referring to fig. 3, fig. 3 is a schematic architecture diagram of an intelligent question-answering system according to an embodiment of the present application. The target object may submit text information to the computer device; after acquiring the text information of the target object, the computer device may perform emotion recognition processing on it through an emotion recognition model to obtain the emotion information of the target object. The computer device may then call the generation model to process the text information and the emotion information of the target object, obtaining a candidate answer script corresponding to the text information. Further, the computer device may search the reply text library for a target answer script matching the candidate answer script through a retrieval module, and output the target answer script to the target object as the final answer script.
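To make the architecture of fig. 3 concrete, the following is a minimal Python sketch of the three-stage pipeline; it is an illustration only, and the component names and methods (the injected recognizer, generator, and retriever and their predict, generate, and search calls) are assumptions of this sketch rather than an API prescribed by the embodiment.

```python
# Minimal sketch of the three-stage pipeline of fig. 3. All names are
# illustrative assumptions; the embodiment prescribes no particular API.

class IntelligentQA:
    def __init__(self, recognizer, generator, retriever):
        self.recognizer = recognizer  # emotion recognition model
        self.generator = generator    # generation model
        self.retriever = retriever    # retrieval over the reply text library

    def answer(self, text: str) -> str:
        # Step 1: emotion recognition on the target object's text information
        emotion = self.recognizer.predict(text)
        # Step 2: generate a candidate answer script conditioned on both the
        # text information and the recognized emotion information
        candidate = self.generator.generate(text, emotion)
        # Step 3: retrieve the human-written target answer script that best
        # matches the candidate, and output it as the final answer script
        return self.retriever.search(candidate, emotion)
```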
In an alternative embodiment, when a person is in a negative emotional state, communication is easily swayed by that emotion, so effective communication cannot take place. Therefore, upon detecting that the target object is in a negative emotion, a candidate answer script containing soothing information may be generated based on the text information to soothe the emotion of the target object, thereby achieving effective communication. On this basis, after the emotion recognition processing is performed on the text information of the target object through the emotion recognition model to obtain the emotion information of the target object, the emotion category of the target object indicated by that emotion information can be obtained; when the emotion category of the target object is a negative emotion category, execution of the step of calling the generation model to process the text information and the emotion information of the target object is triggered, obtaining the candidate answer script corresponding to the text information.
When the emotion category of the target object is a positive emotion category or a neutral emotion category, a reply corresponding to the text information can be generated by a traditional intelligent question-answering method, such as the reply information corresponding to the question information that matches the text information in the question-answer library mentioned later. Here, question information that matches the text information means question information whose semantic features are at a distance from the semantic features of the text information smaller than a preset distance threshold. That is, the keywords contained in the question information are consistent with the keywords contained in the text information; in other words, the question indicated by the question information and the question indicated by the text information are the same question.
Optionally, when the emotion category of the target object is a positive emotion category or a neutral emotion category, the text information and the emotion information of the target object may also be processed to obtain a candidate answer script corresponding to the text information, and the reply text library may then be searched for a target answer script matching the candidate answer script. Here, the candidate answer script can be adapted to the emotion information of the target object; for example, if the emotion category of the target object is a positive emotion category, then the emotion category indicated by the emotion information of the candidate answer script is also a positive emotion category, and if the emotion category of the target object is a neutral emotion category, then the emotion category indicated by the emotion information of the candidate answer script is also a neutral emotion category.
In an alternative embodiment, after the target answer script is obtained, an emoji element matching the emotion information of the target answer script may be acquired; the emoji element and the target answer script are then spliced to obtain a spliced target answer script, which is then output. In this embodiment, adding an emoji element that matches the emotion information of the target answer script strengthens empathy and expressiveness, so that the output answer script is even more human-like.
Based on the description of fig. 3, please refer to fig. 4, which is a schematic flowchart of an intelligent question-answering method provided in an embodiment of the present application; the intelligent question-answering method may be executed by a computer device such as a server or a terminal. The intelligent question-answering scheme shown in fig. 4 includes, but is not limited to, steps S401 to S403, in which:
s401, carrying out emotion recognition processing on the text information of the target object to obtain emotion information of the target object.
In specific implementation, the computer device can perform emotion recognition processing on the text information of the target object through an emotion recognition model to obtain the emotion information of the target object. The emotion information may be used to indicate the emotion of the target object, such as happiness, excitement, calm, anger, sadness, feeling wronged, urgency, anxiety, frustration, and the like.
Illustratively, the emotion recognition model may include a convolutional neural network (Convolutional Neural Networks, CNN), a HiGRU model, or the like. For example, a BERT (Bidirectional Encoder Representations from Transformers) model, which is a classification model, may be obtained by pre-training and used to obtain the emotion information of the text information.
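Purely as an illustration, such a pre-trained classification model could be invoked as follows using the Hugging Face transformers library; the checkpoint path is a placeholder, and the embodiment does not prescribe this library, checkpoint, or label set.

```python
# Sketch of BERT-based emotion recognition; the checkpoint path is a
# hypothetical fine-tuned emotion classifier, not one named by the patent.
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="path/to/emotion-bert",  # placeholder checkpoint
)

result = emotion_classifier("Where on earth is Tagore from?")
print(result)  # e.g. [{'label': 'urgency', 'score': 0.93}] (assumed labels)
```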
For example, taking fig. 1 as an example, the target object may refer to any client or the user corresponding to a certain client, the user may log in to the client through an account, and the computer device may refer to an intelligent customer service. The target object may submit a query to the client by voice input, text input, or other means. If the target object submits voice information to the client, the client can send the voice information to the intelligent customer service, which converts it into the corresponding text information and then performs emotion recognition processing on the text information to obtain the emotion information of the target object. If the target object submits text information to the client, the client can send the text information to the intelligent customer service, which performs emotion recognition processing on it to obtain the emotion information of the target object.
Taking fig. 2 as an example, the target object may refer to any user interacting with the chat robot, and the computer device may refer to the chat robot; the user may submit a query to the chat robot by voice input, text input, or other means. If the target object submits voice information to the chat robot, the chat robot can convert the voice information into the corresponding text information and then perform emotion recognition processing on it to obtain the emotion information of the target object. If the target object submits text information to the chat robot, the chat robot can perform emotion recognition processing on the text information directly to obtain the emotion information of the target object.
In one implementation, the computer device may pre-establish a correspondence between the emotions indicated by emotion information and emotion categories, and then look up the emotion category corresponding to the emotion information of the target object based on that correspondence. For example, emotions such as happiness and excitement may be assigned to the positive emotion category; emotions such as calm and peace may be assigned to the neutral emotion category; and emotions such as anger, sadness, feeling wronged, urgency, anxiety, and frustration may be assigned to the negative emotion category.
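The correspondence described above amounts to a simple lookup table, as in the sketch below; the category names and the neutral fallback for unknown emotions are assumptions of this illustration.

```python
# Sketch of the emotion-to-category correspondence described above.
EMOTION_CATEGORY = {
    "happiness": "positive", "excitement": "positive",
    "calm": "neutral",       "peace": "neutral",
    "anger": "negative",     "sadness": "negative",
    "feeling wronged": "negative", "urgency": "negative",
    "anxiety": "negative",   "frustration": "negative",
}

def emotion_category(emotion: str) -> str:
    # Falling back to "neutral" for unknown emotions is an assumption.
    return EMOTION_CATEGORY.get(emotion, "neutral")
```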
S402, calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script.
The generation model in the embodiment of the application can generate an answer script containing soothing information, namely the candidate answer script, based on the text information and the emotion information of the target object. For example, assume the text information of the target object is "Where on earth is Tagore from?" The computer device performs emotion recognition processing on the text information and finds that the emotion of the target object is urgency; the computer device can then call the generation model to process the text information and the emotion information of the target object and obtain a candidate answer script corresponding to the text information, such as "Dear, don't worry, I'll answer you right away: Tagore is from India", thereby soothing the emotion of the target object while answering the target object's question.
In one implementation, the training of the generation model may proceed as follows: obtain a training sample, wherein the training sample comprises a question-answer pair, and the question-answer pair comprises training text information and a reference answer script that includes an emotion-soothing script; perform emotion recognition processing on the training text information to obtain predicted emotion information; call the initial generation model to process the training text information and the predicted emotion information to obtain a predicted answer script corresponding to the training text information; and train the initial generation model in the direction of reducing the difference between the predicted answer script and the reference answer script to obtain the generation model.
In this embodiment, supervised training of the initial generation model ensures that the trained generation model is able to generate candidate answer scripts that contain soothing information (i.e., emotion-soothing scripts) based on the text information and the emotion information. By way of example, the initial generation model may include an encoder-decoder (Encoder-Decoder) based deep learning model, an end-to-end (end2end) model, a Long Short-Term Memory (LSTM) model, and the like.
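As a non-authoritative illustration, the supervised training described above might look like the following PyTorch sketch, assuming a Hugging Face style encoder-decoder model whose forward pass returns a sequence-to-sequence loss; the emotion-conditioning format, optimizer, and hyperparameters are all assumptions of this sketch.

```python
# Sketch of supervised training of the initial generation model: the model is
# updated in the direction that reduces the difference (here, the model's own
# cross-entropy loss) between its predicted answer script and the reference
# answer script. Assumes a Hugging Face style encoder-decoder model.
import torch

def train_generator(model, tokenizer, samples, epochs=3, lr=1e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for text, emotion, reference_script in samples:
            # Condition on the training text information and the predicted
            # emotion information; the "[emotion] text" format is an assumption.
            inputs = tokenizer(f"[{emotion}] {text}", return_tensors="pt")
            labels = tokenizer(reference_script, return_tensors="pt").input_ids
            loss = model(**inputs, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```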
In one implementation, when the emotion information of the target object indicates that the emotion category of the target object is a negative emotion category, execution of the step of calling the generation model to process the text information and the emotion information of the target object can be triggered, obtaining the candidate answer script corresponding to the text information.
Specifically, when a person is in a negative emotional state, communication is easily swayed by that emotion, so effective communication cannot take place. Therefore, upon detecting that the target object is in a negative emotion, a candidate answer script containing soothing information may be generated based on the text information to soothe the emotion of the target object, thereby achieving effective communication. On this basis, after the emotion recognition processing is performed on the text information of the target object through the emotion recognition model to obtain the emotion information of the target object, the emotion category of the target object indicated by that emotion information can be obtained; when the emotion category of the target object is a negative emotion category, execution of the step of calling the generation model to process the text information and the emotion information of the target object is triggered, obtaining the candidate answer script corresponding to the text information.
Optionally, when the emotion category of the target object is a positive emotion category or a neutral emotion category, a reply corresponding to the text information may be generated by a traditional intelligent question-answering method, such as the reply information corresponding to the question information that matches the text information in the question-answer library mentioned later. Here, question information that matches the text information means question information whose semantic features are at a distance from the semantic features of the text information smaller than a preset distance threshold. That is, the keywords contained in the question information are consistent with the keywords contained in the text information; in other words, the question indicated by the question information and the question indicated by the text information are the same question.
Optionally, when the emotion category of the target object is a positive emotion category or a neutral emotion category, the text information and the emotion information of the target object may also be processed to obtain a candidate answer script corresponding to the text information, and the reply text library may then be searched for a target answer script matching the candidate answer script. Here, the candidate answer script can be adapted to the emotion information of the target object; for example, if the emotion category of the target object is a positive emotion category, then the emotion category indicated by the emotion information of the candidate answer script is also a positive emotion category, and if the emotion category of the target object is a neutral emotion category, then the emotion category indicated by the emotion information of the candidate answer script is also a neutral emotion category.
S403, searching a reply text library for a target answer script matching the candidate answer script, and outputting the target answer script, wherein the reply text library comprises answer scripts for at least one piece of question information under each emotion category.
Specifically, because an answer script produced by the generation model tends to be mechanical, and is neither as emotionally rich nor as semantically fluent as a sentence written by a real person, a target answer script matching the candidate answer script is searched for in the reply text library and used as the final answer script. This ensures that the output target answer script is more human-like and more empathetic, thereby improving the soothing effect of the answer script.
In one implementation, a question-answer library may be acquired, where the question-answer library includes at least one question-answer pair and each question-answer pair includes one piece of question information and the reply information corresponding to that question information. For any piece of question information, answer scripts for that question information under at least one emotion category are obtained, where the semantic features of each answer script match the semantic features of the reply information, each answer script includes an emotion-soothing script, and the emotion-soothing script contained in each answer script matches the emotion category of that answer script. The answer scripts for that question information under the at least one emotion category are then stored into the reply text library.
Specifically, the reply information in the question-answer library contains only the answer to the corresponding question information and carries no emotion; that is, the emotion category of the reply information is the neutral emotion category.
For example, for any piece of question information in the question-answer library, the answer scripts for that question information under at least one emotion category may be written manually; the answer scripts for each piece of question information under at least one emotion category may be submitted to the computer device by a management object (e.g., a developer or a linguist). For example, an answer script for the question information under the negative emotion category, an answer script under the positive emotion category, and an answer script under the neutral emotion category may be written manually. Alternatively, a finer-grained classification of emotion may be used, such as manually writing an answer script for the question information under a happy emotion, an answer script under an excited emotion, an answer script under a calm emotion, an answer script under an angry emotion, an answer script under an urgent emotion, and so on.
For example, assume a question-answer pair in the question-answer library whose question information is "Where is Tagore from?" and whose reply information is "India". The answer script for this question information under a happy emotion may be "Dear, Tagore is from India!"; the answer script under a calm emotion may be "Tagore is from India."; and the answer script under an urgent emotion may be "Dear, don't worry, I'll answer you right away: Tagore is from India."
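A minimal sketch of such a reply text library, keyed by question information and emotion category and populated with the Tagore example above, might look as follows; the identifiers and the dictionary layout are assumptions of this illustration.

```python
# Sketch of the reply text library: human-written answer scripts stored per
# (question information, emotion category). Identifiers are hypothetical.
reply_text_library: dict[str, dict[str, str]] = {}

def store_answer_script(question_id: str, emotion_category: str, script: str):
    reply_text_library.setdefault(question_id, {})[emotion_category] = script

store_answer_script("q_tagore", "happy",
                    "Dear, Tagore is from India!")
store_answer_script("q_tagore", "calm",
                    "Tagore is from India.")
store_answer_script("q_tagore", "urgent",
                    "Dear, don't worry, I'll answer you right away: "
                    "Tagore is from India.")
```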
As another example, for the reply information of any piece of question information in the question-answer library, modal particles for that reply information under at least one emotion category can be generated, and the modal particles under any given emotion category can then be added into the reply information to obtain the answer script for that question information under that emotion category.
Optionally, the modal particles for the reply information under at least one emotion category may be obtained through a modal-particle generation model. For example, the reply information may be input into the modal-particle generation model, which extracts the semantic features of the reply information and the semantic features of each emotion category, and then obtains the modal particles for the reply information under a given emotion category based on the semantic features of the reply information and the semantic features of that emotion category. The modal-particle generation model may be trained from labeled samples. For example, a training sample may be obtained that includes training reply information and a label for the training reply information, where the label includes reference modal particles for the training reply information under at least one emotion category. The initial modal-particle generation model may then be called to extract the semantic features of the training reply information and the semantic features of each emotion category, and to obtain, based on the semantic features of the training reply information and the semantic features of a given emotion category, the predicted modal particles for the training reply information under that emotion category. Further, the initial modal-particle generation model may be trained in the direction of reducing the difference between the predicted modal particles of the training reply information under each emotion category and the corresponding reference modal particles, yielding the modal-particle generation model. This training method ensures that the trained modal-particle generation model can accurately generate the modal particles for any reply information under at least one emotion category.
Optionally, adding the modal particles for the reply information under a given emotion category into the reply information to obtain the answer script for the question information under that emotion category may be realized through a modal-particle insertion model; it can be understood that the position at which the modal particles are inserted into the reply information is determined by the modal-particle insertion model. For example, the reply information and its modal particles under a given emotion category may be input into the modal-particle insertion model, which extracts the semantic features of the reply information, the semantic features of the emotion category, and the semantic features of the modal particles; based on these features it determines the insertion position of the modal particles, and then inserts the modal particles into the reply information at that position, obtaining the answer script for the question information under that emotion category. The modal-particle insertion model may be trained from labeled samples. For example, a training sample may be obtained that includes training reply information, training modal particles for the training reply information under a given emotion category, and a label for the training reply information, where the label may include a reference answer script under that emotion category. The initial modal-particle insertion model may then be called to extract the semantic features of the training reply information, the semantic features of the emotion category, and the semantic features of the modal particles under that emotion category, to determine the insertion position of the modal particles based on these features, and to insert the modal particles into the reply information at that position, yielding a predicted answer script. Further, the initial modal-particle insertion model may be trained in the direction of reducing the difference between the predicted answer script and the reference answer script under that emotion category, yielding the modal-particle insertion model. This training method ensures that the trained modal-particle insertion model can accurately determine the insertion position of the modal particles.
It may be understood that the modal-particle generation model and the modal-particle insertion model in the embodiments of the present application may be two independent neural network models, or may be integrated into a single neural network model; for example, a single neural network model may be trained that can both accurately generate the modal particles for any reply information under at least one emotion category and accurately determine their insertion position, thereby obtaining the answer script for any question information under any emotion category.
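The two learned models above are more than a short example can carry, so the following greatly simplified sketch substitutes a fixed lookup table for the modal-particle generation model and a fixed prepend rule for the modal-particle insertion model; every particle listed is an illustrative assumption.

```python
# Greatly simplified stand-in for the two models described above: a lookup
# table plays the modal-particle generation model, and a fixed prepend rule
# plays the modal-particle insertion model.
MODAL_PARTICLES = {  # hypothetical soothing particles per emotion category
    "urgent": "Dear, don't worry, I'll answer you right away: ",
    "happy":  "Dear, ",
    "calm":   "",
}

def make_answer_script(reply_information: str, emotion_category: str) -> str:
    # Prepending stands in for the learned insertion-position decision.
    return MODAL_PARTICLES.get(emotion_category, "") + reply_information

print(make_answer_script("Tagore is from India.", "urgent"))
# -> Dear, don't worry, I'll answer you right away: Tagore is from India.
```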
In one implementation, the searching of the reply text library for the target answer script matching the candidate answer script may include: obtaining the degree of association between each answer script in the reply text library and the candidate answer script, and determining the target answer script matching the candidate answer script based on those degrees of association.
Specifically, the reply text library may include answer scripts for at least one piece of question information under at least one emotion category. After obtaining the candidate answer script, the computer device may acquire the degree of association between each answer script in the reply text library and the candidate answer script, and then determine the target answer script matching the candidate answer script based on those degrees of association. For example, the answer script with the highest degree of association with the candidate answer script may be used as the target answer script matching the candidate answer script. As another example, the answer scripts whose degree of association with the candidate answer script is greater than a preset association threshold may be selected, one answer script may then be chosen at random from the selected ones, and that answer script may be used as the target answer script matching the candidate answer script.
Optionally, the computer device may obtain a first degree of association between the semantic features of each answer script in the reply text library and the semantic features of the candidate answer script, and a second degree of association between the emotion information of each answer script and the emotion information of the candidate answer script, and then obtain the degree of association between each answer script and the candidate answer script based on the first degree of association and the second degree of association.
If the first degree of association between the semantic features of an answer script in the reply text library and the semantic features of the candidate answer script is larger, the answer script and the candidate answer script are, with high probability, replies to the same question information. If the second degree of association between the emotion information of an answer script in the reply text library and the emotion information of the candidate answer script is larger, the answer script and the candidate answer script are, with high probability, answer scripts under the same emotion category. That is, the greater the first and second degrees of association, the greater the degree of association between the answer script and the candidate answer script. For example, the computer device may weight the first degree of association by the second degree of association to obtain the degree of association between the answer script and the candidate answer script. Alternatively, the computer device may compute a weighted sum of the first degree of association and the second degree of association to obtain the degree of association between the answer script and the candidate answer script.
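As an illustration of the weighted-sum variant, the sketch below assumes each answer script carries a semantic embedding and an emotion embedding and uses cosine similarity for both degrees of association; the attribute names and the weights are assumptions of this sketch.

```python
# Sketch of the weighted-sum combination of the first (semantic) and second
# (emotion) degrees of association; weights and attributes are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_degree(answer, candidate, w_sem=0.7, w_emo=0.3) -> float:
    first = cosine(answer.semantic_vec, candidate.semantic_vec)
    second = cosine(answer.emotion_vec, candidate.emotion_vec)
    return w_sem * first + w_emo * second

# The target answer script can then be, e.g., the library entry with the
# highest degree of association:
# target = max(entries, key=lambda a: association_degree(a, candidate))
```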
Optionally, the computer device may obtain the first degree of association between the semantic features of each answer script in the reply text library and the semantic features of the candidate answer script, select the answer scripts whose first degree of association is greater than a preset association threshold, then obtain the second degree of association between the emotion information of the selected answer scripts and the emotion information of the candidate answer script, and determine the answer script with the largest second degree of association as the answer script with the greatest degree of association with the candidate answer script.
Specifically, if the first degree of association between the semantic features of an answer script in the reply text library and the semantic features of the candidate answer script is larger, the answer script and the candidate answer script are, with high probability, replies to the same question information. Therefore, the computer device first selects the answer scripts whose first degree of association is greater than the preset association threshold, i.e., it searches the reply text library for the answer scripts that correspond to the same question information as the candidate answer script. From the answer scripts found, it then selects the one with the largest second degree of association, i.e., the answer script under the same emotion category as the candidate answer script, and uses the selected answer script as the answer script with the greatest degree of association with the candidate answer script.
In one implementation, obtaining the degree of association between each answer script in the reply text library and the candidate answer script may include: searching the reply text library for at least one answer script under the emotion category indicated by the emotion information of the target object, and obtaining the degree of association between each of the at least one answer script and the candidate answer script.
Specifically, the computer device may search the reply text library for at least one answer script under the emotion category indicated by the emotion information of the target object, i.e., the answer scripts under the same emotion category as the candidate answer script, and then obtain the degree of association between each of the answer scripts found and the candidate answer script. From the answer scripts found, it then selects the one that corresponds to the same question information as the candidate answer script, and uses the selected answer script as the answer script with the greatest degree of association with the candidate answer script.
According to the embodiment of the application, emotion recognition processing is performed on the text information of a target object to obtain the emotion information of the target object; a generation model is then called to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information; and a target answer script matching the candidate answer script is searched for in the reply text library and used as the final answer script. This ensures that the output target answer script is more human-like and more empathetic, thereby improving the soothing effect of the answer script.
Based on the above description, please refer to fig. 5, which is a schematic flowchart of an intelligent question-answering method provided in an embodiment of the present application; the intelligent question-answering method may be executed by a computer device such as a server or a terminal. The intelligent question-answering method shown in fig. 5 includes, but is not limited to, steps S501 to S507, in which:
s501, carrying out emotion recognition processing on text information of a target object to obtain emotion information of the target object.
S502, calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script.
S503, searching the reply text library for a target answer script matching the candidate answer script, wherein the reply text library comprises answer scripts for at least one piece of question information under each emotion category.
It can be understood that, for the specific implementation of steps S501 to S503 in the embodiments of the present application, reference may be made to the descriptions of steps S401 to S403 in the above embodiments, which are not repeated herein.
S504, carrying out emotion recognition processing on the target answer script to obtain the emotion information of the target answer script.
The manner of performing emotion recognition processing on the target answer script is the same as the manner of performing emotion recognition processing on the text information of the target object in the above embodiments; for details, refer to the related description in the above embodiments, which is not repeated here.
S505, acquiring an emoji element matching the emotion information of the target answer script.
For example, the emoji element may be an emoji, a custom sticker, a themed sticker, or the like. An emoji element matching the emotion information of the target answer script may be an emoji element expressing the same emotion as that indicated by the emotion information of the target answer script. For example, assume the text information of the target object is "Where on earth is Tagore from?" and the target answer script is "Dear, don't worry, I'll answer you right away: Tagore is from India". If the emotion indicated by the emotion information of the target answer script is recognized as "happy", it can be determined that the matching emoji element may be an emoticon expressing "happy", such as a smiling face.
In one implementation, the emoji element matching the emotion information of the target answer script may be obtained through an emoji-element generation model. For example, the emotion information of the target answer script may be input into the emoji-element generation model, which extracts the semantic features of the emotion information and then obtains the emoji element matching those semantic features. The emoji-element generation model may be trained from labeled samples. For example, a training sample may be obtained that includes training emotion information and a label for the training emotion information, where the label may include a reference emoji element matching the training emotion information. The initial emoji-element generation model may then be called to extract the semantic features of the training emotion information and obtain, based on those features, the predicted emoji element matching the training emotion information. Further, the initial emoji-element generation model may be trained in the direction of reducing the difference between the predicted emoji element and the reference emoji element, yielding the emoji-element generation model. This training method ensures that the trained emoji-element generation model can accurately generate the emoji element matching any emotion information.
S506, splicing the emoji element and the target answer script to obtain the spliced target answer script.
For example, the emoji element may be placed after the last character of the target answer script, as shown in fig. 1. It can be understood that the manner of splicing the emoji element and the target answer script is not limited in this embodiment of the present application; for example, the emoji element may instead be placed before the first character of the target answer script, or adjacent to the character of the target answer script that best matches the emotion indicated by the emoji element, and so on.
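A minimal sketch of steps S505 and S506 under the simplest variant, a fixed lookup table plus append-after-the-last-character splicing as in fig. 1, is given below; the table contents are assumptions, and the embodiment also describes learned models for both steps.

```python
# Sketch of steps S505-S506: look up an emoji element matching the emotion
# information of the target answer script, then splice it after the last
# character. The lookup table is a hypothetical stand-in for the learned
# emoji-element generation and splicing models.
EMOJI_FOR_EMOTION = {"happy": "😊", "sad": "😢", "calm": ""}  # assumptions

def splice_emoji(target_script: str, emotion: str) -> str:
    return target_script + EMOJI_FOR_EMOTION.get(emotion, "")

print(splice_emoji("Dear, don't worry, Tagore is from India", "happy"))
# -> Dear, don't worry, Tagore is from India😊
```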
In one implementation, splicing the emoji element and the target answer script to obtain the spliced target answer script may be realized through an emoji-element splicing model. The emoji element and the target answer script may be input into the emoji-element splicing model, which extracts the semantic features of the emoji element and the semantic features of the target answer script, determines the splicing position based on those features, and then splices the emoji element and the target answer script at that position, obtaining the spliced target answer script. The emoji-element splicing model may be trained from labeled samples. For example, a training sample may be obtained that includes a training emoji element, a training answer script, and a reference answer script. The initial emoji-element splicing model may then be called to extract the semantic features of the training emoji element and the semantic features of the training answer script, determine the splicing position of the training emoji element based on those features, and splice the training emoji element and the training answer script at that position, yielding a predicted answer script. Further, the initial emoji-element splicing model may be trained in the direction of reducing the difference between the predicted answer script and the reference answer script, yielding the emoji-element splicing model. This training method ensures that the trained emoji-element splicing model can accurately determine the splicing position of the emoji element.
S507, outputting the spliced target answer script.
In one implementation, after the spliced target answer script is output, feedback information submitted by the target object for the spliced target answer script can be obtained, and the reply text library can then be optimized based on that feedback, so that the optimized reply text library provides answer scripts with a stronger soothing effect and stronger personification.
For example, assume the text information of the target object is "Where on earth is Tagore from?". The computer device performs emotion recognition processing on the text information and determines that the emotion of the target object is anxious. The computer device then calls the generation model to process the text information and the emotion information of the target object, obtaining a candidate answer script corresponding to the text information, for example "Don't be anxious, Tagore is from India." A target answer script matched with the candidate answer script is then looked up in the reply text library, for example "Dear, don't worry, Tagore is from India." Further, emotion recognition processing is performed on the target answer script to obtain its emotion information, an emoji matched with that emotion information is obtained, the emoji and the target answer script are spliced, and the spliced target answer script is output. If the feedback information submitted by the target object for the spliced target answer script is "How can I not be anxious?", it can be determined from the feedback that the target object is not satisfied with the soothing phrasing, i.e. the modal words in the answer script need to be optimized. The target answer script in the reply text library can then be optimized, for example to "Dear, I'll answer you right away: Tagore is from India."
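A minimal sketch of that feedback loop, in Python, might look as follows. The reply library layout, the negative-feedback detector, and the revised script are all assumptions for illustration.

    # Hypothetical feedback loop: negative feedback on a spliced answer script
    # causes the stored script for that (question, emotion) entry to be revised.
    NEGATIVE_MARKERS = ("how can i not", "still anxious")   # toy detector

    reply_library = {
        ("where is Tagore from?", "anxious"):
            "Dear, don't worry, Tagore is from India.",
    }

    def apply_feedback(key, feedback, revised_script):
        if any(marker in feedback.lower() for marker in NEGATIVE_MARKERS):
            reply_library[key] = revised_script   # optimize the stored script

    apply_feedback(
        ("where is Tagore from?", "anxious"),
        "How can I not be anxious?",
        "Dear, I'll answer you right away: Tagore is from India.",
    )
    print(reply_library[("where is Tagore from?", "anxious")])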
In the embodiment of the application, emotion recognition processing is performed on the text information of a target object to obtain emotion information of the target object; a generation model is called to process the text information and the emotion information to obtain a candidate answer script corresponding to the text information, the candidate answer script including an emotion-soothing script; a target answer script matched with the candidate answer script is looked up in a reply text library, the reply text library including answer scripts of at least one piece of question information under each emotion category; emotion recognition processing is performed on the target answer script to obtain its emotion information; an emoji matched with that emotion information is obtained and spliced with the target answer script; and the spliced target answer script is output. This strengthens both the empathy and the expressiveness of the output, so that the output answer script is more strongly personified.
The present embodiment also provides a computer storage medium storing program instructions which, when executed, implement the corresponding methods described in the above embodiments.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an intelligent question-answering device according to an embodiment of the present application.
In one implementation, the intelligent question-answering device includes the following structure.
An emotion recognition unit 601, configured to perform emotion recognition processing on text information of a target object to obtain emotion information of the target object;
a script generation unit 602, configured to call a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information; wherein the candidate answer script includes an emotion-soothing script;
a script search unit 603, configured to look up, in a reply text library, a target answer script matched with the candidate answer script; wherein the reply text library includes answer scripts of at least one piece of question information under each emotion category;
and a script output unit 604, configured to output the target answer script.
In one embodiment, the emotion recognition unit 601 is further configured to perform emotion recognition processing on the target answer script to obtain emotion information of the target answer script;
the intelligent question-answering device may further include an acquisition unit 605 and a splicing unit 606, wherein:
the acquisition unit 605 is configured to obtain an emoji matched with the emotion information of the target answer script;
the splicing unit 606 is configured to splice the emoji with the target answer script to obtain a spliced target answer script;
and the script output unit 604 outputting the target answer script includes:
outputting the spliced target answer script.
In one embodiment, the intelligent question-answering device may further include an acquisition unit 605 and a training unit 607, wherein:
the acquisition unit 605 is configured to obtain a training sample, the training sample including a question-answer pair, the question-answer pair including training text information and a reference answer script that includes an emotion-soothing script;
the emotion recognition unit 601 is further configured to perform emotion recognition processing on the training text information to obtain predicted emotion information;
the script generation unit 602 is further configured to call an initial generation model to process the training text information and the predicted emotion information to obtain a predicted answer script corresponding to the training text information;
and the training unit 607 is configured to train the initial generation model in the direction of reducing the difference between the predicted answer script and the reference answer script to obtain the generation model, as sketched below.
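As a rough illustration of the training direction handled by the training unit 607, the sketch below fits a deliberately tiny generator so that its predicted answer script converges to the reference answer script under a token-level cross-entropy loss. The character vocabulary, emotion set, single question-answer pair, and fixed-length output head are all simplifying assumptions; a production model would be a sequence-to-sequence generator.

    # Toy generation-model training: reduce the difference between the
    # predicted answer script and the reference answer script.
    import torch
    import torch.nn as nn

    VOCAB = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ,.'?")}
    ID2CHAR = {i: c for c, i in VOCAB.items()}
    EMOTIONS = {"anxious": 0, "neutral": 1}

    def encode(text):
        return torch.tensor([VOCAB[c] for c in text.lower() if c in VOCAB])

    class TinyGenerator(nn.Module):
        def __init__(self, out_len, dim=64):
            super().__init__()
            self.out_len = out_len
            self.text = nn.EmbeddingBag(len(VOCAB), dim)     # training text features
            self.emotion = nn.Embedding(len(EMOTIONS), dim)  # predicted emotion features
            self.out = nn.Linear(dim, out_len * len(VOCAB))  # fixed-length output head

        def forward(self, text_ids, offsets, emotion_id):
            h = self.text(text_ids, offsets) + self.emotion(emotion_id)
            return self.out(h).view(-1, self.out_len, len(VOCAB))

    question = encode("where is tagore from?")
    reference = encode("dear, tagore is from india.")        # reference answer script
    model = TinyGenerator(out_len=len(reference))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    offsets = torch.tensor([0])
    emotion = torch.tensor([EMOTIONS["anxious"]])

    for _ in range(300):                                     # train toward the reference
        optimizer.zero_grad()
        logits = model(question, offsets, emotion)
        loss_fn(logits.view(-1, len(VOCAB)), reference).backward()
        optimizer.step()

    predicted = model(question, offsets, emotion).argmax(-1)[0]
    print("".join(ID2CHAR[i.item()] for i in predicted))     # converges to the reference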
In one embodiment, the intelligent question-answering device may further include an acquisition unit 605 and a storage unit 608, wherein:
the acquisition unit 605 is configured to obtain a question-answer library, the question-answer library including at least one question-answer pair, each question-answer pair including one piece of question information and answer information corresponding to that question information;
the acquisition unit 605 is further configured to obtain, for any piece of question information, answer scripts of that question information under at least one emotion category, wherein the semantic features of each answer script are matched with the semantic features of the answer information, each answer script includes an emotion-soothing script, and the emotion-soothing script contained in each answer script is matched with the emotion category of that answer script;
and the storage unit 608 is configured to store the answer scripts of that question information under at least one emotion category in the reply text library.
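The construction performed by the acquisition unit 605 and the storage unit 608 can be pictured with the following sketch, where each factual answer is wrapped in one soothing phrasing per emotion category. The question-answer pair and the per-emotion templates are assumptions; the application does not prescribe how the per-category scripts are produced.

    # Illustrative reply-text-library construction from a question-answer library:
    # one answer script per (question, emotion category), each semantically
    # matching the factual answer and carrying a soothing phrasing.
    qa_library = {
        "where is Tagore from?": "Tagore is from India.",
    }

    SOOTHING_TEMPLATES = {              # one assumed template per emotion category
        "anxious": "Dear, don't worry, {answer}",
        "sad":     "There, there. {answer}",
        "neutral": "{answer}",
    }

    reply_text_library = {
        (question, emotion): template.format(answer=answer)
        for question, answer in qa_library.items()
        for emotion, template in SOOTHING_TEMPLATES.items()
    }

    print(reply_text_library[("where is Tagore from?", "anxious")])
    # -> Dear, don't worry, Tagore is from India.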
In one embodiment, the script generation unit 602 is further configured, when the emotion information of the target object indicates that the emotion category of the target object is a negative emotion category, to call the generation model to process the text information and the emotion information of the target object to obtain the candidate answer script corresponding to the text information.
In one embodiment, the script search unit 603 looks up, in the reply text library, the target answer script matched with the candidate answer script by:
obtaining a degree of association between each answer script in the reply text library and the candidate answer script;
and taking the answer script with the highest degree of association with the candidate answer script as the target answer script matched with the candidate answer script.
In one embodiment, the script search unit 603 obtains the degree of association between each answer script in the reply text library and the candidate answer script by:
looking up, in the reply text library, at least one answer script under the emotion category indicated by the emotion information of the target object;
and obtaining the degree of association between each of the at least one answer script and the candidate answer script.
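The two-stage lookup just described (filter by emotion category, then rank by degree of association) can be sketched as follows. Bag-of-words cosine similarity stands in for whatever association measure the system actually uses; the library contents are illustrative.

    # Sketch of the script search: stage 1 filters the reply text library by the
    # target object's emotion category; stage 2 picks the answer script with the
    # highest association degree to the candidate answer script.
    from collections import Counter
    import math

    def association(a, b):
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm_a = math.sqrt(sum(v * v for v in va.values()))
        norm_b = math.sqrt(sum(v * v for v in vb.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def find_target_script(library, emotion, candidate):
        pool = [script for (_, emo), script in library.items() if emo == emotion]
        return max(pool, key=lambda script: association(script, candidate))

    library = {
        ("q1", "anxious"): "Dear, don't worry, Tagore is from India.",
        ("q2", "anxious"): "Dear, take a breath, the train leaves at nine.",
    }
    print(find_target_script(library, "anxious",
                             "Don't be anxious, Tagore is from India."))
    # -> Dear, don't worry, Tagore is from India.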
In this embodiment of the present application, the emotion recognition unit 601 performs emotion recognition processing on the text information of a target object to obtain emotion information of the target object; the script generation unit 602 then calls the generation model to process the text information and the emotion information, obtaining a candidate answer script corresponding to the text information; and the script search unit 603 looks up, in the reply text library, a target answer script matched with the candidate answer script and takes it as the final answer script. This ensures that the target answer script output by the script output unit 604 is more personified and has stronger empathy, thereby improving the soothing effect of the answer script.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer device provided in an embodiment of the present application. The computer device in this embodiment includes a processor 701, a storage device 702, and a communication interface 703, as well as structures such as a power supply module. Data can be exchanged among the processor 701, the storage device 702, and the communication interface 703, and the processor 701 implements the corresponding intelligent question-answering method.
The storage device 702 may include volatile memory, such as random-access memory (RAM); the storage device 702 may also include non-volatile memory, such as flash memory or a solid-state drive (SSD); and the storage device 702 may also include a combination of the above types of memory.
The processor 701 may be a central processing unit (CPU), or a combination of a CPU and a GPU; a server may include multiple CPUs and GPUs as required to conduct the corresponding intelligent question answering. In one embodiment, the storage device 702 is used to store program instructions, and the processor 701 may invoke those program instructions to implement the various methods referred to above in the embodiments of the present application.
In a first possible implementation, the processor 701 of the computer device invokes the program instructions stored in the storage device 702 to: perform emotion recognition processing on the text information of the target object to obtain emotion information of the target object; call a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information, wherein the candidate answer script includes an emotion-soothing script; look up, in a reply text library, a target answer script matched with the candidate answer script, wherein the reply text library includes answer scripts of at least one piece of question information under each emotion category; and output the target answer script through the communication interface 703.
In one embodiment, the processor 701 is further configured to perform the following operations:
performing emotion recognition processing on the target answer script to obtain emotion information of the target answer script;
obtaining an emoji matched with the emotion information of the target answer script;
splicing the emoji with the target answer script to obtain a spliced target answer script;
wherein the communication interface 703 outputting the target answer script includes:
outputting the spliced target answer script.
In one embodiment, the processor 701 is further configured to perform the following operations:
obtaining a training sample, the training sample including a question-answer pair, the question-answer pair including training text information and a reference answer script that includes an emotion-soothing script;
performing emotion recognition processing on the training text information to obtain predicted emotion information;
calling an initial generation model to process the training text information and the predicted emotion information to obtain a predicted answer script corresponding to the training text information;
and training the initial generation model in the direction of reducing the difference between the predicted answer script and the reference answer script to obtain the generation model.
In one embodiment, the processor 701 is further configured to perform the following operations:
obtaining a question-answer library, the question-answer library including at least one question-answer pair, each question-answer pair including one piece of question information and answer information corresponding to that question information;
obtaining, for any piece of question information, answer scripts of that question information under at least one emotion category, wherein the semantic features of each answer script are matched with the semantic features of the answer information, each answer script includes an emotion-soothing script, and the emotion-soothing script contained in each answer script is matched with the emotion category of that answer script;
and storing the answer scripts of that question information under at least one emotion category in the reply text library.
In one embodiment, when the emotion information of the target object indicates that the emotion category of the target object is a negative emotion category, the processor 701 is triggered to call the generation model to process the text information and the emotion information of the target object to obtain the candidate answer script corresponding to the text information.
In one embodiment, when looking up, in the reply text library, the target answer script matched with the candidate answer script, the processor 701 specifically performs the following operations:
obtaining a degree of association between each answer script in the reply text library and the candidate answer script;
and taking the answer script with the highest degree of association with the candidate answer script as the target answer script matched with the candidate answer script.
In one embodiment, when obtaining the degree of association between each answer script in the reply text library and the candidate answer script, the processor 701 specifically performs the following operations:
looking up, in the reply text library, at least one answer script under the emotion category indicated by the emotion information of the target object;
and obtaining the degree of association between each of the at least one answer script and the candidate answer script.
In this embodiment of the present application, the processor 701 performs emotion recognition processing on the text information of a target object to obtain emotion information of the target object, then calls the generation model to process the text information and the emotion information to obtain a candidate answer script corresponding to the text information, looks up, in the reply text library, a target answer script matched with the candidate answer script, and takes it as the final answer script. This ensures that the output target answer script is more personified and has stronger empathy, thereby improving the soothing effect of the answer script.
It will be appreciated by those skilled in the art that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored in a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The computer readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like. The computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The above disclosure describes only some examples of the present application and is not intended to limit the scope of the claims. Those skilled in the art will understand that all or part of the processes implementing the above embodiments, together with equivalent variations made according to the claims of the present application, still fall within the scope covered by the present invention.

Claims (10)

1. An intelligent question-answering method, characterized by comprising:
performing emotion recognition processing on text information of a target object to obtain emotion information of the target object;
calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information; wherein the candidate answer script comprises an emotion-soothing script;
looking up, in a reply text library, a target answer script matched with the candidate answer script, and outputting the target answer script; wherein the reply text library comprises answer scripts of at least one piece of question information under each emotion category.
2. The method according to claim 1, characterized in that the method further comprises:
performing emotion recognition processing on the target answer script to obtain emotion information of the target answer script;
obtaining an emoji matched with the emotion information of the target answer script;
splicing the emoji with the target answer script to obtain a spliced target answer script;
wherein the outputting the target answer script comprises:
outputting the spliced target answer script.
3. The method according to claim 1, characterized in that the method further comprises:
obtaining a training sample, the training sample comprising a question-answer pair, the question-answer pair comprising training text information and a reference answer script that comprises an emotion-soothing script;
performing emotion recognition processing on the training text information to obtain predicted emotion information;
calling an initial generation model to process the training text information and the predicted emotion information to obtain a predicted answer script corresponding to the training text information;
and training the initial generation model in the direction of reducing the difference between the predicted answer script and the reference answer script to obtain the generation model.
4. The method according to claim 1, characterized in that the method further comprises:
obtaining a question-answer library, the question-answer library comprising at least one question-answer pair, each question-answer pair comprising one piece of question information and answer information corresponding to that question information;
obtaining, for any piece of question information, answer scripts of that question information under at least one emotion category, wherein semantic features of each answer script are matched with semantic features of the answer information, each answer script comprises an emotion-soothing script, and the emotion-soothing script contained in each answer script is matched with the emotion category of that answer script;
and storing the answer scripts of that question information under at least one emotion category in the reply text library.
5. The method according to claim 1, characterized in that the method further comprises:
when the emotion information of the target object indicates that the emotion category of the target object is a negative emotion category, triggering execution of the calling a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information.
6. The method according to claim 1, characterized in that the looking up, in a reply text library, a target answer script matched with the candidate answer script comprises:
obtaining a degree of association between each answer script in the reply text library and the candidate answer script;
and determining the target answer script matched with the candidate answer script based on the degree of association between each answer script and the candidate answer script.
7. The method according to claim 6, characterized in that the obtaining a degree of association between each answer script in the reply text library and the candidate answer script comprises:
looking up, in the reply text library, at least one answer script under the emotion category indicated by the emotion information of the target object;
and obtaining the degree of association between each of the at least one answer script and the candidate answer script.
8. An intelligent question-answering device, characterized in that the device comprises:
an emotion recognition unit, configured to perform emotion recognition processing on text information of a target object to obtain emotion information of the target object;
a script generation unit, configured to call a generation model to process the text information and the emotion information of the target object to obtain a candidate answer script corresponding to the text information; wherein the candidate answer script comprises an emotion-soothing script;
a script search unit, configured to look up, in a reply text library, a target answer script matched with the candidate answer script; wherein the reply text library comprises answer scripts of at least one piece of question information under each emotion category;
and a script output unit, configured to output the target answer script.
9. A computer device, comprising a processor, a storage device, and a communication interface that are interconnected, wherein:
the storage device is configured to store a computer program, the computer program comprising program instructions;
and the processor is configured to invoke the program instructions to perform the intelligent question-answering method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, the computer program comprising program instructions which, when executed by a processor, perform the intelligent question-answering method according to any one of claims 1 to 7.
CN202310755262.8A 2023-06-25 2023-06-25 Intelligent question-answering method, device, equipment and storage medium Pending CN117725163A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310755262.8A CN117725163A (en) 2023-06-25 2023-06-25 Intelligent question-answering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310755262.8A CN117725163A (en) 2023-06-25 2023-06-25 Intelligent question-answering method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117725163A true CN117725163A (en) 2024-03-19

Family

ID=90198478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310755262.8A Pending CN117725163A (en) 2023-06-25 2023-06-25 Intelligent question-answering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117725163A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118035431A (en) * 2024-04-12 2024-05-14 青岛网信信息科技有限公司 User emotion prediction method, medium and system in text customer service process
CN118151818A (en) * 2024-05-08 2024-06-07 浙江口碑网络技术有限公司 Interaction method and device based on visual content
CN118151818B (en) * 2024-05-08 2024-07-26 浙江口碑网络技术有限公司 Interaction method and device based on visual content

Similar Documents

Publication Publication Date Title
EP3559946B1 (en) Facilitating end-to-end communications with automated assistants in multiple languages
CN111930940B (en) Text emotion classification method and device, electronic equipment and storage medium
CN112417102B (en) Voice query method, device, server and readable storage medium
CN115309877B (en) Dialogue generation method, dialogue model training method and device
US11636272B2 (en) Hybrid natural language understanding
CN113987147A (en) Sample processing method and device
CN113569017B (en) Model processing method and device, electronic equipment and storage medium
CN112632242A (en) Intelligent conversation method and device and electronic equipment
US11610584B2 (en) Methods and systems for determining characteristics of a dialog between a computer and a user
CN117725163A (en) Intelligent question-answering method, device, equipment and storage medium
CN117122927A (en) NPC interaction method, device and storage medium
CN116821290A (en) Multitasking dialogue-oriented large language model training method and interaction method
Inupakutika et al. Integration of NLP and Speech-to-text Applications with Chatbots
KR20180105501A (en) Method for processing language information and electronic device thereof
Adewale et al. Pixie: a social chatbot
CN116384412B (en) Dialogue content generation method and device, computer readable storage medium and terminal
CN110931002B (en) Man-machine interaction method, device, computer equipment and storage medium
CN109002498B (en) Man-machine conversation method, device, equipment and storage medium
CN115204181A (en) Text detection method and device, electronic equipment and computer readable storage medium
CN112183114B (en) Model training and semantic integrity recognition method and device
Wantroba et al. A method for designing dialogue systems by using ontologies
CN111966840B (en) Man-machine interaction management method and management system for language teaching
CN113743126B (en) Intelligent interaction method and device based on user emotion
Dima et al. Conversational Agent Embodying a Historical Figure using Transformers.
CN118298804A (en) Speech processing method, device, equipment and medium for intelligent customer service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination