CN117216229A - Method and device for generating customer service answers - Google Patents

Method and device for generating customer service answers

Info

Publication number
CN117216229A
Authority
CN
China
Prior art keywords
text
question
answer
model
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311476763.9A
Other languages
Chinese (zh)
Inventor
张�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311476763.9A
Publication of CN117216229A
Legal status: Pending

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Machine Translation (AREA)

Abstract

The embodiments of this specification relate to a method and a device for generating customer service answers. The method includes: acquiring a question text and user portrait features of a user; determining a prompt text for a large dialogue model according to the question text and the user portrait features, where the prompt text indicates the expected emotion and/or personality style type when answering the question; inputting the question text into a question-answering model to obtain a corresponding first answer text; and inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.

Description

Method and device for generating customer service answers
Technical Field
One or more embodiments of the present disclosure relate to the field of human-computer interaction, and in particular, to a method and apparatus for generating customer service answers.
Background
In recent years, with the popularization of the internet and the development of technology, demand for fast and convenient services keeps increasing. Traditional human customer service cannot serve a large number of users at the same time, and it also requires substantial human resources, which raises the operating costs of enterprises. The advent of customer service robots effectively eases this tension.
Existing customer service robots typically provide question consultation through a single robot dialogue model trained offline. Their drawbacks include uniform answers to all users, a lack of human emotion, and an inability to hold a dialogue beyond the predefined questions. A customer service robot solution that comes closer to human customer service is therefore needed.
Disclosure of Invention
One or more embodiments of the present disclosure describe a method and an apparatus for generating customer service answers, which determine, according to different user characteristics and question characteristics, the chat style expected by the user and return a customer service answer with the corresponding emotion and personality, so that the customer service robot presents a different face to each user ("thousand people, thousand faces") and comes closer to the style of a human customer service agent.
In a first aspect, a method for generating customer service answers is provided, including:
acquiring a question text raised by a user and user portrait features of the user;
determining a prompt text for a large dialogue model according to the question text and the user portrait features, wherein the prompt text indicates the expected emotion and/or personality style type when answering the question;
inputting the question text into a question-answering model to obtain a corresponding first answer text;
and inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
In one possible implementation, determining the prompt text for the large dialogue model according to the question text and the user portrait features includes:
extracting question features from the question text, and forming a first feature combination based on the question features and the user portrait features;
determining a prompt word combination from the first feature combination by using a preset first mapping relation, wherein the first mapping relation is a mapping relation between a plurality of feature combinations and a plurality of prompt word combinations;
and determining the prompt text from the prompt word combination by using a preset first template.
In one possible implementation, the prompt word combination includes a first prompt word characterizing the answer emotion and/or a second prompt word characterizing the personality trait.
In one possible implementation, the user portrait features include at least: occupation features, hobby features, and personality-style preferences regarding customer service.
In one possible implementation, the question features include at least: the difficulty level of the question, the urgency level of the question, and the emotional features of the question.
In one possible embodiment, the emotional features of the question are determined from the question text by means of an emotion analysis model.
In one possible embodiment, the difficulty level and the urgency level are determined by the following method:
performing similarity matching between the question text and the questions in a plurality of preset question-answer pairs, and determining the question with the highest similarity as a first candidate question, wherein the first candidate question has a preset first difficulty level and a preset first urgency level;
and determining the first difficulty level and the first urgency level as the difficulty level of the question and the urgency level of the question, respectively.
In one possible implementation, the question-answering model includes a plurality of preset question-answer pairs; inputting the question text into the question-answering model to obtain the corresponding first answer text includes:
performing similarity matching between the question text and the questions in the plurality of question-answer pairs, and determining the question with the highest similarity as a second candidate question;
and determining the answer corresponding to the second candidate question in its question-answer pair as the first answer text.
In one possible implementation, inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into the second answer text according to the style type, includes:
inputting the prompt text into the large dialogue model to set a first dialogue scene for the large dialogue model;
and, in the first dialogue scene, inputting the first answer text into the large dialogue model to obtain the converted second answer text.
In one possible implementation, inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into the second answer text according to the style type, includes:
splicing the prompt text and the first answer text, and inputting the spliced text into the large dialogue model to obtain the second answer text.
In a second aspect, an apparatus for generating a customer service answer is provided, including:
an acquisition unit configured to acquire a question text raised by a user and user portrait features of the user;
a prompt text generation unit configured to determine a prompt text for a large dialogue model according to the question text and the user portrait features, wherein the prompt text indicates the expected emotion and/or personality style type when answering the question;
a first answer generation unit configured to input the question text into a question-answering model to obtain a corresponding first answer text;
and a second answer generation unit configured to input the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
In a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
In a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has executable code stored therein, and wherein the processor, when executing the executable code, implements the method of the first aspect.
With the method and device for generating customer service answers provided in this specification, anthropomorphic dialogue text is generated by adding the expected emotion and/or personality information to the prompt text that is input into the large dialogue model. For different users, the chat style each user expects in the question-and-answer session is obtained by analyzing the user's portrait features and question text, and a corresponding customer service answer is generated, so that the customer service robot achieves a "thousand people, thousand faces" effect, that is, a style tailored to each user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments disclosed in this specification, the drawings needed for describing the embodiments are briefly introduced below. It is apparent that the drawings described below are only examples of the disclosed embodiments, and a person skilled in the art could derive other drawings from them without inventive effort.
FIG. 1 illustrates a schematic diagram of an implementation scenario of a method of generating customer service answers according to one embodiment;
FIG. 2 illustrates a flow diagram of a method of generating customer service answers, according to one embodiment;
FIG. 3 illustrates a flow diagram of a method of determining a prompt text from the question text and user portrait features, according to one embodiment;
FIG. 4 shows a schematic block diagram of an apparatus for generating customer service answers, according to one embodiment.
Description of the embodiments
The following describes the scheme provided in the present specification with reference to the drawings.
As described above, existing customer service robots typically provide question consultation through a single robot dialogue model trained offline, with drawbacks such as uniform answers, a lack of emotion, and an inability to hold dialogues beyond the predefined questions. In practical scenarios, different people have different conversational habits: for example, people unfamiliar with the internet need a robot that explains in more detail and expresses itself more simply, while users who have suffered fraud need a robot that soothes their emotions and responds quickly.
Current customer service robots still focus on common natural language understanding tasks such as recognizing the user's question. If multiple different models were trained for different user portrait groups along the traditional modeling pipeline, the training cost would be too high, and the models would be inflexible and unable to adapt to new scenarios.
A large dialogue model is a model built with deep learning, natural language processing, and related technologies that can carry out human-machine dialogue. Such models are typically based on neural network architectures and are trained on large-scale corpus data to achieve intelligent conversational capabilities.
In a large dialogue model, the prompt text (Prompt) refers to the information or question that a user inputs into the model to direct it to generate a corresponding answer or response. The prompt text may be a simple question, the context of a dialogue, or a complete dialogue scene; its role is to provide the model with the initial input that guides it to answer or to generate an appropriate response. By setting different prompt texts, the model can be guided to generate different answers to meet the user's needs. The user may directly input a question as the prompt text, for example "What color is a banana?", and the large dialogue model will generate the reply "Bananas are yellow." The user may also input "You are a proficient translator. I will input a passage of English below, and you need to translate it into Chinese." to set a dialogue scene for the large dialogue model, telling the model what scene it is currently in, what role it will play, and what it needs to do; then, when the user inputs "I am a student", the large dialogue model will reply with the Chinese translation of that sentence.
By exploiting the flexible dialogue capability of a large dialogue model, varied customer service answers can be generated while keeping the overall training cost low. Fig. 1 illustrates a schematic diagram of an implementation scenario of a method of generating customer service answers according to one embodiment. In the example of FIG. 1, after a user raises a question, the customer service system first obtains the user's portrait features, such as occupation, hobbies, and preferences regarding customer service, and at the same time extracts question features from the question text, such as its difficulty level, urgency level, and the emotional features of its content. The user portrait features and question features are then combined and mapped, using a preset mapping relation, into a corresponding prompt word combination; the prompt words may include the robot type, the talk type, the robot emotion, and so on. The prompt word combination is then filled into a preset template to generate the corresponding prompt text (prompt). At the same time, the question text is input into a traditional question-answering model trained offline in advance to obtain a generic answer text for the question; this generic answer is fixed and cannot be dynamically adjusted according to user portrait or question features. The prompt text and the generic answer text are then input together into the large dialogue model, so that the large dialogue model converts the generic answer text into a customized answer text in the customer service robot style indicated by the prompt text. The customized answer text is thus a targeted answer generated from the user portrait and question features associated with the question, achieving the "thousand people, thousand faces" effect for the customer service robot.
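Purely as an illustration of this flow, the end-to-end pipeline of FIG. 1 can be sketched as a single orchestration function. Everything below is an assumption for readability: the per-step helpers (feature extraction, mapping, template filling, question-answering retrieval, style rewriting) are passed in as callables and are not interfaces defined by this specification.

```python
# A minimal sketch of the FIG. 1 pipeline. All parameter names are hypothetical;
# the specification does not prescribe these interfaces.
def generate_customer_service_answer(
    question_text: str,
    user_portrait: dict,
    extract_question_features,   # callable: question text -> question feature dict
    map_to_prompt_words,         # callable: feature combination -> prompt word dict
    fill_prompt_template,        # callable: prompt word dict -> prompt text
    qa_model_answer,             # callable: question text -> generic (first) answer text
    dialogue_llm_rewrite,        # callable: (prompt text, first answer) -> second answer
) -> str:
    """Return a style-adapted customer service answer for one user question."""
    # Combine the user portrait features with features extracted from the question.
    feature_combination = {**user_portrait, **extract_question_features(question_text)}
    # Map the feature combination to prompt words and fill the preset template.
    prompt_text = fill_prompt_template(map_to_prompt_words(feature_combination))
    # Obtain the generic first answer from the offline question-answering model.
    first_answer = qa_model_answer(question_text)
    # Let the large dialogue model convert it into the styled second answer.
    return dialogue_llm_rewrite(prompt_text, first_answer)
```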
The specific implementation steps of the method for generating customer service answers are described below in combination with specific embodiments. Fig. 2 illustrates a flow chart of a method of generating customer service answers according to one embodiment; the method may be executed by any platform, server, or device cluster with computing and processing capabilities. As shown in fig. 2, the method at least includes: step 202, acquiring a question text raised by a user and user portrait features of the user; step 204, determining a prompt text for the large dialogue model according to the question text and the user portrait features, the prompt text indicating the expected emotion and/or personality style when answering the question; step 206, inputting the question text into a question-answering model to obtain a corresponding first answer text; and step 208, inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
First, in step 202, the question text raised by a user and the user portrait features of the user are acquired.
The question text may be text directly typed by the user in the customer service interface dialog box, text converted from the user's speech input by a speech-to-text tool, or a common question clicked by the user in a preset list of frequently asked questions; this is not limited herein.
A user portrait is a detailed description and analysis of a user. It is constructed in advance by analyzing data such as the user's personal information, behavior data, and interest preferences, and is stored in a corresponding user portrait database. The personal information used to generate the user portrait does not necessarily include private information; if private information needs to be used, it can be collected only with the user's authorization. The type of user information to be obtained may be presented, for example, through a privacy statement, allowing the user to choose whether it may be collected. On the basis of the acquired and stored user portrait data, the user portrait features of a user can be retrieved from the user portrait database by the user's identifier, such as the user id. Because users' behavior and needs change dynamically, the user portraits are also updated periodically.
In one embodiment, the user portrait features include at least: occupation features, hobby features, and personality-style preferences regarding customer service.
Then, in step 204, a prompt text for the large dialogue model is determined according to the question text and the user portrait features, the prompt text indicating the type of emotion and/or personality expected when answering the question.
In one embodiment, the specific flow of step 204 may be as shown in FIG. 3. FIG. 3 shows a flowchart of a method of determining the prompt text from the question text and the user portrait features, according to one embodiment; the method includes steps 302 to 306.
In step 302, question features of the question text are extracted, and a first feature combination is formed based on the question features and the user portrait features.
The question features of the question text include at least: the difficulty level of the question, the urgency level of the question, and the emotional features of the question.
In one embodiment, the difficulty level and the urgency level of the question are determined by the following method:
Common user questions and the corresponding standard answers provided by professional customer service staff or technicians are collected in advance to form a plurality of question-answer pairs, and a corresponding urgency level and difficulty level are set for each question.
Similarity matching is then performed between the question text and the questions in the plurality of preset question-answer pairs, and the question with the highest similarity is determined as the first candidate question, which has a preset first difficulty level and a preset first urgency level; the first difficulty level and the first urgency level are then taken as the difficulty level and the urgency level of the question, respectively.
There are various methods for matching the similarity between texts. For example, the texts may be input into a text encoder to obtain their representation vectors, and the cosine similarity between those vectors may be computed as the similarity between the texts. Alternatively, the texts may be segmented into words directly, and the word overlap between their segmentation results may be compared as the similarity between the texts. This specification is not limited in this respect.
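As an illustrative sketch only (the specification does not prescribe a particular encoder or segmenter), the two similarity variants mentioned above could look like this; the sentence-transformers model and the jieba segmenter are assumptions.

```python
# Sketch of both similarity variants described above. The chosen libraries and
# model name are assumptions, not requirements of this specification.
from sentence_transformers import SentenceTransformer, util
import jieba

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed encoder

def embedding_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between the two texts' representation vectors."""
    vec_a = encoder.encode(text_a, convert_to_tensor=True)
    vec_b = encoder.encode(text_b, convert_to_tensor=True)
    return float(util.cos_sim(vec_a, vec_b))

def overlap_similarity(text_a: str, text_b: str) -> float:
    """Word-overlap (Jaccard) similarity between the two texts' segmentation results."""
    words_a, words_b = set(jieba.cut(text_a)), set(jieba.cut(text_b))
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)
```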
In other embodiments, a corresponding machine learning model may also be pre-trained and used to determine the difficulty level and urgency level of the question.
In one embodiment, the emotional features of the question are determined from the question text by an emotion analysis model: the question text is input into a pre-trained emotion analysis model, which outputs the emotional features of the question.
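As an illustration only (the specification does not name a specific emotion analysis model), a pre-trained text-classification model could be wrapped like this; the transformers library and the model name are assumptions.

```python
# Sketch of obtaining an emotional feature of the question from its text.
# The transformers pipeline and the model name below are assumptions.
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="uer/roberta-base-finetuned-jd-binary-chinese",  # assumed sentiment model
)

def question_emotion(question_text: str) -> str:
    """Return a coarse emotion label for the question text (e.g. positive / negative)."""
    return emotion_classifier(question_text)[0]["label"]
```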
Then, in step 304, a prompt word combination is determined from the first feature combination by using a preset first mapping relation, where the first mapping relation is a mapping relation between a plurality of feature combinations and a plurality of prompt word combinations.
Specifically, the prompt words in the prompt word combination may include a first prompt word characterizing the answer emotion and/or a second prompt word characterizing the personality trait. For example, the prompt word characterizing the answer emotion may specify the robot emotion type, such as enthusiastic, calm, or soothing. The prompt words characterizing the personality trait may specify the robot type, such as specialized in a particular field, general-purpose, humorous, or anime-style, and may also specify the talk type, such as professional answers or concise answers.
The first mapping relation may be determined manually; when there are many feature combinations and prompt word combinations, it may also be determined by training a corresponding machine learning model, which is not limited herein. All possible feature combinations and prompt word combinations may be enumerated and a complete mapping between them determined, or one or several features may be mapped to one or several corresponding prompt words; this is likewise not limited herein. For example, when the target feature combination is {primary school student, likes painting, prefers patient customer service, simple question difficulty, ordinary question urgency, anxious question emotion}, the first mapping relation may map it to the prompt word combination {robot emotion: soothing, robot type: general, talk type: concise}.
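A minimal sketch of such a hand-written first mapping relation follows; all feature values and prompt words are purely hypothetical illustrations, not terms defined by this specification.

```python
# Minimal sketch of a hand-written first mapping relation between feature
# combinations and prompt word combinations. All values are hypothetical.
FIRST_MAPPING = {
    # (occupation, service preference, difficulty, urgency, question emotion)
    ("primary_school_student", "patient", "simple", "ordinary", "anxious"): {
        "robot_emotion": "soothing", "robot_type": "general", "talk_type": "concise",
    },
    ("engineer", "efficient", "hard", "urgent", "calm"): {
        "robot_emotion": "calm", "robot_type": "professional", "talk_type": "professional",
    },
}

DEFAULT_PROMPT_WORDS = {"robot_emotion": "calm", "robot_type": "general", "talk_type": "concise"}

def map_to_prompt_words(features: dict) -> dict:
    """Look up the prompt word combination for a feature combination, with a default."""
    key = (features.get("occupation"), features.get("service_preference"),
           features.get("difficulty"), features.get("urgency"), features.get("emotion"))
    return FIRST_MAPPING.get(key, DEFAULT_PROMPT_WORDS)
```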
In step 306, the prompt text is determined from the prompt word combination using a preset first template.
The first template is a preset prompt text template containing slots that can be filled with the corresponding prompt words; the prompt words in the prompt word combination are filled into the slots preset in the first template, and the prompt text is thereby determined. The first template may be written manually according to the actual situation, or may be generated by a corresponding model, which is not limited herein.
For example, when the prompt word combination contains {[emotion type], [robot type], [talk type]}, the first template may be: "You are a customer service robot whose professional field is [robot type]. I will provide you with a standard customer service answer text, and you need to convert it into a customer service answer text in the style I require: the text you generate should be [talk type], and the tone should lean towards [emotion type]." In a specific example, when the prompt word combination is {soothing, general, concise}, the prompt text determined from the first template may be: "You are a customer service robot whose professional field is the general field. I will provide you with a standard customer service answer text, and you need to convert it into a customer service answer text in the style I require: the text you generate should be concise, and the tone should lean towards soothing."
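A sketch of filling such a template with a prompt word combination, reusing the hypothetical keys from the mapping sketch above:

```python
# Sketch of determining the prompt text from the preset first template.
# The template wording mirrors the example above; the keys are hypothetical.
FIRST_TEMPLATE = (
    "You are a customer service robot whose professional field is {robot_type}. "
    "I will provide you with a standard customer service answer text, and you need "
    "to convert it into a customer service answer text in the style I require: "
    "the text you generate should be {talk_type}, and the tone should lean towards "
    "{robot_emotion}."
)

def fill_prompt_template(prompt_words: dict) -> str:
    """Fill the prompt words into the slots of the preset first template."""
    return FIRST_TEMPLATE.format(**prompt_words)

# Example:
# fill_prompt_template({"robot_type": "the general field",
#                       "talk_type": "concise", "robot_emotion": "soothing"})
```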
In other embodiments, other methods may also be used to determine the prompt text for the large dialogue model from the question text and the user portrait features. For example, a pre-trained machine learning model may generate the corresponding prompt text directly from the input question text and user portrait. This specification is not limited in this respect.
Returning to fig. 2, in step 206, the question text is input into a question-answering model to obtain a corresponding first answer text.
The question-answering model may be a unified robot dialogue model trained offline, and the first answer text may be a generic answer text, i.e., an answer that takes neither user portrait features nor question features into account.
In one embodiment, the question-answering model includes a plurality of preset question-answer pairs. Step 206 then specifically includes: performing similarity matching between the question text and the questions in the plurality of question-answer pairs, determining the question with the highest similarity as a second candidate question, and determining the answer corresponding to the second candidate question in its question-answer pair as the first answer text.
The text similarity matching method may be the same as the one described for step 302 and is not repeated here.
For example, the question text input by the user is "I cannot log in to my account; it keeps saying my password is wrong." The second candidate question obtained by similarity matching is "Wrong login password entered, resulting in failure to log in", and the first answer text corresponding to the second candidate question in its question-answer pair is: "Please click 'Forgot password?' on the login page to enter the password recovery interface, then enter your mobile phone number and the verification code to change your login password."
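A minimal sketch of such a retrieval-based question-answering model over the preset question-answer pairs; the class and the injected similarity function are hypothetical, not defined by this specification.

```python
# Sketch of a retrieval-based question-answering model built from preset
# question-answer pairs. The similarity function could be either variant
# sketched earlier; all names here are hypothetical.
from typing import Callable, List, Tuple

class RetrievalQAModel:
    def __init__(self, qa_pairs: List[Tuple[str, str]],
                 similarity: Callable[[str, str], float]):
        self.qa_pairs = qa_pairs        # list of (preset question, standard answer)
        self.similarity = similarity    # e.g. embedding cosine or word overlap

    def answer(self, question_text: str) -> str:
        """Return the answer of the most similar preset question (the first answer text)."""
        _, best_answer = max(self.qa_pairs,
                             key=lambda pair: self.similarity(question_text, pair[0]))
        return best_answer
```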
In step 208, the prompt text and the first answer text are input into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
The second answer text may be a customized answer text: a targeted, per-user customer service answer produced by the customer service robot according to the portrait features of different users and the question features of the question text.
In one embodiment, step 208 may include: inputting the prompt text into the large dialogue model to set a first dialogue scene for it; and, in the first dialogue scene, inputting the first answer text into the large dialogue model to obtain the converted second answer text.
For example, continuing with the previous example, the prompt text used is: "You are a customer service robot whose professional field is the general field. I will provide you with a standard customer service answer text, and you need to convert it into a customer service answer text in the style I require: the text you generate should be concise, and the tone should lean towards soothing." This prompt text is input into the large dialogue model to set the first dialogue scene, and the large dialogue model may reply with something like: "Please provide the standard customer service answer text you want converted. I am a customer service robot in the general field and will try to convert it into a concise, soothing style." The first answer text, "Please click 'Forgot password?' on the login page to enter the password recovery interface, then enter your mobile phone number and the verification code to change your login password.", is then input into the large dialogue model, which returns the second answer text: "I understand how frustrating this is. Please click 'Forgot password?' on the login page to enter the password recovery interface. There, enter your mobile phone number and the verification code, and you can then change your login password. If you run into any problem, feel free to ask us for help at any time; we will do our best to solve it for you." In this way, for a question raised by a user in an anxious mood, a concise answer can be given while soothing the user's emotions, so that the user's problem is solved more quickly.
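This scene-setting, two-turn interaction can be sketched against a generic chat-style interface; the OpenAI client and model name below are illustrative assumptions only, and any large dialogue model that accepts multi-turn input (for example the models mentioned later in this description) could be substituted.

```python
# Sketch of the "set the dialogue scene first, then send the first answer text"
# variant of step 208. The OpenAI-style client and model name are assumptions;
# they merely stand in for whichever large dialogue model is used.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def rewrite_with_scene(prompt_text: str, first_answer: str,
                       model: str = "gpt-4o-mini") -> str:
    """First turn sets the first dialogue scene; second turn supplies the first answer text."""
    messages = [
        {"role": "system", "content": prompt_text},   # scene-setting prompt text
        {"role": "user", "content": first_answer},    # standard (first) answer text
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content        # converted (second) answer text
```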
In another embodiment, step 208 may include: splicing the prompt text and the first answer text, and inputting the spliced text into the large dialogue model to obtain the second answer text.
For example, the prompt text "You are a customer service robot whose professional field is the general field. I will provide you with a standard customer service answer text, and you need to convert it into a customer service answer text in the style I require: the text you generate should be concise, and the tone should lean towards soothing." and the first answer text "Please click 'Forgot password?' on the login page to enter the password recovery interface, then enter your mobile phone number and the verification code to change your login password." are spliced together with a line break between them, so that the large dialogue model can more clearly distinguish the scene-setting part from the answer text part, giving the following text:
"You are a customer service robot whose professional field is the general field. I will provide you with a standard customer service answer text, and you need to convert it into a customer service answer text in the style I require: the text you generate should be concise, and the tone should lean towards soothing.
Please click 'Forgot password?' on the login page to enter the password recovery interface, then enter your mobile phone number and the verification code to change your login password."
This spliced text is then input into the large dialogue model to obtain the second answer text: "I understand how frustrating this is. Please click 'Forgot password?' on the login page to enter the password recovery interface. There, enter your mobile phone number and the verification code, and you can then change your login password. If you run into any problem, feel free to ask us for help at any time; we will do our best to solve it for you."
The answer text may be converted using a variety of large dialogue models, such as ChatGPT, ChatGLM, or Ant Group's Zhenqiao model, without limitation.
By the method shown in fig. 2, targeted answer text is generated from the user portrait and the question features associated with the question, so that the customer service robot achieves the "thousand people, thousand faces" effect.
According to another embodiment, an apparatus for generating customer service answers is further provided. Fig. 4 illustrates a schematic block diagram of an apparatus for generating customer service answers according to one embodiment; the apparatus may be deployed in any device, platform, or device cluster having computing and processing capabilities. As shown in fig. 4, the apparatus 400 includes:
an acquisition unit 401, configured to acquire a question text raised by a user and user portrait features of the user;
a prompt text generation unit 402, configured to determine a prompt text for a large dialogue model according to the question text and the user portrait features, the prompt text indicating the expected emotion and/or personality style type when answering the question;
a first answer generation unit 403, configured to input the question text into a question-answering model to obtain a corresponding first answer text;
and a second answer generation unit 404, configured to input the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in any of the above embodiments.
According to an embodiment of yet another aspect, there is also provided a computing device including a memory and a processor, wherein the memory has executable code stored therein, and the processor, when executing the executable code, implements the method described in any of the above embodiments.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments can be referred to one another, and each embodiment mainly describes its differences from the others. In particular, the device embodiments are described relatively briefly because they are substantially similar to the method embodiments, and reference may be made to the description of the method embodiments for the relevant parts.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention; it is not meant to limit the scope of the invention or to restrict the invention to the particular embodiments described, and any modifications, equivalents, improvements, and the like that fall within the spirit and principles of the invention are intended to be included within its scope.

Claims (13)

1. A method of generating customer service answers, comprising:
acquiring a question text raised by a user and user portrait features of the user;
determining a prompt text for a large dialogue model according to the question text and the user portrait features, wherein the prompt text indicates the expected emotion and/or personality style type when answering the question;
inputting the question text into a question-answering model to obtain a corresponding first answer text;
and inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
2. The method of claim 1, determining a prompt text for a large dialog model based on the question text and user portrait features, comprising:
extracting the question features of the question text, and forming a first feature combination based on the question features and the user portrait features;
determining a prompt word combination according to the first feature combination by using a preset first mapping relation, wherein the first mapping relation is a mapping relation between a plurality of feature combinations and a plurality of prompt word combinations;
and determining the prompt text by using a preset first template according to the prompt word combination.
3. The method of claim 2, wherein the combination of cues includes a first cue that characterizes an answer emotion and/or a second cue that characterizes a personality trait.
4. The method of claim 1, wherein the user portrait features comprise at least: occupation features, hobby features, and personality-style preferences regarding customer service.
5. The method of claim 2, wherein the question features comprise at least: the difficulty level of the question, the urgency level of the question, and the emotional features of the question.
6. The method of claim 5, wherein emotion characteristics of the question are determined from the question text by an emotion analysis model.
7. The method of claim 5, wherein the difficulty level and urgency level are determined by:
performing similarity matching between the question text and the questions in a plurality of preset question-answer pairs, and determining the question with the highest similarity as a first candidate question, wherein the first candidate question has a preset first difficulty level and a preset first urgency level;
and determining the first difficulty level and the first urgency level as the difficulty level of the question and the urgency level of the question, respectively.
8. The method of claim 1, wherein the question-answer model comprises a plurality of preset question-answer pairs; inputting the question text into a question-answering model to obtain a corresponding first answer text, wherein the method comprises the following steps:
performing similarity matching on the question text and a plurality of questions in the plurality of question-answering pairs, and determining the question with the highest similarity as a second candidate question;
and determining the corresponding answer of the second candidate question in the question-answer pair as a first answer text.
9. The method of claim 1, wherein inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into the second answer text according to the style type, comprises:
inputting the prompt text into the large dialogue model, and setting a first dialogue scene for the large dialogue model;
and, in the first dialogue scene, inputting the first answer text into the large dialogue model to obtain the converted second answer text.
10. The method of claim 1, wherein inputting the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into the second answer text according to the style type, comprises:
splicing the prompt text and the first answer text, and inputting the spliced text into the large dialogue model to obtain the second answer text.
11. An apparatus for generating customer service answers, comprising:
an acquisition unit configured to acquire a question text raised by a user and user portrait features of the user;
a prompt text generation unit configured to determine a prompt text for a large dialogue model according to the question text and the user portrait features, wherein the prompt text indicates the expected emotion and/or personality style type when answering the question;
a first answer generation unit configured to input the question text into a question-answering model to obtain a corresponding first answer text;
and a second answer generation unit configured to input the prompt text and the first answer text into the large dialogue model, so that the large dialogue model converts the first answer text into a second answer text according to the style type.
12. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-10.
13. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-10.
CN202311476763.9A 2023-11-08 2023-11-08 Method and device for generating customer service answers Pending CN117216229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311476763.9A CN117216229A (en) 2023-11-08 2023-11-08 Method and device for generating customer service answers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311476763.9A CN117216229A (en) 2023-11-08 2023-11-08 Method and device for generating customer service answers

Publications (1)

Publication Number Publication Date
CN117216229A true CN117216229A (en) 2023-12-12

Family

ID=89039319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311476763.9A Pending CN117216229A (en) 2023-11-08 2023-11-08 Method and device for generating customer service answers

Country Status (1)

Country Link
CN (1) CN117216229A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117857599A (en) * 2024-01-09 2024-04-09 北京安真医疗科技有限公司 Digital person dialogue intelligent management system based on Internet of things

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017041370A1 (en) * 2015-09-07 2017-03-16 百度在线网络技术(北京)有限公司 Human-computer chatting method and device based on artificial intelligence
CN116049360A (en) * 2022-11-29 2023-05-02 兴业银行股份有限公司 Intelligent voice dialogue scene conversation intervention method and system based on client image
CN116226344A (en) * 2023-02-20 2023-06-06 湖北星纪时代科技有限公司 Dialogue generation method, dialogue generation device, and storage medium
CN116483979A (en) * 2023-05-19 2023-07-25 平安科技(深圳)有限公司 Dialog model training method, device, equipment and medium based on artificial intelligence
CN116521843A (en) * 2023-04-27 2023-08-01 广州华多网络科技有限公司 Intelligent customer service method facing user, device, equipment and medium thereof
CN116541504A (en) * 2023-06-27 2023-08-04 北京聆心智能科技有限公司 Dialog generation method, device, medium and computing equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017041370A1 (en) * 2015-09-07 2017-03-16 百度在线网络技术(北京)有限公司 Human-computer chatting method and device based on artificial intelligence
CN116049360A (en) * 2022-11-29 2023-05-02 兴业银行股份有限公司 Intelligent voice dialogue scene conversation intervention method and system based on client image
CN116226344A (en) * 2023-02-20 2023-06-06 湖北星纪时代科技有限公司 Dialogue generation method, dialogue generation device, and storage medium
CN116521843A (en) * 2023-04-27 2023-08-01 广州华多网络科技有限公司 Intelligent customer service method facing user, device, equipment and medium thereof
CN116483979A (en) * 2023-05-19 2023-07-25 平安科技(深圳)有限公司 Dialog model training method, device, equipment and medium based on artificial intelligence
CN116541504A (en) * 2023-06-27 2023-08-04 北京聆心智能科技有限公司 Dialog generation method, device, medium and computing equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵阳洋; 王振宇; 王佩; 杨添; 张睿; 尹凯: "任务型对话系统研究综述" [A Survey of Task-Oriented Dialogue Systems], 计算机学报 (Chinese Journal of Computers), no. 10, pages 74-108 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117857599A (en) * 2024-01-09 2024-04-09 北京安真医疗科技有限公司 Digital person dialogue intelligent management system based on Internet of things

Similar Documents

Publication Publication Date Title
Canonico et al. A comparison and critique of natural language understanding tools
RU2672176C2 (en) Natural expression processing method, processing and response method, device and system
Ondáš et al. How chatbots can be involved in the education process
CN112307742B (en) Session type human-computer interaction spoken language evaluation method, device and storage medium
CN109960723B (en) Interaction system and method for psychological robot
KR102033388B1 (en) Apparatus and method for question answering
CN111177359A (en) Multi-turn dialogue method and device
CN110797010A (en) Question-answer scoring method, device, equipment and storage medium based on artificial intelligence
CN117216229A (en) Method and device for generating customer service answers
CN117332072B (en) Dialogue processing, voice abstract extraction and target dialogue model training method
CN112199486A (en) Task type multi-turn conversation method and system for office scene
Skidmore et al. Using Alexa for flashcard-based learning
CN115982400A (en) Multi-mode-based emotion image generation method and server
CN117520523A (en) Data processing method, device, equipment and storage medium
Zaidi et al. Artificial intelligence based career counselling chatbot a system for counselling
CN117370190A (en) Test case generation method and device, electronic equipment and storage medium
CN113763962A (en) Audio processing method and device, storage medium and computer equipment
CN113051388B (en) Intelligent question-answering method and device, electronic equipment and storage medium
CN115132353A (en) Method, device and equipment for generating psychological question automatic response model
CN111818290B (en) Online interviewing method and system
CN111310847B (en) Method and device for training element classification model
CN115408500A (en) Question-answer consistency evaluation method and device, electronic equipment and medium
Zobel et al. Improving the scalability of MOOC platforms with automated, dialogue-based systems
RU2807436C1 (en) Interactive speech simulation system
CN111309990A (en) Statement response method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination