CN111651573A - Intelligent customer service dialogue reply generation method and device and electronic equipment - Google Patents

Intelligent customer service dialogue reply generation method and device and electronic equipment

Info

Publication number
CN111651573A
CN111651573A (application number CN202010457142.6A)
Authority
CN
China
Prior art keywords
vector
word
customer service
conversation
current round
Prior art date
Legal status
Granted
Application number
CN202010457142.6A
Other languages
Chinese (zh)
Other versions
CN111651573B (en)
Inventor
陈成才
Current Assignee
Shanghai Xiaoi Robot Technology Co Ltd
Original Assignee
Shanghai Xiaoi Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xiaoi Robot Technology Co Ltd filed Critical Shanghai Xiaoi Robot Technology Co Ltd
Priority to CN202010457142.6A
Publication of CN111651573A
Application granted
Publication of CN111651573B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/01 Customer relationship services
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Economics (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an intelligent customer service dialogue reply generation method and device and an electronic device. The method comprises the following steps: acquiring the user input session of the current round; encoding the current-round user input session to obtain the input vector of the current round; obtaining a historical dialogue record, wherein the historical dialogue record comprises at least one round of historical user sessions and the corresponding historical customer service replies; encoding the historical dialogue record to obtain a history hidden vector; calculating the key information vector corresponding to each preset word slot according to the history hidden vector; calculating the association vector between each preset word slot and the other word slots according to the key information vectors; decoding the association vectors to obtain the multi-round dialogue state; and acquiring the customer service reply of the current round according to the current-round input vector and the multi-round dialogue state. Through the above steps, the invention can effectively improve the effectiveness of dialogue state tracking and thereby the performance of the intelligent customer service dialogue system, and has wide application value.

Description

Intelligent customer service dialogue reply generation method and device and electronic equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent customer service dialogue reply generation method and device and electronic equipment.
Background
With the spread of intelligent customer service applications across industries, the multi-turn task-oriented dialogue system at their core has also been widely studied. A conventional multi-turn dialogue system generally comprises four parts: natural language understanding, dialogue state tracking, dialogue policy learning and natural language generation. Dialogue state tracking, which acquires the historical state of the multi-round interaction between the user and the system customer service, is the key step for giving an accurate system reply.
In general, a dialogue state may be represented by the combination of a word slot and its corresponding value. Early approaches to dialogue state tracking required preset ontology information, but such methods are costly and difficult to implement; for this reason, sequence-to-sequence generation is mainly used at present, yet this approach has difficulty extracting key information from the original dialogue. In addition, word slots with weak correlation in the conversation history introduce considerable interference, and the out-of-vocabulary problem is another important factor affecting state tracking.
Disclosure of Invention
In order to solve the above problems, the present invention provides an intelligent customer service dialog reply generation method, which is applied to multiple rounds of dialogues, and comprises:
acquiring the user input session in the current round;
encoding the user input session of the current round to obtain an input vector of the current round;
obtaining a historical conversation record, wherein the historical conversation record comprises at least one round of historical user conversation and corresponding historical customer service responses;
encoding the historical dialogue records to obtain historical record hidden vectors;
calculating key information vectors corresponding to the preset word slots according to the history record hidden vectors;
calculating association vectors of each preset word slot and other word slots according to the key information vectors;
decoding to obtain a plurality of rounds of conversation states according to the association vector;
and acquiring the customer service replies of the current round according to the input vectors of the current round and the multi-round conversation state.
Optionally, after encoding the user input session of the current round to obtain the input vector of the current round, and before calculating the key information vector corresponding to each preset word slot according to the history hidden vector, the method further includes:
and acquiring a user dialogue understanding vector in the current round according to the input vector in the current round, wherein the user dialogue understanding vector in the current round and the history record hidden vector are jointly used for calculating the key information vector.
Optionally, the obtaining the customer service response of the current round according to the multi-round conversation state specifically includes:
acquiring the customer service execution actions of the current round according to the multi-round conversation state;
and acquiring the customer service response of the current round according to the multi-round conversation state and the customer service execution action of the current round.
Optionally, the calculating, according to the history hidden vector, a key information vector corresponding to each preset word slot includes:
coding the word slot category of the word slot into a word slot hidden vector, and acquiring the last word slot hidden vector as context information corresponding to the word slot;
and calculating a context vector as a key information vector corresponding to the word slot by using an attention mechanism according to the context information and the history record hidden vector.
Optionally, the calculating, according to the key information vector, an association vector between each preset word slot and another word slot includes:
calculating the word slot category similarity between word slots and the word slot value similarity between word slots;
constructing a masking matrix according to the word slot category similarity and the word slot value similarity;
and calculating the association vectors between the word slot and the other word slots according to the masking matrix and the key information vector.
Optionally, the constructing a masking matrix according to the word slot category similarity and the word slot value similarity includes:
fusing the word slot category similarity and the word slot value similarity through a hyper-parameter to establish a word slot similarity matrix, and constructing the masking matrix according to the word slot similarity matrix; or
constructing similarity vectors by taking the word slot category similarity and the word slot value similarity as the horizontal and vertical coordinates respectively, performing binary classification on the similarity vectors, and constructing the masking matrix according to the classification result.
Optionally, the decoding to obtain multiple rounds of dialog states according to the association vector includes:
judging whether the user has a clear intention in the customer service conversation process according to the history hidden vector, and if not, giving the corresponding multi-turn conversation state;
if the user has a clear intention, decoding to obtain a state hidden vector according to the association vector and the word vector sequence;
calculating and generating probability distribution and replication probability distribution according to the state hidden vector;
calculating final probability distribution according to the generation probability distribution and the replication probability distribution;
and acquiring the multi-round conversation state according to the final probability distribution.
In addition, the invention also provides an intelligent customer service dialogue reply generation device, which comprises:
the input module is used for acquiring the input session of the user in the current round;
the input encoding module is used for encoding the input session of the current round of users to obtain an input vector of the current round;
the record acquisition module is used for acquiring a historical conversation record, wherein the historical conversation record comprises at least one round of historical user conversation and corresponding historical customer service responses;
the record coding module is used for coding the historical dialogue record to obtain a historical record hidden vector;
the key information extraction module is used for calculating key information vectors corresponding to the preset word slots according to the history record hidden vectors;
the association recombination module is used for calculating association vectors of each preset word slot and other word slots according to the key information vectors;
the state acquisition module is used for decoding to obtain a plurality of rounds of conversation states according to the association vector;
and the customer service reply module is used for acquiring the customer service reply of the current round according to the input vector of the current round and the multi-round conversation state.
Furthermore, the invention provides an electronic device comprising a memory, a processor and an output means, wherein the processor, when executing, implements the method steps described above, and the output means is used to output the customer service reply of the current round.
The invention also provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the aforementioned method steps.
According to the multi-round dialogue method in the intelligent customer service, key information is obtained for each word slot and relevance is established among the word slots, which reduces the influence of data scarcity on dialogue state tracking and extracts the most valuable content from the historical dialogue information. Further, in a preferred embodiment, the key information is acquired by combining an attention mechanism and information sharing among word slots is realized; the method is simple, easy to implement, and allows convenient data migration across different application fields.
The intelligent customer service dialogue reply generation method can effectively improve the accuracy of dialogue state tracking and performs well in various multi-round intelligent customer service scenarios such as booking and scheduling.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a flowchart illustrating a method for tracking a multi-turn dialog state according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for calculating a key information vector according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for calculating an association vector according to a first embodiment of the invention;
FIG. 4 is a flowchart illustrating a method for obtaining a multi-turn dialog state by decoding according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for generating an intelligent customer service dialog reply according to a second embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for generating an intelligent customer service dialog reply according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of an intelligent customer service dialog reply generation device according to a fourth embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device in a fifth embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a traditional multi-round human-computer interaction task type conversation system, in order to better acquire the requirements of users and give appropriate and reasonable replies according to the requirements, the state of a historical conversation record needs to be tracked according to a set word slot.
In the process of a human-computer dialogue, there are necessary constraints that the system needs to understand. Word slots are the information required to turn a preliminary user intention into a definite user instruction during multi-round dialogue, and each word slot corresponds to a piece of information that needs to be acquired to complete a transaction. Each word slot includes a word slot category and a word slot value: the word slot category indicates the category of the required information, and the word slot value is the information obtained from the user dialogue and filled into the word slot. For example, in "book a hotel near Jinji Lake", "Jinji Lake" fills the "hotel location" word slot, where "hotel location" is the word slot category and "Jinji Lake" is the word slot value. The word slot values ultimately influence the intention executed by the system, while the word slot categories help developers classify the constraints.
However, in a sequence-to-sequence generative tracking model, information that is not explicitly mentioned in the most recent rounds is not extracted well. For example, a user may raise the need to book a hotel during a conversation with the intelligent customer service, while information such as the hotel location and the check-in date may have been mentioned several rounds earlier; the various other pieces of information contained in those rounds then greatly interfere with the system's decision, so that a wrong historical dialogue state is tracked.
The embodiments of the present invention present various implementations of multi-round customer service dialogue reply generation. Compared with the traditional sequence-to-sequence generation approach, the examples of the invention introduce an attention mechanism into the dialogue state tracking method to extract key information, which greatly improves the practicability and accuracy of the whole system.
Example one
In this embodiment, a method for tracking a dialog state is provided, as shown in fig. 1, including:
step S110: a historical conversation record is obtained, the historical conversation record including at least one round of historical user sessions and corresponding historical customer service responses.
In step S110, a historical dialogue record of the user and the customer service is obtained, which should include at least one round of user input session and one round of customer-service-generated reply. The historical dialogue record may be represented in the form Z = {(u_1, r_1), (u_2, r_2), …, (u_i, r_i), …, (u_t, r_t)}, where u_i and r_i respectively denote the historical user utterance and the historical customer service reply of the i-th round of dialogue.
Step S120: and coding the historical dialogue record to obtain a historical record hidden vector.
The historical dialogue record is encoded by mapping each word of the user sessions and customer service replies in the historical dialogue record Z into a low-dimensional dense space; the resulting hidden-layer representation is taken as the history hidden vector H = {h_1, h_2, …, h_z}, where each h is a hidden vector whose dimension is determined by the trained network. The present invention does not limit the way the historical dialogue record is encoded, as long as the hidden-layer representation can be obtained; for example, the record may be encoded through a gated recurrent network to obtain the history hidden vector.
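As an illustration of this encoding step, the following sketch uses a gated recurrent network; it is an assumed minimal implementation, not the patent's reference code, and the class, parameter and tensor names are chosen for illustration only.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Maps the token ids of the historical dialogue record Z to the
    history hidden vectors H = {h_1, ..., h_z} (one vector per token)."""
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, history_ids: torch.Tensor) -> torch.Tensor:
        # history_ids: (batch, z) token ids of the concatenated (u_i, r_i) pairs
        emb = self.embedding(history_ids)   # (batch, z, emb_dim)
        H, _ = self.gru(emb)                # (batch, z, hidden_dim)
        return H
```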
Step S130: and calculating key information vectors corresponding to the preset word slots according to the history record hidden vectors.
After the history hidden vector H is obtained, a key information vector k is calculated from H in step S130. An actual exchange between a user and the customer service often contains a large amount of information, and for a given word slot the categories and contents that could be filled in are numerous. However, not all of this information is useful for tracking the dialogue state, and it is often cluttered, which greatly affects the accurate extraction of the dialogue state. The purpose of step S130 is therefore to extract the valid key information and filter out useless information.
Preferably, the calculation in step S130 of the key information vector corresponding to the i-th preset word slot w_i may, as shown in fig. 2, specifically include the following steps:
Step S131: encoding the word slot category of the word slot w_i as a word slot hidden vector WH_i, and acquiring the last word slot hidden state as the context information corresponding to the word slot.
Specifically, for a given preset word slot w_i, its word slot category is encoded as the word slot hidden vector WH_i = {wh_i^1, wh_i^2, …, wh_i^N}, where N is the dimension of the word slot hidden vector. The last hidden state wh_i^N often contains the context information of the entire historical dialogue record; therefore, in this embodiment, it is extracted separately to represent the context of the word slot w_i.
Step S132: and calculating a context vector as a key information vector k corresponding to the word slot by using an attention mechanism according to the context information and the history implicit vector.
The key information vector k is calculated in conjunction with an attention mechanism, preferably a contextual attention vector is calculated as the key information vector k. The attention mechanism has great advantages in extracting key information, in particular, according to context information
Figure BDA0002509643000000063
The word bin w can be calculated as followsiCorresponding key information vector ki
Figure BDA0002509643000000071
Wherein the content of the first and second substances,
Figure BDA0002509643000000072
expression groove wiThe relationship to the historical dialog record can be calculated by the following formula:
Figure BDA0002509643000000073
where e is a natural base number and T represents the transpose of the matrix.
Thus, based on the context information
Figure BDA0002509643000000074
The word groove w is obtained by calculationiCorresponding key information vector kiThe most relevant content to the word slot is found in a large amount of historical dialog information.
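A minimal sketch of this attention step, under the assumption (consistent with the description above) that the relevance score is the dot product of the slot context wh_i^N with each history hidden vector, normalized by a softmax; the function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def key_information_vector(slot_context: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
    """slot_context: (hidden_dim,) last hidden state wh_i^N of the slot-category encoding.
    H: (z, hidden_dim) history hidden vectors h_1 ... h_z.
    Returns k_i, the attention-weighted summary of the history for word slot w_i."""
    scores = H @ slot_context                  # (z,) relevance of each h_z to the slot
    alpha = F.softmax(scores, dim=0)           # attention weights alpha_i^z
    k_i = (alpha.unsqueeze(1) * H).sum(dim=0)  # (hidden_dim,) key information vector
    return k_i
```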
Step S140: and calculating the association vector of each preset word slot and other word slots according to the key information vector.
Since the extraction of key information from the history is performed independently for each word slot, the transfer of information between related word slots is hindered. Step S140 therefore compensates for this relevance: according to the key information vector k_i, the association vectors between the word slot w_i and the other word slots are calculated, avoiding the situation where the relevant features of w_i are not learned because of insufficient data in the training set.
Specifically, in step S140, as shown in fig. 3, the extraction of the association vector may be implemented by the following steps:
step S141: and calculating the word slot class similarity between the word slots and the word slot value similarity between the word slots.
Firstly, the similarity between the word slot categories and the similarity between the word slot values are calculated for each pair of word slots. For example, the word slot categories of "hotel location" and "hotel room type" share the identical word "hotel", so their category similarity should be high; on the other hand, the values of the "hotel location" word slot are places such as "mountain road" or "city center", while the values of the "hotel room type" word slot are room types such as "standard room" or "king-bed room", so the word slot values differ greatly and their similarity is low.
Preferably, in step S141, the word slot category similarity Name_ij and the word slot value similarity Type_ij between word slot w_i and word slot w_j may be obtained by calculating cosine similarity. The word slot category similarity Name_ij can be calculated from the words contained in the word slot categories; for example, the categories of the two word slots "hotel location" and "hotel room type" both contain "hotel", so their category similarity is high. For the word slot value similarity Type_ij, the word slot values may first be classified by type to obtain label words such as time, place and number, and the similarity is then calculated on these label words; for example, the word slot values of "hotel location" and "scenic spot location" are both place data, so their word slot value similarity is high.
It should be noted that, in the actual calculation, the similarity degree may be calculated in other manners, and the present invention is not limited in detail herein.
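As one possible illustration (an assumption, since the patent does not fix the representation), the similarities could be computed as cosine similarity between embedding vectors of the slot categories and of the slot value types:

```python
import torch
import torch.nn.functional as F

def cosine_sim(a: torch.Tensor, b: torch.Tensor) -> float:
    """Cosine similarity between two slot representations, e.g. averaged word
    embeddings of the slot-category words (for Name_ij) or of the slot's
    value-type label words (for Type_ij)."""
    return F.cosine_similarity(a.unsqueeze(0), b.unsqueeze(0)).item()

# Hypothetical usage, with name_emb[i] / type_emb[i] as embeddings of slot i:
# Name[i][j] = cosine_sim(name_emb[i], name_emb[j])
# Type[i][j] = cosine_sim(type_emb[i], type_emb[j])
```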
Step S142: and constructing a shielding matrix according to the word groove category similarity and the word groove value similarity.
According to the similarity Name of the word groove classijAnd word slot value similarity TypeijA mask matrix M can be calculated. The mask matrix M is a matrix with each constituent element being 0 or 1, and it can clearly show whether there is an association between word slots of corresponding coordinates according to the values of different positions.
According to the needs of different situations, the occlusion matrix M can be calculated in the following manner in the present embodiment:
A. and introducing hyper-parameters to calculate the shielding matrix M. Specifically, the word groove class similarity Name is setijAnd word slot value similarity TypeijBy a hyper-parametric μ fusion. Firstly, establishing a word slot type similarity matrix V:
Vij=μ·Nameij+(1-μ)·Typeij
and then establishing a shielding matrix M according to the word groove type similarity matrix V and a preset threshold value omega:
Figure BDA0002509643000000081
when V isijWhen ω is ω, it may be set to 1 or 0 in the mask matrix according to specific situations and requirements, and this is not particularly limited in the present invention.
B. On the basis of mode A, when choosing which similarity the hyper-parameter weights, the hyper-parameter μ may instead be applied to the word slot value similarity Type_ij to establish the word slot similarity matrix V:
V_ij = μ · Type_ij + (1 − μ) · Name_ij
The masking matrix M is then established from the word slot similarity matrix V and the preset threshold ω, in the same way as in mode A.
C. Constructing the masking matrix M by clustering: the similarity relations between word slots are divided into two classes, the elements belonging to the class with high similarity being set to 1 and those of the other class to 0.
Specifically, the word slot category similarity Name_ij and the word slot value similarity Type_ij between word slot w_i and word slot w_j may be regarded as a coordinate pair (Name_ij, Type_ij), which in turn represents a similarity vector. These vectors are then divided into two classes by a clustering or classification method; common methods such as K-means or a support vector machine can achieve a good binary classification effect.
D. When constructing the masking matrix M as in mode C, the elements of the class with high similarity may instead be set to 0 and those of the class with low similarity to 1 after classification. However, in subsequent use, whenever the masking matrix is needed, the elements of M must first be inverted (0 and 1 exchanged) so that it reflects the association relationship between the different word slots.
Obviously, there are many possible ways of calculating the masking matrix, which are not all listed here; any of them is acceptable as long as the relationship between different word slots can be embodied by the element distribution of the matrix.
In practical applications, the specific way of calculating the masking matrix can further be chosen according to different requirements. For example, the hyper-parameter fusion approach performs better in tracking the historical dialogue state, while establishing the masking matrix by binary classification is convenient, fast and more efficient. Therefore, different ways can be selected to construct the masking matrix according to the practical purpose.
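The following sketch illustrates two of the construction modes above, mode A (hyper-parameter fusion with a threshold) and mode C (binary classification of the similarity pairs by K-means clustering). It is an assumed example; the hyper-parameter values and the use of scikit-learn's KMeans are illustrative choices, not part of the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

def mask_by_fusion(Name: np.ndarray, Type: np.ndarray,
                   mu: float = 0.5, omega: float = 0.6) -> np.ndarray:
    """Mode A: fuse the two similarity matrices with hyper-parameter mu,
    then threshold the word slot similarity matrix V at omega."""
    V = mu * Name + (1.0 - mu) * Type
    return (V > omega).astype(np.int64)      # masking matrix M with elements 0/1

def mask_by_clustering(Name: np.ndarray, Type: np.ndarray) -> np.ndarray:
    """Mode C: treat each (Name_ij, Type_ij) pair as a 2-D point and split the
    points into two classes; the class with higher similarity becomes 1."""
    pts = np.stack([Name.ravel(), Type.ravel()], axis=1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(pts)
    high_is_one = pts[labels == 1].mean() >= pts[labels == 0].mean()
    M = labels if high_is_one else 1 - labels
    return M.reshape(Name.shape).astype(np.int64)
```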
Step S143: and calculating association vectors of the word slot and other word slots according to the shielding matrix and the key information vector.
In step S143, a key information vector k is calculatediAnd a shielding matrix M for reconstructing the information association between the word slots so as to realize selective sharing between the word slots. Specifically, the association vector int can be calculated by the following formulai
Figure BDA0002509643000000101
Where p represents the total number of word slots.
Thus, through the steps, the key information vector k can be obtainediObtaining the association vector int between word slotsi。intiIncluding the word slot w in the original dialogiThe key information of other word slots with larger relevance realizes the integration, recombination and sharing of the key information.
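Under the masked-sum reading of the formula above (an assumption about the exact aggregation), the reorganization step is a single matrix product:

```python
import numpy as np

def association_vectors(M: np.ndarray, K: np.ndarray) -> np.ndarray:
    """M: (p, p) masking matrix; K: (p, d) key information vectors k_1 ... k_p.
    Each slot aggregates the key information of the slots it is associated
    with: int_i = sum_j M_ij * k_j."""
    return M @ K        # (p, d) association vectors int_1 ... int_p
```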
Step S150: and decoding to obtain a plurality of rounds of conversation states according to the association vector.
Specifically, as shown in fig. 4, step S150 may include the steps of:
step S151: and judging whether the user has a clear intention in the customer service conversation process according to the history hidden vector.
Step S151 classifies the history hidden vector H according to preset states and determines whether the user shows a clear intention tendency in the historical dialogue. In the actual calculation, the hidden vector H can be classified by a trained classifier to obtain a probability distribution over the different states, for example "with explicit intention" or "without explicit intention".
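A minimal sketch of such a classifier, assuming mean pooling of H followed by a linear layer; the architecture is illustrative only.

```python
import torch
import torch.nn as nn

class IntentPresenceClassifier(nn.Module):
    """Pools the history hidden vectors H and outputs a probability
    distribution over the preset states (e.g. explicit / no explicit intention)."""
    def __init__(self, hidden_dim: int = 256, num_states: int = 2):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, num_states)

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        pooled = H.mean(dim=0)                     # mean pooling over the z positions
        return torch.softmax(self.fc(pooled), dim=-1)
```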
Step S152: and if the user does not have a clear intention, giving the corresponding multi-turn conversation state.
Step S153: if the user has a clear intention, decoding to obtain a state hidden vector according to the association vector and the word vector sequence; calculating and generating probability distribution and replication probability distribution according to the state hidden vector; calculating final probability distribution according to the generation probability distribution and the replication probability distribution; and acquiring the multi-round conversation state according to the final probability distribution.
If no explicit tendency expressed by the user is found in the historical dialogue record, the historical dialogue state may be considered a "no explicit intention" state.
Further, the cases without an explicit intention may differ. For example, the user may state that there is no particular requirement and anything will do, such as having no preference for room type when booking a hotel; or the user may simply not have mentioned the specific requirement yet. This distinction is an important condition for deciding whether the relevant content needs to be confirmed in subsequent replies, so the state can be further subdivided. This finer classification can be completed by cascading several groups of classifiers, or performed directly in the initial classifier, which is not limited here.
When the user has mentioned specific requirements in the historical dialogue, further operations are required to give the historical dialogue state. First, the association vector int and the word vector sequence [a_1, a_2, …, a_z] are decoded through a trained decoder network to obtain the decoded state hidden vectors G = {g_1, g_2, …, g_z}; the hidden vectors G fully contain the key information of the multi-round dialogue and facilitate the generation of the historical dialogue state.
Then, the generation probability distribution P_gen^n for generating the corresponding word n from the word list and the copy probability distribution P_copy^n for copying individual words from the interaction history are calculated. One specific calculation is:
P_gen^n = softmax(W_E · g_n)
P_copy^n = softmax(H^T · g_n)
where W_E is a weight matrix obtained by training and T denotes the transpose of a matrix.
Further, the final probability distribution P_n is calculated from the generation probability distribution P_gen^n and the copy probability distribution P_copy^n:
P_n = q_n · P_gen^n + (1 − q_n) · P_copy^n
where q_n is a coefficient that controls the behaviour of the model; it is computed from the state hidden vector g_n through a trained gating function and decides whether, in generating the state, a new word is generated from the vocabulary or a word is taken from the original history.
After the final probability distribution P_n is obtained, the historical dialogue state S of the current round can be obtained through a weighting operation or the like.
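The sketch below illustrates this generate-or-copy decoding step under the reconstruction above. The vocabulary projection W_E, the dot-product copy attention over H and the sigmoid gate for q_n are assumptions for illustration; they are not the patent's exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateWordDistribution(nn.Module):
    """From a decoded state hidden vector g_n, compute a generation distribution
    over the vocabulary, a copy distribution over the history tokens, and blend
    them with a learned gate q_n into the final distribution P_n."""
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.W_E = nn.Linear(hidden_dim, vocab_size, bias=False)  # assumed vocab projection
        self.gate = nn.Linear(hidden_dim, 1)                      # assumed gating layer for q_n

    def forward(self, g_n: torch.Tensor, H: torch.Tensor,
                history_ids: torch.Tensor) -> torch.Tensor:
        vocab_size = self.W_E.out_features
        p_gen = F.softmax(self.W_E(g_n), dim=-1)                  # (vocab_size,)
        copy_attn = F.softmax(H @ g_n, dim=0)                     # (z,) weights over history tokens
        p_copy = torch.zeros(vocab_size).scatter_add_(0, history_ids, copy_attn)
        q_n = torch.sigmoid(self.gate(g_n))                       # gate in [0, 1]
        return q_n * p_gen + (1.0 - q_n) * p_copy                 # final distribution P_n
```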
In this embodiment, the method of tracking the historical dialogue state is optimized: by acquiring the relevance between word slots, features are shared among them. For example, if the training set contains little data for the word slot "scenic spot location", the recombined data corresponding to "hotel location" can be shared with that word slot according to their relevance, which greatly improves the accuracy of the trained model and yields a more accurate dialogue state.
In the preferred embodiment, context information is introduced as the key information of each independent word slot through the attention mechanism, avoiding interference from redundant information, and the relevance between word slots is established through matrix masking, which well overcomes the shortcomings of existing models.
Example two
The embodiment is based on the first embodiment, and provides an intelligent customer service dialogue reply generation method which can be used in multiple rounds of dialogue. The steps that are the same as those in the first embodiment are not described herein again.
The present embodiment is different from the first embodiment in that, as shown in fig. 5, before step S110, the method further includes:
step S210: and acquiring the user input session in the current round.
In this embodiment, first, in step S210, the dialogue data input by the user in the current round is acquired, for example: "I want to book a hotel." The dialogue content input by the user can be a complete sentence or a meaningful short phrase composed of specific words. Further, the content of the user session may be preprocessed as required, for example by word segmentation and removal of stop words and/or modal particles, which the present invention does not limit here.
In step S220, the user input session of the current round is encoded to obtain the input vector of the current round.
Let the user input session of the current round be X = {x_1, x_2, …, x_i, …, x_n}, where x_i denotes the i-th word obtained by segmenting the input session and n denotes the number of words. The input session is encoded in step S220 so that it is converted into a vector form that the model can process. Specifically, the word vectors x_1, x_2, …, x_n corresponding to the words may be obtained from a trained encoder network, yielding the input vector of the current round X = {x_1, x_2, …, x_n}.
It should be noted that, in the embodiment of the present invention, the encoding manner of the input session is not limited, and all of them are within the protection scope of the present invention.
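As a sketch of this preprocessing and encoding (the segmenter, the stop-word list and the unknown-word handling are illustrative assumptions):

```python
import jieba                      # illustrative Chinese word segmenter
import torch
import torch.nn as nn

STOP_WORDS = {"的", "了", "吗", "呢", "啊"}   # toy stop-word / modal-particle list

def preprocess(utterance: str) -> list:
    """Segment the current-round user input and drop stop words / modal particles."""
    return [w for w in jieba.lcut(utterance) if w not in STOP_WORDS]

def encode_input(words: list, vocab: dict, embedding: nn.Embedding) -> torch.Tensor:
    """Map the segmented words to the current-round input vector X = {x_1, ..., x_n}."""
    ids = torch.tensor([vocab.get(w, 0) for w in words])   # 0 assumed to be the <unk> id
    return embedding(ids)                                   # (n, emb_dim) word vectors
```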
After step S210, steps S110 to S150 as described in the first embodiment can be executed to implement tracking of the historical dialog state.
In this embodiment, after step S150, the method further includes the steps of:
step S230: and acquiring the customer service replies of the current round according to the input vectors of the current round and the multi-round conversation state.
According to the input vector X of the current round and the multi-round dialogue state S, the customer service reply of the current round is obtained by querying the database through a trained natural language generation model. Preferably, the customer service reply vector of the current round is given comprehensively from the current-round input vector X, the previous-round customer service reply vector, the multi-round dialogue state S and the preset model database, and is then converted into text form to give the customer service reply of the current round.
The intelligent customer service dialogue reply generation method of this embodiment can fully acquire the dialogue state from the historical dialogue record and, on the basis of that state, give the most appropriate reply to the dialogue content input by the user. In tracking the historical dialogue state, this embodiment attends to the key information of the word slots and the relevance between word slots, so that the accuracy of dialogue state acquisition is effectively improved, the interference of irrelevant information is reduced, and a more appropriate system reply is obtained.
EXAMPLE III
On the basis of the second embodiment, the embodiment further provides a reply generation method of the intelligent customer service dialog. The similar parts in the steps can refer to the related contents in the second embodiment, and are not expanded in detail in this embodiment.
In this embodiment, three implementations are given. The first implementation differs from the second embodiment in that, after step S220 of the second embodiment, the method further includes the following step:
step S310: and acquiring the user dialogue understanding vector in the current round according to the input vector in the current round.
It should be noted that step S310 only needs to be located after step S220 and before step S130, and the sequence of steps S110 to S120 is not limited in this embodiment.
Specifically, in this embodiment, the user dialogue understanding vector M_t of the t-th round may be obtained through a trained natural language understanding network, as shown in the following formula:
M_t = Decoder_NLU(S_{t-1}, R_{t-1}, X)
where Decoder_NLU denotes the natural language understanding decoding network, whose task is to perform intention detection and word slot filling on the data input by the user. Compared with a traditional natural language understanding module, in this embodiment intention detection and word slot filling are carried out jointly as the generation of a vector sequence, instead of performing intention detection by semantic classification and word slot filling by sequence labelling, so the multi-intention problem can be handled well. In the above formula, the current-round input vector X, the previous-round customer service reply vector R_{t-1} and the previous-round dialogue state S_{t-1} are input into the natural language understanding decoding network Decoder_NLU to obtain the user dialogue understanding vector M_t of the current round; M_t, together with the history hidden vector H, is used in step S130 to calculate the key information vector k.
The second implementation of this embodiment differs from the second embodiment in that step S230 specifically includes:
step S320: and acquiring the customer service execution action of the current round according to the multi-round conversation state.
As shown in the following formula:
A_t = Decoder_DPL(R_{t-1}, X, S)
where A_t is the customer service execution action vector of the current round and Decoder_DPL denotes the dialogue policy learning decoding network, which predicts the action A_t that the system should take next by considering the multi-round dialogue state S and combining the query result of the model database. The DPL network is capable of producing multiple system actions; for example, in a scenario where a user queries a hotel, the DPL network may give different actions such as providing the hotel name, the hotel room type and the hotel address.
Step S330: and acquiring the customer service response of the current round according to the multi-round conversation state and the customer service execution action of the current round.
Represented by the following formula:
R_t = Decoder_NLG(R_{t-1}, U_t, A_t)
where Decoder_NLG denotes the natural language generation decoding network. The task of the NLG network is to convert the system action vector A_t into the system reply vector R_t of the current round, and the model database also needs to be queried here.
Finally, the customer service reply of the current round is obtained from the system reply vector R_t of the current round.
In the third implementation of this embodiment, the above two implementations may be carried out simultaneously or separately, and either way the customer service reply of the current round can be obtained. When the two are carried out simultaneously, as shown in fig. 6, the input data of steps S310, S320 and S330 may be further adjusted according to specific needs to better utilize the information generated in the multi-round dialogue system.
For step S310, the input vector X of the current round is acquired, and the user dialogue understanding vector M_t of the current round is then obtained through the NLU network:
M_t = Decoder_NLU(B_{t-1}, R_{t-1}, X)
where B_{t-1} is the dialogue history accumulation vector of the previous round, which includes the previous-round historical dialogue state vector S_{t-1} and the previous-round system action vector A_{t-1}. The two can simply be spliced end-to-end into B_{t-1}; other fusion methods may also be adopted, which is not limited in this embodiment.
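A trivial sketch of this end-to-end splicing (concatenation along the feature dimension is assumed):

```python
import torch

def accumulate_history(S_prev: torch.Tensor, A_prev: torch.Tensor) -> torch.Tensor:
    """Splice the previous-round dialogue state vector S_{t-1} and the
    previous-round system action vector A_{t-1} into B_{t-1}."""
    return torch.cat([S_prev, A_prev], dim=-1)
```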
For step S320, the execution action sequence A_t of the system in the current round is calculated from R_{t-1}, X and S:
A_t = Decoder_DPL(R_{t-1}, X, S)
For step S330, the system reply vector R_t of the current round is calculated from R_{t-1}, X and A_t, as shown in the following formula:
R_t = Decoder_NLG(R_{t-1}, X, A_t)
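To show how these pieces fit together, the following sketch orchestrates the third implementation; the decoder objects and the database handle are placeholders with assumed interfaces, not the patent's trained networks.

```python
def reply_for_current_round(X, R_prev, B_prev, history_H, decoders, db):
    """Illustrative orchestration: NLU -> state tracking -> dialogue policy -> NLG."""
    M_t = decoders.nlu(B_prev, R_prev, X)     # current-round user dialogue understanding vector
    S = decoders.tracker(history_H, M_t)      # multi-round dialogue state (steps S130 to S150)
    A_t = decoders.dpl(R_prev, X, S, db)      # customer service execution actions of this round
    R_t = decoders.nlg(R_prev, X, A_t, db)    # system reply vector of the current round
    return R_t
```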
the contents of the first embodiment and the second embodiment can be referred to for other steps.
On the basis of the first and second embodiments, natural language understanding and a system action execution policy are added, so that the system can better give the customer service reply according to the current round of user dialogue and further give the corresponding execution actions, improving the execution flow of the system and obtaining a more accurate customer service reply.
Example four
In this embodiment, an intelligent customer service dialog reply generation device is provided, as shown in fig. 7, specifically including:
the input module 10 is used for acquiring the user input session in the current round;
an input encoding module 20, configured to encode the current round of user input sessions to obtain a current round of input vectors;
a record obtaining module 30, configured to obtain a historical conversation record, where the historical conversation record includes at least one round of historical user sessions and corresponding historical customer service responses;
the record encoding module 40 is configured to encode the historical dialogue record to obtain a historical record hidden vector;
the key information extraction module 50 is configured to calculate a key information vector corresponding to each preset word slot according to the history hidden vector;
the association reorganization module 60 is configured to calculate association vectors of each preset word slot and other word slots according to the key information vector;
a state obtaining module 70, configured to decode to obtain multiple turns of dialog states according to the association vector;
and the customer service reply module 80 is configured to obtain the customer service replies in the current round according to the input vectors in the current round and the multi-round conversation states.
In this embodiment, the input module 10 is configured to obtain the dialogue data input by the user in the current round, for example: "I want to find a premium restaurant." The dialogue content input by the user can be a complete sentence or a meaningful short phrase composed of specific words. Further, the content of the user session may be preprocessed as required, for example by word segmentation and removal of stop words and/or modal particles, which the present invention does not limit here.
The input encoding module 20 is used to encode the data input by the user into a vector form that can be processed by the model.
Modules 30-70 are used to track the dialog state from the historical dialog records, and the execution process of the specific module and unit can refer to the description in the first embodiment. In this embodiment, the acquired historical dialogue data is encoded, and key information required by each word slot is selected from the encoded data, so as to acquire correlation between information, thereby obtaining a dialogue state.
The record obtaining module 30 is used for obtaining a history dialog record of the user and the customer service, and the history dialog record at least comprises a round of user input session and a round of customer service generated reply.
And then, the record coding module 40 codes the historical conversation record, and maps each word of the user conversation and customer service response in the historical conversation record into a low-dimensional dense space to obtain a hidden layer representation as a hidden vector of the historical conversation record. Likewise, the present invention is not limited herein to the manner in which the historical dialog records are encoded, as long as the hidden layer representation can be obtained.
Further, after the record encoding module 40 obtains the history hidden vector, the key information extracting module 50 calculates the key information vector. In the process of communication between an actual user and a customer service, a large amount of information is often contained, and for a word slot, the types and contents which can be filled in are very many. However, these pieces of information are not all data useful for tracking the session state, and are often very complicated, which has a great influence on the accurate extraction of the session state. Therefore, the purpose of the key information extraction module 50 is to extract the key information that is valid, and filter out useless information values.
Preferably, the key information extraction module 50 further includes:
a context information obtaining unit 51, configured to encode the word slot class of the word slot into a word slot hidden vector, and obtain a last word slot hidden vector as context information corresponding to the word slot;
and a key information vector calculating unit 52, configured to calculate a context vector as a key information vector corresponding to the word slot according to the attention mechanism and according to the context information and the history hidden vector.
Specifically, the context information obtaining unit 51 encodes the word slot class of a specific preset word slot as a word slot hidden vector. The last hidden state often includes context information in the entire historical dialog record, and thus the invention extracts it separately here to represent the context of the word slot.
After that, the key information vector calculation unit 52 calculates the key information vector in combination with the attention mechanism; preferably, a context attention vector is calculated as the key information vector. The attention mechanism has great advantages in extracting key information.
Therefore, the key information vector corresponding to the preset word slot is obtained by calculation from the context information, and the content most relevant to the word slot is found in a large amount of historical dialogue information.
Since the extraction of the key information from the history is performed independently for each word slot, the transfer of information between related word slots is hindered. The association reorganization module 60 compensates for this relevance: the association vectors between each preset word slot and the other word slots are calculated from the key information vectors, so as to avoid the situation where the relevant features are not learned because of insufficient data for the word slot in the training set.
Specifically, the association restructuring module 60 further includes:
a similarity calculation unit 61, configured to calculate the word slot category similarity between word slots and the word slot value similarity between word slots;
a masking matrix construction unit 62, configured to construct a masking matrix according to the word slot category similarity and the word slot value similarity;
and an association vector calculation unit 63, configured to calculate the association vectors between the word slot and the other word slots according to the masking matrix and the key information vectors.
The masking matrix construction unit 62 may still choose among various methods of calculating the masking matrix (see the first embodiment); they are not all listed here, as long as the relationship between different word slots can be embodied by the element distribution of the matrix.
In practical applications, the specific way of calculating the masking matrix can further be chosen according to different requirements. For example, the hyper-parameter fusion approach performs better in tracking the historical dialogue state but requires more effort in parameter tuning, while establishing the masking matrix by binary classification is slightly less effective than the hyper-parameter approach but is convenient, fast and more efficient. Therefore, different ways can be selected to construct the masking matrix according to the practical purpose.
The state obtaining module 70 decodes the obtained association vector to obtain multiple turns of dialog states.
Specifically, the state acquisition module 70 includes:
an intention judging unit 71, configured to judge whether the user has a clear intention in the customer service dialog process according to the history hidden vector;
if the user does not have a clear intention, the state generating unit 72 gives the corresponding multi-turn dialog state;
if the user has a clear intention, the probability calculation unit 73 decodes the association vector and the word vector sequence to obtain a state hidden vector; calculating and generating probability distribution and replication probability distribution according to the state hidden vector; calculating final probability distribution according to the generation probability distribution and the replication probability distribution; the state generating unit 72 acquires the multi-turn dialog states according to the final probability distribution.
The customer service reply module 80 takes the historical dialog state of the current round as input, and can query the database to obtain the customer service reply of the current round through a trained natural language generation model. Preferably, the customer service reply sequence of the current round is comprehensively given according to the user input session sequence, the customer service reply sequence of the previous round, the historical dialogue state sequence of the current round and the model database, and the reply data of the system of the current round is given.
The intelligent customer service dialogue reply generation device of this embodiment optimizes the method of tracking the historical dialogue state: by acquiring the relevance between word slots, features are shared among them. For example, if the training set contains little data for the word slot "scenic spot location", the recombined data corresponding to "hotel location" can be shared with that word slot according to their relevance, which greatly improves the accuracy of the trained model and yields a more accurate dialogue state.
EXAMPLE five
It should be noted that, as shown in fig. 8, the intelligent customer service dialogue reply generation apparatus according to the embodiment of the present application may be integrated into the electronic device 90 as a software module and/or a hardware module; in other words, the electronic device 90 may integrate the intelligent customer service dialogue reply generation apparatus of the above embodiments. For example, the apparatus may be implemented as a software module in the operating system of the electronic device 90, or as an application developed for it; of course, it may also be integrated into one of the hardware modules of the electronic device 90.
In another embodiment of the present application, the carrier integrated with the intelligent customer service dialog reply generation apparatus and the electronic device 90 may be separate devices (e.g., a server), and the carrier integrated with the intelligent customer service dialog reply generation apparatus may be connected to the electronic device 90 through a wired and/or wireless network and transmit the interactive information according to an agreed data format.
Fig. 8 is a schematic structural diagram of an electronic device 90 according to an embodiment of the present application. As shown in fig. 8, the electronic device 90 includes: one or more processors 91 and a memory 92; and computer program instructions stored in the memory 92 which, when executed by the processor 91, cause the processor 91 to perform the intelligent customer service dialogue reply generation method of any of the embodiments described above.
The processor 91 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 90 to perform desired functions.
The memory 92 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 91 to implement the steps of the intelligent customer service dialogue reply generation method of the various embodiments of the present application described above and/or other desired functions. Information such as the historical dialogue records and the multi-turn dialogue states may also be stored in the computer-readable storage medium.
In one example, the electronic device 90 may further include: an input device 93 and an output device 94, which are interconnected by a bus system and/or other form of connection mechanism (not shown in fig. 8).
The output device 94 may output the customer service reply sentence to the outside, and may include, for example, a display, a speaker, a printer, a communication network and a remote output device connected to it, and the like.
Of course, for simplicity, only some of the components of the electronic device 90 relevant to the present application are shown in fig. 8, and components such as buses, input devices/output interfaces, and the like are omitted. In addition, the electronic device 90 may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the intelligent customer service dialog reply generation method according to any of the above-described embodiments.
Program code for carrying out operations of embodiments of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the intelligent customer service dialog reply generation method according to various embodiments of the present application described in the above-mentioned intelligent customer service dialog reply generation device section of the present specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that in the apparatus and devices of the present application, the components may be disassembled and/or reassembled. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An intelligent customer service dialogue reply generation method for multi-turn dialogue, characterized by comprising the following steps:
acquiring the user input session in the current round;
encoding the user input session of the current round to obtain an input vector of the current round;
obtaining a historical conversation record, wherein the historical conversation record comprises at least one round of historical user conversation and corresponding historical customer service responses;
encoding the historical conversation record to obtain history record hidden vectors;
calculating a key information vector corresponding to each preset word slot according to the history record hidden vectors;
calculating an association vector of each preset word slot with the other word slots according to the key information vectors;
decoding to obtain a multi-turn dialogue state according to the association vectors;
and acquiring the customer service reply of the current round according to the input vector of the current round and the multi-turn dialogue state.
2. The method according to claim 1, further comprising, after the encoding the user input session of the current round to obtain the input vector of the current round and before the calculating the key information vector corresponding to each preset word slot according to the history record hidden vectors:
acquiring a user dialogue understanding vector of the current round according to the input vector of the current round, wherein the user dialogue understanding vector of the current round and the history record hidden vectors are jointly used for calculating the key information vector.
3. The method according to claim 1, wherein the acquiring the customer service reply of the current round according to the multi-turn dialogue state comprises:
acquiring a customer service execution action of the current round according to the multi-turn dialogue state;
and acquiring the customer service reply of the current round according to the multi-turn dialogue state and the customer service execution action of the current round.
4. The method according to claim 1, wherein the calculating a key information vector corresponding to each preset word slot according to the history record hidden vectors comprises:
encoding the word slot category of the word slot into a word slot hidden vector, and acquiring the last word slot hidden vector as context information corresponding to the word slot;
and calculating a context vector as the key information vector corresponding to the word slot by using an attention mechanism according to the context information and the history record hidden vectors.
5. The method according to claim 1, wherein the calculating an association vector of each preset word slot with the other word slots according to the key information vectors comprises:
calculating word slot category similarity between the word slots and word slot value similarity between the word slots;
constructing a masking matrix according to the word slot category similarity and the word slot value similarity;
and calculating the association vector of the word slot with the other word slots according to the masking matrix and the key information vector.
6. The method according to claim 5, wherein the constructing a masking matrix according to the word slot category similarity and the word slot value similarity comprises:
fusing the word slot category similarity and the word slot value similarity through a hyper-parameter to establish a word slot similarity matrix, and constructing the masking matrix according to the word slot similarity matrix; or
constructing similarity vectors by taking the word slot category similarity and the word slot value similarity as the horizontal and vertical coordinates respectively, performing binary classification on the similarity vectors, and constructing the masking matrix according to the classification result.
7. The method according to claim 1, wherein the decoding to obtain a multi-turn dialogue state according to the association vectors comprises:
judging whether the user has a clear intention in the customer service conversation process according to the history record hidden vectors;
if the user does not have a clear intention, giving the corresponding multi-turn dialogue state;
if the user has a clear intention, decoding to obtain a state hidden vector according to the association vectors and the word vector sequence; calculating a generation probability distribution and a copy probability distribution according to the state hidden vector; calculating a final probability distribution according to the generation probability distribution and the copy probability distribution; and acquiring the multi-turn dialogue state according to the final probability distribution.
8. An intelligent customer service dialog reply generation device, comprising:
the input module is used for acquiring the user input session of the current round;
the input encoding module is used for encoding the user input session of the current round to obtain an input vector of the current round;
the record acquisition module is used for acquiring a historical conversation record, wherein the historical conversation record comprises at least one round of historical user conversation and corresponding historical customer service responses;
the record encoding module is used for encoding the historical conversation record to obtain history record hidden vectors;
the key information extraction module is used for calculating a key information vector corresponding to each preset word slot according to the history record hidden vectors;
the association recombination module is used for calculating an association vector of each preset word slot with the other word slots according to the key information vectors;
the state acquisition module is used for decoding to obtain a multi-turn dialogue state according to the association vectors;
and the customer service reply module is used for acquiring the customer service reply of the current round according to the input vector of the current round and the multi-turn dialogue state.
9. An electronic device comprising a memory, a processor and an output means, wherein the processor, when executing computer program instructions stored in the memory, performs the method steps of any of claims 1-7, and the output means is used for outputting the customer service reply of the current round.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
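For orientation, the following minimal Python sketch walks through the attention-style computation recited in claims 4 and 5: a per-slot context (key information) vector is obtained by attending over the history record hidden vectors, and association vectors are then formed by mixing key information vectors of related slots through a masking matrix. The random vectors, dot-product attention and hand-set mask are illustrative assumptions, not the claimed implementation.

# Illustrative sketch only: per-slot attention over history hidden vectors (claim 4)
# followed by masked mixing of key information vectors across slots (claim 5).
import numpy as np

rng = np.random.default_rng(1)
hidden_dim, history_len, num_slots = 8, 5, 3

history_hidden = rng.normal(size=(history_len, hidden_dim))   # history record hidden vectors
slot_hidden = rng.normal(size=(num_slots, hidden_dim))        # word-slot hidden vectors (context info)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Claim 4: per-slot context vector via attention over the history hidden vectors.
key_info = np.zeros((num_slots, hidden_dim))
for s in range(num_slots):
    weights = softmax(history_hidden @ slot_hidden[s])         # attention weights
    key_info[s] = weights @ history_hidden                     # key information vector

# Claim 5: association vectors obtained by mixing key information vectors of related slots
# according to a masking matrix (here hand-set: slots 0 and 1 are related).
mask = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
association = (mask / mask.sum(axis=1, keepdims=True)) @ key_info

print(association.shape)   # (num_slots, hidden_dim)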
CN202010457142.6A 2020-05-26 2020-05-26 Intelligent customer service dialogue reply generation method and device and electronic equipment Active CN111651573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457142.6A CN111651573B (en) 2020-05-26 2020-05-26 Intelligent customer service dialogue reply generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111651573A true CN111651573A (en) 2020-09-11
CN111651573B CN111651573B (en) 2023-09-05

Family

ID=72346850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457142.6A Active CN111651573B (en) 2020-05-26 2020-05-26 Intelligent customer service dialogue reply generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111651573B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710704A (en) * 2018-05-28 2018-10-26 出门问问信息科技有限公司 Determination method, apparatus, electronic equipment and the storage medium of dialogue state
CN110555095A (en) * 2018-05-31 2019-12-10 北京京东尚科信息技术有限公司 Man-machine conversation method and device
WO2019233219A1 (en) * 2018-06-07 2019-12-12 腾讯科技(深圳)有限公司 Dialogue state determining method and device, dialogue system, computer device, and storage medium
CN110321418A (en) * 2019-06-06 2019-10-11 华中师范大学 A kind of field based on deep learning, intention assessment and slot fill method
CN110727771A (en) * 2019-09-03 2020-01-24 北京三快在线科技有限公司 Information processing method and device, electronic equipment and readable storage medium
CN110704588A (en) * 2019-09-04 2020-01-17 平安科技(深圳)有限公司 Multi-round dialogue semantic analysis method and system based on long-term and short-term memory network
CN110659360A (en) * 2019-10-09 2020-01-07 初米网络科技(上海)有限公司 Man-machine conversation method, device and system
CN111061850A (en) * 2019-12-12 2020-04-24 中国科学院自动化研究所 Dialog state tracking method, system and device based on information enhancement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhao Mingxing et al.: "Design and Implementation of a Task-Oriented Dialogue System Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology Series *
Huang Yi et al.: "Architecture and Algorithms of Intelligent Dialogue Systems", Journal of Beijing University of Posts and Telecommunications *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220253549A1 (en) * 2021-02-08 2022-08-11 Capital One Services, Llc Methods and systems for automatically preserving a user session on a public access shared computer
US11861041B2 (en) * 2021-02-08 2024-01-02 Capital One Services, Llc Methods and systems for automatically preserving a user session on a public access shared computer
CN112988960A (en) * 2021-02-09 2021-06-18 中国科学院自动化研究所 Dialog state tracking method, device, equipment and storage medium
CN113821620A (en) * 2021-09-18 2021-12-21 湖北亿咖通科技有限公司 Multi-turn conversation task processing method and device and electronic equipment
CN113821620B (en) * 2021-09-18 2023-12-12 亿咖通(湖北)技术有限公司 Multi-round dialogue task processing method and device and electronic equipment
CN115953434A (en) * 2023-01-31 2023-04-11 北京百度网讯科技有限公司 Track matching method and device, electronic equipment and storage medium
CN115953434B (en) * 2023-01-31 2023-12-19 北京百度网讯科技有限公司 Track matching method, track matching device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111651573B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN109101537B (en) Multi-turn dialogue data classification method and device based on deep learning and electronic equipment
CN111651573A (en) Intelligent customer service dialogue reply generation method and device and electronic equipment
CN113254610B (en) Multi-round conversation generation method for patent consultation
CN109190134B (en) Text translation method and device
CN113011186B (en) Named entity recognition method, named entity recognition device, named entity recognition equipment and computer readable storage medium
CN112860862B (en) Method and device for generating intelligent agent dialogue sentences in man-machine dialogue
CN112734881A (en) Text synthesis image method and system based on significance scene graph analysis
CN114528898A (en) Scene graph modification based on natural language commands
CN115145551A (en) Intelligent auxiliary system for machine learning application low-code development
CN112364664B (en) Training of intention recognition model, intention recognition method, device and storage medium
CN112200664A (en) Repayment prediction method based on ERNIE model and DCNN model
CN112949758A (en) Response model training method, response method, device, equipment and storage medium
Gulyaev et al. Goal-oriented multi-task bert-based dialogue state tracker
CN114168754A (en) Relation extraction method based on syntactic dependency and fusion information
CN113761868A (en) Text processing method and device, electronic equipment and readable storage medium
CN113408287A (en) Entity identification method and device, electronic equipment and storage medium
Jhunjhunwala et al. Multi-action dialog policy learning with interactive human teaching
CN115858756A (en) Shared emotion man-machine conversation system based on perception emotional tendency
CN115759062A (en) Knowledge injection-based text and image pre-training model processing method and text and image retrieval system
Zhao et al. Aligned visual semantic scene graph for image captioning
CN114742016A (en) Chapter-level event extraction method and device based on multi-granularity entity differential composition
CN113343692B (en) Search intention recognition method, model training method, device, medium and equipment
CN116432755A (en) Weight network reasoning method based on dynamic entity prototype
CN112989794A (en) Model training method and device, intelligent robot and storage medium
CN116127013A (en) Personal sensitive information knowledge graph query method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant