CN112800737B - Natural language text generation method and device and dialogue system - Google Patents

Natural language text generation method and device and dialogue system

Info

Publication number
CN112800737B
CN112800737B · CN201911036989.0A · CN201911036989A
Authority
CN
China
Prior art keywords
text
natural language
structured data
language text
output part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911036989.0A
Other languages
Chinese (zh)
Other versions
CN112800737A (en)
Inventor
王娟
程建波
彭南博
黄志翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN201911036989.0A priority Critical patent/CN112800737B/en
Publication of CN112800737A publication Critical patent/CN112800737A/en
Application granted granted Critical
Publication of CN112800737B publication Critical patent/CN112800737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/338Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The disclosure provides a natural language text generation method and device and a dialogue system, and relates to the field of natural language processing. In the present disclosure, a natural language text generation model is obtained by training a recurrent neural network; the structured data to be generated is input into the natural language text generation model, which outputs a corresponding text sentence pattern; the placeholders of the slots in the text sentence pattern are replaced, as required, with the slot values in the structured data to be generated, and the natural language text is finally generated. With this scheme, the text sentence patterns corresponding to the text templates can be learned flexibly and automatically, which greatly reduces manual configuration work and saves manpower.

Description

Natural language text generation method and device and dialogue system
Technical Field
The present disclosure relates to the field of natural language processing, and in particular, to a method and apparatus for generating a natural language text, and a dialog system.
Background
With the development of internet technology and the development of artificial intelligence, natural language processing technology has made great progress.
Natural language text generation (Natural Language Generation, abbreviated NLG) technology is an important part of natural language processing technology and refers to generating natural language text from structured data.
In some related technologies, text templates and the text sentence patterns corresponding to them are predefined manually; it is then determined which text sentence pattern under which text template a piece of structured data corresponds to, and the corresponding natural language text is finally generated using that text sentence pattern.
Disclosure of Invention
The inventors found that in the related art the text sentence patterns corresponding to the text templates need to be manually preconfigured, which is time-consuming, labor-intensive, and insufficiently flexible.
In the method of the present disclosure, a natural language text generation model is obtained by training a recurrent neural network with structured data training samples and the text templates corresponding to those samples. The structured data to be generated is input into the natural language text generation model, which outputs a corresponding text sentence pattern; if necessary, the placeholders of the slots in the text sentence pattern are replaced with the slot values in the structured data to be generated, and the natural language text is finally generated. The text sentence patterns corresponding to the text templates can thus be learned flexibly and automatically, which greatly reduces manual configuration work and saves manpower.
According to some embodiments of the present disclosure, there is provided a natural language text generation method including:
Obtaining structured data to be generated, wherein the structured data comprises an intention, or the structured data comprises the intention, a slot and a value thereof;
inputting the structured data into a natural language text generation model and outputting a corresponding text template sentence pattern, wherein the natural language text generation model is obtained by training a recurrent neural network with a structured data training sample and a text template corresponding to the structured data training sample;
If the structured data comprises slots and values thereof, replacing placeholders of the slots in the text template sentence pattern with values of corresponding slots in the structured data to obtain corresponding natural language text;
and if the structured data does not comprise slots and values thereof, taking the text template sentence pattern as corresponding natural language text.
In some embodiments, inputting the structured data into the natural language text generation model and outputting a corresponding text template sentence pattern comprises: inputting the structured data into the natural language text generation model to sequentially obtain a plurality of output parts, each output part comprising a plurality of predicted text nodes; and determining a plurality of text template sentence patterns corresponding to the structured data according to combinations of the text nodes in the output parts.

In some embodiments, the determining a plurality of text sentence patterns corresponding to the structured data according to combinations of the text nodes in the output parts includes: selecting a preset number of text nodes with the highest prediction probability from each output part, wherein the preset number is plural; combining the selected text nodes of the output parts; and selecting the preset number of text node combinations with the highest combination probability from among the text node combinations as the plurality of text sentence patterns corresponding to the structured data.
In some embodiments, inputting the structured data into the natural language text generation model to sequentially obtain a plurality of output parts includes: inputting the structured data into the natural language text generation model to sequentially obtain a plurality of output parts, wherein each output part is used as an input for predicting the next output part.
In some embodiments, the structured data and the structured data training samples are one-hot coded.
In some embodiments, the training process of the natural language text generation model includes: determining a total loss according to the loss between the predicted word in each output part of the natural language text generation model and the actual word of the corresponding part in the text template; and when the gradient calculated based on the total loss meets a preset condition, completing training of the recurrent neural network to obtain the natural language text generation model.
In some embodiments, structured data of a reply generated by a dialog system based on a user request is obtained, and the natural language text generation method is used to convert the replied structured data into corresponding natural language text, which is then output.
According to other embodiments of the present disclosure, there is provided a natural language text generating apparatus including: a memory; and a processor coupled to the memory, the processor configured to perform the natural language text generation method of any embodiment based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a dialog system comprising: a structured data generation device configured to generate structured data of a reply based on a user request; and a natural language text generating device as in any one of the embodiments of the present disclosure, configured to convert the replied structured data into corresponding natural language text and output it.

According to yet further embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the natural language text generation method of any one of the embodiments.
Drawings
The drawings that are required for use in the description of the embodiments or the related art will be briefly described below. The present disclosure will be more clearly understood from the following detailed description with reference to the accompanying drawings.
It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without inventive faculty.
Fig. 1 is a flow diagram of some embodiments of a method of generating natural language text of the present disclosure.
Fig. 2 is a schematic diagram of combining text nodes into a corresponding text sentence pattern using a beam search algorithm in the present disclosure.
FIG. 3 is a flow diagram of some embodiments of the present disclosure generating a natural language text generation model.
FIG. 4 is a schematic diagram of some embodiments of training and prediction of a natural language text generation model of the present disclosure.
Fig. 5 is a schematic diagram of some embodiments of a natural language text generating device of the present disclosure.
Fig. 6 is a schematic diagram of a dialog system of an exemplary embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
Fig. 1 is a flow diagram of some embodiments of a method of generating natural language text of the present disclosure. The method may be performed, for example, by a natural language text generating device.
As shown in FIG. 1, the method of this embodiment includes steps 101-103.
In step 101, structured data to be generated is acquired as input information.
The structured data includes at least an intent, and optionally a slot and its value (slot-value). The structured data may be represented, for example, using a structured query language (Structured Query Language, SQL) style. For example, in an ordering service, one piece of structured data is hello, whose intent is "hello"; it includes no slots or values and simply expresses a greeting. As another example, the structured data select (name=kender) has the intent "select", the slot "name", and the slot value "kender", meaning "select a restaurant whose name is kender". As a further example, the structured data select (name=kender, count=2) has the intent "select", indicating that corresponding restaurants were found; the combination of slots and values is "name=kender, count=2", meaning "2 restaurants named kender were found".
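As a minimal illustration of this data structure (the Python representation below is an assumption for readability, not the notation used by the disclosure), structured data can be modeled as an intent plus an optional slot-to-value mapping:

```python
# A minimal, assumed representation of the structured data described above:
# an intent plus an optional mapping from slots to slot values.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StructuredData:
    intent: str
    slots: Dict[str, str] = field(default_factory=dict)

# "hello" carries only an intent.
greeting = StructuredData(intent="hello")

# select(name=kender, count=2): intent "select" with two slot values.
query = StructuredData(intent="select", slots={"name": "kender", "count": "2"})

print(greeting)  # StructuredData(intent='hello', slots={})
print(query)     # StructuredData(intent='select', slots={'name': 'kender', 'count': '2'})
```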
In some embodiments, the structured data to be generated may be one-hot encoded. Each part of the structured data (e.g., the intent, the slot, the slot value) is one-hot encoded separately. Each part corresponds to one one-hot vector, and the position set to 1 is determined by the rank of the part's value among all possible values of that part. The order of the ranking is not limited, but it must be consistent between the training process and the prediction process. For example, in an ordering service, assume there are 5 intents in total and the intent to be encoded is select; if select ranks first in the ordered list of all intents, its one-hot vector is denoted as (1, 0, 0, 0, 0).
In some embodiments, if the structured data includes multiple parts such as the intent and slots with their values, the one-hot vectors of these parts (e.g., the one-hot vector of the intent and the one-hot vector of the slot and its value) are concatenated to obtain the one-hot vector of the structured data as a whole (e.g., the intent-slot-value combination).
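The encoding and concatenation steps can be sketched as follows; the intent list, the slot-value inventory, and the fixed orderings are hypothetical stand-ins for the consistent orderings required above:

```python
import numpy as np

# Hypothetical, fixed orderings; they must be identical at training and prediction time.
INTENTS = ["select", "hello", "bye", "confirm", "reject"]          # 5 intents, as in the example
SLOT_VALUES = ["name=kender", "name=mcd", "count=1", "count=2"]    # assumed slot-value inventory

def one_hot(value, ordering):
    """Return a one-hot vector whose 1 sits at the rank of `value` in `ordering`."""
    vec = np.zeros(len(ordering))
    vec[ordering.index(value)] = 1.0
    return vec

# Intent "select" ranks first among the 5 intents -> (1, 0, 0, 0, 0).
intent_vec = one_hot("select", INTENTS)

# Encode each slot-value part separately, then concatenate everything into the
# one-hot vector of the structured data as a whole (intent-slot-value combination).
slot_vecs = [one_hot(sv, SLOT_VALUES) for sv in ["name=kender", "count=2"]]
structured_vec = np.concatenate([intent_vec] + slot_vecs)

print(intent_vec)            # [1. 0. 0. 0. 0.]
print(structured_vec.shape)  # (13,) = 5 + 4 + 4
```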
In step 102, the structured data to be generated is input into a natural language text generation model and a corresponding text sentence pattern is output.
Step 102 includes two sub-steps, 1021 and 1022.
In step 1021, the structured data to be generated is input to a natural language text generation model, and predictive text nodes of a plurality of output parts are sequentially obtained.
The structured data to be generated is input into the natural language text generation model, which sequentially outputs a plurality of parts through what it has learned. Each output part includes at least one predicted text node (a text node may be a word, represented by a corresponding identifier, also referred to as an ID), and each output part serves as an input for predicting the next output part, until prediction ends. After prediction ends, each identifier is converted back into the corresponding text node according to the word stock constructed during training.
In step 1022, text sentence patterns corresponding to the structured data are obtained by combining according to the text nodes of each predicted output portion.
An exemplary method for combining text nodes into corresponding text sentence patterns is as follows: according to the prediction probabilities of the text nodes in each output part, select a preset number (at least one) of text nodes with the highest prediction probability from each output part as that part's predicted text node set; then select one text node from each part's predicted text node set and combine the selections into a text node combination; traverse the predicted text node sets of all output parts in this way to obtain a plurality of text node combinations; finally, select the preset number of combinations with the highest combination probability from among them as the text sentence patterns corresponding to the structured data.
Combining the text nodes into corresponding text sentence patterns can be realized with a beam search algorithm, specifically as follows: the preset number is determined by setting the beam width (Beam Width); when predicting each output part, the heuristic cost of each text node is calculated, and the text nodes of that part are sorted in descending order of heuristic cost. According to this order, the preset number of text nodes are kept and expanded at the next layer, and the other text nodes, which are not expanded at the next layer, are pruned. The heuristic cost may be calculated, for example, as log(P(x)), where x is a text node or a text node combination and P(x) is the probability of x.
Therefore, if the beam width (i.e., the preset number) is greater than 1, a plurality of text sentence patterns may be generated for one piece of input structured data, which increases the diversity of the predicted text sentence patterns.
The combination probability of a text node combination may be obtained by summing the prediction probabilities of all text nodes in that combination; the preset number of combinations with the highest combination probability are then selected from the plurality of text node combinations as the final text sentence patterns corresponding to the structured data, so that the preset number of text sentence patterns are finally predicted for the structured data.
Fig. 2 is a schematic diagram of combining text nodes into a corresponding text sentence pattern using the beam search algorithm in the present disclosure. As shown in fig. 2, assume that a certain piece of structured data is predicted by the natural language text generation model to produce two output parts, a first output part and a second output part, and the preset number (beam width) is set to 2. The first output part has three text nodes "a", "b" and "c" with prediction probabilities -1.05, -0.92 and -1.39 respectively; since the two nodes with the highest probability are "a" (-1.05) and "b" (-0.92), the lowest-probability node "c" is pruned, and the text node set of the first output part is {a, b}. In the second output part, "a" is expanded into three text nodes "d" (probability -1.90), "e" (probability -0.22) and "f" (probability -3.00); keeping the two highest-probability nodes and pruning "f", the set obtained from "a" is {d, e}. Likewise, "b" is expanded into "g" (probability -0.92), "h" (probability -0.69) and "i" (probability -2.30); pruning the lowest-probability node "i" gives the set {g, h}. Four text node combinations are thus obtained: "ad" (probability -1.05 + (-1.90) = -2.95), "ae" (probability -1.05 + (-0.22) = -1.27), "bg" (probability -0.92 + (-0.92) = -1.84) and "bh" (probability -0.92 + (-0.69) = -1.61). The 2 combinations with the highest combination probability, "ae" and "bh", are selected as the final text sentence patterns corresponding to the structured data.
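The following sketch reproduces the Fig. 2 walk-through with a generic beam search; the per-part candidates and log probabilities are taken from the example, while the function itself is an illustrative formulation (pruning to the beam width after each part) rather than the patent's implementation:

```python
def beam_search(first_candidates, expand, num_parts, beam_width):
    """Generic beam search over per-part candidate sets.

    first_candidates: dict mapping text node -> log probability for the first output part.
    expand(prefix):   dict mapping text node -> log probability for the next output part,
                      given the combination built so far.
    """
    # Keep only the beam_width best nodes of the first output part (the rest are pruned).
    beams = sorted(first_candidates.items(), key=lambda kv: kv[1], reverse=True)[:beam_width]
    for _ in range(num_parts - 1):
        expanded = [(prefix + node, score + logp)
                    for prefix, score in beams
                    for node, logp in expand(prefix).items()]
        # Prune again: keep the beam_width combinations with the highest summed log probability.
        beams = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_width]
    return beams

# Log probabilities from the Fig. 2 example.
FIRST = {"a": -1.05, "b": -0.92, "c": -1.39}
SECOND = {"a": {"d": -1.90, "e": -0.22, "f": -3.00},
          "b": {"g": -0.92, "h": -0.69, "i": -2.30}}

print(beam_search(FIRST, lambda prefix: SECOND[prefix[-1]], num_parts=2, beam_width=2))
# -> [('ae', -1.27), ('bh', -1.61)]
```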
In step 103, the placeholders of the slots in the text sentence patterns are replaced as required, so as to obtain the corresponding natural language text.
Specifically, if the structured data to be generated includes slots and their values, the placeholders of the slots in the text sentence pattern are replaced with the values of the corresponding slots in the structured data to obtain the corresponding natural language text; if the structured data to be generated does not include slots and values, the text sentence pattern is directly taken as the natural language text generated for the structured data.
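A minimal sketch of this replacement step, assuming placeholders written as [slot] inside the predicted sentence pattern (the bracket syntax is only an assumption for illustration):

```python
def fill_slots(sentence_pattern, slots):
    """Replace [slot] placeholders with slot values; if there are no slots,
    the sentence pattern is already the natural language text."""
    text = sentence_pattern
    for slot, value in slots.items():
        text = text.replace(f"[{slot}]", str(value))
    return text

# Structured data with slots: placeholders are substituted.
print(fill_slots("found [count] restaurants named [name]", {"name": "kender", "count": 2}))
# -> found 2 restaurants named kender

# Structured data without slots: the sentence pattern is used as-is.
print(fill_slots("hello, how can I help you?", {}))
```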
According to the above embodiment, the recurrent neural network is trained with structured data training samples and the corresponding text templates to obtain the natural language text generation model; the structured data to be generated is input into the model and a corresponding text sentence pattern is output; if necessary, the placeholders of the slots in the text sentence pattern are replaced with the slot values in the structured data, and the natural language text is finally generated. The text sentence patterns corresponding to the text templates can thus be learned flexibly and automatically, which greatly reduces manual configuration work and saves manpower. In addition, if the beam width is greater than 1, a plurality of text sentence patterns may be generated for one piece of input structured data, which increases the diversity of the predicted text sentence patterns.
FIG. 3 is a flow diagram of some embodiments of the present disclosure generating a natural language text generation model. As shown in FIG. 3, the method of this embodiment includes steps 301-303.
In step 301, a training sample set is obtained, which includes a plurality of structured data training samples and text templates corresponding to each training sample, and data in the training sample set is preprocessed to be input information of a training model.
Obtaining the structured data training samples includes: extracting all intents and slots from the system replies of the dialogue system of a certain service, and combining the intents and slots to obtain the structured data training samples.
Obtaining the text template corresponding to a structured data training sample includes: in the system replies of the dialogue system of a certain service, replacing the specific slot values in a reply sentence with the corresponding slot placeholders to obtain the text template corresponding to the structured data training sample. For example, the reply "found 2 restaurants for you" becomes, after replacement, "found [count] restaurants for you", where count is the number of restaurants.
For example, a template library includes 3 intent-slot combinations (i.e., 3 structured data training samples), with the text template corresponding to each combination given in brackets. The template library data includes: select_name ("found a restaurant named [name]"), bye_ ("goodbye"), and select_name_count ("found [count] restaurants named [name]").
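A minimal sketch of this extraction step is given below; it assumes each logged system reply is stored together with its intent and the slot values that appear literally in the reply text, and the reply corpus itself is invented for illustration:

```python
# Hypothetical logged system replies, each paired with the intent and the slot
# values that appear literally in the reply text.
replies = [
    {"intent": "select", "text": "found 2 restaurants named kender",
     "slots": {"name": "kender", "count": "2"}},
    {"intent": "bye", "text": "goodbye", "slots": {}},
]

def to_training_pair(reply):
    """Replace each concrete slot value with its placeholder to obtain the text
    template, and combine the intent and slot names into the structured-data sample."""
    template = reply["text"]
    for slot, value in reply["slots"].items():
        template = template.replace(value, f"[{slot}]")
    sample = "_".join([reply["intent"]] + list(reply["slots"]))
    return sample, template

for reply in replies:
    print(to_training_pair(reply))
# ('select_name_count', 'found [count] restaurants named [name]')
# ('bye', 'goodbye')
```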
The obtained structured data training sample and the text template corresponding to the structured data training sample can be preprocessed. Preprocessing for the text templates includes padding processing, addition of sentence symbols, and ID mapping of words and mapping of word vectors in the text templates. Preprocessing for structured data includes performing one-hot encoding. Described in detail below.
A filling process (also called padding) is performed on the text templates so that they all have equal length. For example, sentences whose number of words exceeds a fixed length are truncated at the end, and the same padding word is added to sentences with too few words, so that all sentences end up with the same length. One filling method is forward padding, in which the padding word is added before sentences with too few words, so that all processed sentences have the same length.
Sentence pattern symbols are added to the text templates: a start symbol <sos> is added before each text template to mark the start of the sentence, and an end symbol <eos> is added after the text template to mark the end of the sentence.
ID mapping is performed on the words in a text template based on a word stock (vocabulary). The word stock is constructed as follows: based on the segmented text templates, each word is assigned a unique identifier, and the mapping between words and identifiers is stored. The word stock arranges and stores words in descending order of word frequency. For example, ordered from high to low word frequency, the constructed word stock may contain entries such as "UNK", "SOS", "EOS", punctuation marks, the slot placeholders "NAME" and "COUNT", and common words from the templates such as "restaurant", "you", "find", "home", "good" and "phone". Here "UNK" represents words not present in the word vectors (e.g., words whose frequency is below a certain value, so that rarely occurring words are culled), "SOS" is the start indicator marking the beginning of a text template, and "EOS" is the end indicator marking its end. The word frequency is calculated as the number of occurrences of a word in the word stock divided by the total number of occurrences of all words in the word stock.
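The vocabulary construction, sentence symbols, ID mapping, and forward padding can be sketched together as follows; the example uses whitespace-tokenised English templates and an assumed PAD token for readability (the disclosure's templates are Chinese and would be segmented by a word segmenter):

```python
from collections import Counter

templates = ["found [count] restaurants named [name]", "goodbye"]

# Build the word stock: special tokens first, then words in descending word-frequency order.
counts = Counter(w for t in templates for w in t.split())
vocab = ["PAD", "UNK", "SOS", "EOS"] + [w for w, _ in counts.most_common()]  # PAD is an assumed pad token
word2id = {w: i for i, w in enumerate(vocab)}

def encode(template, max_len):
    """Add SOS/EOS, map words to IDs, truncate overlong sentences, and forward-pad short ones."""
    ids = [word2id["SOS"]] + [word2id.get(w, word2id["UNK"]) for w in template.split()] + [word2id["EOS"]]
    ids = ids[:max_len]                                    # truncate sentences that are too long
    return [word2id["PAD"]] * (max_len - len(ids)) + ids   # padding added *before* the sentence

for t in templates:
    print(encode(t, max_len=8))
```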
Word vector mapping for a text template refers to mapping each word in the segmented text template into a word vector. Applicable word vector generation methods include, for example, the GloVe model and the word2vec model.
The one-hot encoding preprocessing for the structured data refers to one-hot encoding each part (such as the intent, the slot, and the slot value) of a structured data training sample. Each part corresponds to one one-hot vector, and the position set to 1 is determined by the rank of the part's value among all possible values of that part. The order of the ranking is not limited, but it must be consistent between the prediction process and the training process. For example, in an ordering service, assume there are 5 intents in total and the intent to be encoded is select; if select ranks first in the ordered list of all intents, its one-hot vector is denoted as (1, 0, 0, 0, 0).
The word vectors obtained from the preprocessed text template are then connected with the one-hot vector obtained from the preprocessed structured data (for example, using a concat operation) to obtain the connection vector of the one-hot vector and the word vectors.
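A minimal sketch of forming this connection vector for a single step, with invented dimensions (a 13-dimensional structured-data one-hot vector and 8-dimensional word vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

structured_onehot = np.zeros(13)          # one-hot vector of the structured data as a whole
structured_onehot[0] = 1.0

embedding = rng.normal(size=(20, 8))      # stand-in word-vector table (20 words, 8 dimensions)
word_id = 5                               # ID of the current template word

# Connection vector fed to the RNN at this step: concat(one-hot of structured data, word vector).
step_input = np.concatenate([structured_onehot, embedding[word_id]])
print(step_input.shape)                   # (21,)
```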
In step 302, input information is input to a recurrent neural network, a plurality of output portions are sequentially obtained, and a total loss is determined according to losses between predicted words in each output portion and actual words in corresponding portions in the text template.
In some embodiments, training of the recurrent neural network (Recurrent Neural Network, RNN) is implemented with an encoder-decoder structure. The encoder handles the encoding of the data in the training sample set, and the decoder handles mapping the output word IDs back into words.
The connection vector of the one-hot vector and the word vectors obtained in step 301 is input into the recurrent neural network, and a plurality of output parts are obtained in sequence. Each output part contains a predicted word, and each output part serves as an input for training the next output part; that is, the word at the next position is predicted from the word at the current position. For example, if the next-position word predicted from the current-position word "find" is "to", then "find" and "to" together serve as the input for the next prediction.
At each prediction step, one output part yields one predicted word; the loss between the predicted word and the actual word of the corresponding part in the text template is calculated, and the losses calculated for all output parts are accumulated and summed to obtain the total loss. The loss may be calculated, for example, by applying the softmax function to the prediction and comparing it with the actual word.
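A minimal numpy sketch of the per-step loss and the summed total loss, reading the description above as softmax cross-entropy between the prediction and the actual word; the logits and target IDs are invented:

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def step_loss(logits, target_id):
    """Cross-entropy between the softmax over predicted words and the actual word."""
    return -np.log(softmax(logits)[target_id])

rng = np.random.default_rng(0)
vocab_size, num_steps = 20, 5
logits_per_step = [rng.normal(size=vocab_size) for _ in range(num_steps)]  # model outputs X1..X5
target_ids = [3, 7, 1, 4, 2]                                               # actual template words

total_loss = sum(step_loss(l, t) for l, t in zip(logits_per_step, target_ids))
print(total_loss)
```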
In step 303, a gradient is calculated based on the total loss, and when the gradient meets a preset condition, the training of the cyclic neural network is completed, and a natural language text generation model is obtained.
In some embodiments, the gradient based on the loss is computed using a back-propagation algorithm; that is, the gradient of the loss is computed and the weights of the recurrent neural network are updated according to the gradient. The weights of the recurrent neural network may be trained, for example, through the TensorFlow framework.
In some embodiments, the preset condition includes a set threshold: when the gradient is less than or equal to the set threshold, the training process of the recurrent neural network is completed, and the natural language text generation model is obtained.
FIG. 4 is a schematic diagram of some embodiments of training and prediction of a natural language text generation model of the present disclosure, where H0, H1, H2, …, H6 represent the hidden layer of the recurrent neural network at successive steps; typically, the initial state H0 is obtained by random initialization.
As shown in fig. 4, a specific training procedure is as follows. Each piece of structured data is trained starting from the start symbol SOS; take the structured data select (name=kender, count=2) as an example. In the first step, after passing through the hidden layer H1 of the recurrent neural network, the first output part X1, i.e. the first predicted word, is obtained. The connection vector formed by the one-hot vector of the structured data and the word vector of X1 from its text template is taken as the input of the second step, and the second output part X2 is obtained after the hidden layer H2. The connection vector formed by the one-hot vector of the structured data and the word vector of X2 from its text template is taken as the input of the third step, and the third output part X3 is obtained after the hidden layer H3, and so on, until the five output parts X1, X2, X3, X4 and X5 are obtained; encountering the end symbol EOS of the sentence pattern indicates that the training process for this piece of structured data ends. The losses between the predicted word and the actual word of the output part of each step are then calculated respectively, the losses of all steps are summed to obtain the total loss for this piece of structured data, and the gradient is calculated based on the total loss until the gradient meets the preset condition, at which point training of the recurrent neural network on this piece of structured data ends. Training then continues with the next piece of structured data until all pieces of structured data have been trained, and the natural language text generation model is obtained.
As shown in fig. 4, a specific prediction procedure is as follows. Starting from the start symbol SOS, the structured data to be generated passes through the hidden layer H1 of the network in the first step to obtain the first output part X1, which contains one or more predicted words. X1 and the one-hot vector of the structured data to be generated are taken as the input of the second step, and the second output part X2, containing one or more predicted words, is obtained after the hidden layer H2. X2 and the one-hot vector of the structured data to be generated are taken as the input of the third step, and the third output part X3, containing one or more predicted words, is obtained after the hidden layer H3, and so on, until the predicted words of the five output parts X1, X2, X3, X4 and X5 are obtained; when the end symbol EOS of the sentence pattern is encountered, the prediction process for this structured data ends. The predicted words of the five output parts are then combined to generate one or more text sentence patterns, which can be obtained using the beam search method described above. Finally, the placeholders of the slots in the text sentence patterns are replaced with slot values as required to obtain the corresponding natural language text.
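A minimal sketch of this autoregressive prediction loop is shown below; a toy RNN cell with random weights stands in for the trained model, so the dimensions, vocabulary, and weights are all invented and the output is only illustrative (greedy selection is used instead of beam search for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["PAD", "UNK", "SOS", "EOS", "found", "[count]", "restaurants", "named", "[name]"]
hidden_dim, embed_dim, onehot_dim = 16, 8, 13

# Toy parameters standing in for the trained recurrent neural network.
E  = rng.normal(size=(len(vocab), embed_dim))                       # word vectors
Wx = rng.normal(size=(hidden_dim, embed_dim + onehot_dim)) * 0.1    # input -> hidden
Wh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1                # hidden -> hidden
Wo = rng.normal(size=(len(vocab), hidden_dim)) * 0.1                # hidden -> vocabulary logits

structured_onehot = np.zeros(onehot_dim)   # one-hot vector of the structured data to be generated
structured_onehot[0] = 1.0

h = np.zeros(hidden_dim)                   # initial hidden state H0
word_id = vocab.index("SOS")               # prediction starts from the start symbol
outputs = []
for _ in range(10):                        # output parts X1, X2, ... until EOS or a step limit
    x = np.concatenate([structured_onehot, E[word_id]])   # connection vector for this step
    h = np.tanh(Wx @ x + Wh @ h)                          # hidden layer H1, H2, ...
    word_id = int(np.argmax(Wo @ h))                      # greedy choice; beam search would keep several
    if vocab[word_id] == "EOS":
        break
    outputs.append(vocab[word_id])
print(outputs)                             # predicted text nodes forming the sentence pattern
```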
Fig. 5 is a schematic diagram of some embodiments of a natural language text generating device of the present disclosure.
As shown in fig. 5, the natural language text generating device 500 of this embodiment includes a memory 510 and a processor 520 coupled to the memory 510, the processor 520 being configured to perform the natural language text generating method in any of the embodiments of the present disclosure based on instructions stored in the memory 510.
The memory 510 may include, for example, system memory, fixed nonvolatile storage media, and the like. The system memory stores, for example, an operating system, application programs, boot loader (BootLoader), database, and other programs.
Fig. 6 is a schematic diagram of a dialog system of an exemplary embodiment of the present disclosure.
As shown in fig. 6, the apparatus of this embodiment includes: structured data generation means 601, natural language text generation means 602.
The structured data generation device 601 is configured to have the dialog system make a system reply according to a user request and to obtain structured data based on the system reply. The structured data generation device 601 includes a user request module 6011, a natural language text understanding module 6012, and a dialog management module 6013.
The user request module 6011 is configured to cause the dialog system to receive a user request from a user side. The user request is natural language text. The user request may include any of a voice request, a text request.
The natural language text understanding module 6012 is configured to process the received user request, through natural language understanding (Natural Language Understanding, NLU) technology, into structured data that the dialog system can understand, i.e., the structured data corresponding to the user request.
The dialog management module 6013 is configured to generate a system reply in the form of structured data according to the structured data corresponding to the user request.
The natural language text generating device 602 is configured to convert the structured data of the system reply acquired by the structured data generating device 601 into a corresponding natural language text by using a natural language text generating method in any of some embodiments of the present disclosure, and output the corresponding natural language text. The natural language text generating device 602 includes a natural language text generating module 6021 and a replying user module 6022.
The natural language text generation module 6021 is configured to convert the system reply to the user request into corresponding natural language text using the natural language text generation method in any of some embodiments of the present disclosure, and to output the converted natural language text.
The reply user module 6022 is configured to send the natural language text output by the natural language text generating module 6021 to the user, so as to form a final reply of the dialogue system to the user request.
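As a minimal sketch of how these modules chain together (all function names and the toy components are hypothetical placeholders for the modules described above):

```python
def handle_user_request(user_text, nlu, dialog_manager, nlg):
    """End-to-end flow of Fig. 6: request -> NLU -> dialog management -> NLG -> reply."""
    request_data = nlu(user_text)               # module 6012: text -> structured data
    reply_data = dialog_manager(request_data)   # module 6013: structured request -> structured reply
    return nlg(reply_data)                      # modules 6021/6022: structured reply -> natural language text

# Toy stand-ins for the real components, wired together for illustration only.
reply = handle_user_request(
    "find restaurants named kender",
    nlu=lambda text: {"intent": "select", "slots": {"name": "kender"}},
    dialog_manager=lambda req: {"intent": "select", "slots": {"name": "kender", "count": "2"}},
    nlg=lambda rep: f"found {rep['slots']['count']} restaurants named {rep['slots']['name']}",
)
print(reply)  # found 2 restaurants named kender
```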
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present disclosure shall fall within the scope of protection of the present disclosure.

Claims (10)

1. A method for generating natural language text, comprising:
Obtaining structured data to be generated, wherein the structured data comprises an intention, or the structured data comprises an intention, a slot and a value thereof, and the structured data is represented by using a structured query language;
Inputting the structured data into a natural language text generation model and outputting a corresponding text sentence pattern, wherein the natural language text generation model is obtained by training a recurrent neural network with a structured data training sample and a text template corresponding to the structured data training sample, wherein, during training, the word vectors obtained from the preprocessed text template are connected with the one-hot encoding vector obtained from the preprocessed structured data training sample to obtain a connection vector of the one-hot encoding vector and the word vectors, the connection vector of the one-hot encoding vector and the word vectors is input into the recurrent neural network to sequentially obtain a plurality of output parts, and each output part is used as an input for predicting the next output part;
If the structured data comprises slots and values thereof, replacing placeholders of the slots in the text sentence pattern with values of corresponding slots in the structured data to obtain corresponding natural language text;
and if the structured data does not comprise slots and values thereof, taking the text sentence pattern as corresponding natural language text.
2. The method of generating natural language text according to claim 1, wherein,
The step of inputting the structured data into a natural language text generation model and outputting a corresponding text sentence pattern comprises the following steps:
inputting the structured data into a natural language text generation model to sequentially obtain a plurality of output parts, wherein each output part comprises a plurality of predicted text nodes;
And determining a plurality of text sentence patterns corresponding to the structured data according to the combination of the text nodes in each output part.
3. The method of generating natural language text according to claim 2, wherein,
The determining a plurality of text sentence patterns corresponding to the structured data according to the combination of the text nodes in each output part comprises:
Selecting a preset number of text nodes with the largest prediction probability from each output part, wherein the preset number is plural;
combining the selected text nodes in each output part;
And selecting the preset number of text node combinations from the text node combinations as a plurality of text sentence patterns corresponding to the structured data.
4. The method of generating natural language text according to claim 2, wherein,
The step of inputting the structured data into a natural language text generation model to sequentially obtain a plurality of output parts comprises the following steps:
and inputting the structured data into a natural language text generation model to sequentially obtain a plurality of output parts, wherein each output part is used as an input for predicting the next output part.
5. The method of generating natural language text according to claim 1, wherein,
The structured data is one-hot encoded.
6. The method of generating natural language text according to claim 1, wherein,
The training process of the natural language text generation model comprises the following steps:
determining total loss according to the loss between the predicted word in each output part of the natural language text generation model and the actual word of the corresponding part in the text template;
And when the gradient calculated based on the total loss meets a preset condition, the training of the recurrent neural network is completed, and the trained recurrent neural network is used as the natural language text generation model.
7. The natural language text generating method of claim 1, further comprising:
Obtaining structured data of replies generated by a dialogue system based on user requests;
And converting the replied structured data generated by the dialogue system into corresponding natural language text by using the natural language text generation method, and outputting the corresponding natural language text.
8. A natural language text generation apparatus, comprising:
A memory; and
A processor coupled to the memory, the processor configured to perform the natural language text generation method of any one of claims 1-7 based on instructions stored in the memory.
9. A dialog system, comprising:
A structured data generation device configured to generate structured data of the reply based on the user request;
And
The natural language text generating device of claim 8, configured to convert the replied structured data into corresponding natural language text and output it.
10. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the natural language text generation method of any one of claims 1-7.
CN201911036989.0A 2019-10-29 2019-10-29 Natural language text generation method and device and dialogue system Active CN112800737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911036989.0A CN112800737B (en) 2019-10-29 2019-10-29 Natural language text generation method and device and dialogue system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911036989.0A CN112800737B (en) 2019-10-29 2019-10-29 Natural language text generation method and device and dialogue system

Publications (2)

Publication Number Publication Date
CN112800737A CN112800737A (en) 2021-05-14
CN112800737B true CN112800737B (en) 2024-06-18

Family

ID=75802995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911036989.0A Active CN112800737B (en) 2019-10-29 2019-10-29 Natural language text generation method and device and dialogue system

Country Status (1)

Country Link
CN (1) CN112800737B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593573B (en) * 2021-07-30 2024-01-12 思必驰科技股份有限公司 Machine interaction method and device
CN113869046B (en) * 2021-09-29 2022-10-04 阿波罗智联(北京)科技有限公司 Method, device and equipment for processing natural language text and storage medium
CN117592436A (en) * 2023-11-23 2024-02-23 知学云(北京)科技股份有限公司 Automatic document generation system based on artificial intelligence technology

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815486A (en) * 2018-12-25 2019-05-28 出门问问信息科技有限公司 Spatial term method, apparatus, equipment and readable storage medium storing program for executing

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9448992B2 (en) * 2013-06-04 2016-09-20 Google Inc. Natural language search results for intent queries
JP6789303B2 (en) * 2016-03-18 2020-11-25 グーグル エルエルシー Generation of text segment dependency analysis using neural networks
CN106021364B (en) * 2016-05-10 2017-12-12 百度在线网络技术(北京)有限公司 Foundation, image searching method and the device of picture searching dependency prediction model
CN107766559B (en) * 2017-11-06 2019-12-13 第四范式(北京)技术有限公司 training method, training device, dialogue method and dialogue system for dialogue model
US10431207B2 (en) * 2018-02-06 2019-10-01 Robert Bosch Gmbh Methods and systems for intent detection and slot filling in spoken dialogue systems
CN108334497A (en) * 2018-02-06 2018-07-27 北京航空航天大学 The method and apparatus for automatically generating text
CN109063035B (en) * 2018-07-16 2021-11-09 哈尔滨工业大学 Man-machine multi-turn dialogue method for trip field
CN110119765B (en) * 2019-04-18 2021-04-06 浙江工业大学 Keyword extraction method based on Seq2Seq framework
WO2021000362A1 (en) * 2019-07-04 2021-01-07 浙江大学 Deep neural network model-based address information feature extraction method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815486A (en) * 2018-12-25 2019-05-28 出门问问信息科技有限公司 Spatial term method, apparatus, equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN112800737A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
US10515155B2 (en) Conversational agent
US10503834B2 (en) Template generation for a conversational agent
CN112800737B (en) Natural language text generation method and device and dialogue system
CN110543552B (en) Conversation interaction method and device and electronic equipment
CN112765306B (en) Intelligent question-answering method, intelligent question-answering device, computer equipment and storage medium
CN109977201B (en) Machine chat method and device with emotion, computer equipment and storage medium
CN110245221B (en) Method and computer device for training dialogue state tracking classifier
CN112487168B (en) Semantic question-answering method and device of knowledge graph, computer equipment and storage medium
CN111209740A (en) Text model training method, text error correction method, electronic device and storage medium
CN108959388B (en) Information generation method and device
EP3486842A1 (en) Template generation for a conversational agent
CN111813923A (en) Text summarization method, electronic device and storage medium
CN111400481A (en) Method and device for generating reply sentences aiming at multiple rounds of conversations
CN116151132A (en) Intelligent code completion method, system and storage medium for programming learning scene
EP3525107A1 (en) Conversational agent
CN108664464B (en) Method and device for determining semantic relevance
CN116719520A (en) Code generation method and device
CN113296755A (en) Code structure tree library construction method and information push method
CN113065322B (en) Code segment annotation generation method and system and readable storage medium
CN113553847A (en) Method, device, system and storage medium for parsing address text
CN112395880A (en) Error correction method and device for structured triples, computer equipment and storage medium
KR101839121B1 (en) System and method for correcting user's query
CN111126047B (en) Method and device for generating synonymous text
CN111695350B (en) Word segmentation method and word segmentation device for text
CN114564493A (en) Auxiliary control mechanism for complex query processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Digital Technology Holding Co.,Ltd.

Address after: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant