CN111008267B - Intelligent dialogue method and related equipment - Google Patents


Info

Publication number
CN111008267B
Authority
CN
China
Prior art keywords
sentence
sentences
target
question
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911034425.3A
Other languages
Chinese (zh)
Other versions
CN111008267A (en)
Inventor
刘涛 (Liu Tao)
许开河 (Xu Kaihe)
王少军 (Wang Shaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201911034425.3A priority Critical patent/CN111008267B/en
Priority to PCT/CN2019/117542 priority patent/WO2021082070A1/en
Publication of CN111008267A publication Critical patent/CN111008267A/en
Application granted granted Critical
Publication of CN111008267B publication Critical patent/CN111008267B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the field of voice semantics, and in particular to an intelligent dialogue method and related equipment applied to an electronic device, wherein the method comprises the following steps: determining N first question sentences based on a target question sentence input by a user, wherein each first question sentence is associated with one first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first question sentences; taking a target answer sentence as the answer sentence of the target question sentence, wherein the target answer sentence is the first answer sentence associated with the first question sentence corresponding to a target parameter, and the N first parameters comprise the target parameter; and outputting the target answer sentence. By adopting the embodiments of the application, questions that do not appear in the corpus can be answered while keeping the answer sentences controllable.

Description

Intelligent dialogue method and related equipment
Technical Field
The application relates to the technical field of electronics, in particular to an intelligent dialogue method and related equipment.
Background
Intelligent dialogue is an important application in the field of artificial intelligence. Humans naturally have the ability to analyze dialogue state, topic, and mood, so endowing machines with this capability is of great significance. At present, intelligent dialogue is mainly realized based on two models: a generation model and a rule model. The generation model can answer questions that do not appear in the corpus, but its answer sentences are uncontrollable; the rule model produces controllable answer sentences, but cannot answer questions that do not appear in the corpus. How to answer, in a controllable manner, questions that do not appear in the corpus is therefore a technical problem that needs to be solved.
Disclosure of Invention
The embodiments of the application provide an intelligent dialogue method and related equipment, which are used to answer, in a controllable manner, questions that do not appear in the corpus.
In a first aspect, an embodiment of the present application provides an intelligent dialogue method, applied to an electronic device, where the method includes:
Determining N first question sentences based on a target question sentence input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with one first answer sentence;
Determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first question sentences, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentence;
Taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter;
And outputting the target answer sentence.
In a second aspect, an embodiment of the present application provides an intelligent dialogue apparatus, applied to an electronic device, where the apparatus includes:
The determining unit is used for determining N first question sentences based on a target question sentence input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with one first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first question sentences, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentence; and taking a target answer sentence as the answer sentence of the target question sentence, wherein the target answer sentence is the first answer sentence associated with the first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter;
and the output unit is used for outputting the target answer sentence.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing part or all of the steps described in the method of the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement some or all of the steps described in the method according to the first aspect of the embodiment of the present application.
It can be seen that in the embodiments of the application, N first question sentences are determined based on a target question sentence input by a user; N first parameters are then determined based on a preset neural network model, the N first parameters being used to evaluate the similarity between the corresponding first question sentences and the target question sentence; the first answer sentence associated with the first question sentence whose first parameter is greater than or equal to a second threshold is taken as the answer sentence of the target question sentence; and finally the answer sentence is output. Determining the N first question sentences based on the target question sentence performs a coarse screening, which ensures the controllability of the answer sentences; determining the N first parameters based on the preset neural network model allows questions that do not appear in the corpus to be answered flexibly.
These and other aspects of the application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2A is a flow chart of an intelligent dialogue method according to an embodiment of the present application;
FIG. 2B is a schematic diagram of a sentence similarity calculation process according to an embodiment of the present application;
FIG. 3 is a flow chart of an intelligent dialogue method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an intelligent dialogue device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
The following will describe in detail.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the following, some terms used in the present application are explained for easy understanding by those skilled in the art.
The electronic devices may include various handheld devices, vehicle-mounted devices, wearable devices (e.g., smart watches, smart bracelets, pedometers, etc.), computing devices or other processing devices communicatively coupled to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, etc. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
As shown in fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor, a memory, a signal processor, a transceiver, a display screen, a speaker, a microphone, a random access memory (RAM), a camera, a sensor, and the like. The memory, the signal processor, the display screen, the speaker, the microphone, the RAM, the camera, and the sensor are connected to the processor, and the transceiver is connected to the signal processor.
The display screen may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, or the like.
The camera may be a normal camera or an infrared camera, which is not limited herein. The camera may be a front camera or a rear camera, which is not limited herein.
Wherein the sensor comprises at least one of: light sensing sensors, gyroscopes, infrared proximity sensors, fingerprint sensors, pressure sensors, etc. Wherein a light sensor, also called ambient light sensor, is used to detect the ambient light level. The light sensor may comprise a photosensitive element and an analog-to-digital converter. The photosensitive element is used for converting the collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the optical sensor may further include a signal amplifier, where the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The processor is a control center of the electronic device, and is connected with various parts of the whole electronic device by various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, so that the electronic device is monitored as a whole.
The processor may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes the software programs and/or modules stored in the memory so as to execute various functional applications of the electronic device and data processing. The memory may mainly include a memory program area and a memory data area, wherein the memory program area may store an operating system, a software program required for at least one function, and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
Embodiments of the present application are described in detail below.
Referring to fig. 2A, fig. 2A is a flow chart of an intelligent dialogue method applied to an electronic device according to an embodiment of the application, where the method includes:
Step 201: n first question sentences are determined based on target question sentences input by a user, the similarity between each first question sentence and the target question sentences is greater than or equal to a first threshold value, N is an integer greater than 1, and each first question sentence is associated with one first answer sentence.
The information input by the user may be voice, text, or a picture; the input information is then analyzed to obtain the target question sentence.
Where N may be, for example, 5, 10, 15, 20, or other values, without limitation.
The first threshold may be, for example, 80%, 85%, 90%, 95%, or other values, which are not limited herein.
Step 202: n first parameters are determined based on a preset neural network model, the N first parameters are in one-to-one correspondence with the N first problem sentences, and the N first parameters are used for evaluating the similarity between the corresponding first problem sentences and the target problem sentences.
Step 203: and taking the target answer sentence as the answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter.
Wherein, the first threshold value and the second threshold value are both preset values.
For example, suppose 3 first question sentences are determined, with first parameter values of 80%, 85%, and 90%, respectively. The target parameter may then be 90%, and the first answer sentence associated with the first question sentence corresponding to 90% is taken as the answer sentence of the target question sentence.
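The selection of the target answer sentence described above can be sketched in Python as follows. This is an illustrative example, not part of the original disclosure; the function and variable names, and the representation of candidates as (question, answer, parameter) triples, are assumptions made for demonstration.

```python
# Illustrative sketch: choosing the target answer sentence from candidate
# (first question, first answer, first parameter) triples. A candidate is
# eligible when its first parameter reaches the second threshold; among
# eligible candidates, the one with the highest parameter is chosen.

def select_target_answer(candidates, second_threshold):
    """candidates: list of (first_question, first_answer, first_parameter)."""
    eligible = [c for c in candidates if c[2] >= second_threshold]
    if not eligible:
        return None  # no first question sentence is similar enough
    return max(eligible, key=lambda c: c[2])[1]

candidates = [
    ("q1", "answer for q1", 0.80),
    ("q2", "answer for q2", 0.85),
    ("q3", "answer for q3", 0.90),
]
print(select_target_answer(candidates, 0.85))  # answer for q3
```

With a second threshold of 0.85, both the 85% and 90% candidates are eligible, and the answer associated with the 90% question is returned.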
Step 204: and outputting the target answer sentence.
The target answer sentence may be output by voice, or may be output by text, which is not limited herein.
It can be seen that in the embodiments of the application, N first question sentences are determined based on a target question sentence input by a user; N first parameters are then determined based on a preset neural network model, the N first parameters being used to evaluate the similarity between the corresponding first question sentences and the target question sentence; the first answer sentence associated with the first question sentence whose first parameter is greater than or equal to a second threshold is taken as the answer sentence of the target question sentence; and finally the answer sentence is output. Determining the N first question sentences based on the target question sentence performs a coarse screening, which ensures the controllability of the answer sentences; determining the N first parameters based on the preset neural network model allows questions that do not appear in the corpus to be answered flexibly.
In an implementation manner of the present application, the determining N first question sentences based on the target question sentences input by the user includes:
Acquiring a target problem statement input by a user;
Determining M second question sentences from a preset corpus based on a literal search, and determining W third question sentences from the preset corpus based on a semantic search, wherein the keywords of the literal search are determined based on the target question sentence, the literal similarity between each second question sentence and the target question sentence is greater than or equal to a third threshold, the semantic similarity between each third question sentence and the target question sentence is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and M and W are integers greater than 0;
determining N first question sentences based on the M second question sentences and the W third question sentences, wherein the N first question sentences comprise at least one second question sentence and at least one third question sentence.
Specifically, the target question sentence is composed of a first character set, where the first character set comprises P first characters and P is an integer greater than 0. The specific implementation of determining the M second question sentences from the preset corpus based on the literal search is as follows: searching the preset corpus using at least one of the P first characters as a keyword to obtain Q fifth question sentences; selecting M fifth question sentences from the Q fifth question sentences; and determining the M fifth question sentences as the M second question sentences.
The M fifth question sentences may be any M fifth question sentences selected manually, the M top-ranked fifth question sentences after the search, or the M fifth question sentences containing the most keywords, which is not limited herein.
Further, the number of first characters included in the M second question sentences is greater than or equal to the number of first characters included in the Q−M sixth question sentences, where the Q−M sixth question sentences are the Q fifth question sentences other than the M fifth question sentences.
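The literal (keyword) search described above can be sketched as follows. This is an illustrative sketch, not part of the original disclosure: the corpus contents are invented, and ranking candidates by the count of shared characters is one concrete reading of "containing the most keywords".

```python
# Hedged sketch of the literal search: the characters of the target question
# serve as keywords, and the M corpus questions sharing the most keyword
# characters with the target are kept as the second question sentences.

def literal_search(target, corpus, m):
    keywords = set(target)  # the P first characters of the target question
    # Rank corpus questions by how many keyword characters they contain.
    ranked = sorted(corpus, key=lambda q: len(keywords & set(q)), reverse=True)
    return ranked[:m]  # the M second question sentences

corpus = ["how is the weather", "recommend a bag", "weather tomorrow"]
hits = literal_search("weather", corpus, 2)
print(hits)
```

Here the two weather-related questions share every keyword character with the target and therefore outrank the unrelated one.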
The third threshold may be, for example, 60%, 70%, 80%, 90%, or other values, which are not limited herein; the fourth threshold may be, for example, 60%, 70%, 80%, 90%, or other values, without limitation.
Specifically, a specific manner of determining the N first question sentences based on the M second question sentences and the W third question sentences is as follows: determining n·N second question sentences from the M second question sentences, and (1−n)·N third question sentences from the W third question sentences; and taking the n·N second question sentences and the (1−n)·N third question sentences as the N first question sentences.
Where n is a number greater than 0 and less than 1, and may be, for example, 0.1, 0.2, 0.3, 0.4, or other values, without limitation.
The literal similarity between each of the n·N second question sentences and the target question sentence is greater than or equal to a fifth threshold, and the semantic similarity between each of the (1−n)·N third question sentences and the target question sentence is greater than or equal to a sixth threshold; the fifth threshold may or may not be equal to the sixth threshold, which is not limited herein.
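The mixing of literal and semantic candidates in the ratio n : (1 − n) can be sketched as follows. This is an illustrative example, not part of the original disclosure; n = 0.4 and the candidate labels are assumed values, and rounding n·N to the nearest integer is a choice the patent leaves open.

```python
# Illustrative sketch: forming the N first question sentences from n*N
# literally matched candidates and (1-n)*N semantically matched candidates.

def merge_candidates(literal_hits, semantic_hits, n_total, ratio):
    k_literal = round(ratio * n_total)   # n * N second question sentences
    k_semantic = n_total - k_literal     # (1 - n) * N third question sentences
    return literal_hits[:k_literal] + semantic_hits[:k_semantic]

merged = merge_candidates(["L1", "L2", "L3"], ["S1", "S2", "S3"], 5, 0.4)
print(merged)  # ['L1', 'L2', 'S1', 'S2', 'S3']
```

With N = 5 and n = 0.4, two candidates come from the literal search and three from the semantic search.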
In an implementation manner of the present application, the determining W third question sentences from the preset corpus based on semantic search includes:
determining the sentence constituent components of the target question sentence;
filtering the target question sentence based on the sentence constituent components to obtain a fourth question sentence, wherein the sentence constituent components of the fourth question sentence are fewer than or equal to those of the target question sentence;
and determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold.
Wherein the sentence constituent components include at least one of the following: subject, predicate, object, attributive, adverbial, complement, and head word.
For example, the subject in the target question sentence is removed, thereby obtaining a sentence with the subject removed. The subject in the sentence may be, for example, a word such as "he", "she", "it", "they", "me", or "you". Illustratively, the target question sentence is "recommend a suitable bag for me", and the sentence after the subject is removed is "recommend a suitable bag".
In an implementation manner of the present application, the determining W third question sentences from the preset corpus based on semantic search includes:
performing word segmentation on the target question sentence to obtain a plurality of target words;
deleting the stop words among the target words based on a preset stop-word list to obtain a seventh question sentence;
and determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the seventh question sentence is greater than or equal to the fourth threshold.
Wherein stop words are words that carry no meaning in a sentence, such as interjections like "ah" and "oh". Illustratively, the target question sentence is "how is the weather tomorrow, ah", and the sentence after the stop word is removed is "how is the weather tomorrow".
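The stop-word filtering step above can be sketched as follows. This is an illustrative example, not part of the original disclosure; the stop-word list and the tokenized sentence are assumptions for demonstration.

```python
# Hedged sketch: removing stop words from the segmented target question
# based on a preset stop-word list, yielding the seventh question sentence.

STOP_WORDS = {"ah", "oh", "um"}  # assumed preset stop-word list

def remove_stop_words(tokens):
    return [t for t in tokens if t not in STOP_WORDS]

tokens = ["how", "is", "the", "weather", "tomorrow", "ah"]
print(remove_stop_words(tokens))  # ['how', 'is', 'the', 'weather', 'tomorrow']
```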
In an implementation manner of the present application, the determining N first parameters based on the preset neural network model includes:
determining N sentence similarities, N edit distances, and N Jaccard similarities between the target question sentence and the N first question sentences based on a preset neural network model, wherein the N sentence similarities, the N edit distances, and the N Jaccard similarities are each in one-to-one correspondence with the N first question sentences;
and determining N first parameters based on the N sentence similarities, the N edit distances, and the N Jaccard similarities, wherein the N first parameters are in one-to-one correspondence with the N sentence similarities, the N edit distances, and the N Jaccard similarities.
The sentence similarity refers to the similarity between the target problem sentence and the first problem sentence.
Wherein the edit distance refers to the minimum number of editing operations required to convert the first question sentence into the target question sentence.
Specifically, the determining N first parameters based on the N sentence similarities, the N edit distances, and the N Jaccard similarities includes:
converting the N edit distances into N first similarities;
determining a first weight, a second weight, and a third weight, wherein the first weight represents the proportion of the sentence similarity when evaluating the first parameter, the second weight represents the proportion of the first similarity when evaluating the first parameter, the third weight represents the proportion of the Jaccard similarity when evaluating the first parameter, and the sum of the first weight, the second weight, and the third weight is 1;
and determining N first parameters based on the first weight, the second weight, the third weight, the N sentence similarities, the N first similarities, the N Jaccard similarities, and a first parameter formula.
For example, table 2 is a one-to-one correspondence table of editing distances and first similarities provided in an embodiment of the present application.
TABLE 2
Further, the first parameter formula is: S = a·A + b·B + c·C, where S is the first parameter, a, b, and c are the first, second, and third weights, respectively, and A, B, and C are the sentence similarity, the first similarity, and the Jaccard similarity, respectively.
For example, with a = 0.3, b = 0.5, c = 0.2, A = 80%, B = 90%, and C = 80%, S = 0.3 × 80% + 0.5 × 90% + 0.2 × 80% = 85%.
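The weighted combination can be verified with a short worked example (an illustrative sketch, not part of the original disclosure; function and argument names are assumptions):

```python
# Worked example of the first-parameter formula S = a*A + b*B + c*C, using
# the weights and similarity values from the example above.

def first_parameter(a, b, c, A, B, C):
    assert abs((a + b + c) - 1.0) < 1e-9  # the three weights must sum to 1
    return a * A + b * B + c * C

S = first_parameter(0.3, 0.5, 0.2, 0.80, 0.90, 0.80)
print(round(S, 2))  # 0.85
```

The result, 0.3 × 0.80 + 0.5 × 0.90 + 0.2 × 0.80 = 0.85, matches the 85% stated in the text.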
In an implementation manner of the present application, the determining, based on a preset neural network model, N sentence similarities between the target question sentence and the N first question sentences includes:
Converting the target question sentence into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors are in one-to-one correspondence with the N first question sentences;
Extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors are in one-to-one correspondence with the N second sentence vectors;
And determining the sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
Further, the target question sentence is formed by a first character set, the first character set includes P first characters, and a specific implementation manner of converting the target question sentence into the first sentence vector includes: converting the P first characters into P word vectors; and combining the P word vectors to obtain a first sentence vector.
It should be noted that the manner of converting the P first characters into P word vectors may be at least one of the following: a Bidirectional Encoder Representations from Transformers (BERT) model, an Embeddings from Language Models (ELMo) model, or a word2vec model.
Wherein the sentence similarity calculation formula is A = softmax(f(h_a, h_b)), where h_a and h_b are the first target vector and the second target vector, respectively.
As shown in fig. 2B, fig. 2B is a schematic diagram illustrating a process for calculating sentence similarity according to an embodiment of the present application. The target question sentence is "He is smart": the word vector of "He" is x_1^a, the word vector of "is" is x_2^a, and the word vector of "smart" is x_3^a; feature information of x_1^a, x_2^a, and x_3^a is then extracted by the LSTM_a network to obtain h_1^a, h_2^a, and h_3^a. Similarly, the first question sentence is "A truly wise man": the word vector of "A" is x_1^b, the word vector of "truly" is x_2^b, the word vector of "wise" is x_3^b, and the word vector of "man" is x_4^b; feature information of x_1^b, x_2^b, x_3^b, and x_4^b is then extracted by the LSTM_b network to obtain h_1^b, h_2^b, h_3^b, and h_4^b. Finally, the sentence similarity A is obtained through the sentence similarity calculation formula f(h_a, h_b) and output.
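Once the encoders have produced fixed-length sentence representations, the scoring step can be sketched as follows. This is a hedged sketch, not the patent's implementation: the LSTM encoders are out of scope here, the dot-product form of the scorer f is an assumption (the patent does not fix it), and the vectors are invented toy values.

```python
import math

# Hedged sketch: given encoded vectors, score the target against each
# candidate with a dot product, then apply a softmax over all N scores so
# the similarities are comparable across candidates.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

h_target = [0.2, 0.7, 0.1]                       # encoded target question
candidates = [[0.2, 0.6, 0.2], [0.9, 0.1, 0.0]]  # encoded first questions
sims = softmax([dot(h_target, h) for h in candidates])
print(sims)
```

The first candidate, whose vector lies closer to the target's, receives the higher similarity.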
In an implementation manner of the present application, the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets are in one-to-one correspondence with the N first question sentences; the determining N edit distances between the target question sentence and the N first question sentences based on the preset neural network model includes:
determining a minimum number of editing operations required to convert the first character set into each second character set;
And determining the obtained N minimum numbers of editing operations as N editing distances, wherein the N editing distances are in one-to-one correspondence with the N minimum numbers of editing operations.
Wherein the editing operation includes at least one of: insertion, deletion, replacement.
For example, consider the two words "kitten" and "sitting". The minimum single-character editing operations required to convert "kitten" into "sitting" are: first, kitten → sitten (replace "k" with "s"); second, sitten → sittin (replace "e" with "i"); third, sittin → sitting (insert "g" at the end of the word). Thus, the edit distance between the two words "kitten" and "sitting" is 3.
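The minimum number of editing operations in this example can be computed with the standard dynamic-programming recurrence for edit (Levenshtein) distance; a minimal sketch:

```python
def edit_distance(a, b):
    """Minimum number of single-character insertions, deletions, and
    replacements needed to convert string a into string b."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all i characters of a
    for j in range(n + 1):
        dp[0][j] = j  # insert all j characters of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # replacement (or match)
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # → 3
```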
In an implementation manner of the present application, the determining, based on a preset neural network model, N Jaccard similarities between the target question sentence and the N first question sentences includes:
determining N intersections and N union sets of the first character set and the N second character sets, wherein the N intersections and the N union sets are in one-to-one correspondence with the N second character sets;
N Jaccard similarities are determined based on the N intersections and the N union sets, wherein the N Jaccard similarities are in one-to-one correspondence with the N intersections and the N union sets.
Further, the first character set includes P first characters and the second character set includes Q second characters, of which R characters are common to both sets; the intersection of the first character set and the second character set then contains R characters, the union contains P+Q-R characters, and the Jaccard similarity is R/(P+Q-R), where R and Q are integers greater than 0.
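A minimal sketch of the R/(P+Q-R) computation over the two character sets (note that forming sets collapses duplicate characters, consistent with the set-based definition above):

```python
def jaccard_similarity(chars_a, chars_b):
    """R / (P + Q - R): size of the intersection of the two character
    sets divided by the size of their union."""
    set_a, set_b = set(chars_a), set(chars_b)
    r = len(set_a & set_b)  # R characters common to both sets
    return r / (len(set_a) + len(set_b) - r)

# "kitten" -> {k, i, t, e, n}, "sitting" -> {s, i, t, n, g};
# the intersection {i, t, n} has size 3, the union has size 7.
print(jaccard_similarity("kitten", "sitting"))  # → 0.42857142857142855
```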
Referring to fig. 3, fig. 3 is a schematic flow chart of an intelligent dialogue method according to an embodiment of the present application, consistent with the embodiment shown in fig. 2A, and the method is applied to an electronic device, and includes:
step 301: and acquiring a target problem statement input by a user, wherein the target problem statement consists of a first character set.
Step 302: and determining M second problem sentences from a preset corpus based on literal search, wherein the keyword of the literal search is determined based on the target problem sentences, the literal similarity of each second problem sentence and the target problem sentence is larger than or equal to a third threshold value, and M is an integer larger than 0.
Step 303: and determining statement constituent components of the target problem statement.
Step 304: filtering the target problem statement based on the statement constituent components to obtain a fourth problem statement, wherein the statement constituent components of the fourth problem statement are less than or equal to the statement constituent components of the target problem statement.
Step 305: determining W third problem sentences from the preset corpus, wherein the semantic similarity of each third problem sentence and the fourth problem sentence is larger than or equal to the fourth threshold value, and W is an integer larger than 0.
Step 306: determining N first problem sentences based on the M second problem sentences and the W third problem sentences, wherein the N first problem sentences comprise at least one second problem sentence and at least one third problem sentence, the similarity of each first problem sentence and the target problem sentence is larger than or equal to a first threshold value, the first threshold value is larger than or equal to a third threshold value, the first threshold value is larger than or equal to a fourth threshold value, the N first problem sentences are composed of N second character sets, and the N second character sets are in one-to-one correspondence with the N first problem sentences.
Step 307: the target question sentences are converted into first sentence vectors, the N first question sentences are converted into N second sentence vectors, and the N second sentence vectors are in one-to-one correspondence with the N first question sentences.
Step 308: extracting the characteristic information of the first sentence vector to obtain a first target vector, and extracting the characteristic information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors are in one-to-one correspondence with the N second sentence vectors.
Step 309: and determining the sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
Step 310: a minimum number of editing operations required to convert the first character set into each second character set is determined.
Step 311: and determining the obtained N minimum numbers of editing operations as N editing distances, wherein the N editing distances are in one-to-one correspondence with the N minimum numbers of editing operations.
Step 312: and determining N intersections and N union sets of the first character set and the N second character sets, wherein the N intersections and the N union sets are in one-to-one correspondence with the N second character sets.
Step 313: n Jacquard similarities are determined based on the N intersections and the N union sets, and the N Jacquard similarities correspond to the N intersections and the N union sets uniformly.
Step 314: and determining N first parameters based on the N sentence similarities, the N editing distances and the N Jaccard similarities, wherein the N first parameters are in one-to-one correspondence with the N sentence similarities, the N editing distances and the N Jaccard similarities.
Step 315: and taking the target answer sentence as the answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter.
Step 316: and outputting the target answer sentence.
It should be noted that step 302 and steps 303-305 may be executed simultaneously, or step 302 may be executed before steps 303-305, or steps 303-305 may be executed before step 302. Similarly, steps 307-309, steps 310-311 and steps 312-314 may be executed simultaneously, or in any order relative to one another, which is not limited herein. For the implementation of this embodiment, reference may be made to the implementation of the foregoing method embodiment, which is not described again herein.
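A toy sketch of how steps 309-315 might combine the three signals into a first parameter and pick the best-scoring candidate (the weights and the 1/(1+d) normalization of the edit distance are assumptions; the embodiment does not specify how the three metrics are combined):

```python
def first_parameter(sentence_sim, edit_dist, jaccard_sim,
                    weights=(0.5, 0.2, 0.3)):
    """Combine the three signals into a single score per candidate.
    The weights and the edit-distance normalization are assumptions."""
    edit_sim = 1.0 / (1.0 + edit_dist)  # map distance to a (0, 1] similarity
    w1, w2, w3 = weights
    return w1 * sentence_sim + w2 * edit_sim + w3 * jaccard_sim

# Toy candidate first question sentences with their three metrics:
# (text, sentence similarity, edit distance, Jaccard similarity).
candidates = [
    ("A truly wise man", 0.92, 3, 3 / 7),
    ("He runs fast", 0.40, 5, 0.25),
]
scored = [(text, first_parameter(s, d, j)) for text, s, d, j in candidates]
best = max(scored, key=lambda pair: pair[1])
print(best[0])  # → A truly wise man
```

The answer sentence associated with the highest-scoring candidate would then be returned, provided its score meets the second threshold.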
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, consistent with the embodiments shown in fig. 2A and fig. 3. The electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
Determining N first question sentences based on target question sentences input by a user, wherein the similarity between each first question sentence and the target question sentences is greater than or equal to a first threshold value, N is an integer greater than 1, and each first question sentence is associated with one first answer sentence;
Determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first problem sentences, and the N first parameters are used for evaluating the similarity between the corresponding first problem sentences and the target problem sentences;
Taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter;
And outputting the target answer sentence.
In an implementation of the present application, in determining N first question sentences based on the target question sentences input by the user, the program includes instructions specifically for performing the steps of:
Acquiring a target problem statement input by a user;
Determining M second problem sentences from a preset corpus based on literal search, and determining W third problem sentences from the preset corpus based on semantic search, wherein keywords of the literal search are determined based on the target problem sentences, the literal similarity of each second problem sentence and the target problem sentences is larger than or equal to a third threshold value, the semantic similarity of each third problem sentence and the target problem sentences is larger than or equal to a fourth threshold value, the first threshold value is larger than or equal to the third threshold value, the first threshold value is larger than or equal to the fourth threshold value, and the M and the W are integers larger than 0;
determining N first question sentences based on the M second question sentences and the W third question sentences, wherein the N first question sentences comprise at least one second question sentence and at least one third question sentence.
In an implementation manner of the present application, in determining W third problem sentences from the preset corpus based on semantic search, the program includes instructions specifically for performing the following steps:
determining statement constituent components of the target problem statement;
Filtering the target problem statement based on the statement constituent components to obtain a fourth problem statement, wherein the statement constituent components of the fourth problem statement are less than or equal to the statement constituent components of the target problem statement;
and determining W third problem sentences from the preset corpus, wherein the semantic similarity of each third problem sentence and the fourth problem sentence is larger than or equal to the fourth threshold value.
In an implementation of the present application, in determining N first parameters based on a preset neural network model, the program includes instructions specifically for:
Determining N sentence similarities, N editing distances and N Jaccard similarities of the target problem sentences and the N first problem sentences based on a preset neural network model, wherein the N sentence similarities, the N editing distances and the N Jaccard similarities are in one-to-one correspondence with the N first problem sentences;
and determining N first parameters based on the N sentence similarities, the N editing distances and the N Jaccard similarities, wherein the N first parameters are in one-to-one correspondence with the N sentence similarities, the N editing distances and the N Jaccard similarities.
In an implementation manner of the present application, in determining N sentence similarities between the target question sentence and the N first question sentences based on a preset neural network model, the program includes instructions specifically for executing the following steps:
Converting the target question sentence into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors are in one-to-one correspondence with the N first question sentences;
Extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors are in one-to-one correspondence with the N second sentence vectors;
And determining the sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
In an implementation manner of the present application, the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets are in one-to-one correspondence with the N first question sentences; in determining N edit distances between the target question sentence and the N first question sentences based on a preset neural network model, the program includes instructions specifically for:
determining a minimum number of editing operations required to convert the first character set into each second character set;
And determining the obtained N minimum numbers of editing operations as N editing distances, wherein the N editing distances are in one-to-one correspondence with the N minimum numbers of editing operations.
In an implementation manner of the present application, in determining N Jaccard similarities between the target question sentence and the N first question sentences based on a preset neural network model, the program includes instructions specifically for executing the following steps:
determining N intersections and N union sets of the first character set and the N second character sets, wherein the N intersections and the N union sets are in one-to-one correspondence with the N second character sets;
N Jaccard similarities are determined based on the N intersections and the N union sets, wherein the N Jaccard similarities are in one-to-one correspondence with the N intersections and the N union sets.
It should be noted that, the specific implementation process of this embodiment may refer to the specific implementation process described in the foregoing method embodiment, which is not described herein.
The foregoing embodiments mainly describe the solution of the embodiment of the present application from the point of view of the method-side execution process. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
According to the embodiment of the application, the electronic equipment can be divided into the functional units according to the method examples, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated into one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
The following is an embodiment of the apparatus according to the present application, which is configured to perform a method implemented by an embodiment of the method according to the present application. Referring to fig. 5, fig. 5 is a schematic structural diagram of an intelligent dialogue device according to an embodiment of the present application, which is applied to an electronic device, and the device includes:
A determining unit 501, configured to determine N first question sentences based on a target question sentence input by a user, where a similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with a first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first problem sentences, and the N first parameters are used for evaluating the similarity between the corresponding first problem sentences and the target problem sentences; taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter;
and an output unit 502, configured to output the target answer sentence.
In an implementation of the present application, in determining N first question sentences based on the target question sentences input by the user, the determining unit 501 includes an acquiring unit 5011, a first determining unit 5012, a second determining unit 5013, and a third determining unit 5014, in which:
the acquiring unit 5011 is configured to acquire a target question sentence input by a user;
The first determining unit 5012 is configured to determine M second question sentences from a preset corpus based on a literal search, where keywords of the literal search are determined based on the target question sentences;
The second determining unit 5013 is configured to determine W third problem sentences from the preset corpus based on semantic search, where a literal similarity between each second problem sentence and the target problem sentence is greater than or equal to a third threshold, a semantic similarity between each third problem sentence and the target problem sentence is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and both M and W are integers greater than 0;
The third determining unit 5014 is configured to determine N first question sentences based on the M second question sentences and the W third question sentences, where the N first question sentences include at least one second question sentence and at least one third question sentence.
In an implementation manner of the present application, in determining W third question sentences from the preset corpus based on semantic search, the second determining unit 5013 is specifically configured to determine sentence components of the target question sentences; filtering the target problem statement based on the statement constituent components to obtain a fourth problem statement, wherein the statement constituent components of the fourth problem statement are less than or equal to the statement constituent components of the target problem statement; and determining W third problem sentences from the preset corpus, wherein the semantic similarity of each third problem sentence and the fourth problem sentence is larger than or equal to the fourth threshold value.
In an implementation manner of the present application, in determining N first parameters based on a preset neural network model, the determining unit 501 further includes a fourth determining unit 5015 and a fifth determining unit 5016, where:
The fourth determining unit 5015 is configured to determine N sentence similarities, N edit distances, and N Jaccard similarities between the target question sentence and the N first question sentences based on a preset neural network model, where the N sentence similarities, the N edit distances, and the N Jaccard similarities are all in one-to-one correspondence with the N first question sentences;
The fifth determining unit 5016 is configured to determine N first parameters based on the N sentence similarities, the N edit distances, and the N Jaccard similarities, where the N first parameters are in one-to-one correspondence with the N sentence similarities, the N edit distances, and the N Jaccard similarities.
In an implementation manner of the present application, in determining N sentence similarities between the target question sentence and the N first question sentences based on a preset neural network model, the fourth determining unit 5015 is specifically configured to:
Converting the target question sentence into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors are in one-to-one correspondence with the N first question sentences;
Extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors are in one-to-one correspondence with the N second sentence vectors;
And determining the sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
In an implementation manner of the present application, the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets are in one-to-one correspondence with the N first question sentences; in determining N edit distances between the target question sentence and the N first question sentences based on a preset neural network model, the fourth determining unit 5015 is specifically configured to:
determining a minimum number of editing operations required to convert the first character set into each second character set;
And determining the obtained N minimum numbers of editing operations as N editing distances, wherein the N editing distances are in one-to-one correspondence with the N minimum numbers of editing operations.
In an implementation manner of the present application, in determining N Jaccard similarities between the target question sentence and the N first question sentences based on a preset neural network model, the fourth determining unit 5015 is specifically configured to:
determining N intersections and N union sets of the first character set and the N second character sets, wherein the N intersections and the N union sets are in one-to-one correspondence with the N second character sets;
N Jaccard similarities are determined based on the N intersections and the N union sets, wherein the N Jaccard similarities are in one-to-one correspondence with the N intersections and the N union sets.
The acquiring unit 5011, the first determining unit 5012, the second determining unit 5013, the third determining unit 5014, the fourth determining unit 5015, the fifth determining unit 5016, and the output unit 502 may be implemented by a processor. The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the above method embodiments, the computer including an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the above-mentioned method of the various embodiments of the present application. And the aforementioned memory includes: a usb disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The foregoing has outlined rather broadly the more detailed description of embodiments of the application, wherein the principles and embodiments of the application are explained in detail using specific examples, the above examples being provided solely to facilitate the understanding of the method and core concepts of the application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.

Claims (7)

1. An intelligent conversation method, characterized by being applied to an electronic device, the method comprising:
Determining N first question sentences based on target question sentences input by a user, wherein the similarity between each first question sentence and the target question sentences is greater than or equal to a first threshold value, N is an integer greater than 1, and each first question sentence is associated with one first answer sentence; comprising the following steps:
Acquiring a target problem statement input by a user;
Determining M second problem sentences from a preset corpus based on literal search, and determining W third problem sentences from the preset corpus based on semantic search, wherein keywords of the literal search are determined based on the target problem sentences, the literal similarity of each second problem sentence and the target problem sentences is larger than or equal to a third threshold value, the semantic similarity of each third problem sentence and the target problem sentences is larger than or equal to a fourth threshold value, the first threshold value is larger than or equal to the third threshold value, the first threshold value is larger than or equal to the fourth threshold value, and the M and the W are integers larger than 0;
The target question sentence is composed of a first character set, the first character set comprises P first characters, P is an integer greater than 0, and M second question sentences are determined from a preset corpus based on literal search, and the method comprises the following steps: searching in a preset corpus by taking at least one first character in the P first characters as a keyword to obtain Q fifth problem sentences; selecting M fifth question sentences from the Q fifth question sentences; determining the M fifth problem sentences as M second problem sentences;
The determining W third problem sentences from the preset corpus based on semantic search includes: determining statement constituent components of the target problem statement; filtering the target problem statement based on the statement constituent components to obtain a fourth problem statement, wherein the statement constituent components of the fourth problem statement are less than or equal to the statement constituent components of the target problem statement; determining W third problem sentences from the preset corpus, wherein the semantic similarity of each third problem sentence and the fourth problem sentence is larger than or equal to the fourth threshold value;
Determining n×N second question sentences from the M second question sentences, and determining (1-n)×N third question sentences from the W third question sentences; and taking the n×N second question sentences and the (1-n)×N third question sentences as the N first question sentences, wherein n is a number greater than 0 and less than 1;
Determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first problem sentences, and the N first parameters are used for evaluating the similarity between the corresponding first problem sentences and the target problem sentences;
The determining N first parameters based on the preset neural network model includes: determining N sentence similarities, N editing distances and N Jaccard similarities of the target problem sentences and the N first problem sentences based on a preset neural network model, wherein the N sentence similarities, the N editing distances and the N Jaccard similarities are in one-to-one correspondence with the N first problem sentences; and determining N first parameters based on the N sentence similarities, the N editing distances and the N Jaccard similarities, wherein the N first parameters are in one-to-one correspondence with the N sentence similarities, the N editing distances and the N Jaccard similarities;
Taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter;
And outputting the target answer sentence.
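The scoring and selection in claim 1 can be sketched as follows. The fusion weights and the second-threshold value are illustrative assumptions; the claim only requires that the three signals yield one first parameter per candidate and that the chosen parameter meets the second threshold.

```python
def select_answer(candidates, second_threshold=0.8):
    """Fuse sentence similarity, (normalized) edit distance, and Jaccard
    similarity into a "first parameter" per candidate, then return the
    answer whose parameter is highest and >= the second threshold.

    Each candidate is a dict with keys "sentence_similarity",
    "normalized_edit_distance" (0 = identical), "jaccard", "answer".
    The 0.6/0.2/0.2 weights are assumptions, not fixed by the claims.
    """
    best = None
    for c in candidates:
        score = (0.6 * c["sentence_similarity"]
                 + 0.2 * (1.0 - c["normalized_edit_distance"])
                 + 0.2 * c["jaccard"])
        if score >= second_threshold and (best is None or score > best[0]):
            best = (score, c["answer"])
    return best[1] if best is not None else None
```

If no candidate reaches the second threshold, the sketch returns `None`; a deployed system would fall back to a default reply.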
2. The method of claim 1, wherein determining N sentence similarities of the target question sentence and the N first question sentences based on a preset neural network model comprises:
Converting the target question sentence into a first sentence vector, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors are in one-to-one correspondence with the N first question sentences;
Extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors are in one-to-one correspondence with the N second sentence vectors;
And determining the sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
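Claim 2 leaves the "sentence similarity calculation formula" unspecified; cosine similarity of the two target vectors is one common choice, used here purely as an assumed example.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length sentence vectors:
    dot(u, v) / (||u|| * ||v||), in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # convention for zero vectors
    return dot / (norm_u * norm_v)
```

Identical directions score 1.0 and orthogonal vectors score 0.0, which makes the result directly usable as one of the N sentence similarities.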
3. The method of claim 2, wherein the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets are in one-to-one correspondence with the N first question sentences; the determining N edit distances between the target question sentence and the N first question sentences based on the preset neural network model includes:
determining a minimum number of editing operations required to convert the first character set into each second character set;
And determining the obtained N minimum editing operation counts as N edit distances, wherein the N edit distances are in one-to-one correspondence with the N minimum editing operation counts.
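The "minimum number of editing operations" between two character sets is the classic Levenshtein edit distance; a standard dynamic-programming sketch (not the patent's own implementation) is:

```python
def edit_distance(a, b):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b (Levenshtein)."""
    prev = list(range(len(b) + 1))            # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                            # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,      # deletion from a
                            curr[j - 1] + 1,  # insertion into a
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]
```

Running this once per first question sentence against the target question sentence yields the N edit distances.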
4. A method according to claim 3, wherein said determining N Jaccard similarities of said target question sentence and said N first question sentences based on a preset neural network model comprises:
determining N intersections and N union sets of the first character set and the N second character sets, wherein the N intersections and the N union sets are in one-to-one correspondence with the N second character sets;
Determining N Jaccard similarities based on the N intersections and the N union sets, wherein the N Jaccard similarities are in one-to-one correspondence with the N intersections and the N union sets.
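The intersection/union computation in claim 4 is the standard Jaccard similarity over the character sets of the two sentences; a minimal sketch:

```python
def jaccard_similarity(a, b):
    """Jaccard similarity of the character sets of two sentences:
    |A ∩ B| / |A ∪ B|, in [0, 1]."""
    set_a, set_b = set(a), set(b)
    union = set_a | set_b
    if not union:
        return 1.0  # two empty sentences are treated as identical
    return len(set_a & set_b) / len(union)
```

Applied to the first character set and each of the N second character sets, this yields the N Jaccard similarities.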
5. An intelligent conversation apparatus for application to an electronic device, the apparatus comprising:
The determining unit is used for determining N first question sentences based on a target question sentence input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with one first answer sentence; comprising the following steps:
Acquiring the target question sentence input by the user;
Determining M second question sentences from a preset corpus based on literal search, and determining W third question sentences from the preset corpus based on semantic search, wherein keywords of the literal search are determined based on the target question sentence, the literal similarity between each second question sentence and the target question sentence is greater than or equal to a third threshold, the semantic similarity between each third question sentence and the target question sentence is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and M and W are integers greater than 0;
The target question sentence is composed of a first character set, the first character set includes P first characters, P is an integer greater than 0, and the determining M second question sentences from the preset corpus based on literal search includes: searching the preset corpus using at least one of the P first characters as a keyword to obtain Q fifth question sentences; selecting M fifth question sentences from the Q fifth question sentences; and determining the M fifth question sentences as the M second question sentences;
The determining W third question sentences from the preset corpus based on semantic search includes: determining sentence components of the target question sentence; filtering the target question sentence based on the sentence components to obtain a fourth question sentence, wherein the sentence components of the fourth question sentence are fewer than or equal to the sentence components of the target question sentence; and determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold;
Determining n×N second question sentences from the M second question sentences, and determining (1-n)×N third question sentences from the W third question sentences; taking the n×N second question sentences and the (1-n)×N third question sentences as the N first question sentences, wherein n is a number greater than 0 and less than 1;
Determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first question sentences, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentence; taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is the first answer sentence associated with the first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter;
The determining N first parameters based on the preset neural network model includes: determining N sentence similarities, N edit distances and N Jaccard similarities between the target question sentence and the N first question sentences based on the preset neural network model, wherein the N sentence similarities, the N edit distances and the N Jaccard similarities are in one-to-one correspondence with the N first question sentences; determining the N first parameters based on the N sentence similarities, the N edit distances and the N Jaccard similarities, wherein the N first parameters are in one-to-one correspondence with the N sentence similarities, the N edit distances and the N Jaccard similarities;
and the output unit is used for outputting the target answer sentence.
6. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-4.
7. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any one of claims 1 to 4.
CN201911034425.3A 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment Active CN111008267B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911034425.3A CN111008267B (en) 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment
PCT/CN2019/117542 WO2021082070A1 (en) 2019-10-29 2019-11-12 Intelligent conversation method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911034425.3A CN111008267B (en) 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment

Publications (2)

Publication Number Publication Date
CN111008267A CN111008267A (en) 2020-04-14
CN111008267B true CN111008267B (en) 2024-07-12

Family

ID=70111048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911034425.3A Active CN111008267B (en) 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment

Country Status (2)

Country Link
CN (1) CN111008267B (en)
WO (1) WO2021082070A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694942A (en) * 2020-05-29 2020-09-22 平安科技(深圳)有限公司 Question answering method, device, equipment and computer readable storage medium
CN112667794A (en) * 2020-12-31 2021-04-16 民生科技有限责任公司 Intelligent question-answer matching method and system based on twin network BERT model
CN113407699A (en) * 2021-06-30 2021-09-17 北京百度网讯科技有限公司 Dialogue method, dialogue device, dialogue equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109472008A (en) * 2018-11-20 2019-03-15 武汉斗鱼网络科技有限公司 A kind of Text similarity computing method, apparatus and electronic equipment
CN110096580A (en) * 2019-04-24 2019-08-06 北京百度网讯科技有限公司 A kind of FAQ dialogue method, device and electronic equipment

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP5213098B2 (en) * 2007-06-22 2013-06-19 独立行政法人情報通信研究機構 Question answering method and system
CN104598445B (en) * 2013-11-01 2019-05-10 腾讯科技(深圳)有限公司 Automatically request-answering system and method
CN109829040B (en) * 2018-12-21 2023-04-07 深圳市元征科技股份有限公司 Intelligent conversation method and device
CN109710744B (en) * 2018-12-28 2021-04-06 合肥讯飞数码科技有限公司 Data matching method, device, equipment and storage medium
CN109740077B (en) * 2018-12-29 2021-02-12 北京百度网讯科技有限公司 Answer searching method and device based on semantic index and related equipment thereof
CN109948143B (en) * 2019-01-25 2023-04-07 网经科技(苏州)有限公司 Answer extraction method of community question-answering system
CN110162611B (en) * 2019-04-23 2021-03-26 苏宁金融科技(南京)有限公司 Intelligent customer service response method and system
CN110263346B (en) * 2019-06-27 2023-01-24 卓尔智联(武汉)研究院有限公司 Semantic analysis method based on small sample learning, electronic equipment and storage medium
CN110334356B (en) * 2019-07-15 2023-08-04 腾讯科技(深圳)有限公司 Article quality determining method, article screening method and corresponding device

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN109472008A (en) * 2018-11-20 2019-03-15 武汉斗鱼网络科技有限公司 A kind of Text similarity computing method, apparatus and electronic equipment
CN110096580A (en) * 2019-04-24 2019-08-06 北京百度网讯科技有限公司 A kind of FAQ dialogue method, device and electronic equipment

Also Published As

Publication number Publication date
CN111008267A (en) 2020-04-14
WO2021082070A1 (en) 2021-05-06

Similar Documents

Publication Publication Date Title
CN109871532B (en) Text theme extraction method and device and storage medium
CN111008267B (en) Intelligent dialogue method and related equipment
CN110490213A (en) Image-recognizing method, device and storage medium
CN111223498A (en) Intelligent emotion recognition method and device and computer readable storage medium
CN110334347A (en) Information processing method, relevant device and storage medium based on natural language recognition
CN111967224A (en) Method and device for processing dialog text, electronic equipment and storage medium
CN110427614A (en) Construction method, device, electronic equipment and the storage medium of paragraph level
CN111858861B (en) Question-answer interaction method based on picture book and electronic equipment
CN110619050B (en) Intention recognition method and device
CN112287085B (en) Semantic matching method, system, equipment and storage medium
CN111104516B (en) Text classification method and device and electronic equipment
CN108345612A (en) A kind of question processing method and device, a kind of device for issue handling
CN110377778A (en) Figure sort method, device and electronic equipment based on title figure correlation
CN112861518A (en) Text error correction method and device, storage medium and electronic device
CN111814538B (en) Method and device for identifying category of target object, electronic equipment and storage medium
CN113342948A (en) Intelligent question and answer method and device
CN112232066A (en) Teaching outline generation method and device, storage medium and electronic equipment
CN106708950B (en) Data processing method and device for intelligent robot self-learning system
CN113127729B (en) Household scheme recommendation method and device, electronic equipment and storage medium
CN117273019A (en) Training method of dialogue model, dialogue generation method, device and equipment
CN111916085A (en) Human-computer conversation matching method, device and medium based on pronunciation similarity
CN114970666B (en) Spoken language processing method and device, electronic equipment and storage medium
CN107784037A (en) Information processing method and device, the device for information processing
CN112307269B (en) Intelligent analysis system and method for human-object relationship in novel
CN115062136A (en) Event disambiguation method based on graph neural network and related equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40019542

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant