CN111832305B - User intention recognition method, device, server and medium - Google Patents


Info

Publication number
CN111832305B
CN111832305B
Authority
CN
China
Prior art keywords
sentence
sentences
user
nodes
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010632031.4A
Other languages
Chinese (zh)
Other versions
CN111832305A (en)
Inventor
申众
张又亮
张崇宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaopeng Automobile Co Ltd
Original Assignee
Beijing Xiaopeng Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaopeng Automobile Co Ltd filed Critical Beijing Xiaopeng Automobile Co Ltd
Priority to CN202010632031.4A
Publication of CN111832305A
Application granted
Publication of CN111832305B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a user intention recognition method, a device, a server and a medium. The method comprises the following steps: acquiring a user query sentence and a text graph; selecting candidate reference sentences from the reference sentences recorded in sentence nodes; determining a weight score for each candidate reference sentence according to the weight value of the target reference keyword recorded in the edge for that candidate reference sentence; determining the feature vector of the user query sentence, and determining the similarity between each candidate reference sentence and the user query sentence using the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node; selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and their similarity to the user query sentence; and taking the user intention of the target reference sentence as the user intention of the user query sentence. Because the user intention is inferred through both sentence nodes and word nodes of the text graph, recognition accuracy can be improved.

Description

User intention recognition method, device, server and medium
Technical Field
The present invention relates to the field of natural language processing, and more particularly, to a user intention recognition method, a user intention recognition apparatus, a server, and a computer-readable storage medium.
Background
With the development of intelligent automobiles, users are increasingly accustomed to using an on-board voice system in daily driving scenarios. The on-board voice system performs voice intention recognition on the user's speech to obtain the user intention, and gives corresponding feedback according to that intention.
Traditional voice intention recognition is usually carried out in a data-driven manner using text classification methods. Such methods depend on labeled rule data, place high requirements on data quality, and require the recognized text to be as regular as possible and to conform to people's natural expression habits. This kind of voice intention recognition needs high-quality labeled data, must be continuously supplemented with new data, and makes extending to new intents costly.
In real scenarios, voice intention recognition often faces problems such as speech recognition errors and sentence segmentation errors mixed with noise; if noise data is mixed in, false recalls occur, so that the user intention cannot be accurately recognized.
Disclosure of Invention
In view of the above, embodiments of the present invention have been made to provide a user intention recognition method, a user intention recognition apparatus, a server, and a computer-readable storage medium that overcome or at least partially solve the above problems.
In order to solve the above problems, an embodiment of the present invention discloses a user intention recognition method, including:
acquiring a user query sentence and a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intention and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; the candidate reference sentences include target reference keywords that match query keywords extracted from the user query sentences;
determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
determining the feature vector of the user query sentence, and determining the similarity between the candidate reference sentence and the user query sentence by adopting the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node;
selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
taking the user intention of the target reference sentence as the user intention of the user query sentence.
Optionally, the text graph is generated by:
acquiring a reference sentence marked with user intention, and extracting a reference keyword from the reference sentence;
inputting the reference sentence into a pre-training model, and obtaining a feature vector output by the pre-training model;
adopting the reference sentences to construct corresponding sentence nodes, and recording the reference sentences, the user intentions and feature vectors of the reference sentences at the sentence nodes;
calculating a weight value representing the importance degree of the reference keyword to the reference sentence;
and constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as weight values for representing the importance degree of the reference keywords on the reference sentences.
Optionally, the selecting a candidate reference sentence from the reference sentences recorded in the sentence nodes includes:
determining target word nodes recorded with target reference keywords matched with the query keywords from word nodes of the text graph;
and selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
Optionally, the determining the feature vector of the user query statement includes:
and inputting the candidate reference sentences into a pre-training model, and obtaining feature vectors output by the pre-training model.
Optionally, the selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences includes:
calculating a first score by adopting the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence;
ranking using the first score;
and selecting a target reference sentence according to the sorting result.
Optionally, the taking the user intention of the target reference sentence as the user intention of the user query sentence includes:
calculating a weight value representing the importance degree of the query keyword to the user query sentence;
calculating a second score by using the target reference sentence and the weight value representing the importance degree of the query keyword to the user query sentence;
judging whether the second score is larger than a preset score threshold value or not;
and if the second score is larger than the preset score threshold, taking the user intention of the target reference sentence as the user intention of the user query sentence.
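As a sketch of the thresholded intent adoption described above: this excerpt does not give the exact formula for the second score, so the coverage-ratio form below (the share of the query keywords' total importance that the target reference sentence covers) is purely an illustrative assumption, as are the function names.

```python
def second_score(query_keyword_weights, target_keywords):
    # Hypothetical second score: fraction of total query-keyword importance
    # whose keywords also appear in the target reference sentence. The patent
    # only says the score combines the query-keyword weights with the target
    # reference sentence; this concrete formula is an assumption.
    total = sum(query_keyword_weights.values())
    covered = sum(w for kw, w in query_keyword_weights.items()
                  if kw in target_keywords)
    return covered / total if total else 0.0

def adopt_intent(score, threshold, target_intent):
    # Take over the target reference sentence's intent only when the
    # second score clears the preset threshold; otherwise report nothing.
    return target_intent if score > threshold else None
```

A score below the threshold yields no intent, matching the guarded adoption in the claim above.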
The embodiment of the invention also discloses a user intention recognition device, which comprises:
the acquisition module is used for acquiring a user query sentence and acquiring a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intention and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
a candidate reference sentence selection module, configured to select a candidate reference sentence from reference sentences recorded in the sentence nodes, where the candidate reference sentence includes a target reference keyword that matches a query keyword extracted from the user query sentence;
a weight score determining module, configured to determine a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
the similarity determining module is used for determining the feature vector of the user query sentence, and determining the similarity between the candidate reference sentence and the user query sentence by adopting the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node;
the target reference sentence selecting module is used for selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
and the user intention determining module is used for taking the user intention of the target reference sentence as the user intention of the user query sentence.
Optionally, the text graph is generated by the following module:
the reference information acquisition module is used for acquiring a reference sentence marked with the intention of a user and extracting a reference keyword from the reference sentence;
the feature vector obtaining module is used for inputting the reference sentence into a pre-training model and obtaining a feature vector output by the pre-training model;
the first construction module is used for constructing corresponding sentence nodes by adopting the reference sentences and recording the reference sentences, the user intentions of the reference sentences and the feature vectors in the sentence nodes;
the weight value calculation module is used for calculating a weight value representing the importance degree of the reference keywords to the reference sentences;
the second construction module is used for constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as weight values for representing the importance degree of the reference keywords on the reference sentences.
Optionally, the candidate reference sentence selection module includes:
a target word node determining sub-module, configured to determine, from word nodes of the text graph, target word nodes in which target reference keywords matching the query keywords are recorded;
and the candidate reference sentence selecting sub-module is used for selecting candidate reference sentences from the reference sentences recorded by sentence nodes corresponding to the target word nodes.
Optionally, the similarity determining module includes:
and the feature vector obtaining sub-module is used for inputting the candidate reference sentences into a pre-training model and obtaining feature vectors output by the pre-training model.
Optionally, the target reference sentence selection module includes:
a first score calculating sub-module, configured to calculate a first score by using a weight score of the candidate reference sentence and a similarity between the candidate reference sentence and the user query sentence;
a ranking sub-module for ranking using the first score;
and the target reference sentence selecting sub-module is used for selecting the target reference sentence according to the sequencing result.
Optionally, the user intention determining module includes:
the weight value calculation sub-module is used for calculating a weight value for representing the importance degree of the query keyword to the user query statement;
a second score calculating sub-module, configured to calculate a second score by using the target reference sentence and the weight value representing the importance degree of the query keyword to the user query sentence;
the second score judging sub-module is used for judging whether the second score is larger than a preset score threshold value or not;
and the user intention determining sub-module is used for taking the user intention of the target reference sentence as the user intention of the user query sentence if the second score is larger than the preset score threshold value.
The embodiment of the invention also discloses a server, which comprises: a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of any one of the user intention recognition methods described above.
The embodiment of the invention also discloses a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps of the user intention recognition method when being executed by a processor.
The embodiment of the invention has the following advantages:
according to the embodiment of the invention, the reference sentences, the reference keywords extracted from the reference sentences, the user intentions corresponding to the reference sentences, the feature vectors of the reference sentences and the weight values representing the importance degree of the reference keywords to the reference sentences are recorded in the form of a text chart. Selecting candidate reference sentences from the reference sentences recorded in sentence nodes; determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge for the candidate reference sentence; determining the similarity between the candidate reference sentence and the user query sentence by adopting the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node; then selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences; and finally, taking the user intention of the target reference sentence as the user intention of the user query sentence. The method has the advantages that the accuracy of selecting the target reference sentences can be improved when the target reference sentences are selected from the candidate reference sentences by using the characteristics of two dimensions of sentence nodes and word nodes in the text graph, so that the recognition accuracy of the user intention is improved. The text graph is not high in cost, reference sentences marked with user intention can be used, and the text graph has a certain generalization reasoning capability on new data. 
As long as a certain entity can be extracted from a new user query sentence, intention understanding can be performed according to the existing relations in the text graph. When a new intention is to be supported, the corresponding sentence nodes and word nodes can be added directly to the text graph, so extension is easy.
Drawings
FIG. 1 is a flow chart of steps of an embodiment of a user intent recognition method of the present invention;
FIG. 2 is a flowchart of the steps for generating a text graph in an embodiment of the invention;
FIG. 3 is a schematic diagram of a text graph in an embodiment of the invention;
fig. 4 is a block diagram illustrating an embodiment of a user intention recognition apparatus according to the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
From the viewpoint of designing an intention-understanding classification model, the irregular user query sentences produced by speech recognition are difficult to handle with data-driven methods before the model goes online. The embodiment of the invention provides a method for understanding the intention of a user query sentence based on words and on rule sentences labeled with user intentions.
Referring to FIG. 1, a flowchart illustrating steps of an embodiment of a user intent recognition method of the present invention, the method may include the steps of:
step 101, acquiring a user query sentence and a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intention and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; the edge is recorded with a weight value representing the importance degree of the reference keyword to the reference sentence.
The user's speech may be recognized using automatic speech recognition (ASR) to obtain a user query sentence. The user query sentence is text data; because automatic speech recognition is subject to interference, noise data may exist in the user query sentence.
In the embodiment of the invention, the intention recognition can be performed by adopting the pre-collected reference sentences which accord with the rules of natural expression habits of human beings, so as to obtain the intention of the user. A text map is generated by using the reference sentences marked with the user intention. The text graph is graph structure data and comprises sentence nodes, word nodes and edges for connecting the sentence nodes and the corresponding word nodes.
Referring to FIG. 2, a flowchart of the steps for generating a text graph in an embodiment of the present invention is shown. In an embodiment of the present invention, the step of generating the text graph may include:
step 201, obtaining a reference sentence marked with user intention, and extracting a reference keyword from the reference sentence.
For example, the reference sentences include "turn on air conditioner", "turn off air conditioner", "turn on air conditioner", and "turn on air conditioner for the front row". The reference keywords can be extracted by means of keyword extraction, entity extraction, rule template extraction, and the like, for example "open", "air conditioner", "close", "front row". The reference keywords may also include other words that are important for identifying sentence features, such as the mood word "mock", which was treated as a stop word in the past but which, in embodiments of the present invention, can be used for question recognition to distinguish a specific user intent. Likewise, some adverbs expressing negation, such as "don't", may also be used as reference keywords to distinguish specific user intentions.
Step 202, inputting the reference sentence into a pre-training model, and obtaining a feature vector output by the pre-training model.
The feature vector of the sentence is generally used for calculating the similarity of two sentences, and in the embodiment of the invention, the feature vector of the reference sentence and the feature vector of the user query sentence can be used for calculating the similarity of the two sentences, and the similarity between the sentences is used as the basis for selecting the target reference sentence. Wherein, the feature vector of the reference sentence can be obtained through a pre-training model.
In one example, a pre-trained bi-directional encoder token model BERT (Bidirectional Encoder Representations from Transformers) may be employed to obtain feature vectors for sentences.
The BERT model has two pre-training tasks: masking several words in a sentence and then predicting the masked words, and judging whether two sentences appear in context. These two training tasks are performed simultaneously. After training on a large amount of corpus data, the pre-trained BERT model can effectively extract the features of words in different sentence contexts.
A sentence can be passed into the BERT model, and its sentence feature vector can be approximated using the hidden-layer vectors output after the 12 encoder layers.
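In practice the hidden-layer vectors come from a BERT encoder; the sketch below shows only the final read-out step, mean-pooling the last layer's token vectors into an approximate sentence feature vector, and assumes those token vectors are already available. The function name is illustrative, not from the patent.

```python
def sentence_vector(token_hidden_states):
    # Approximate a sentence feature vector by mean-pooling the final
    # encoder layer's per-token hidden vectors (one common read-out; the
    # text only says the output hidden-layer vectors are used).
    dim = len(token_hidden_states[0])
    n = len(token_hidden_states)
    return [sum(tok[i] for tok in token_hidden_states) / n for i in range(dim)]
```

The same pooling is applied to reference sentences at graph-build time and to the user query sentence at recognition time, so the two vectors are comparable.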
And 203, constructing corresponding sentence nodes by adopting the reference sentences, and recording the reference sentences, the user intentions of the reference sentences and the feature vectors in the sentence nodes.
And step 204, calculating a weight value for representing the importance degree of the reference keyword to the reference sentence.
If a reference keyword appears frequently in the collection of reference sentences, that term may be important. A reference keyword that appears only rarely in the set of reference sentences (e.g., only once in the whole corpus) carries little information and may even be "noise". The greater the weight value characterizing the importance of a reference keyword to a reference sentence, the greater the keyword's contribution to the meaning representation of that sentence.
In one example, the weight value characterizing the importance of the reference keyword to the reference sentence may be represented by TF-IDF, i.e., term frequency (TF) combined with inverse document frequency (IDF); it may also be calculated by other, more elaborate algorithms so that the relationship between the reference keyword and the reference sentence is captured more accurately.
When the weight value is represented by TF-IDF, TF can be the ratio of the number of occurrences of the reference keyword in the reference sentence to the total word count of the reference sentence; IDF can be obtained by dividing the total number of reference sentences by the number of reference sentences containing the reference keyword plus one, and taking the logarithm of the quotient.
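The two formulas above can be written directly; `tf`, `idf`, and `tf_idf` are illustrative names, with the plus-one smoothing in the IDF denominator as stated.

```python
import math

def tf(keyword, sentence_tokens):
    # TF: occurrences of the keyword divided by the sentence's total word count.
    return sentence_tokens.count(keyword) / len(sentence_tokens)

def idf(keyword, all_sentence_tokens):
    # IDF: log of (total reference sentences / (sentences containing keyword + 1)).
    containing = sum(1 for toks in all_sentence_tokens if keyword in toks)
    return math.log(len(all_sentence_tokens) / (containing + 1))

def tf_idf(keyword, sentence_tokens, all_sentence_tokens):
    # Edge weight: importance of the keyword to this particular sentence.
    return tf(keyword, sentence_tokens) * idf(keyword, all_sentence_tokens)
```

Note that with this smoothing, a keyword present in every reference sentence gets an IDF at or below zero, reflecting that it discriminates poorly between sentences.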
Step 205, constructing a corresponding word node by adopting the reference keyword, establishing an edge between the word node and a corresponding sentence node, and setting a weight value of the edge as a weight value representing the importance degree of the reference keyword to the reference sentence.
Referring to fig. 3, a schematic diagram of a text graph in an embodiment of the present invention is shown. In the text graph, corresponding sentence nodes are constructed from reference sentences marked with user intentions; as shown in fig. 3, the text graph includes sentence nodes for "turn on air conditioner", "turn off air conditioner", "turn on air conditioner", and "front row turn on air conditioner". Word nodes are constructed from reference keywords extracted from the reference sentences; as shown in fig. 3, the text graph includes word nodes for "on", "off", "air conditioner", "front row", and "mock". An edge is established between a word node and each sentence node whose reference sentence contains that word node's reference keyword, and a weight value representing the importance degree of the reference keyword to the reference sentence is recorded on the edge.
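A minimal sketch of this graph structure using plain dictionaries: sentence nodes keyed by id, and word nodes mapping each keyword to its connected sentence ids and edge weights. The keyword extractor, embedding function, and weight function are passed in as parameters; all names are illustrative rather than from the patent.

```python
from collections import defaultdict

def build_text_graph(labeled_sentences, extract_keywords, embed, weight_fn):
    # Sentence nodes: reference sentence text, labeled user intent, feature vector.
    sentences = {}
    # Word nodes: keyword -> {sentence_id: edge weight}, i.e. the edges with
    # their importance weights.
    words = defaultdict(dict)
    for sid, (text, intent) in enumerate(labeled_sentences):
        keywords = extract_keywords(text)
        sentences[sid] = {"text": text, "intent": intent, "vector": embed(text)}
        for kw in set(keywords):
            words[kw][sid] = weight_fn(kw, keywords)
    return {"sentences": sentences, "words": words}
```

With a TF-IDF weight function, the edge weights match the construction described in steps 204 and 205.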
Step 102, selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; the candidate reference sentences include target reference keywords that match query keywords extracted from the user query sentences.
First, the query keywords can be extracted from the user query sentence, by means of keyword extraction, entity extraction, rule template extraction, and the like.
And then searching target reference keywords matched with the query keywords from the reference keywords. The target reference keywords that match the query keywords may be the same terms or may be terms that are generalized to have the same meaning.
And finally, selecting candidate reference sentences containing target reference keywords from the set of reference sentences.
In an embodiment of the present invention, the step of selecting a candidate reference sentence from the reference sentences recorded in the sentence nodes may include: determining target word nodes recorded with target reference keywords matched with the query keywords from word nodes of the text graph; and selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
Specifically, the user query sentence may contain one or more query keywords, and the text graph may contain one or more target reference keywords matching those query keywords. The sentence nodes corresponding to a target word node are the sentence nodes connected to that target word node by edges.
In the embodiment of the invention, the candidate reference sentences can be selected according to the weight value of the target reference keywords in the reference sentences and/or the number of the target reference keywords contained in the reference sentences.
And step 103, determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge for the candidate reference sentence.
The weight value of the target reference keyword for each candidate reference sentence is read from the edge between the target reference keyword's word node and that candidate reference sentence's node.
In the embodiment of the invention, the weight value of the importance degree of the reference keywords to the reference sentences is used as the basis for selecting the target reference sentences.
The weight score of the candidate reference sentence may be calculated based on the weight value of the target reference keyword contained in the candidate reference sentence. For example, the weight score of the candidate reference sentence may be a sum of weight values of the target reference keywords contained in the candidate reference sentence.
The higher the weight score, the better the candidate reference sentence matches the query keywords extracted from the user query sentence.
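Steps 102 and 103 together amount to walking from matching word nodes to their connected sentence nodes and accumulating edge weights. A sketch, assuming a dictionary-based layout where `graph["words"]` maps each keyword to `{sentence_id: edge weight}` (an illustrative layout, not the patent's):

```python
def select_candidates(graph, query_keywords):
    # Candidate selection plus weight scoring: every sentence reachable from a
    # matched word node becomes a candidate, and its weight score is the sum
    # of the matched keywords' edge weights (one option the text mentions).
    scores = {}
    for kw in query_keywords:
        for sid, weight in graph["words"].get(kw, {}).items():
            scores[sid] = scores.get(sid, 0.0) + weight
    return scores  # sentence_id -> weight score
```

Sentences sharing more, and more important, keywords with the query naturally accumulate higher scores.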
Step 104, determining the feature vector of the user query sentence, and determining the similarity between the candidate reference sentence and the user query sentence by adopting the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node.
Specifically, the candidate reference sentences may be input into a pre-training model, and feature vectors output by the pre-training model are obtained. In one example, the pre-training model may be a Bert model.
The similarity between sentences is represented by the similarity between feature vectors, and the Cosine similarity function Cosine can be used to calculate the similarity between two feature vectors.
For example, consider three sentences A, B and C. Cosine(feature vector of A, feature vector of B) > 0.9 alone cannot establish that A and B are similar; however, if Cosine(feature vector of A, feature vector of B) > Cosine(feature vector of A, feature vector of C), there is a higher probability that A and B can be considered more similar than A and C.
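A minimal cosine similarity over plain Python lists, for reference (it assumes non-zero vectors, which holds for encoder outputs in practice):

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors of equal length.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Identical directions score 1.0 and orthogonal vectors score 0.0, so the comparison between Cosine(A, B) and Cosine(A, C) in the example above is well defined.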
And 105, selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences.
The weight score reflects, from the word dimension, how well the candidate reference sentence matches the query keywords extracted from the user query sentence; the similarity between the candidate reference sentence and the user query sentence reflects, from the sentence dimension, how well the candidate reference sentence matches the user query sentence. The target reference sentence is therefore selected from the candidate reference sentences using features of both dimensions, words and sentences.
In the prior art, the target sentence is usually matched using features of a single dimension only; when the user query sentence is irregular, the target sentence is easily matched incorrectly.
For example, suppose the user query sentence is "call air conditioner on twenty-five degrees", from which the extracted query keywords may include: "call", "air conditioner", "open to" and "twenty-five degrees"; according to these query keywords, the reference sentences "air conditioner on to twenty-five degrees" or "secondary driving air conditioner on to twenty-five degrees" can be recalled.
Ranked by the word dimension alone, "air conditioner on to twenty-five degrees" would score high, because the sentence is short and matches the query keywords in the user query sentence exactly; "secondary driving air conditioner on to twenty-five degrees" may be ranked lower because the sentence is longer. That is, the target reference sentence would be "air conditioner on to twenty-five degrees".
However, "call" may in fact be the result of "secondary driving" being misrecognized by ASR. Ranked by both the word and sentence dimensions, "secondary driving air conditioner on to twenty-five degrees" is likely to rank above "air conditioner on to twenty-five degrees"; that is, the target reference sentence may be "secondary driving air conditioner on to twenty-five degrees".
In the embodiment of the present invention, the step of selecting the target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence may include: calculating a first score by adopting the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence; ranking using the first score; and selecting a target reference sentence according to the sorting result.
In one example, the first score of the candidate reference sentence may be a weighted sum of the weight score and the similarity. The ranking may be performed from large to small according to the first score, with the first candidate reference sentence ranked as the target reference sentence.
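The weighted-sum ranking can be sketched as below; `alpha` and the candidate scores are hypothetical values, since the patent does not fix the mixing weights.

```python
def first_score(weight_score, similarity, alpha=0.5):
    """Weighted sum of the word-dimension weight score and the
    sentence-dimension similarity; alpha is a hypothetical mixing weight."""
    return alpha * weight_score + (1 - alpha) * similarity

# (candidate sentence, weight score, similarity) -- illustrative numbers
candidates = [
    ("air conditioner on to twenty-five degrees", 0.90, 0.70),
    ("secondary driving air conditioner on to twenty-five degrees", 0.85, 0.95),
]
ranked = sorted(candidates, key=lambda c: first_score(c[1], c[2]), reverse=True)
target = ranked[0][0]
# With these numbers the sentence-level similarity tips the ranking
# toward the longer sentence, as in the ASR-error example above.
```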
And 106, taking the user intention of the target reference sentence as the user intention of the user query sentence.
And using the target reference sentence marked with the user intention, and taking the user intention of the target reference sentence as the user intention of the user query sentence.
In an embodiment of the present invention, the step of regarding the user intention of the target reference sentence as the user intention of the user query sentence may include the following sub-steps:
Step S11, calculating a weight value representing the importance degree of the query keyword to the user query statement;
the weight value characterizing the importance of the query keyword to the user query sentence is calculated in the same manner as the weight value characterizing the importance of the reference keyword to the reference sentence. For example, the TF-IDF method is used to represent the word-to-sentence weight.
The weight value representing the importance of a query keyword to the user query sentence can be computed as the product of TF and IDF, where TF may be the ratio of the number of occurrences of the query keyword in the user query sentence to the total number of words in that sentence, and IDF may be the logarithm of the ratio of the total number of reference sentences in the text graph to the number of reference sentences containing the query keyword plus one.
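Under those definitions, the weight computation might look like the following sketch; the tokenized sentences are illustrative, and the `+ 1` smoothing in the IDF denominator follows the description above.

```python
import math

def tf_idf_weight(keyword, sentence_tokens, all_sentences):
    """TF: keyword occurrences over the sentence's total word count.
    IDF: log of total sentence count over (sentences containing keyword + 1)."""
    tf = sentence_tokens.count(keyword) / len(sentence_tokens)
    containing = sum(1 for tokens in all_sentences if keyword in tokens)
    idf = math.log(len(all_sentences) / (containing + 1))
    return tf * idf

# Illustrative tokenized reference sentences
sentences = [["open", "air", "conditioner"], ["close", "window"], ["open", "window"]]
w = tf_idf_weight("air", sentences[0], sentences)
# w == (1/3) * log(3 / (1 + 1))
```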
A sub-step S12 of calculating a second score by adopting the weight value representing the importance degree of the query keyword to the user query statement and the target reference statement;
specifically, a weight value that characterizes the importance of the query keyword to the user query sentence may be combined with the first score of the target reference sentence to calculate the second score.
In one example, the second score is the sum, over the query keywords, of the product of each keyword's weight value and the first score of the target reference sentence. For example, if the user query sentence is "air conditioner on", its second score may be: the product of the weight value of "on" and the first score of the target reference sentence, plus the product of the weight value of "air conditioner" and the first score of the target reference sentence.
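The second-score calculation, together with the threshold check described in the following sub-steps, can be sketched as below; the keyword weights and `THRESHOLD` are hypothetical values, since the patent does not specify them.

```python
def second_score(query_keyword_weights, target_first_score):
    """Sum of each query keyword's weight times the target reference
    sentence's first score, per the 'air conditioner on' example."""
    return sum(w * target_first_score for w in query_keyword_weights.values())

weights = {"on": 0.4, "air conditioner": 0.6}   # hypothetical TF-IDF weights
score = second_score(weights, target_first_score=0.9)
THRESHOLD = 0.5                                  # hypothetical preset threshold
accept = score > THRESHOLD                       # intent is output only if True
```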
Step S13, judging whether the second score is larger than a preset score threshold value or not;
and a substep S14, wherein if the second score is greater than the preset score threshold, the user intention of the target reference sentence is used as the user intention of the user query sentence.
If the second score is less than or equal to the preset score threshold, the target reference sentence may be considered an insufficient match for the user query sentence, i.e., no reference sentence matches the user query sentence well enough; in this case, no user intent is output for the user query sentence.
Only if the second score is greater than the preset score threshold, the user intent for the user query statement is further output.
According to the embodiment of the invention, the reference sentences, the reference keywords extracted from them, the user intents corresponding to the reference sentences, the feature vectors of the reference sentences, and the weight values representing the importance of the reference keywords to the reference sentences are recorded in the form of a text graph. Candidate reference sentences are selected from the reference sentences recorded in sentence nodes; the weight score of each candidate reference sentence is determined from the weight values of the target reference keywords recorded in the edges; the similarity between the candidate reference sentence and the user query sentence is determined using the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node; a target reference sentence is then selected from the candidate reference sentences according to the weight scores and the similarities; finally, the user intent of the target reference sentence is taken as the user intent of the user query sentence. Using features of the two dimensions of sentence nodes and word nodes in the text graph improves the accuracy of selecting the target reference sentence from the candidate reference sentences, and thus improves the accuracy of user intent recognition. The text graph is inexpensive to construct, can reuse reference sentences already marked with user intent, and has a certain generalization and reasoning capability on new data.
Moreover, as long as a certain entity can be extracted from a new user query sentence, intent understanding can be performed according to the existing relations in the text graph. When a new intent is added, the corresponding sentence nodes and word nodes can be added directly to the text graph, so the graph is easy to extend.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 4, there is shown a block diagram of an embodiment of a user intention recognition apparatus according to the present invention, which may include the following modules:
an obtaining module 401, configured to obtain a user query sentence, and obtain a text diagram; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intention and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
A candidate reference sentence selection module 402, configured to select a candidate reference sentence from the reference sentences recorded in the sentence nodes, where the candidate reference sentence includes a target reference keyword that matches a query keyword extracted from the user query sentence;
a weight score determining module 403, configured to determine a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
a similarity determining module 404, configured to determine a feature vector of the user query sentence, and determine a similarity between the candidate reference sentence and the user query sentence by using the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node;
a target reference sentence selection module 405, configured to select a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
a user intent determination module 406, configured to take the user intent of the target reference sentence as the user intent of the user query sentence.
In one embodiment of the invention, the text map is generated by the following modules:
the reference information acquisition module is used for acquiring a reference sentence marked with the intention of a user and extracting a reference keyword from the reference sentence;
the feature vector obtaining module is used for inputting the reference sentence into a pre-training model and obtaining a feature vector output by the pre-training model;
the first construction module is used for constructing corresponding sentence nodes by adopting the reference sentences and recording the reference sentences, the user intentions of the reference sentences and the feature vectors in the sentence nodes;
the weight value calculation module is used for calculating a weight value representing the importance degree of the reference keywords to the reference sentences;
the second construction module is used for constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as weight values for representing the importance degree of the reference keywords on the reference sentences.
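The graph construction performed by these modules can be sketched with plain dictionaries; `extract_keywords`, `embed`, and `weight_fn` are illustrative stand-ins for the keyword extractor, the pre-training model, and the weight calculation, and the node layout is an assumption, not the patent's actual representation.

```python
def build_text_graph(labeled_sentences, extract_keywords, embed, weight_fn):
    """Build a text graph: sentence nodes carry intent and feature vector,
    word nodes hold keywords, edges carry keyword-to-sentence weights."""
    graph = {"sentence_nodes": {}, "word_nodes": set(), "edges": {}}
    for sentence, intent in labeled_sentences:
        graph["sentence_nodes"][sentence] = {
            "intent": intent,
            "vector": embed(sentence),
        }
        for kw in extract_keywords(sentence):
            graph["word_nodes"].add(kw)
            graph["edges"][(kw, sentence)] = weight_fn(kw, sentence)
    return graph

g = build_text_graph(
    [("air conditioner on", "ac_control")],       # hypothetical labeled data
    extract_keywords=lambda s: s.split(),          # stand-in keyword extractor
    embed=lambda s: [0.0, 0.0],                    # stand-in for the pre-training model
    weight_fn=lambda kw, s: 1.0,                   # stand-in for the TF-IDF weight
)
```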
In one embodiment of the present invention, the candidate reference sentence selection module 402 may include:
a target word node determining sub-module, configured to determine, from word nodes of the text graph, target word nodes in which target reference keywords matching the query keywords are recorded;
And the candidate reference sentence selecting sub-module is used for selecting candidate reference sentences from the reference sentences recorded by sentence nodes corresponding to the target word nodes.
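The recall step performed by these two sub-modules can be sketched as follows; the inline graph is an illustrative stand-in for the text graph, not the patent's data structure.

```python
def select_candidates(query_keywords, graph):
    """Find word nodes matching the query keywords, then gather the
    reference sentences connected to them by an edge."""
    matched = graph["word_nodes"] & set(query_keywords)
    return {sentence for (kw, sentence) in graph["edges"] if kw in matched}

graph = {
    "word_nodes": {"air", "conditioner", "window"},
    "edges": {
        ("air", "air conditioner on"): 0.6,
        ("conditioner", "air conditioner on"): 0.5,
        ("window", "open window"): 0.7,
    },
}
candidates = select_candidates(["air", "conditioner"], graph)
# candidates == {"air conditioner on"}
```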
In one embodiment of the present invention, the similarity determination module 404 may include:
and the feature vector obtaining sub-module is used for inputting the candidate reference sentences into a pre-training model and obtaining feature vectors output by the pre-training model.
In one embodiment of the present invention, the target reference sentence selection module 405 may include the following sub-modules:
a first score calculating sub-module, configured to calculate a first score by using a weight score of the candidate reference sentence and a similarity between the candidate reference sentence and the user query sentence;
a ranking sub-module for ranking using the first score;
and the target reference sentence selecting sub-module is used for selecting the target reference sentence according to the sequencing result.
In one embodiment of the invention, the user intent determination module 406 may include the following sub-modules:
the weight value calculation sub-module is used for calculating a weight value for representing the importance degree of the query keyword to the user query statement;
a second score calculating sub-module, configured to calculate a second score by using the target reference sentence and the weight value representing the importance degree of the query keyword to the user query sentence;
The second score judging sub-module is used for judging whether the second score is larger than a preset score threshold value or not;
and the user intention determining sub-module is used for taking the user intention of the target reference sentence as the user intention of the user query sentence if the second score is larger than the preset score threshold value.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the invention also provides a server, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements the processes of the above user intention recognition method embodiments and achieves the same technical effects; to avoid repetition, they are not described again here.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the above user intention recognition method embodiments and achieves the same technical effects; to avoid repetition, they are not described again here.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing has outlined a detailed description of a user intent recognition method, a user intent recognition device, a server and a computer readable storage medium, wherein specific examples are provided herein to illustrate the principles and embodiments of the present invention, the above examples being provided solely to assist in understanding the method and core concepts of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (10)

1. A method for identifying user intention, comprising:
acquiring a user query sentence and a text diagram; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intention and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; the candidate reference sentences include target reference keywords that match query keywords extracted from the user query sentences;
determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
determining the feature vector of the user query sentence, and determining the similarity between the candidate reference sentence and the user query sentence by adopting the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node;
Selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
taking the user intention of the target reference sentence as the user intention of the user query sentence.
2. The method of claim 1, wherein the text map is generated by:
acquiring a reference sentence marked with user intention, and extracting a reference keyword from the reference sentence;
inputting the reference sentence into a pre-training model, and obtaining a feature vector output by the pre-training model;
adopting the reference sentences to construct corresponding sentence nodes, and recording the reference sentences, the user intentions and feature vectors of the reference sentences at the sentence nodes;
calculating a weight value representing the importance degree of the reference keyword to the reference sentence;
and constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as weight values for representing the importance degree of the reference keywords on the reference sentences.
3. The method of claim 1, wherein selecting candidate reference sentences from the reference sentences recorded at the sentence nodes comprises:
determining target word nodes recorded with target reference keywords matched with the query keywords from word nodes of the text graph;
and selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
4. The method of claim 1, wherein the determining the feature vector of the user query statement comprises:
and inputting the candidate reference sentences into a pre-training model, and obtaining feature vectors output by the pre-training model.
5. The method according to claim 1, wherein selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity of the candidate reference sentences to the user query sentences comprises:
calculating a first score by adopting the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence;
ranking using the first score;
And selecting a target reference sentence according to the sorting result.
6. The method of claim 1, wherein the regarding the user intent of the target reference statement as the user intent of the user query statement comprises:
calculating a weight value representing the importance degree of the query keyword to the user query statement;
calculating a second score by adopting the weight value representing the importance degree of the query keyword to the user query statement and the target reference statement;
judging whether the second score is larger than a preset score threshold value or not;
and if the second score is larger than the preset score threshold, taking the user intention of the target reference sentence as the user intention of the user query sentence.
7. A user intention recognition apparatus, comprising:
the acquisition module is used for acquiring a user query statement and acquiring a text diagram; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intention and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
A candidate reference sentence selection module, configured to select a candidate reference sentence from reference sentences recorded in the sentence nodes, where the candidate reference sentence includes a target reference keyword that matches a query keyword extracted from the user query sentence;
a weight score determining module, configured to determine a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
the similarity determining module is used for determining the feature vector of the user query sentence, and determining the similarity between the candidate reference sentence and the user query sentence by adopting the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node;
the target reference sentence selecting module is used for selecting a target reference sentence from the candidate reference sentences according to the weight score of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
and the user intention determining module is used for taking the user intention of the target reference sentence as the user intention of the user query sentence.
8. The apparatus of claim 7, wherein the text map is generated by:
the reference information acquisition module is used for acquiring a reference sentence marked with the intention of a user and extracting a reference keyword from the reference sentence;
the feature vector obtaining module is used for inputting the reference sentence into a pre-training model and obtaining a feature vector output by the pre-training model;
the first construction module is used for constructing corresponding sentence nodes by adopting the reference sentences and recording the reference sentences, the user intentions of the reference sentences and the feature vectors in the sentence nodes;
the weight value calculation module is used for calculating a weight value representing the importance degree of the reference keywords to the reference sentences;
the second construction module is used for constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as weight values for representing the importance degree of the reference keywords on the reference sentences.
9. A server, comprising: a processor, a memory and a computer program stored on the memory and capable of running on the processor, which when executed by the processor implements the steps of the user intention recognition method as claimed in any one of claims 1 to 6.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the user intention recognition method according to any one of claims 1 to 6.
CN202010632031.4A 2020-07-03 2020-07-03 User intention recognition method, device, server and medium Active CN111832305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010632031.4A CN111832305B (en) 2020-07-03 2020-07-03 User intention recognition method, device, server and medium

Publications (2)

Publication Number Publication Date
CN111832305A CN111832305A (en) 2020-10-27
CN111832305B true CN111832305B (en) 2023-08-25

Family

ID=72900128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010632031.4A Active CN111832305B (en) 2020-07-03 2020-07-03 User intention recognition method, device, server and medium

Country Status (1)

Country Link
CN (1) CN111832305B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284498B (en) * 2021-05-20 2022-09-30 中国工商银行股份有限公司 Client intention identification method and device
CN113157893B (en) * 2021-05-25 2023-12-15 网易(杭州)网络有限公司 Method, medium, apparatus and computing device for intent recognition in multiple rounds of conversations
CN113822019B (en) * 2021-09-22 2024-07-12 科大讯飞股份有限公司 Text normalization method, related device and readable storage medium
CN113869061A (en) * 2021-09-30 2021-12-31 航天信息股份有限公司 Method and device for determining similar sentences and electronic equipment
CN116244413B (en) * 2022-12-27 2023-11-21 北京百度网讯科技有限公司 New intention determining method, apparatus and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375839A (en) * 2010-08-17 2012-03-14 富士通株式会社 Method and device for acquiring target data set from candidate data set, and translation machine
CN106649423A (en) * 2016-06-23 2017-05-10 新乡学院 Retrieval model calculation method based on content relevance
KR20180101955A (en) * 2017-03-06 2018-09-14 주식회사 수브이 Document scoring method and document searching system
CN109063221A (en) * 2018-11-02 2018-12-21 北京百度网讯科技有限公司 Query intention recognition methods and device based on mixed strategy
CN109284357A (en) * 2018-08-29 2019-01-29 腾讯科技(深圳)有限公司 Interactive method, device, electronic equipment and computer-readable medium
CN109492222A (en) * 2018-10-31 2019-03-19 平安科技(深圳)有限公司 Intension recognizing method, device and computer equipment based on conceptional tree
CN109657232A (en) * 2018-11-16 2019-04-19 北京九狐时代智能科技有限公司 A kind of intension recognizing method
CN109800306A (en) * 2019-01-10 2019-05-24 深圳Tcl新技术有限公司 It is intended to analysis method, device, display terminal and computer readable storage medium
CN109815492A (en) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 A kind of intension recognizing method based on identification model, identification equipment and medium
CN110110199A (en) * 2018-01-09 2019-08-09 北京京东尚科信息技术有限公司 Information output method and device
CN110674259A (en) * 2019-09-27 2020-01-10 北京百度网讯科技有限公司 Intention understanding method and device
CN110737768A (en) * 2019-10-16 2020-01-31 信雅达***工程股份有限公司 Text abstract automatic generation method and device based on deep learning and storage medium
CN110837556A (en) * 2019-10-30 2020-02-25 深圳价值在线信息科技股份有限公司 Abstract generation method and device, terminal equipment and storage medium
CN111209480A (en) * 2020-01-09 2020-05-29 上海风秩科技有限公司 Method and device for determining pushed text, computer equipment and medium
CN111259144A (en) * 2020-01-16 2020-06-09 中国平安人寿保险股份有限公司 Multi-model fusion text matching method, device, equipment and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375839A (en) * 2010-08-17 2012-03-14 Fujitsu Ltd. Method and device for acquiring target data set from candidate data set, and translation machine
CN106649423A (en) * 2016-06-23 2017-05-10 Xinxiang University Retrieval model calculation method based on content relevance
KR20180101955A (en) * 2017-03-06 2018-09-14 SubV Co., Ltd. Document scoring method and document searching system
CN110110199A (en) * 2018-01-09 2019-08-09 Beijing Jingdong Shangke Information Technology Co., Ltd. Information output method and device
CN109284357A (en) * 2018-08-29 2019-01-29 Tencent Technology (Shenzhen) Co., Ltd. Interactive method, device, electronic equipment and computer-readable medium
CN109492222A (en) * 2018-10-31 2019-03-19 Ping An Technology (Shenzhen) Co., Ltd. Intent recognition method and device based on a concept tree, and computer equipment
CN109063221A (en) * 2018-11-02 2018-12-21 Beijing Baidu Netcom Science and Technology Co., Ltd. Query intent recognition method and device based on a hybrid strategy
CN109657232A (en) * 2018-11-16 2019-04-19 Beijing Jiuhu Times Intelligent Technology Co., Ltd. Intent recognition method
CN109815492A (en) * 2019-01-04 2019-05-28 Ping An Technology (Shenzhen) Co., Ltd. Intent recognition method based on a recognition model, recognition device and medium
CN109800306A (en) * 2019-01-10 2019-05-24 Shenzhen TCL New Technology Co., Ltd. Intent analysis method and device, display terminal, and computer-readable storage medium
CN110674259A (en) * 2019-09-27 2020-01-10 Beijing Baidu Netcom Science and Technology Co., Ltd. Intention understanding method and device
CN110737768A (en) * 2019-10-16 2020-01-31 Sunyard System Engineering Co., Ltd. Automatic text summary generation method and device based on deep learning, and storage medium
CN110837556A (en) * 2019-10-30 2020-02-25 Shenzhen Value Online Information Technology Co., Ltd. Summary generation method and device, terminal device and storage medium
CN111209480A (en) * 2020-01-09 2020-05-29 Shanghai Fengzhi Technology Co., Ltd. Method and device for determining pushed text, computer equipment and medium
CN111259144A (en) * 2020-01-16 2020-06-09 China Ping An Life Insurance Co., Ltd. Multi-model fusion text matching method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Hot Topic Detection and Tracking System for Online Public Opinion; Guo Zhehong; China Master's Theses Full-text Database, Information Science and Technology Series (Issue 1); I138-1484 *

Also Published As

Publication number Publication date
CN111832305A (en) 2020-10-27

Similar Documents

Publication Publication Date Title
CN111832305B (en) User intention recognition method, device, server and medium
CN110196901B (en) Method and device for constructing dialog system, computer equipment and storage medium
CN108829822B (en) Media content recommendation method and device, storage medium and electronic device
CN107329949B (en) Semantic matching method and system
US9582757B1 (en) Scalable curation system
CN110110062B (en) Machine intelligent question and answer method and device and electronic equipment
CN111708873A (en) Intelligent question answering method and device, computer equipment and storage medium
CN111081220B (en) Vehicle-mounted voice interaction method, full-duplex dialogue system, server and storage medium
CN111539197A (en) Text matching method and device, computer system and readable storage medium
CN111832308B (en) Speech recognition text consistency processing method and device
CN113569011B (en) Training method, device and equipment of text matching model and storage medium
US20230140981A1 (en) Tutorial recommendation using discourse-level consistency and ontology-based filtering
CN112041809A (en) Automatic addition of sound effects to audio files
CN113282711A (en) Internet of vehicles text matching method and device, electronic equipment and storage medium
CN116150306A (en) Training method of question-answering robot, question-answering method and device
CN114722176A (en) Intelligent question answering method, device, medium and electronic equipment
CN117520523A (en) Data processing method, device, equipment and storage medium
CN112417174A (en) Data processing method and device
CN110377706B (en) Search sentence mining method and device based on deep learning
Ye et al. A sentiment based non-factoid question-answering framework
CN116955559A (en) Question-answer matching method and device, electronic equipment and storage medium
CN111639160A (en) Domain identification method, interaction method, electronic device and storage medium
CN116450855A (en) Knowledge graph-based reply generation strategy method and system for question-answering robot
CN112989001B (en) Question and answer processing method and device, medium and electronic equipment
CN113946668A (en) Semantic processing method, system and device based on edge node and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant after: Guangzhou Xiaopeng Automatic Driving Technology Co., Ltd.
Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant before: Guangzhou Xiaopeng Internet of Vehicles Technology Co., Ltd.

TA01 Transfer of patent application right
Effective date of registration: 20201218

Address after: Room 1608, 14th floor, 67 North Fourth Ring West Road, Haidian District, Beijing
Applicant after: Beijing Xiaopeng Automobile Co., Ltd.
Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant before: Guangzhou Xiaopeng Automatic Driving Technology Co., Ltd.

GR01 Patent grant