CN106294505B - Answer feedback method and device - Google Patents

Answer feedback method and device

Info

Publication number
CN106294505B
CN106294505B (application CN201510316013.4A)
Authority
CN
China
Prior art keywords
training
answer
semantic
semantic extraction
answers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510316013.4A
Other languages
Chinese (zh)
Other versions
CN106294505A (en)
Inventor
周光有
肖磊
张小鹏
王巨宏
管刚
刘婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Central China Normal University
Original Assignee
Tencent Technology Shenzhen Co Ltd
Central China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd and Central China Normal University
Priority to CN201510316013.4A
Publication of CN106294505A
Application granted
Publication of CN106294505B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3325Reformulation based on results of preceding query
    • G06F16/3326Reformulation based on results of preceding query using relevance feedback from the user, e.g. relevance feedback on documents, documents sets, document terms or passages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a method and a device for feeding back answers, belonging to the field of computer technology. The method comprises the following steps: training semantic extraction parameters in a preset semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in a training sample library, based on the training condition that the semantic proximity of a question to its corresponding best answer is greater than the semantic proximity of the question to its corresponding other answers, to obtain training values of the semantic extraction parameters; when an answer request carrying a target question is received, determining the semantic proximity of each answer in an answer query library to the target question according to the target question, each answer, the semantic extraction formula and the training values of the semantic extraction parameters; and selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the target answer in response to the answer request. By adopting the method and the device, the accuracy of the answers fed back by the server can be improved.

Description

Answer feedback method and device
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for feeding back answers.
Background
With the development of computers and information retrieval technologies, people tend to seek answers to a certain question by means of computers, and accordingly, the use of question-answering systems is becoming more widespread.
The existing community question-answering system is generally implemented as follows: the user inputs a question through the terminal; the server retrieves all pre-stored answers from the answer query library; for each answer, the server determines the vocabulary it shares with the question input by the user and calculates the sum of the number of times each common word occurs in that answer, taking this sum as the text proximity of the answer to the question; the text proximity of every answer in the answer query library to the question is calculated in this way, and the answer with the greatest text proximity to the question is pushed to the user.
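As a concrete illustration of this word-overlap baseline, the sketch below scores answers by the summed counts of words they share with the question; the tokenization and the sample data are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of the word-overlap baseline described above (illustrative only).
def text_proximity(question, answer):
    q_words = set(question.lower().split())
    a_words = answer.lower().split()
    common = q_words.intersection(a_words)
    # Sum of the number of times each common word appears in the answer.
    return sum(a_words.count(w) for w in common)

answers = [
    "A fever is usually treated with rest and fluids.",
    "Antipyretic medicine and plenty of water help reduce a fever.",
]
question = "How do I treat a fever?"
best = max(answers, key=lambda a: text_proximity(question, a))
```

The vocabulary-gap problem described next is visible here: an answer that never repeats the question's words scores zero no matter how relevant it is.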
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
based on this implementation of the community question-answering system, when the server pushes answers to the user, the text proximity between the question and an answer is calculated mainly from the degree of word matching between them. However, the answer the user needs may share no common word with the question the user input (that is, a vocabulary gap exists), or the common words may occur only a few times, so the answer pushed to the user is unlikely to match the user's requirement, and the accuracy of the answer feedback performed by the server is low.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and an apparatus for feeding back answers. The technical scheme is as follows:
in a first aspect, a method for feeding back answers is provided, the method including:
training semantic extraction parameters in a preset semantic extraction formula according to corresponding relations among questions, best answers and other answers stored in a training sample library and based on training conditions that the semantic proximity of the questions and the corresponding best answers is larger than that of the questions and the corresponding other answers, and obtaining training values of the semantic extraction parameters;
when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer and the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameter;
and selecting a target answer from the answers according to the semantic proximity of the answers and the target question, and feeding back the answer request.
In a second aspect, there is provided an apparatus for feeding back answers, the apparatus including:
the training module is used for training semantic extraction parameters in a preset semantic extraction formula according to corresponding relations among questions, best answers and other answers stored in a training sample library and based on training conditions that the semantic proximity of the questions and the corresponding best answers is greater than that of the questions and the corresponding other answers, and training values of the semantic extraction parameters are obtained;
the determining module is used for respectively determining the semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameter when receiving an answer request carrying the target question;
and the feedback module is used for selecting a target answer from the answers according to the semantic proximity of the answers to the target question and feeding back the answer request.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, according to the corresponding relation of a question, a best answer and other answers stored in a training sample library, training semantic extraction parameters in a preset semantic extraction formula on the basis of a training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers to obtain a training value of the semantic extraction parameters, when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameters, selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the answer request. Therefore, the answer is selected based on the semantic proximity, the problem of vocabulary gap existing between the question and the answer is avoided, and the accuracy of the answer fed back aiming at the question can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of a method for providing answer feedback according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a training process provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for feeding back answers according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Example one
An embodiment of the present invention provides a method for feeding back an answer. As shown in fig. 1, the processing flow of the method may include the following steps:
step 101, training semantic extraction parameters in a preset semantic extraction formula according to corresponding relations among questions, best answers and other answers stored in a training sample library and based on training conditions that semantic proximity between the questions and the corresponding best answers is greater than that between the questions and the corresponding other answers, and obtaining training values of the semantic extraction parameters.
Step 102, when an answer request carrying a target question is received, respectively determining semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, a semantic extraction formula and a training value of a semantic extraction parameter.
And 103, selecting a target answer from the answers according to the semantic proximity of each answer and the target question, and feeding back the answer request.
In the embodiment of the invention, according to the corresponding relation of a question, a best answer and other answers stored in a training sample library, training semantic extraction parameters in a preset semantic extraction formula on the basis of a training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers to obtain a training value of the semantic extraction parameters, when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameters, selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the answer request. Therefore, the answer is selected based on the semantic proximity, the problem of vocabulary gap existing between the question and the answer is avoided, and the accuracy of the answer fed back aiming at the question can be improved.
Example two
The embodiment of the invention provides a method for feeding back answers. The execution subject of the method can be a server, for example the server of a community question-and-answer website or application. The server can be provided with a processor, a memory and a transceiver: the processor can be used for training the semantic extraction parameters and for the processing that feeds back answers to questions, the memory can be used for storing the data required by and generated in the following processing, and the transceiver can be used for receiving and sending data. The process flow shown in fig. 1 will be described in detail below with reference to specific embodiments, and the contents may be as follows:
step 101, training semantic extraction parameters in a preset semantic extraction formula according to corresponding relations among questions, best answers and other answers stored in a training sample library and based on training conditions that semantic proximity between the questions and the corresponding best answers is greater than that between the questions and the corresponding other answers, and obtaining training values of the semantic extraction parameters.
When big data is processed, semantics of a statement (such as a question, an answer, and the like) may be quantized, and the semantic extraction formula may be a formula for extracting the semantics of the question or the answer. The semantic extraction parameters may be constant coefficients in a semantic extraction formula, which may be determined through a training process. Semantic proximity may be the proximity of a question to an answer at the semantic (i.e., the expressive meaning of a sentence) level.
In an embodiment, the server may obtain questions and their corresponding answers from the internet and store them in the training sample library, for example from a community question-answering system. For each question in the training sample library there is a certain number of corresponding answers, including the best answer (generally the answer selected by the user who posed the question) and other answers. Each word in the library has a word vector (also called a distributed representation), a d-dimensional vector (d may be 50), where the value in a dimension indicates the likelihood that the word expresses a particular semantic. For example, the word vector of a given word may be [0.5; 0.8; ……], where the first dimension may be the likelihood that the word denotes an animal, with the value 0.5, and the second dimension may be the likelihood that the word denotes a vehicle, with the value 0.8, and so on. A word that is not contained in the library may be assigned a zero vector. For each question or answer in the training sample library, the word vectors of the words it contains can be arranged into a word matrix that represents that question or answer.
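A minimal sketch of building such a word matrix, assuming a hypothetical word-vector library and d = 50; in the patent these vectors would come with the training sample library rather than being generated randomly.

```python
import numpy as np

d = 50  # word-vector dimensionality, as suggested above

# Hypothetical word-vector library (random values for illustration only).
word_vectors = {
    "fever": np.random.randn(d),
    "treat": np.random.randn(d),
}

def word_matrix(sentence):
    """Stack the word vectors of a question or answer into a word matrix.
    Words not contained in the library are mapped to a zero vector."""
    rows = [word_vectors.get(w, np.zeros(d)) for w in sentence.lower().split()]
    return np.vstack(rows)          # shape: (num_words, d)

E = word_matrix("How do I treat a fever")
```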
The server can vectorize the word matrix corresponding to each question or answer in the training sample library to obtain a vector representing that question or answer, denoted E_x, where the subscript x is either a question (denoted q) or an answer (denoted a); that is, E_q is the vector obtained by vectorizing the word matrix corresponding to a question, and E_a is the vector obtained by vectorizing the word matrix corresponding to an answer. After the server obtains E_q and E_a for each question and answer in the training sample library, the semantics represented by each question or answer can be extracted from E_x using the following semantic extraction formula:

z = f(W · E_x + b) ……(1)

where z may be called a semantic vector, which characterizes the semantics of the question or answer; W may be called a weighting matrix and b a bias vector, and W and b, referred to as the semantic extraction parameters, act on E_x together to extract the semantics represented by the question or answer; f(·) is a nonlinear function used in the extraction and may be chosen as the S (sigmoid) function, a hyperbolic function, a rectification function, and so on. Here the sigmoid function is taken as an example; that is, f(·), W and b work together to extract the semantics characterized by the question or answer.
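A minimal sketch of formula (1), assuming the sigmoid is chosen for f(·) and that the word matrix has already been vectorized into E_x; the dimensionalities and random parameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_semantics(E_x, W, b):
    """Formula (1): z = f(W . E_x + b), with f chosen as the sigmoid."""
    return sigmoid(W @ E_x + b)

d_in, d_sem = 50, 20                      # hypothetical dimensionalities
W = 0.1 * np.random.randn(d_sem, d_in)    # weighting matrix (semantic extraction parameter)
b = np.zeros(d_sem)                       # bias vector (semantic extraction parameter)
E_x = np.random.randn(d_in)               # vectorized word matrix of a question or answer
z = extract_semantics(E_x, W, b)          # semantic vector
```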
For each question and its corresponding answers in the training sample library, after the server obtains the vector z, the semantic proximity between the question and each answer can be calculated; this semantic proximity can be represented by the cosine of the angle between the two semantic vectors. The weighting matrix and the bias vector in formula (1) are then trained under the condition that the semantic proximity of the question to its corresponding best answer is greater than the semantic proximity of the question to its corresponding other answers, so as to obtain the final training values.
Alternatively, the semantic extraction parameters in formula (1) may be obtained by increasing the sum, over the other answers, of the differences between the semantic proximity of the question to its corresponding best answer and the semantic proximity of the question to each corresponding other answer. Accordingly, the processing procedure in step 101 may be as follows: according to the correspondence among the questions, best answers and other answers stored in the training sample library, the semantic extraction parameters in the preset semantic extraction formula are trained on the basis of the training condition of increasing the sum of the differences obtained by subtracting the semantic proximity of the question to each corresponding other answer from the semantic proximity of the question to the corresponding best answer, so as to obtain the training values of the semantic extraction parameters.
In the implementation, a question in the sample training library and its corresponding best answer and other answers are obtained, where the best answer corresponding to the question can be denoted a+ and the other answers corresponding to the question can be denoted a_j^-, with j representing the j-th other answer; j may be any integer from 1 to the total number of other answers corresponding to the question, e.g. j = 1, 2, … N when the question corresponds to N answers other than the best answer. An objective function is established with the obtained question, its corresponding best answer and its other answers as training data, and the established objective function is trained to obtain training values of the semantic extraction parameters.
The training process that uses a question in the sample training library together with its corresponding best answer and other answers as training data is as follows: semantic extraction is carried out on the question and on each answer according to formula (1), and the resulting semantic vectors are denoted z_q and z_a respectively. After the semantic vectors of the question and of all the answers corresponding to it are obtained, the semantic proximity between each answer and the question can be calculated according to formula (2):

sim(q, a) = cos(z_q, z_a) = (z_q · z_a) / (||z_q|| · ||z_a||) ……(2)

where sim(q, a) represents the semantic proximity of the question q to an answer a; formula (2) expresses the semantic proximity of the question to each of its answers as the cosine of the angle between their semantic vectors. The server may then establish a loss function according to formula (3):

L(q, a) = Σ_{j=1…N} [ sim(q, a+) − sim(q, a_j^-) ] ……(3)

where L(q, a) represents the sum of the differences obtained by subtracting the semantic proximity of the question to each of its other answers from the semantic proximity of the question to its best answer, sim(q, a+) represents the semantic proximity of the question to its corresponding best answer, and sim(q, a_j^-) represents the semantic proximity of the question to its j-th corresponding other answer. Formula (3) is the first objective function to be trained. Initial values are set for the semantic extraction parameters contained in formula (3), and the first objective function is trained by a gradient descent method to obtain training values of the semantic extraction parameters W and b contained in formula (3). At this point, with this question and its corresponding answers taken as training data, the training of the objective function established on that training data ends, and the training values of the semantic extraction parameters W and b are obtained.
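A sketch of formulas (2) and (3) for one question with its best answer and N other answers; maximizing this objective (or minimizing its negative by gradient descent) is one way to realize the training condition stated above. The random vectors stand in for semantic vectors produced by formula (1).

```python
import numpy as np

def sim(z_q, z_a):
    """Formula (2): cosine of the angle between the semantic vectors."""
    return float(z_q @ z_a / (np.linalg.norm(z_q) * np.linalg.norm(z_a)))

def first_objective(z_q, z_best, z_others):
    """Formula (3): sum over the other answers of sim(q, a+) - sim(q, a_j-);
    training increases this sum."""
    return sum(sim(z_q, z_best) - sim(z_q, z_j) for z_j in z_others)

d_sem = 20
z_q = np.random.randn(d_sem)
z_best = np.random.randn(d_sem)
z_others = [np.random.randn(d_sem) for _ in range(5)]
L = first_objective(z_q, z_best, z_others)
```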
The server then obtains the next question in the sample training library together with its corresponding best answer and other answers, takes them as training data, and establishes the first objective function according to the above training process. Using the BP (back propagation) algorithm, the first objective function is trained with the previously obtained training values of the semantic extraction parameters W and b as its initial values, giving new training values of W and b, which serve as the initial values for the next round of training. This recurs in turn until every question in the sample training library, with its corresponding best answer and other answers, has been used for training; the whole training process then ends, and the final training values of the semantic extraction parameters W and b are obtained and stored.
In addition, in order to reduce the complexity of computing the gradient of the objective function, the objective function used in training may also adopt the form shown in formula (4) (not reproduced here). The physical meaning represented by formula (4) is approximately the same as that of formula (3), and the training is likewise based on the principle that the semantic proximity of the question to its corresponding best answer is greater than the semantic proximity of the question to its corresponding other answers.
optionally, the initial value of the training process may be determined through another training process, and accordingly, the training process may be as follows: according to all questions and all answers stored in a training sample library, training semantic extraction parameters in a preset semantic extraction formula based on training conditions for reducing the difference degree between sentences obtained after reverse processing of semantic extraction and semantic extraction is performed on sentence sequences and sentences before semantic extraction, and obtaining intermediate training values of the semantic extraction parameters; wherein the statement is a question or an answer; and secondly, training the semantic extraction parameters in the semantic extraction formula by taking the middle training value of the semantic extraction parameters as an initial input value according to the corresponding relation among the questions, the best answers and other answers stored in the training sample library and based on the training condition that the semantic proximity between the questions and the corresponding best answers is greater than that between the questions and the corresponding other answers, so as to obtain the training values of the semantic extraction parameters.
Sequentially performing semantic extraction and its inverse processing means: semantic extraction of a sentence according to formula (1), which may be referred to as the encoding of the sentence, followed by the inverse processing of formula (1), which yields a vector E′_x with the same dimensionality as the E_x before encoding; this may be called the decoding process. The whole encoding-and-decoding process can be realized with a denoising auto-encoder, which can be regarded as a special neural network.
In implementation, as shown in fig. 2, the process of step one is as follows: a certain question, or one of the answers of a certain question, in the sample training library is obtained, giving the corresponding E_x; E_x is then corrupted in a certain proportion, for example by forcibly setting some of its values to zero, to obtain the corrupted vector Ê_x. Semantic extraction is performed on Ê_x according to formula (1) to obtain the semantic vector z of the question or answer, where z involves the semantic extraction parameters W and b; that is, Ê_x is encoded to obtain the corresponding semantic vector z. The server then applies the inverse transformation g(·) to z, i.e. inversely transforms the semantic vector z obtained by encoding, yielding g(f(·)); this is the decoding process. Based on E′_x obtained by the reconstruction decoding and E_x before encoding, the following formula (5) is established as the second objective function:
L(g(f(·)), E_x) = ||g(f(·)) − E_x||² ……(5)

where formula (5) represents the squared modulus of the difference vector between E′_x, which is obtained by encoding and then decoding Ê_x with the selected semantic extraction parameters, and E_x before corruption. The smaller the value of formula (5), the more accurately the obtained semantic extraction parameters can express the semantics of the sentence. Initial values are set for the semantic extraction parameters contained in formula (5), and the second objective function is trained by a gradient descent method to obtain training values of the semantic extraction parameters.
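A minimal sketch of the step-one pretraining objective of formula (5): a fraction of E_x is forced to zero, the corrupted vector is encoded with formula (1), decoded with g(·), and the squared reconstruction error against the uncorrupted E_x is measured. The linear decoder parameters W_dec, b_dec and the corruption rate are assumptions; the patent only states that g(·) inverts the encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_sem = 50, 20
W, b = 0.1 * rng.standard_normal((d_sem, d_in)), np.zeros(d_sem)          # encoder, formula (1)
W_dec, b_dec = 0.1 * rng.standard_normal((d_in, d_sem)), np.zeros(d_in)   # decoder g(.), assumed linear

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def corrupt(E_x, rate=0.3):
    """Force a proportion of the entries of E_x to zero (the 'damage' step)."""
    mask = rng.random(E_x.shape) > rate
    return E_x * mask

def reconstruction_loss(E_x):
    """Formula (5): || g(f(corrupted E_x)) - E_x ||^2."""
    z = sigmoid(W @ corrupt(E_x) + b)       # encoding
    E_rec = W_dec @ z + b_dec               # decoding
    return float(np.sum((E_rec - E_x) ** 2))

loss = reconstruction_loss(rng.standard_normal(d_in))
```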
The server then acquires the other questions and answers in the sample training library one by one, takes each as training data, establishes the second objective function according to the above training process, and, using the BP algorithm, trains it with the previously obtained training values of the semantic extraction parameters W and b as initial values. This recurs in turn until all the questions and answers in the sample training library have been used for training; the whole training process then ends, and the final training values of the semantic extraction parameters W and b for step one are obtained.
The training values of the semantic extraction parameters obtained in step one are taken as the intermediate training values of the whole training process and used as the initial values of the training in step two; training then continues according to step two to obtain the final semantic extraction parameters W and b, which are stored. The training process of step two may be the training described in step 101 above, i.e. training based on the condition of increasing the sum of the differences obtained by subtracting the semantic proximity of the question to each of its corresponding other answers from the semantic proximity of the question to its corresponding best answer; the corresponding processing can refer to the detailed description in step 101 and will not be repeated here.
Optionally, when the semantic extraction parameters are trained, the questions and the answers in the sample training library may be trained respectively to obtain respective semantic extraction parameters of the questions and the answers, and correspondingly, the processing procedure in the first step may be as follows: training problem semantic extraction parameters in a preset problem semantic extraction formula based on training conditions for reducing the difference between sentences obtained after reverse processing of semantic extraction and semantic extraction is performed on the problem sequence and sentences before the semantic extraction according to all problems stored in a training sample library to obtain intermediate training values of the problem semantic extraction parameters; according to all answers stored in a training sample library, training answer semantic extraction parameters in a preset answer semantic extraction formula based on training conditions for reducing the difference degree between sentences obtained after reverse processing of semantic extraction and semantic extraction is sequentially performed on the sentences and the sentences before the semantic extraction, and obtaining intermediate training values of the answer semantic extraction parameters; the processing flow of the second step may be as follows: according to the corresponding relation of the question, the best answer and other answers stored in a training sample library, based on the training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers, taking the intermediate training value of the question semantic extraction parameter and the intermediate training value of the answer semantic extraction parameter as initial input values, and training the question semantic extraction parameter in the question semantic extraction formula and the answer semantic extraction parameter in the answer semantic extraction formula to obtain the training value of the question semantic extraction parameter and the training value of the answer semantic extraction parameter.
In implementation, during the training of step one, the questions and the answers in the sample training library may use different W and b for the semantic extraction according to formula (1); that is, the questions in the sample training library may use one pair W and b (which may be denoted W_1, b_1) for semantic extraction, while the answers corresponding to the questions may use another pair (which may be denoted W_2, b_2). Objective functions are established and trained separately in the manner of step one, giving training values of W_1, b_1 and W_2, b_2, which are taken as the intermediate training values of the whole training process and used as the initial values of step two. Training then continues according to the processing flow of step two to obtain the final semantic extraction parameters W_1, b_1 and W_2, b_2, which are stored. In the training of step two, the semantic vectors of the question, its corresponding best answer and its other answers are calculated, and when the semantic proximity is calculated from these semantic vectors according to formula (2), it involves the four semantic extraction parameters W_1, b_1, W_2 and b_2. The corresponding processing can refer to the detailed descriptions in step one and step two and will not be repeated here.
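A sketch of this variant in which questions and answers use separate semantic extraction parameters; the dimensionalities and random initial values are hypothetical, and in practice the pretrained values from step one would be used.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_in, d_sem = 50, 20
W1, b1 = 0.1 * np.random.randn(d_sem, d_in), np.zeros(d_sem)  # question parameters
W2, b2 = 0.1 * np.random.randn(d_sem, d_in), np.zeros(d_sem)  # answer parameters

def question_semantics(E_q):
    return sigmoid(W1 @ E_q + b1)

def answer_semantics(E_a):
    return sigmoid(W2 @ E_a + b2)

def sim(z_q, z_a):  # formula (2)
    return float(z_q @ z_a / (np.linalg.norm(z_q) * np.linalg.norm(z_a)))

s = sim(question_semantics(np.random.randn(d_in)), answer_semantics(np.random.randn(d_in)))
```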
Step 102, when an answer request carrying a target question is received, respectively determining semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, a semantic extraction formula and a training value of a semantic extraction parameter.
The target question may be a question which is input by the user through the terminal and is required to obtain an answer, and the answer query library may be the sample training library or a library storing some answers acquired by the server from the internet, and is used for the server to select an answer matching the target question.
In implementation, when a user inputs a target question through the terminal and sends an answer request to the server, the server receives the answer request and parses it to obtain the target question carried in it. The stored training values of the semantic extraction parameters are substituted into formula (1), and the semantic vectors of the target question and of each answer in the answer query library are calculated according to formula (1). After these semantic vectors are obtained, the semantic proximity of each answer in the answer query library to the target question can be calculated according to formula (2).
Optionally, the training is performed for the above questions and answers respectively; accordingly, the processing procedure when the server receives the answer request sent by the terminal may be as follows: when an answer request carrying a target question is received, according to the target question, all answers in an answer query library, the question semantic extraction formula, the answer semantic extraction formula, training values of the question semantic extraction parameters and training values of the answer semantic extraction parameters, semantic proximity of all the answers to the target question is respectively determined.
In implementation, after obtaining the respective semantic extraction parameters of the questions and of the answers, when the server receives an answer request carrying a target question it may calculate the semantic vector of the target question according to formula (1) with the semantic extraction parameters corresponding to questions, and the semantic vector of each answer in the answer query library according to formula (1) with the semantic extraction parameters corresponding to answers. After these semantic vectors are determined, the semantic proximity between each answer in the answer query library and the target question can be calculated according to formula (2).
And 103, selecting a target answer from the answers according to the semantic proximity of each answer and the target question, and feeding back the answer request.
The target answer may be the answer or answers in the answer query library that match the target question; it may be a single answer or several answers.
In implementation, after obtaining the semantic proximity of each answer in the answer query library to the target question, the server may sort the obtained proximities in descending order and select the answer with the largest semantic proximity as the target answer, or select the answers corresponding to the first several proximities in the sorted order as the target answers; after selecting the target answer, the server feeds it back to the user through the terminal.
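A sketch of this query-time flow of steps 102 and 103: the stored training values are used to encode the target question and every answer in the answer query library, proximities are sorted in descending order, and the top answers are returned. The encode function and top_k value are illustrative stand-ins for formula (1) with the trained parameters.

```python
import numpy as np

def rank_answers(target_question, answer_library, encode, top_k=3):
    """Encode the target question and every stored answer, compute the
    semantic proximity of formula (2), and return the top answers."""
    z_q = encode(target_question)
    scored = []
    for answer in answer_library:
        z_a = encode(answer)
        proximity = float(z_q @ z_a / (np.linalg.norm(z_q) * np.linalg.norm(z_a)))
        scored.append((proximity, answer))
    scored.sort(key=lambda p: p[0], reverse=True)
    return scored[:top_k]

# Toy usage with a lookup standing in for the trained encoder.
toy = {"q": np.array([1.0, 0.0]), "a1": np.array([0.9, 0.1]), "a2": np.array([0.0, 1.0])}
best = rank_answers("q", ["a1", "a2"], encode=lambda s: toy[s], top_k=1)
```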
Optionally, the obtained semantic proximity may be combined with some features based on vocabulary matching, and accordingly, the processing flow of step 103 may be as follows: and selecting a target answer from the answers according to the semantic proximity of the answers to the target question and the text proximity of the answers to the target question, and feeding back the answer request.
Where textual proximity may be the proximity of each answer and target question based on lexical matching.
In implementation, after obtaining and storing the semantic proximity of each answer in the answer query library to the target question, the server calculates the text proximity of each answer in the answer query library to the target question based on vocabulary matching, according to the formulas shown in formulas (6) to (16).
(Formulas (6) to (16), which define the vocabulary-matching text proximity features, are not reproduced here.)
In these formulas, c(q_i, a) may be the number of occurrences of q_i in answer a, df(q_i) may be the number of answers in the answer query library in which q_i occurs, |a| may be the number of words contained in answer a, C may be an answer in the answer query library and |C| the number of words it contains, k_1 ∈ [1.2, 2.0], b = 0.75, and avg|C| may be the average number of words contained in the answers in the answer query library. After the text proximity of each answer to the target question is obtained, it is put, together with the previously determined semantic proximity of each answer to the target question, into a learning-to-rank framework, for example an SVM ranking algorithm, to obtain a comprehensive ranking of the answers in the answer query library with respect to the target question. That is, the semantic proximity feature and the vocabulary-matching text proximity features shown in the above 11 formulas are used together to obtain the proximity of each answer to the target question, where the weights of these 12 features can be assigned manually according to empirical values, or trained on the samples in the sample training library with the SVM ranking algorithm to obtain a weight for each feature. The answer corresponding to the largest proximity is fed back to the user through the terminal, or the answers corresponding to the first several proximities in the ranking may be fed back to the user through the terminal.
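A sketch of this combination step, with the semantic proximity and a lexical feature merged by a weighted sum. The BM25-style statistic below is only an illustrative stand-in, since formulas (6) to (16) are not reproduced here, and the fixed weights stand in for weights that the patent says may come from empirical values or from training a ranking SVM.

```python
import math

def bm25(question_words, answer_words, library, k1=1.5, b=0.75):
    """Illustrative BM25-style lexical feature (a stand-in for formulas (6)-(16))."""
    N = len(library)
    avg_len = sum(len(a) for a in library) / N
    score = 0.0
    for w in set(question_words):
        df = sum(1 for a in library if w in a)      # answers containing the word
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        tf = answer_words.count(w)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(answer_words) / avg_len))
    return score

def combined_proximity(semantic_proximity, lexical_features, weights):
    """Weighted combination of the semantic feature and the lexical features."""
    features = [semantic_proximity] + lexical_features
    return sum(w * f for w, f in zip(weights, features))

library = [["rest", "and", "fluids"], ["antipyretic", "medicine", "and", "water"]]
lex = bm25(["treat", "fever", "medicine"], library[1], library)
score = combined_proximity(0.8, [lex], weights=[0.6, 0.4])
```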
In the embodiment of the invention, according to the corresponding relation of a question, a best answer and other answers stored in a training sample library, training semantic extraction parameters in a preset semantic extraction formula on the basis of a training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers to obtain a training value of the semantic extraction parameters, when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameters, selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the answer request. Therefore, the answer is selected based on the semantic proximity, the problem of vocabulary gap existing between the question and the answer is avoided, and the accuracy of the answer fed back aiming at the question can be improved.
EXAMPLE III
Based on the same technical concept, an embodiment of the present invention further provides an apparatus for feeding back answers, as shown in fig. 3, the apparatus includes:
the training module 310 is configured to train semantic extraction parameters in a preset semantic extraction formula according to corresponding relations among questions, best answers and other answers stored in a training sample library, based on a training condition that semantic proximity between a question and a corresponding best answer is greater than semantic proximity between a question and a corresponding other answer, and obtain a training value of the semantic extraction parameters;
a determining module 320, configured to, when an answer request with a target question is received, determine semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, the semantic extraction formula, and a training value of the semantic extraction parameter;
a feedback module 330, configured to select a target answer from the answers according to semantic proximity between the answers and the target question, and feed back the answer request.
Optionally, the training module 310 is configured to:
according to the corresponding relation of the question, the best answer and other answers stored in the training sample library, training semantic extraction parameters in a preset semantic extraction formula on the basis of a training condition of increasing the sum of the difference value of the semantic proximity of the question to the corresponding best answer minus the semantic proximity of the question to the corresponding other answers, and obtaining a training value of the semantic extraction parameters.
Optionally, the training module 310 is configured to:
training semantic extraction parameters in a preset semantic extraction formula based on training conditions for reducing the difference degree between sentences obtained after reverse processing of semantic extraction and semantic extraction is sequentially performed on the sentences and the sentences before the semantic extraction according to all the problems and all the answers stored in a training sample library to obtain intermediate training values of the semantic extraction parameters; wherein the statement is a question or an answer;
according to the corresponding relation of the question, the best answer and other answers stored in the training sample library, training the semantic extraction parameters in the semantic extraction formula by taking the middle training value of the semantic extraction parameters as an initial input value on the basis of the training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers, and obtaining the training values of the semantic extraction parameters.
Optionally, the training module 310 is configured to:
training problem semantic extraction parameters in a preset problem semantic extraction formula based on training conditions for reducing the difference between sentences obtained after reverse processing of semantic extraction and semantic extraction is performed on the problem sequence and sentences before the semantic extraction according to all problems stored in a training sample library to obtain intermediate training values of the problem semantic extraction parameters;
according to all answers stored in a training sample library, training answer semantic extraction parameters in a preset answer semantic extraction formula based on training conditions for reducing the difference degree between sentences obtained after reverse processing of semantic extraction and semantic extraction is sequentially performed on the sentences and the sentences before the semantic extraction, and obtaining intermediate training values of the answer semantic extraction parameters;
according to the corresponding relation of the question, the best answer and other answers stored in a training sample library, based on the training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers, taking the intermediate training value of the question semantic extraction parameter and the intermediate training value of the answer semantic extraction parameter as initial input values, and training the question semantic extraction parameter in the question semantic extraction formula and the answer semantic extraction parameter in the answer semantic extraction formula to obtain the training value of the question semantic extraction parameter and the training value of the answer semantic extraction parameter;
the determining module 320 is configured to:
when an answer request carrying a target question is received, according to the target question, all answers in an answer query library, the question semantic extraction formula, the answer semantic extraction formula, training values of the question semantic extraction parameters and training values of the answer semantic extraction parameters, semantic proximity of all the answers to the target question is respectively determined.
Optionally, the feedback module 330 is configured to:
and selecting a target answer from the answers according to the semantic proximity of the answers to the target question and the text proximity of the answers to the target question, and feeding back the answer request.
In the embodiment of the invention, according to the corresponding relation of a question, a best answer and other answers stored in a training sample library, training semantic extraction parameters in a preset semantic extraction formula on the basis of a training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers to obtain a training value of the semantic extraction parameters, when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameters, selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the answer request. Therefore, the answer is selected based on the semantic proximity, the problem of vocabulary gap existing between the question and the answer is avoided, and the accuracy of the answer fed back aiming at the question can be improved.
It should be noted that: in the apparatus for feeding back an answer provided in the above embodiment, when feeding back an answer, only the division of the above functional modules is exemplified, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the apparatus for feeding back answers and the method for feeding back answers provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Example four
Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 1900, which may vary widely in configuration or performance, may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors) and memory 1932, one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. Memory 1932 and storage medium 1930 can be, among other things, transient or persistent storage. The program stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instructions operating on a statistics server. Still further, the central processor 1922 may be configured to communicate with the storage medium 1930 to execute a series of instruction operations in the storage medium 1930 on the statistics server 1900.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input-output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
Server 1900 may include memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for:
training semantic extraction parameters in a preset semantic extraction formula according to corresponding relations among questions, best answers and other answers stored in a training sample library and based on training conditions that the semantic proximity of the questions and the corresponding best answers is larger than that of the questions and the corresponding other answers, and obtaining training values of the semantic extraction parameters;
when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer and the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the training value of the semantic extraction parameter;
and selecting a target answer from the answers according to the semantic proximity of the answers and the target question, and feeding back the answer request.
Optionally, the training of the semantic extraction parameters in the preset semantic extraction formula according to the corresponding relationship among the questions, the best answers and other answers stored in the training sample library based on the training condition that the semantic proximity between the question and the corresponding best answer is greater than the semantic proximity between the question and the corresponding other answer to obtain the training values of the semantic extraction parameters includes:
according to the corresponding relation of the question, the best answer and other answers stored in the training sample library, training semantic extraction parameters in a preset semantic extraction formula on the basis of a training condition of increasing the sum of the difference value of the semantic proximity of the question to the corresponding best answer minus the semantic proximity of the question to the corresponding other answers, and obtaining a training value of the semantic extraction parameters.
Optionally, the training of the semantic extraction parameters in the preset semantic extraction formula according to the corresponding relationship among the questions, the best answers and other answers stored in the training sample library based on the training condition that the semantic proximity between the question and the corresponding best answer is greater than the semantic proximity between the question and the corresponding other answer to obtain the training values of the semantic extraction parameters includes:
training semantic extraction parameters in a preset semantic extraction formula based on training conditions for reducing the difference degree between sentences obtained after reverse processing of semantic extraction and semantic extraction is sequentially performed on the sentences and the sentences before the semantic extraction according to all the problems and all the answers stored in a training sample library to obtain intermediate training values of the semantic extraction parameters; wherein the statement is a question or an answer;
according to the corresponding relation of the question, the best answer and other answers stored in the training sample library, training the semantic extraction parameters in the semantic extraction formula by taking the middle training value of the semantic extraction parameters as an initial input value on the basis of the training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers, and obtaining the training values of the semantic extraction parameters.
Optionally, the training, according to each question and each answer stored in the training sample library, on the basis of a training condition that reduces a difference between a sentence obtained by performing inverse processing of semantic extraction and semantic extraction on a sentence sequence and a sentence before semantic extraction, a semantic extraction parameter in a preset semantic extraction formula is trained to obtain an intermediate training value of the semantic extraction parameter, including:
training problem semantic extraction parameters in a preset problem semantic extraction formula based on training conditions for reducing the difference between sentences obtained after reverse processing of semantic extraction and semantic extraction is performed on the problem sequence and sentences before the semantic extraction according to all problems stored in a training sample library to obtain intermediate training values of the problem semantic extraction parameters;
according to all answers stored in a training sample library, training answer semantic extraction parameters in a preset answer semantic extraction formula based on training conditions for reducing the difference degree between sentences obtained after reverse processing of semantic extraction and semantic extraction is sequentially performed on the sentences and the sentences before the semantic extraction, and obtaining intermediate training values of the answer semantic extraction parameters;
the training method comprises the following steps of training semantic extraction parameters in a semantic extraction formula by taking a middle training value of the semantic extraction parameters as an initial input value according to corresponding relations among questions, best answers and other answers stored in a training sample library and based on a training condition that the semantic proximity between the questions and the corresponding best answers is greater than that between the questions and the corresponding other answers, and obtaining the training values of the semantic extraction parameters, wherein the training method comprises the following steps:
according to the corresponding relation of the question, the best answer and other answers stored in a training sample library, based on the training condition that the semantic proximity of the question to the corresponding best answer is greater than that of the question to the corresponding other answers, taking the intermediate training value of the question semantic extraction parameter and the intermediate training value of the answer semantic extraction parameter as initial input values, and training the question semantic extraction parameter in the question semantic extraction formula and the answer semantic extraction parameter in the answer semantic extraction formula to obtain the training value of the question semantic extraction parameter and the training value of the answer semantic extraction parameter;
the determining, when an answer request carrying a target question is received, of the semantic proximity of each answer to the target question according to the target question, the answers in the answer query library, the semantic extraction formula and the training values of the semantic extraction parameters includes:
when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, the answers in the answer query library, the question semantic extraction formula, the answer semantic extraction formula, the training values of the question semantic extraction parameters and the training values of the answer semantic extraction parameters.
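To make the second training stage concrete, here is a minimal sketch that is likewise not taken from the patent: the question extraction parameters E_q and the answer extraction parameters E_a are initialised from intermediate values obtained as above, semantic proximity is taken to be the dot product of the extracted semantics, and the condition that a question be semantically closer to its best answer than to the other answers is enforced with a hinge margin; the dot-product proximity, the margin and the learning settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, sem_dim = 50, 8

# Stand-ins for the intermediate training values from the reconstruction pretraining.
E_q = rng.normal(scale=0.1, size=(sem_dim, vocab_size))   # question semantic extraction parameters
E_a = rng.normal(scale=0.1, size=(sem_dim, vocab_size))   # answer semantic extraction parameters

def semantic_proximity(q_vec, a_vec, E_q, E_a):
    """Proximity of a question and an answer as the dot product of their extracted semantics."""
    return float((E_q @ q_vec) @ (E_a @ a_vec))

# Toy training triples: (question, best answer, other answer) as bag-of-words vectors.
triples = [(rng.random(vocab_size), rng.random(vocab_size), rng.random(vocab_size))
           for _ in range(100)]

lr, margin = 0.01, 0.5
for epoch in range(50):
    for q, a_best, a_other in triples:
        gap = (semantic_proximity(q, a_best, E_q, E_a)
               - semantic_proximity(q, a_other, E_q, E_a))
        if gap < margin:                        # best answer not yet sufficiently closer
            # gradient ascent on the proximity gap
            grad_q = np.outer(E_a @ (a_best - a_other), q)
            grad_a = np.outer(E_q @ q, a_best - a_other)
            E_q += lr * grad_q
            E_a += lr * grad_a

# E_q and E_a now hold the final training values used at query time.
```

The same semantic_proximity function would then be evaluated, at query time, between the target question and every answer in the answer query library.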
Optionally, the selecting of a target answer from the answers according to the semantic proximity of each answer to the target question, and the feeding back of the target answer in response to the answer request, includes:
selecting a target answer from the answers according to the semantic proximity of each answer to the target question and the text proximity of each answer to the target question, and feeding back the target answer in response to the answer request.
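How semantic proximity and text proximity are combined is not spelled out in this passage; the following minimal sketch assumes a simple word-overlap (Jaccard) text proximity and a fixed linear weighting, with the weight alpha, and the assumption that the semantic scores have been scaled to a comparable range, both purely illustrative.

```python
def text_proximity(question: str, answer: str) -> float:
    """A simple word-overlap (Jaccard) text proximity; illustrative only."""
    q_words, a_words = set(question.lower().split()), set(answer.lower().split())
    if not q_words or not a_words:
        return 0.0
    return len(q_words & a_words) / len(q_words | a_words)

def select_target_answer(question, answers, semantic_scores, alpha=0.7):
    """Pick the answer with the highest weighted combination of semantic and text proximity."""
    combined = [alpha * s + (1.0 - alpha) * text_proximity(question, a)
                for a, s in zip(answers, semantic_scores)]
    best_index = max(range(len(answers)), key=lambda i: combined[i])
    return answers[best_index]

# Example call; the semantic scores would come from the trained extraction parameters.
answers = ["reinstall the audio driver and reboot", "try turning the volume up"]
print(select_target_answer("no sound after reinstalling the driver", answers, [0.9, 0.4]))
```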
In the embodiment of the present invention, the semantic extraction parameters in a preset semantic extraction formula are trained according to the correspondence among the questions, best answers and other answers stored in a training sample library, on the basis of the training condition that the semantic proximity between a question and its corresponding best answer is greater than the semantic proximity between the question and the corresponding other answers, to obtain the training values of the semantic extraction parameters. When an answer request carrying a target question is received, the semantic proximity of each answer in an answer query library to the target question is determined according to the target question, the answers, the semantic extraction formula and the training values of the semantic extraction parameters; a target answer is selected from the answers according to the semantic proximity of each answer to the target question, and the target answer is fed back in response to the answer request. Because the answer is selected on the basis of semantic proximity, the vocabulary gap between the question and the answers is avoided, and the accuracy of the answer fed back for the question can be improved.
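Putting the pieces together at query time, the sketch below (again illustrative, reusing the parameter names from the sketches above) shows how an answer request carrying a target question might be served: the target question and every answer in the answer query library are mapped through the trained extraction parameters, their semantic proximities are computed, and the answers are ranked.

```python
import numpy as np

def rank_answers(target_q_vec, answer_vecs, E_q, E_a):
    """Return answer indices ordered from most to least semantically close to the target question."""
    q_sem = E_q @ target_q_vec
    scores = [float(q_sem @ (E_a @ a_vec)) for a_vec in answer_vecs]
    order = sorted(range(len(answer_vecs)), key=lambda i: scores[i], reverse=True)
    return order, scores

# Illustrative usage with random stand-ins for the trained parameters and the library.
rng = np.random.default_rng(2)
vocab_size, sem_dim, n_answers = 50, 8, 10
E_q = rng.normal(size=(sem_dim, vocab_size))
E_a = rng.normal(size=(sem_dim, vocab_size))
library = rng.random((n_answers, vocab_size))      # answer query library as bag-of-words vectors
target_question = rng.random(vocab_size)

order, scores = rank_answers(target_question, library, E_q, E_a)
print("index of the selected target answer:", order[0])
```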
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disk.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall fall within the scope of protection of the present invention.

Claims (8)

1. A method of feeding back answers, the method comprising:
training semantic extraction parameters in a preset semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in a training sample library, on the basis of a training condition of increasing the sum of the differences obtained by subtracting the semantic proximity between a question and the corresponding other answers from the semantic proximity between the question and the corresponding best answer, to obtain training values of the semantic extraction parameters; or training the semantic extraction parameters in the preset semantic extraction formula according to the questions and answers stored in the training sample library, on the basis of a training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on a sentence and that sentence before semantic extraction, to obtain intermediate training values of the semantic extraction parameters, wherein the sentence is a question or an answer, and then training the semantic extraction parameters in the semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in the training sample library, with the intermediate training values of the semantic extraction parameters as initial input values and on the basis of the training condition that the semantic proximity between a question and its corresponding best answer is greater than the semantic proximity between the question and the corresponding other answers, to obtain the training values of the semantic extraction parameters;
when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, the answers in an answer query library, the semantic extraction formula and the training values of the semantic extraction parameters;
and selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the target answer in response to the answer request.
2. The method according to claim 1, wherein the training of the semantic extraction parameters in the preset semantic extraction formula according to the questions and answers stored in the training sample library, on the basis of the training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on a sentence and that sentence before semantic extraction, to obtain the intermediate training values of the semantic extraction parameters comprises:
training question semantic extraction parameters in a preset question semantic extraction formula according to the questions stored in the training sample library, on the basis of a training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on a question and that question before semantic extraction, to obtain intermediate training values of the question semantic extraction parameters;
training answer semantic extraction parameters in a preset answer semantic extraction formula according to the answers stored in the training sample library, on the basis of a training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on an answer and that answer before semantic extraction, to obtain intermediate training values of the answer semantic extraction parameters;
the training of the semantic extraction parameters in the semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in the training sample library, with the intermediate training values of the semantic extraction parameters as initial input values and on the basis of the training condition that the semantic proximity between a question and its corresponding best answer is greater than the semantic proximity between the question and the corresponding other answers, to obtain the training values of the semantic extraction parameters comprises:
according to the correspondence among the questions, best answers and other answers stored in the training sample library, on the basis of the training condition that the semantic proximity between a question and its corresponding best answer is greater than the semantic proximity between the question and the corresponding other answers, taking the intermediate training values of the question semantic extraction parameters and the intermediate training values of the answer semantic extraction parameters as initial input values, training the question semantic extraction parameters in the question semantic extraction formula and the answer semantic extraction parameters in the answer semantic extraction formula to obtain the training values of the question semantic extraction parameters and the training values of the answer semantic extraction parameters;
the determining, when an answer request carrying a target question is received, of the semantic proximity of each answer to the target question according to the target question, the answers in the answer query library, the semantic extraction formula and the training values of the semantic extraction parameters comprises:
when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, the answers in the answer query library, the question semantic extraction formula, the answer semantic extraction formula, the training values of the question semantic extraction parameters and the training values of the answer semantic extraction parameters.
3. The method according to claim 1, wherein the selecting of a target answer from the answers according to the semantic proximity of each answer to the target question, and the feeding back of the target answer in response to the answer request, comprises:
selecting a target answer from the answers according to the semantic proximity of each answer to the target question and the text proximity of each answer to the target question, and feeding back the target answer in response to the answer request.
4. An apparatus for feeding back answers, the apparatus comprising:
the training module is used for training semantic extraction parameters in a preset semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in a training sample library, on the basis of a training condition of increasing the sum of the differences obtained by subtracting the semantic proximity between a question and the corresponding other answers from the semantic proximity between the question and the corresponding best answer, to obtain training values of the semantic extraction parameters; or for training the semantic extraction parameters in the preset semantic extraction formula according to the questions and answers stored in the training sample library, on the basis of a training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on a sentence and that sentence before semantic extraction, to obtain intermediate training values of the semantic extraction parameters, wherein the sentence is a question or an answer, and then training the semantic extraction parameters in the semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in the training sample library, with the intermediate training values of the semantic extraction parameters as initial input values and on the basis of the training condition that the semantic proximity between a question and its corresponding best answer is greater than the semantic proximity between the question and the corresponding other answers, to obtain the training values of the semantic extraction parameters;
the determining module is used for, when an answer request carrying a target question is received, respectively determining the semantic proximity of each answer to the target question according to the target question, the answers in an answer query library, the semantic extraction formula and the training values of the semantic extraction parameters;
and the feedback module is used for selecting a target answer from the answers according to the semantic proximity of each answer to the target question, and feeding back the target answer in response to the answer request.
5. The apparatus of claim 4, wherein the training module is configured to:
train question semantic extraction parameters in a preset question semantic extraction formula according to the questions stored in the training sample library, on the basis of a training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on a question and that question before semantic extraction, to obtain intermediate training values of the question semantic extraction parameters;
train answer semantic extraction parameters in a preset answer semantic extraction formula according to the answers stored in the training sample library, on the basis of a training condition that reduces the difference between the sentence obtained by sequentially performing semantic extraction and the inverse processing of semantic extraction on an answer and that answer before semantic extraction, to obtain intermediate training values of the answer semantic extraction parameters;
and, according to the correspondence among the questions, best answers and other answers stored in the training sample library, on the basis of the training condition that the semantic proximity between a question and its corresponding best answer is greater than the semantic proximity between the question and the corresponding other answers, take the intermediate training values of the question semantic extraction parameters and the intermediate training values of the answer semantic extraction parameters as initial input values and train the question semantic extraction parameters in the question semantic extraction formula and the answer semantic extraction parameters in the answer semantic extraction formula to obtain the training values of the question semantic extraction parameters and the training values of the answer semantic extraction parameters;
the determining module is configured to:
when an answer request carrying a target question is received, respectively determine the semantic proximity of each answer to the target question according to the target question, the answers in the answer query library, the question semantic extraction formula, the answer semantic extraction formula, the training values of the question semantic extraction parameters and the training values of the answer semantic extraction parameters.
6. The apparatus of claim 4, wherein the feedback module is configured to:
select a target answer from the answers according to the semantic proximity of each answer to the target question and the text proximity of each answer to the target question, and feed back the target answer in response to the answer request.
7. A computer-readable storage medium, wherein at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the method for feeding back answers according to any one of claims 1 to 3.
8. A server, characterized in that the server comprises a processor and a memory, wherein the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the answer feedback method according to any one of claims 1 to 3.
CN201510316013.4A 2015-06-10 2015-06-10 Answer feedback method and device Active CN106294505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510316013.4A CN106294505B (en) 2015-06-10 2015-06-10 Answer feedback method and device

Publications (2)

Publication Number Publication Date
CN106294505A CN106294505A (en) 2017-01-04
CN106294505B true CN106294505B (en) 2020-07-07

Family

ID=57659324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510316013.4A Active CN106294505B (en) 2015-06-10 2015-06-10 Answer feedback method and device

Country Status (1)

Country Link
CN (1) CN106294505B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505205B (en) 2017-01-17 2023-06-06 华为技术有限公司 Man-machine dialogue system and method
CN107122421A (en) * 2017-04-05 2017-09-01 北京大学 Information retrieval method and device
US11501076B2 (en) * 2018-02-09 2022-11-15 Salesforce.Com, Inc. Multitask learning as question answering
CN110059174B (en) * 2019-04-28 2023-05-30 科大讯飞股份有限公司 Query guiding method and device
CN110110048B (en) * 2019-05-10 2023-06-02 科大讯飞股份有限公司 Query guiding method and device
CN110457440B (en) * 2019-08-09 2022-08-16 宝宝树(北京)信息技术有限公司 Answer feedback method, device, equipment and medium
CN111126862A (en) * 2019-12-26 2020-05-08 中国银行股份有限公司 Data processing method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8972321B2 (en) * 2010-09-29 2015-03-03 International Business Machines Corporation Fact checking using and aiding probabilistic question answering

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566998A (en) * 2009-05-26 2009-10-28 华中师范大学 Chinese question-answering system based on neural network
CN103425635A (en) * 2012-05-15 2013-12-04 北京百度网讯科技有限公司 Method and device for recommending answers
CN104572617A (en) * 2014-12-30 2015-04-29 苏州驰声信息科技有限公司 Oral test answer deviation detection method and device
CN104636456A (en) * 2015-02-03 2015-05-20 大连理工大学 Question routing method based on word vectors

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chinese Automatic Summarization Method Based on Feature Information Extraction; Ye Xinghuo (叶星火) et al.; Computer Applications and Software (计算机应用与软件); 2008-05-15; Vol. 25, No. 5; 31-31, 50 *
Automatic Knowledge Acquisition for Discourse Corpora: Research and Application of Latent Semantic Analysis (LSA); Ke Xiaohua (柯晓华) et al.; 2013 International Conference on Education and Teaching; 2013-03-15; 568-572 *

Also Published As

Publication number Publication date
CN106294505A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN106294505B (en) Answer feedback method and device
US11734319B2 (en) Question answering method and apparatus
CN106815252B (en) Searching method and device
CN107480143B (en) Method and system for segmenting conversation topics based on context correlation
WO2021159632A1 (en) Intelligent questioning and answering method and apparatus, computer device, and computer storage medium
CN107590255B (en) Information pushing method and device
US11349680B2 (en) Method and apparatus for pushing information based on artificial intelligence
US20210125516A1 (en) Answer training device, answer training method, answer generation device, answer generation method, and program
CN110990533B (en) Method and device for determining standard text corresponding to query text
CN110134777B (en) Question duplication eliminating method and device, electronic equipment and computer readable storage medium
CN112084307B (en) Data processing method, device, server and computer readable storage medium
CN110895559A (en) Model training method, text processing method, device and equipment
CN110162596B (en) Training method and device for natural language processing, automatic question answering method and device
CN110825843A (en) Training method, question answering method, device and storage medium suitable for financial field
CN112434533B (en) Entity disambiguation method, entity disambiguation device, electronic device, and computer-readable storage medium
US11281714B2 (en) Image retrieval
CN112632248A (en) Question answering method, device, computer equipment and storage medium
CN110162769B (en) Text theme output method and device, storage medium and electronic device
CN109753646B (en) Article attribute identification method and electronic equipment
CN115906797A (en) Text entity alignment method, device, equipment and medium
CN116431912A (en) User portrait pushing method and device
CN111625619A (en) Query omission method and device, computer readable medium and electronic equipment
CN112115237B (en) Construction method and device of tobacco science and technology literature data recommendation model
CN114120341A (en) Resume document identification model training method, resume document identification method and device
CN114298182A (en) Resource recall method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant