CN113064972A - Intelligent question and answer method, device, equipment and storage medium - Google Patents

Intelligent question and answer method, device, equipment and storage medium Download PDF

Info

Publication number
CN113064972A
Authority
CN
China
Prior art keywords
matrix
question
sentence
statement
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110389804.5A
Other languages
Chinese (zh)
Inventor
吴晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202110389804.5A
Publication of CN113064972A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30 Semantic analysis


Abstract

The invention relates to artificial intelligence and provides an intelligent question and answer method, device, equipment and storage medium. The method extracts a sentence to be asked from a question and answer request and acquires candidate question sentences from a preset corpus. Based on a preset vector table, it acquires a first sentence matrix of the sentence to be asked and a second sentence matrix of each candidate question sentence, then performs feature extraction on the two matrices to obtain a first feature vector and a second feature vector. It calculates the matching degree of the two feature vectors, normalizes the matching degree to obtain a similarity probability, determines the candidate question sentence with the maximum similarity probability as the target question sentence, and acquires the answer corresponding to the target question sentence as the reply sentence. The invention can improve the real-time performance and the accuracy of intelligent question answering. In addition, the invention also relates to blockchain technology, and the reply sentence can be stored in a blockchain.

Description

Intelligent question and answer method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent question answering method, an intelligent question answering device, intelligent question answering equipment and an intelligent question answering storage medium.
Background
At present, after a traffic accident occurs, many car owners may be confused about how to protect the accident scene, how accident responsibility is defined, and similar questions, and therefore consult a traffic accident handling center. With the rapid development of artificial intelligence, Transformer-DSSM-based intelligent question-answering methods have emerged. However, the Transformer-DSSM algorithm has a complex structure, which results in low processing efficiency in existing intelligent question-answering methods; the accuracy of these methods is also low.
Therefore, how to improve the real-time performance and accuracy of the intelligent question-answering mode becomes a problem which needs to be solved urgently.
Disclosure of Invention
In view of the above, it is desirable to provide an intelligent question-answering method, device, apparatus and storage medium, which can improve the real-time performance and accuracy of intelligent question answering.
On one hand, the invention provides an intelligent question-answering method, which comprises the following steps:
when a question and answer request is received, extracting a sentence to be asked from the question and answer request, and acquiring a candidate question sentence from a preset corpus according to the question and answer request;
acquiring a first statement matrix of the statement to be asked based on a preset vector table, and acquiring a second statement matrix of the candidate question statement based on the preset vector table;
performing feature extraction on the first statement matrix to obtain a first feature vector, and performing feature extraction on the second statement matrix to obtain a second feature vector;
calculating the matching degree of the first feature vector and the second feature vector;
normalizing the matching degree to obtain the similarity probability of the statement to be asked and the candidate question statement;
determining the candidate question statement with the maximum similarity probability as a target question statement;
and acquiring an answer corresponding to the target question sentence as a reply sentence of the to-be-asked sentence.
According to an optional embodiment of the present invention, the obtaining of the candidate question sentences from the predetermined corpus according to the question and answer request includes:
acquiring a field corresponding to the sentence to be asked from the question and answer request;
acquiring a target list from the preset corpus according to the field;
determining all statements in the target list as the candidate question statements.
According to an optional embodiment of the present invention, the obtaining the first statement matrix of the statement to be asked based on the preset vector table includes:
performing word segmentation processing on the sentence to be asked to obtain sentence word segmentation;
determining the word segmentation position of the sentence segmentation in the sentence to be asked according to the sentence to be asked;
acquiring word segmentation vectors of the sentence word segmentation from the preset vector table;
and merging the word segmentation vectors according to the word segmentation positions to obtain the first sentence matrix.
According to an optional embodiment of the present invention, the extracting features of the first sentence matrix to obtain a first feature vector includes:
coding the first statement matrix by using a first feature coder to obtain a first coding matrix, and coding the first statement matrix by using a second feature coder to obtain a second coding matrix;
splicing the first coding matrix and the second coding matrix to obtain a spliced matrix;
performing characteristic integration on the spliced matrix by using a preset full connection layer to obtain an integrated matrix;
and calculating the average value in each dimension in the integration matrix to obtain the first feature vector.
According to an optional embodiment of the present invention, the encoding the first sentence matrix by using the first feature encoder to obtain the first encoding matrix includes:
acquiring information corresponding to the first statement matrix from a character vector table as a character matrix, and acquiring information corresponding to the first statement matrix from a position vector table as a position matrix;
calculating the sum of the character matrix and the position matrix to obtain a mapping matrix;
processing the mapping matrix by using a multi-head attention mechanism to obtain an attention matrix;
calculating the sum of the mapping matrix and the attention matrix to obtain a feature matrix;
normalizing the characteristic matrix to obtain a normalized matrix;
performing feature integration on the normalized matrix by using the preset full connection layer to obtain a target matrix;
calculating the sum of the normalization matrix and the target matrix, and performing normalization processing on the matrix obtained by calculation to obtain a calculation matrix;
and mapping the calculation matrix according to a preset feature vector table to obtain the first coding matrix.
According to an alternative embodiment of the present invention, the calculating the matching degree of the first feature vector and the second feature vector comprises:
calculating a value from the first feature vector and the second feature vector by using a cosine distance formula to obtain a first similarity;
calculating the variance of the first feature vector according to the value of each dimension in the first feature vector to obtain a first variance, and calculating the variance of the second feature vector according to the value of each dimension in the second feature vector to obtain a second variance;
calculating the sum of the first variance and the second variance to obtain a total variance;
calculating a difference value between the first feature vector and the second feature vector to obtain a difference value vector, and calculating a variance of the difference value vector according to a value of each dimension in the difference value vector to obtain a third variance;
dividing the third variance by the total variance to obtain a second similarity;
and carrying out weighted sum operation on the first similarity and the second similarity according to a preset weight coefficient to obtain the matching degree.
According to an optional embodiment of the present invention, after obtaining the answer corresponding to the target question statement as the reply statement of the to-be-asked statement, the method further includes:
acquiring a request number of the question-answering request;
generating prompt information according to the request number and the reply statement;
encrypting the prompt message by adopting a symmetric encryption algorithm to obtain a ciphertext;
acquiring a trigger user of the question and answer request from the question and answer request;
and sending the ciphertext to the terminal equipment of the trigger user.
On the other hand, the invention also provides an intelligent question-answering device, which comprises:
an acquisition unit, configured to, when a question and answer request is received, extract a sentence to be asked from the question and answer request and acquire a candidate question sentence from a preset corpus according to the question and answer request;
the obtaining unit is further configured to obtain a first statement matrix of the statement to be asked based on a preset vector table, and obtain a second statement matrix of the candidate question statement based on the preset vector table;
the extraction unit is used for extracting the characteristics of the first statement matrix to obtain a first characteristic vector and extracting the characteristics of the second statement matrix to obtain a second characteristic vector;
a calculating unit, configured to calculate a matching degree between the first feature vector and the second feature vector;
the processing unit is used for carrying out normalization processing on the matching degree to obtain the similarity probability of the statement to be asked and the candidate question statement;
a determining unit configured to determine the candidate question statement with the highest similarity probability as a target question statement;
the obtaining unit is further configured to obtain an answer corresponding to the target question statement as a reply statement of the to-be-asked statement.
In another aspect, the present invention further provides an electronic device, including:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the intelligent question-answering method.
In another aspect, the present invention further provides a computer-readable storage medium, in which computer-readable instructions are stored, and the computer-readable instructions are executed by a processor in an electronic device to implement the intelligent question answering method.
According to the technical scheme, extracting features from the first statement matrix and the second statement matrix improves the semantic representation capability of the first feature vector and the second feature vector, and calculating the similarity between the two feature vectors in multiple ways effectively improves the accuracy of the matching degree, so the accuracy of the reply statement is improved on both fronts. In addition, the invention does not need to process the question sentences and the candidate question sentences through a multilayer model structure, thereby improving the efficiency of determining the reply sentence and the real-time performance of intelligent question answering.
Drawings
FIG. 1 is a flow chart of the preferred embodiment of the intelligent question answering method of the present invention.
FIG. 2 is a functional block diagram of a preferred embodiment of the intelligent question answering device of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device implementing an intelligent question answering method according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of the intelligent question answering method according to the preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The intelligent question-answering method is applied to an intelligent traffic scene, so that the construction of an intelligent city is promoted. The intelligent question-answering method is applied to one or more electronic devices, wherein an electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored computer-readable instructions, and its hardware includes but is not limited to microprocessors, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive Internet Protocol Television (IPTV), a smart wearable device, and the like.
The electronic device may include a network device and/or a user device. Wherein the network device includes, but is not limited to, a single network electronic device, an electronic device group consisting of a plurality of network electronic devices, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network electronic devices.
The network in which the electronic device is located includes, but is not limited to: the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
S10, when a question and answer request is received, extracting the sentence to be asked from the question and answer request, and acquiring candidate question sentences from a preset corpus according to the question and answer request.
In at least one embodiment of the present invention, the question and answer request may be triggered by a worker in charge of the traffic accident processing center, and the question and answer request may also be triggered by a worker in the medical center, and the trigger of the question and answer request is not limited by the present invention.
The information carried by the question-answering request includes, but is not limited to: domain, sentence number, etc.
The sentence to be asked is a sentence for which an answer needs to be queried.
The preset corpus is stored with preset question sentences corresponding to a plurality of fields.
The candidate question sentences refer to all preset question sentences stored in the preset corpus.
In at least one embodiment of the present invention, the electronic device extracting the sentence to be asked from the question and answer request includes:
analyzing the message of the question-answering request to obtain the data information carried by the message;
acquiring information indicating a problem from the data information as a sentence number;
and obtaining the statement corresponding to the statement number from a library to be processed as the statement to be asked.
Wherein, the data information includes, but is not limited to: the statement number, etc.
The sentence number is used for indicating a question sentence.
The to-be-processed library stores a plurality of unprocessed question sentences and the numbers of the question sentences.
By analyzing the message, the efficiency of acquiring the data information can be improved, and the statement to be asked can be accurately acquired from the library to be processed through the mapping relation between the statement number and the statement.
In at least one embodiment of the present invention, the electronic device obtaining the candidate question sentences from the corpus according to the question and answer request includes:
acquiring a field corresponding to the sentence to be asked from the question and answer request;
acquiring a target list from the preset corpus according to the field;
determining all statements in the target list as the candidate question statements.
The field can be a traffic accident question and answer field, the field can also be a medical question and answer field, and the invention does not limit the concrete form of the field.
The target list stores a plurality of question sentences corresponding to the domain.
The field can be accurately determined through the question and answer request, and the target list can be accurately obtained from the preset corpus according to the field, so that the candidate question sentences can be accurately obtained.
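As a minimal sketch of this retrieval step, a preset corpus can be modeled as a mapping from domain to target list; the domains, sentences, and the `candidate_questions` helper below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical preset corpus: each domain (field) maps to a target list
# of preset question sentences.
CORPUS = {
    "traffic": [
        "How should the accident scene be protected?",
        "How is accident responsibility determined?",
    ],
    "medical": [
        "How do I make an outpatient appointment?",
    ],
}

def candidate_questions(request):
    """Return every sentence in the target list for the request's domain."""
    domain = request["domain"]          # the field carried by the request
    target_list = CORPUS.get(domain, [])
    return list(target_list)            # all sentences become candidates
```

An unknown domain simply yields an empty candidate set in this sketch; the patent does not specify that fallback.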
S11, acquiring the first statement matrix of the statement to be asked based on a preset vector table, and acquiring the second statement matrix of the candidate question statement based on the preset vector table.
In at least one embodiment of the present invention, the predetermined vector table stores token vectors of a plurality of characters.
The first statement matrix is formed by the characterization vectors of each character in the statement to be asked, and the number of the row vectors in the first statement matrix is the same as the number of the characters in the statement to be asked.
The second sentence matrix is formed by the characterization vectors of each character in the candidate question sentences, and the number of row vectors in the second sentence matrix is the same as the number of characters in the candidate question sentences.
In at least one embodiment of the present invention, the obtaining, by the electronic device, the first statement matrix of the statement to be asked based on a preset vector table includes:
performing word segmentation processing on the sentence to be asked to obtain sentence word segmentation;
determining the word segmentation position of the sentence segmentation in the sentence to be asked according to the sentence to be asked;
acquiring word segmentation vectors of the sentence word segmentation from the preset vector table;
and merging the word segmentation vectors according to the word segmentation positions to obtain the first sentence matrix.
The sentence participles refer to the words in the sentence to be asked.
The word segmentation position refers to the arrangement position of the sentence segmentation in the sentence to be asked.
The generated first sentence matrix can have the position information of the sentence segmentation through the segmentation position, so that the generation accuracy of the first sentence matrix is improved.
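Numerically, the merging step above amounts to stacking the word-segmentation vectors in position order, so row i of the sentence matrix is the vector of the i-th word. The 4-dimensional vector table below is a made-up assumption for illustration:

```python
import numpy as np

# Hypothetical preset vector table: one 4-dimensional vector per word.
VECTOR_TABLE = {
    "how":   np.array([0.1, 0.2, 0.0, 0.3]),
    "to":    np.array([0.0, 0.1, 0.4, 0.1]),
    "claim": np.array([0.5, 0.3, 0.2, 0.0]),
}

def sentence_matrix(segment_words):
    # Merge the word-segmentation vectors according to their positions:
    # row i of the result is the vector of the i-th word in the sentence.
    return np.stack([VECTOR_TABLE[w] for w in segment_words])

matrix = sentence_matrix(["how", "to", "claim"])
```

The number of rows then equals the number of words, matching the description of the first sentence matrix.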
Specifically, the electronic device performs word segmentation processing on the sentence to be asked, and obtaining sentence word segmentation includes:
segmenting the sentence to be asked according to a preset dictionary to obtain a plurality of segmentation paths of the sentence to be asked and segmentation participles of each segmentation path;
calculating the segmentation probability of each segmentation path according to the segmentation weight in the preset dictionary;
and determining the segmentation participle corresponding to the segmentation path with the maximum segmentation probability as the sentence participle.
The preset dictionary comprises a plurality of vocabularies and the word segmentation weight of each vocabulary.
Through the implementation mode, the sentence segmentation words can be quickly generated according to the requirement.
Specifically, the calculating, by the electronic device, the segmentation probability of each segmentation path according to the segmentation weight in the preset dictionary includes:
obtaining a segmentation weight of the segmentation word in each segmentation path in the preset dictionary;
and calculating the sum of the word segmentation weights obtained in each segmentation path to obtain the segmentation probability.
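The path-scoring scheme above can be sketched as follows; the toy dictionary and its weights are invented for illustration, and the exhaustive path enumeration is only practical for short sentences:

```python
# Hypothetical preset dictionary: vocabulary -> word-segmentation weight.
DICTIONARY = {
    "交通": 2.0, "事故": 1.5,
    "交": 0.5, "通": 0.4, "事": 0.3, "故": 0.2,
}

def segmentation_paths(sentence):
    """Enumerate every way to split `sentence` into dictionary words."""
    if not sentence:
        return [[]]
    paths = []
    for end in range(1, len(sentence) + 1):
        word = sentence[:end]
        if word in DICTIONARY:
            for rest in segmentation_paths(sentence[end:]):
                paths.append([word] + rest)
    return paths

def best_segmentation(sentence):
    # The segmentation probability of a path is the sum of its words'
    # weights; the path with the maximum sum is kept.
    return max(segmentation_paths(sentence),
               key=lambda path: sum(DICTIONARY[w] for w in path))
```

For "交通事故", the two-word path outweighs any character-level split, so it is selected as the sentence segmentation.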
In at least one embodiment of the present invention, a manner in which the electronic device obtains the second sentence matrix of the candidate question sentence based on the preset vector table is the same as a manner in which the electronic device obtains the first sentence matrix of the question sentence based on the preset vector table, which is not described in detail herein.
And S12, performing feature extraction on the first statement matrix to obtain a first feature vector, and performing feature extraction on the second statement matrix to obtain a second feature vector.
In at least one embodiment of the invention, the first feature vector is used to indicate the semantics of the question sentence to be asked, and the second feature vector is used to indicate the semantics of the candidate question sentence.
In at least one embodiment of the present invention, the performing, by the electronic device, feature extraction on the first sentence matrix to obtain a first feature vector includes:
coding the first statement matrix by using a first feature coder to obtain a first coding matrix, and coding the first statement matrix by using a second feature coder to obtain a second coding matrix;
splicing the first coding matrix and the second coding matrix to obtain a spliced matrix;
performing characteristic integration on the spliced matrix by using a preset full connection layer to obtain an integrated matrix;
and calculating the average value in each dimension in the integration matrix to obtain the first feature vector.
The first coding matrix and the second coding matrix are obtained by coding the first statement matrix in different coding modes.
The semantic representation capability of the splicing matrix can be improved through the first feature encoder and the second feature encoder, and feature integration can be performed on the highly abstracted features through the preset full-connection layer, so that the generation accuracy of the first feature vector is improved.
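Assuming the two encoders' outputs are already available, the splicing, fully connected integration, and per-dimension averaging can be sketched with plain matrix algebra; the shapes and the random fully connected weights `W_FC` are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W_FC = rng.standard_normal((8, 5))   # hypothetical full-connection weights

def first_feature_vector(coding1, coding2):
    # Splice the two coding matrices along the feature axis.
    spliced = np.concatenate([coding1, coding2], axis=1)   # (rows, 8)
    # Feature integration with the preset fully connected layer.
    integrated = spliced @ W_FC                            # (rows, 5)
    # Average value in each dimension of the integration matrix.
    return integrated.mean(axis=0)                         # (5,)

vec = first_feature_vector(rng.standard_normal((3, 4)),
                           rng.standard_normal((3, 4)))
```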
Specifically, the encoding, by the electronic device, the first statement matrix by using a first feature encoder to obtain a first encoding matrix includes:
acquiring information corresponding to the first statement matrix from a character vector table as a character matrix, and acquiring information corresponding to the first statement matrix from a position vector table as a position matrix;
calculating the sum of the character matrix and the position matrix to obtain a mapping matrix;
processing the mapping matrix by using a multi-head attention mechanism to obtain an attention matrix;
calculating the sum of the mapping matrix and the attention matrix to obtain a feature matrix;
normalizing the characteristic matrix to obtain a normalized matrix;
performing feature integration on the normalized matrix by using the preset full connection layer to obtain a target matrix;
calculating the sum of the normalization matrix and the target matrix, and performing normalization processing on the matrix obtained by calculation to obtain a calculation matrix;
and mapping the calculation matrix according to a preset feature vector table to obtain the first coding matrix.
The character vector table stores a plurality of vectors of character representations.
The position vector table stores a plurality of position characterized vectors.
The mapping matrix can be accurately generated through the character vector table and the position vector table, the semantic representation capability of the attention matrix is improved by processing the mapping matrix with a multi-head attention mechanism, and the generation accuracy of the first coding matrix can be improved through the subsequent series of operations on the attention matrix.
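A simplified numeric sketch of the encoder steps above, with single-head scaled dot-product attention standing in for the multi-head mechanism and min-max scaling standing in for the normalization; all weight matrices are random placeholders:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encode(char_matrix, pos_matrix, w_q, w_k, w_v):
    # Mapping matrix: element-wise sum of character and position matrices.
    mapping = char_matrix + pos_matrix
    # Single-head attention (the patent uses a multi-head mechanism).
    q, k, v = mapping @ w_q, mapping @ w_k, mapping @ w_v
    attention = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v
    # Feature matrix: sum of the mapping matrix and the attention matrix.
    feature = mapping + attention
    # Normalization into [0, 1] (min-max scaling as one possible reading).
    return (feature - feature.min()) / (feature.max() - feature.min())

rng = np.random.default_rng(1)
n, d = 3, 4
out = encode(rng.standard_normal((n, d)), rng.standard_normal((n, d)),
             *(rng.standard_normal((d, d)) for _ in range(3)))
```

The residual connection and the fully connected integration with a second residual would follow the same pattern; they are omitted here for brevity.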
Specifically, the electronic device calculates a sum of the character matrix and the position matrix, and obtaining a mapping matrix includes:
for any element in the character matrix, determining an element corresponding to the any element in the position matrix as a corresponding element;
calculating the sum of any element and the corresponding element to obtain the element sum corresponding to any element;
and splicing the element sum according to the position of any element in the character matrix to obtain the mapping matrix.
Through the embodiment, the mapping matrix can be accurately determined.
Further, the manner in which the electronic device processes the mapping matrix using the multi-head attention mechanism belongs to the prior art, and is not described in detail herein.
Further, the manner of calculating the sum of the mapping matrix and the attention matrix by the electronic device is the same as the manner of calculating the sum of the character matrix and the position matrix by the electronic device, which is not repeated herein.
Further, the electronic device normalizes the feature matrix to obtain a normalized matrix, including:
adjusting any characteristic element in the characteristic matrix to a preset range to obtain a target element corresponding to the any characteristic element, and determining the proportion of the any characteristic element to the target element as an adjustment proportion;
adjusting other characteristic elements except any characteristic element in the characteristic matrix according to the adjustment proportion to obtain adjustment elements corresponding to the other characteristic elements;
and splicing the target element and the adjusting element to obtain the normalized matrix.
Wherein the preset range is generally set to [0, 1].
By adjusting the proportion, all elements in the feature matrix can be adjusted according to the same proportion, so that the normalization matrix is accurately generated.
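One possible reading of this ratio-based normalization is sketched below: the largest-magnitude element is chosen as the anchor and adjusted to exactly 1, and every other element is scaled by the same adjustment ratio. The choice of anchor element is an assumption, since the text allows any element:

```python
import numpy as np

def ratio_normalize(feature_matrix):
    # Pick the largest-magnitude element as the anchor, adjust it into
    # the preset range [0, 1] (to exactly 1 here), and apply the same
    # adjustment ratio to every other element.
    anchor = np.abs(feature_matrix).max()
    if anchor == 0.0:
        return feature_matrix.copy()
    return feature_matrix / anchor

m = np.array([[2.0, -4.0], [1.0, 3.0]])
normalized = ratio_normalize(m)
```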
Specifically, the encoding, by the electronic device, the first sentence matrix by using the second feature encoder to obtain the second encoding matrix includes:
performing convolution processing on the character matrix by using a preset convolution layer to obtain a convolution matrix;
and mapping the convolution matrix according to the preset feature vector table to obtain the second coding matrix.
Wherein the preset convolution layer includes: a first convolution layer with 3 convolution kernels of size 5 × 5, a second convolution layer with 2 convolution kernels of size 3 × 3, and a third convolution layer with 1 convolution kernel of size 1 × 1.
In at least one embodiment of the present invention, a manner of extracting features of the second sentence matrix by the electronic device is the same as a manner of extracting features of the first sentence matrix by the electronic device, which is not described in detail herein.
And S13, calculating the matching degree of the first feature vector and the second feature vector.
In at least one embodiment of the present invention, the matching degree refers to a matching condition of the first feature vector and the second feature vector.
In at least one embodiment of the present invention, the electronic device calculating the degree of matching of the first feature vector and the second feature vector comprises:
calculating a value from the first feature vector and the second feature vector by using a cosine distance formula to obtain a first similarity;
calculating the variance of the first feature vector according to the value of each dimension in the first feature vector to obtain a first variance, and calculating the variance of the second feature vector according to the value of each dimension in the second feature vector to obtain a second variance;
calculating the sum of the first variance and the second variance to obtain a total variance;
calculating a difference value between the first feature vector and the second feature vector to obtain a difference value vector, and calculating a variance of the difference value vector according to a value of each dimension in the difference value vector to obtain a third variance;
dividing the third variance by the total variance to obtain a second similarity;
and carrying out weighted sum operation on the first similarity and the second similarity according to a preset weight coefficient to obtain the matching degree.
The preset weight coefficient is set according to the importance of the first similarity and the second similarity in the matching degree, and the determination mode of the preset weight coefficient is not limited.
Through the first similarity and the second similarity, the accuracy of the matching degree can be effectively improved.
Specifically, the electronic device calculates a variance of the first feature vector according to a value of each dimension in the first feature vector to obtain a first variance, where a specific formula is as follows:
s1 = [(x1 - M)^2 + (x2 - M)^2 + ... + (xN - M)^2] / N
wherein s1 is the first variance, x1, x2, … xN are values of each dimension in the first feature vector, N is a total number of dimensions in the first feature vector, and M is an average value of values of all dimensions in the first feature vector.
Further, a manner in which the electronic device calculates the variance of the second eigenvector according to the value of each dimension in the second eigenvector is the same as a manner in which the electronic device calculates the variance of the first eigenvector according to the value of each dimension in the first eigenvector, which is not described in detail herein.
Further, the electronic device calculates a difference between the first feature vector and the second feature vector, and obtaining a difference vector includes:
acquiring any vector element in the first feature vector, and determining the element dimension of the any vector element in the first feature vector;
determining the value corresponding to the element dimension in the second feature vector as a corresponding dimension element;
and calculating the difference value between any vector element and the corresponding dimension element, and splicing the difference value to obtain the difference value vector.
Further, the electronic device performs a weighted-sum operation on the first similarity and the second similarity according to a preset weight coefficient, and obtaining the matching degree includes:
acquiring a coefficient corresponding to the first similarity from the preset weight coefficient as a first coefficient, and acquiring a coefficient corresponding to the second similarity from the preset weight coefficient as a second coefficient;
calculating the product of the first similarity and the first coefficient to obtain a first numerical value, and calculating the product of the second similarity and the second coefficient to obtain a second numerical value;
and calculating the sum of the first numerical value and the second numerical value to obtain the matching degree.
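The matching-degree computation above can be sketched as follows; the weight coefficients w1 and w2 are hypothetical, since the patent leaves their determination open:

```python
import numpy as np

def matching_degree(v1, v2, w1=0.6, w2=0.4):
    """Matching degree of two feature vectors, following the described steps.
    w1 and w2 are illustrative preset weight coefficients."""
    # first similarity: cosine of the angle between the two vectors
    sim1 = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # total variance: sum of the first variance and the second variance
    # (np.var divides by N, matching the formula s1 above)
    total_var = np.var(v1) + np.var(v2)
    # third variance: variance of the element-wise difference vector
    var_diff = np.var(v1 - v2)
    # second similarity: third variance divided by the total variance
    sim2 = var_diff / total_var
    # weighted-sum operation gives the matching degree
    return w1 * sim1 + w2 * sim2
```

For identical vectors the difference vector is all zeros, so the second similarity is 0 and the matching degree reduces to w1 times the cosine similarity.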
S14, normalizing the matching degree to obtain the similarity probability of the statement to be asked and the candidate question statement.
In at least one embodiment of the present invention, the similarity probability refers to a similarity between the question sentence to be asked and the candidate question sentence. The sum of the similarity probabilities of the question sentence to be asked and all the candidate question sentences is 1.
In at least one embodiment of the present invention, the normalizing, by the electronic device, the matching degree to obtain a similarity probability between the to-be-asked question sentence and the candidate question sentence includes:
and carrying out normal distribution processing on the matching degree to obtain the similarity probability.
Wherein the probability sum of the similarity probabilities is 1.
By normalizing the matching degree, the target question sentence can be quickly determined according to the matching degree.
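A hedged sketch of the normalization in S14. The text calls it "normal distribution processing"; a softmax is one common concrete choice that guarantees the similarity probabilities sum to 1, which is an assumption rather than something the source specifies:

```python
import numpy as np

def similarity_probabilities(matching_degrees):
    """Map the matching degrees of all candidate question sentences to
    probabilities that sum to 1 (softmax, a hypothetical concrete choice)."""
    exps = np.exp(matching_degrees - np.max(matching_degrees))  # numerically stable
    return exps / exps.sum()

probs = similarity_probabilities(np.array([0.9, 0.4, 0.1]))
# the candidate with the largest matching degree gets the largest probability,
# so the target question sentence can be read off with argmax
```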
S15, determining the candidate question statement with the maximum similarity probability as a target question statement.
In at least one embodiment of the present invention, the target question sentence refers to a candidate question sentence with the highest similarity probability with the question sentence to be asked among all the candidate question sentences.
And S16, acquiring the answer corresponding to the target question sentence as the reply sentence of the question sentence to be asked.
It is emphasized that the reply statement may also be stored in a node of a block chain in order to further ensure the privacy and security of the reply statement.
In at least one embodiment of the present invention, the acquiring, by the electronic device, an answer corresponding to the target question sentence as a reply sentence of the to-be-asked sentence includes:
acquiring a target number of the target question statement;
writing the target number and the field into a preset query template to obtain a query statement;
and operating the query statement to obtain the reply statement.
Wherein the preset query template may be a structured query statement.
The query statement can be generated quickly through the preset query template, and the reply statement can be accurately acquired because the query statement contains the target number and the field.
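The query-template step can be sketched as follows; the table and column names are illustrative only, not taken from the source:

```python
# Hypothetical preset query template (a structured query statement);
# qa_answers, question_id and domain are invented names for illustration.
QUERY_TEMPLATE = (
    "SELECT answer FROM qa_answers "
    "WHERE question_id = '{target_number}' AND domain = '{domain}'"
)

def build_query(target_number, domain):
    """Write the target number and the domain field into the preset
    query template to obtain the query statement."""
    return QUERY_TEMPLATE.format(target_number=target_number, domain=domain)

build_query("Q1024", "traffic")
```

In a real system the query would be parameterized rather than built by string formatting, to avoid SQL injection.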
In at least one embodiment of the present invention, after obtaining the answer corresponding to the target question statement as the reply statement of the to-be-asked statement, the method further includes:
acquiring a request number of the question-answering request;
generating prompt information according to the request number and the reply statement;
encrypting the prompt message by adopting a symmetric encryption algorithm to obtain a ciphertext;
acquiring a trigger user of the question and answer request from the question and answer request;
and sending the ciphertext to the terminal equipment of the trigger user.
The prompt information is generated from the request number and the reply statement, so the trigger user can quickly learn which request a reply statement corresponds to; encrypting the prompt information with a symmetric encryption algorithm generates the ciphertext quickly and improves the security of the prompt information.
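A sketch of the prompt-generation and symmetric-encryption steps. The XOR stream cipher below is a toy stand-in that only illustrates symmetry (the same key both encrypts and decrypts); a real deployment would use a vetted algorithm such as AES:

```python
import json
from itertools import cycle

def make_prompt(request_number, reply_statement):
    """Generate the prompt information from the request number and the
    reply statement (JSON is an illustrative packaging choice)."""
    return json.dumps({"request": request_number, "reply": reply_statement})

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher for illustration only; not secure.
    Applying it twice with the same key restores the plaintext."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
prompt = make_prompt("REQ-001", "Please file the accident report within 48 hours.")
ciphertext = xor_cipher(prompt.encode("utf-8"), key)  # sent to the trigger user's terminal
assert xor_cipher(ciphertext, key).decode("utf-8") == prompt  # same key decrypts
```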
According to the technical scheme, the semantic representation capability of the first feature vector and the semantic representation capability of the second feature vector are improved by extracting the features of the first statement matrix and the second statement matrix, the similarity between the first feature vector and the second feature vector is calculated in various ways, the accuracy of the matching degree is effectively improved, and the accuracy of the reply statement is doubly improved. In addition, the invention does not need to process the question sentences and the candidate question sentences through a multilayer model structure, thereby improving the determining efficiency of the reply sentences and improving the real-time property of intelligent question answering.
FIG. 2 is a functional block diagram of the intelligent question answering device according to the preferred embodiment of the present invention. The smart question-answering device 11 includes an acquisition unit 110, an extraction unit 111, a calculation unit 112, a processing unit 113, a determination unit 114, a generation unit 115, an encryption unit 116, and a transmission unit 117. The module/unit referred to herein is a series of computer readable instruction segments that can be accessed by the processor 13 and perform a fixed function and that are stored in the memory 12. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
When receiving a question and answer request, the obtaining unit 110 extracts a question and answer to be asked from the question and answer request, and obtains candidate question and answer sentences from a preset corpus according to the question and answer request.
In at least one embodiment of the present invention, the question and answer request may be triggered by a worker in charge of the traffic accident processing center, and the question and answer request may also be triggered by a worker in the medical center, and the trigger of the question and answer request is not limited by the present invention.
The information carried by the question-answering request includes, but is not limited to: domain, sentence number, etc.
The statement to be asked is a statement which needs to be subjected to question query.
The preset corpus is stored with preset question sentences corresponding to a plurality of fields.
The candidate question sentences refer to all preset question sentences stored in the preset corpus.
In at least one embodiment of the present invention, the obtaining unit 110 extracts the question and answer sentence to be asked from the question and answer request, including:
analyzing the message of the question-answering request to obtain the data information carried by the message;
acquiring information indicating a problem from the data information as a sentence number;
and obtaining the statement corresponding to the statement number from a library to be processed as the statement to be asked.
Wherein, the data information includes, but is not limited to: the statement number, etc.
The sentence number is used for indicating a question sentence.
The to-be-processed library stores a plurality of unprocessed question sentences and the numbers of the question sentences.
By analyzing the message, the efficiency of acquiring the data information can be improved, and the statement to be asked can be accurately acquired from the library to be processed through the mapping relation between the statement number and the statement.
In at least one embodiment of the present invention, the obtaining unit 110 obtains the candidate question sentences from the corpus according to the question and answer request, including:
acquiring a field corresponding to the sentence to be asked from the question and answer request;
acquiring a target list from the preset corpus according to the field;
determining all statements in the target list as the candidate question statements.
The field can be a traffic accident question and answer field, the field can also be a medical question and answer field, and the invention does not limit the concrete form of the field.
The target list stores a plurality of question sentences corresponding to the domain.
The field can be accurately determined through the question and answer request, and the target list can be accurately obtained from the preset corpus according to the field, so that the candidate question sentences can be accurately obtained.
The obtaining unit 110 obtains a first sentence matrix of the question sentence to be asked based on a preset vector table, and obtains a second sentence matrix of the candidate question sentence based on the preset vector table.
In at least one embodiment of the present invention, the predetermined vector table stores token vectors of a plurality of characters.
The first statement matrix is formed by the characterization vectors of each character in the statement to be asked, and the number of the row vectors in the first statement matrix is the same as the number of the characters in the statement to be asked.
The second sentence matrix is formed by the characterization vectors of each character in the candidate question sentences, and the number of row vectors in the second sentence matrix is the same as the number of characters in the candidate question sentences.
In at least one embodiment of the present invention, the obtaining unit 110 obtains the first statement matrix of the to-be-asked statement based on a preset vector table, where the obtaining unit includes:
performing word segmentation processing on the sentence to be asked to obtain sentence word segmentation;
determining the word segmentation position of the sentence segmentation in the sentence to be asked according to the sentence to be asked;
acquiring word segmentation vectors of the sentence word segmentation from the preset vector table;
and merging the word segmentation vectors according to the word segmentation positions to obtain the first sentence matrix.
The sentence participles refer to the words in the sentence to be asked.
The word segmentation position refers to the arrangement position of the sentence segmentation in the sentence to be asked.
The generated first sentence matrix can have the position information of the sentence segmentation through the segmentation position, so that the generation accuracy of the first sentence matrix is improved.
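The merging of participle vectors into the first sentence matrix can be sketched as follows; the vector-table entries are hypothetical stand-ins for a trained embedding table:

```python
import numpy as np

# Hypothetical preset vector table mapping participles to characterization
# vectors (illustrative values only).
VECTOR_TABLE = {
    "交通": np.array([0.1, 0.9]),
    "事故": np.array([0.7, 0.2]),
    "如何": np.array([0.4, 0.4]),
    "处理": np.array([0.3, 0.8]),
}

def sentence_matrix(participles):
    """Merge the participle vectors in their original order: one row per
    participle, so the number of row vectors equals the number of
    participles in the sentence."""
    return np.vstack([VECTOR_TABLE[w] for w in participles])

m = sentence_matrix(["交通", "事故", "如何", "处理"])
m.shape  # four participles, each a 2-dimensional row vector
```

Stacking in word-segmentation order is what preserves the position information mentioned above.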
Specifically, the obtaining unit 110 performs word segmentation on the to-be-asked sentence, and obtaining sentence word segmentation includes:
segmenting the sentence to be asked according to a preset dictionary to obtain a plurality of segmentation paths of the sentence to be asked and segmentation participles of each segmentation path;
calculating the segmentation probability of each segmentation path according to the segmentation weight in the preset dictionary;
and determining the segmentation participle corresponding to the segmentation path with the maximum segmentation probability as the sentence participle.
The preset dictionary comprises a plurality of vocabularies and the word segmentation weight of each vocabulary.
Through the implementation mode, the sentence segmentation words can be quickly generated according to the requirement.
Specifically, the calculating, by the obtaining unit 110, the segmentation probability of each segmentation path according to the segmentation weight in the preset dictionary includes:
obtaining a segmentation weight of the segmentation word in each segmentation path in the preset dictionary;
and calculating the sum of the word segmentation weights obtained in each segmentation path to obtain the segmentation probability.
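The dictionary-based segmentation scoring can be sketched as follows; the dictionary contents and weights are hypothetical:

```python
# Hypothetical preset dictionary: vocabulary -> word-segmentation weight.
DICTIONARY = {"交通": 5.0, "事故": 4.0, "交": 1.0, "通事": 0.5, "故": 0.5}

def path_probability(path):
    """Segmentation probability of one segmentation path: the sum of the
    word-segmentation weights of its participles (out-of-dictionary words
    get a small default weight, an assumption of this sketch)."""
    return sum(DICTIONARY.get(word, 0.1) for word in path)

def best_segmentation(paths):
    """Determine the segmentation path with the largest segmentation
    probability; its participles become the sentence participles."""
    return max(paths, key=path_probability)

paths = [["交通", "事故"], ["交", "通事", "故"]]
best_segmentation(paths)  # the path scoring 9.0 beats the path scoring 2.0
```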
In at least one embodiment of the present invention, a manner in which the obtaining unit 110 obtains the second sentence matrix of the candidate question sentence based on the preset vector table is the same as a manner in which the obtaining unit 110 obtains the first sentence matrix of the question sentence based on the preset vector table, which is not described in detail herein.
The extracting unit 111 performs feature extraction on the first sentence matrix to obtain a first feature vector, and performs feature extraction on the second sentence matrix to obtain a second feature vector.
In at least one embodiment of the invention, the first feature vector is used to indicate the semantics of the question sentence to be asked, and the second feature vector is used to indicate the semantics of the candidate question sentence.
In at least one embodiment of the present invention, the extracting unit 111 performs feature extraction on the first sentence matrix, and obtaining a first feature vector includes:
coding the first statement matrix by using a first feature coder to obtain a first coding matrix, and coding the first statement matrix by using a second feature coder to obtain a second coding matrix;
splicing the first coding matrix and the second coding matrix to obtain a spliced matrix;
performing characteristic integration on the spliced matrix by using a preset full connection layer to obtain an integrated matrix;
and calculating the average value in each dimension in the integration matrix to obtain the first feature vector.
The first coding matrix and the second coding matrix are obtained by coding the first statement matrix in different coding modes.
The semantic representation capability of the splicing matrix can be improved through the first feature encoder and the second feature encoder, and feature integration can be performed on the highly abstracted features through the preset full-connection layer, so that the generation accuracy of the first feature vector is improved.
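The splice, integrate, and average pipeline can be sketched as follows, with randomly initialised stand-ins for the two encoders' outputs and the preset fully connected layer:

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_feature_vector(first_coding, second_coding, fc_weights):
    """Sketch of the described steps: splice the two coding matrices,
    integrate features with a fully connected layer (here a random weight
    matrix as a stand-in), then take the mean in each dimension."""
    spliced = np.concatenate([first_coding, second_coding], axis=1)  # splicing matrix
    integrated = spliced @ fc_weights                                # preset full connection layer
    return integrated.mean(axis=0)                                   # average over each dimension

first = rng.standard_normal((6, 4))   # first coding matrix (6 tokens, 4 dims)
second = rng.standard_normal((6, 4))  # second coding matrix
fc = rng.standard_normal((8, 5))      # fully connected weights: 8 -> 5 dims
vec = extract_feature_vector(first, second, fc)
vec.shape  # a single 5-dimensional first feature vector
```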
Specifically, the obtaining, by the extracting unit 111, a first encoding matrix by using a first feature encoder includes:
acquiring information corresponding to the first statement matrix from a character vector table as a character matrix, and acquiring information corresponding to the first statement matrix from a position vector table as a position matrix;
calculating the sum of the character matrix and the position matrix to obtain a mapping matrix;
processing the mapping matrix by using a multi-head attention mechanism to obtain an attention matrix;
calculating the sum of the mapping matrix and the attention matrix to obtain a feature matrix;
normalizing the characteristic matrix to obtain a normalized matrix;
performing feature integration on the normalized matrix by using the preset full connection layer to obtain a target matrix;
calculating the sum of the normalization matrix and the target matrix, and performing normalization processing on the matrix obtained by calculation to obtain a calculation matrix;
and mapping the calculation matrix according to a preset feature vector table to obtain the first coding matrix.
The character vector table stores a plurality of vectors of character representations.
The position vector table stores a plurality of position characterized vectors.
The mapping matrix can be accurately generated through the character vector table and the position vector table, the semantic representation capability of the attention matrix is improved through processing the mapping matrix through a multi-head attention mechanism, and the generation accuracy of the initial matrix can be improved through a series of operations on the attention matrix.
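A simplified sketch of the first feature encoder's pipeline. Single-head attention without learned projections stands in for the multi-head mechanism, row-wise standardization stands in for the ratio-based normalization described later, and the final feature-vector-table mapping step is omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head self-attention with Q = K = V = x (a simplification of
    the multi-head attention mechanism)."""
    scores = softmax(x @ x.T / np.sqrt(x.shape[1]))
    return scores @ x

def first_encoder(char_matrix, pos_matrix, fc_weights):
    """Hedged sketch of the described encoder steps."""
    def normalize(m):  # row-wise standardization as a normalization stand-in
        return (m - m.mean(axis=1, keepdims=True)) / (m.std(axis=1, keepdims=True) + 1e-6)
    mapping = char_matrix + pos_matrix           # sum of character and position matrices
    attention = self_attention(mapping)          # attention matrix
    features = normalize(mapping + attention)    # residual sum, then normalization
    target = features @ fc_weights               # preset full connection layer
    return normalize(features + target)          # second residual sum, then normalization

rng = np.random.default_rng(2)
out = first_encoder(rng.standard_normal((4, 8)), rng.standard_normal((4, 8)),
                    rng.standard_normal((8, 8)))
out.shape  # one row per character, dimensionality preserved by the square fc layer
```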
Specifically, the calculating, by the extracting unit 111, a sum of the character matrix and the position matrix to obtain a mapping matrix includes:
for any element in the character matrix, determining an element corresponding to the any element in the position matrix as a corresponding element;
calculating the sum of any element and the corresponding element to obtain the element sum corresponding to any element;
and splicing the element sum according to the position of any element in the character matrix to obtain the mapping matrix.
Through the embodiment, the mapping matrix can be accurately determined.
Further, the manner in which the extracting unit 111 processes the mapping matrix by using the multi-head attention mechanism belongs to the prior art, and is not described in detail herein.
Further, the way of calculating the sum of the mapping matrix and the attention matrix by the extracting unit 111 is the same as the way of calculating the sum of the character matrix and the position matrix by the extracting unit 111, which is not described again in the present invention.
Further, the extracting unit 111 performs normalization processing on the feature matrix to obtain a normalized matrix, including:
adjusting any characteristic element in the characteristic matrix to a preset range to obtain a target element corresponding to the any characteristic element, and determining the proportion of the any characteristic element to the target element as an adjustment proportion;
adjusting other characteristic elements except any characteristic element in the characteristic matrix according to the adjustment proportion to obtain adjustment elements corresponding to the other characteristic elements;
and splicing the target element and the adjusting element to obtain the normalized matrix.
Wherein the preset range is generally set to [0, 1].
Because every element in the feature matrix is scaled by the same adjustment ratio, the normalized matrix can be generated accurately.
Specifically, the extracting unit 111 performs encoding processing on the first sentence matrix by using a second feature encoder, and obtaining a second encoding matrix includes:
performing convolution processing on the character matrix by using a preset convolution layer to obtain a convolution matrix;
and mapping the convolution matrix according to the preset feature vector table to obtain the second coding matrix.
Wherein the preset convolution layer includes: a first convolution layer with 3 convolution kernels of size 5 x 5, a second convolution layer with 2 convolution kernels of size 3 x 3, and a third convolution layer with 1 convolution kernel of size 1 x 1.
In at least one embodiment of the present invention, a manner of extracting features of the second sentence matrix by the extracting unit 111 is the same as a manner of extracting features of the first sentence matrix by the extracting unit 111, which is not described herein again.
The calculating unit 112 calculates a matching degree of the first feature vector and the second feature vector.
In at least one embodiment of the present invention, the matching degree refers to a matching condition of the first feature vector and the second feature vector.
In at least one embodiment of the present invention, the calculating unit 112 calculates the matching degree of the first feature vector and the second feature vector includes:
calculating the similarity between the first feature vector and the second feature vector by using a cosine distance formula to obtain a first similarity;
calculating the variance of the first feature vector according to the value of each dimension in the first feature vector to obtain a first variance, and calculating the variance of the second feature vector according to the value of each dimension in the second feature vector to obtain a second variance;
calculating the sum of the first variance and the second variance to obtain a total variance;
calculating a difference value between the first feature vector and the second feature vector to obtain a difference value vector, and calculating a variance of the difference value vector according to a value of each dimension in the difference value vector to obtain a third variance;
dividing the third variance by the total variance to obtain a second similarity;
and carrying out weighted sum operation on the first similarity and the second similarity according to a preset weight coefficient to obtain the matching degree.
The preset weight coefficient is set according to the importance of the first similarity and the second similarity in the matching degree, and the determination mode of the preset weight coefficient is not limited.
Through the first similarity and the second similarity, the accuracy of the matching degree can be effectively improved.
Specifically, the calculating unit 112 calculates a variance of the first feature vector according to a value of each dimension in the first feature vector, to obtain a first variance, where a specific formula is as follows:
s1 = [(x1 - M)^2 + (x2 - M)^2 + ... + (xN - M)^2] / N
wherein s1 is the first variance, x1, x2, … xN are values of each dimension in the first feature vector, N is a total number of dimensions in the first feature vector, and M is an average value of values of all dimensions in the first feature vector.
Further, the way of calculating the variance of the second feature vector by the calculating unit 112 according to the value of each dimension in the second feature vector is the same as the way of calculating the variance of the first feature vector by the calculating unit 112 according to the value of each dimension in the first feature vector, and the details are not repeated herein.
Further, the calculating unit 112 calculates a difference between the first feature vector and the second feature vector, and obtaining a difference vector includes:
acquiring any vector element in the first feature vector, and determining the element dimension of the any vector element in the first feature vector;
determining the value corresponding to the element dimension in the second feature vector as a corresponding dimension element;
and calculating the difference value between any vector element and the corresponding dimension element, and splicing the difference value to obtain the difference value vector.
Further, the calculating unit 112 performs a weighted-sum operation on the first similarity and the second similarity according to a preset weight coefficient, and obtaining the matching degree includes:
acquiring a coefficient corresponding to the first similarity from the preset weight coefficient as a first coefficient, and acquiring a coefficient corresponding to the second similarity from the preset weight coefficient as a second coefficient;
calculating the product of the first similarity and the first coefficient to obtain a first numerical value, and calculating the product of the second similarity and the second coefficient to obtain a second numerical value;
and calculating the sum of the first numerical value and the second numerical value to obtain the matching degree.
The processing unit 113 normalizes the matching degree to obtain the similarity probability between the question sentence to be asked and the candidate question sentence.
In at least one embodiment of the present invention, the similarity probability refers to a similarity between the question sentence to be asked and the candidate question sentence. The sum of the similarity probabilities of the question sentence to be asked and all the candidate question sentences is 1.
In at least one embodiment of the present invention, the processing unit 113 normalizes the matching degree, and obtaining the similarity probability between the question sentence to be asked and the candidate question sentence includes:
and carrying out normal distribution processing on the matching degree to obtain the similarity probability.
Wherein the probability sum of the similarity probabilities is 1.
By normalizing the matching degree, the target question sentence can be quickly determined according to the matching degree.
The determination unit 114 determines the candidate question sentence having the largest similarity probability as a target question sentence.
In at least one embodiment of the present invention, the target question sentence refers to a candidate question sentence with the highest similarity probability with the question sentence to be asked among all the candidate question sentences.
The obtaining unit 110 obtains an answer corresponding to the target question sentence as a reply sentence of the to-be-asked sentence.
It is emphasized that the reply statement may also be stored in a node of a block chain in order to further ensure the privacy and security of the reply statement.
In at least one embodiment of the present invention, the acquiring unit 110 acquiring the answer corresponding to the target question sentence as the reply sentence of the to-be-asked sentence includes:
acquiring a target number of the target question statement;
writing the target number and the field into a preset query template to obtain a query statement;
and operating the query statement to obtain the reply statement.
Wherein the preset query template may be a structured query statement.
The query statement can be generated quickly through the preset query template, and the reply statement can be accurately acquired because the query statement contains the target number and the field.
In at least one embodiment of the present invention, after obtaining the answer corresponding to the target question sentence as the reply sentence of the to-be-asked sentence, the obtaining unit 110 obtains the request number of the question-and-answer request;
the generating unit 115 generates a prompt message according to the request number and the reply statement;
the encryption unit 116 encrypts the prompt message by using a symmetric encryption algorithm to obtain a ciphertext;
the obtaining unit 110 obtains a trigger user of the question and answer request from the question and answer request;
the transmitting unit 117 transmits the ciphertext to the terminal device of the trigger user.
The prompt information is generated from the request number and the reply statement, so the trigger user can quickly learn which request a reply statement corresponds to; encrypting the prompt information with a symmetric encryption algorithm generates the ciphertext quickly and improves the security of the prompt information.
According to the technical scheme, the semantic representation capability of the first feature vector and the semantic representation capability of the second feature vector are improved by extracting the features of the first statement matrix and the second statement matrix, the similarity between the first feature vector and the second feature vector is calculated in various ways, the accuracy of the matching degree is effectively improved, and the accuracy of the reply statement is doubly improved. In addition, the invention does not need to process the question sentences and the candidate question sentences through a multilayer model structure, thereby improving the determining efficiency of the reply sentences and improving the real-time property of intelligent question answering.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the method for implementing intelligent question answering.
In one embodiment of the present invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and computer readable instructions, such as a smart question and answer program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by those skilled in the art that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation of the electronic device 1; the device may comprise more or fewer components than shown, some components may be combined, or different components may be used. For example, the electronic device 1 may further comprise input/output devices, network access devices, a bus, and the like.
The Processor 13 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. The processor 13 is an operation core and a control center of the electronic device 1, and is connected to each part of the whole electronic device 1 by various interfaces and lines, and executes an operating system of the electronic device 1 and various installed application programs, program codes, and the like.
Illustratively, the computer readable instructions may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to implement the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing specific functions, which are used for describing the execution process of the computer readable instructions in the electronic device 1. For example, the computer-readable instructions may be divided into an acquisition unit 110, an extraction unit 111, a calculation unit 112, a processing unit 113, a determination unit 114, a generation unit 115, an encryption unit 116, and a transmission unit 117.
The memory 12 may be used for storing the computer readable instructions and/or modules, and the processor 13 implements various functions of the electronic device 1 by executing or executing the computer readable instructions and/or modules stored in the memory 12 and invoking data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. The memory 12 may include non-volatile and volatile memories, such as: a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a memory having a physical form, such as a memory stick, a TF Card (Trans-flash Card), or the like.
The integrated modules/units of the electronic device 1 may be stored in a computer readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the above embodiments may be implemented by instructing the relevant hardware through computer readable instructions, which may be stored in a computer readable storage medium; when the computer readable instructions are executed by a processor, the steps of the method embodiments may be implemented.
The computer readable instructions comprise computer readable instruction code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
With reference to fig. 1, the memory 12 in the electronic device 1 stores computer-readable instructions to implement an intelligent question answering method, and the processor 13 can execute the computer-readable instructions to implement:
when a question and answer request is received, extracting a sentence to be asked from the question and answer request, and acquiring a candidate question sentence from a preset corpus according to the question and answer request;
acquiring a first sentence matrix of the sentence to be asked based on a preset vector table, and acquiring a second sentence matrix of the candidate question sentence based on the preset vector table;
performing feature extraction on the first sentence matrix to obtain a first feature vector, and performing feature extraction on the second sentence matrix to obtain a second feature vector;
calculating the matching degree of the first feature vector and the second feature vector;
normalizing the matching degree to obtain the similarity probability of the sentence to be asked and the candidate question sentence;
determining the candidate question sentence with the maximum similarity probability as the target question sentence;
and acquiring an answer corresponding to the target question sentence as a reply sentence of the sentence to be asked.
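As an illustration only (not part of the disclosure), the retrieval flow above — build a matrix for each sentence, extract feature vectors, score every candidate, normalize the scores into similarity probabilities, and return the answer of the best candidate — can be sketched in plain Python. The toy `VECTOR_TABLE`, the mean-pooling feature extractor, and the softmax normalization are stand-ins for the patent's preset vector table, trained encoders, and normalization step:

```python
import math

# Hypothetical toy vector table; the patent's "preset vector table"
# would be a trained embedding lookup.
VECTOR_TABLE = {
    "reset": [0.9, 0.1], "password": [0.8, 0.3],
    "open": [0.1, 0.9], "account": [0.2, 0.8],
}

def sentence_matrix(tokens):
    """Look up each token's vector; rows follow token order."""
    return [VECTOR_TABLE[t] for t in tokens]

def feature_vector(matrix):
    """Stand-in feature extractor: mean-pool over the matrix rows."""
    dim = len(matrix[0])
    return [sum(row[i] for row in matrix) / len(matrix) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def answer(query_tokens, candidates, answers):
    """candidates: list of token lists; answers: parallel list of replies."""
    q = feature_vector(sentence_matrix(query_tokens))
    scores = [cosine(q, feature_vector(sentence_matrix(c))) for c in candidates]
    # Normalize matching degrees into similarity probabilities (softmax).
    exps = [math.exp(s) for s in scores]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)  # maximum probability
    return answers[best]
```

A query about passwords then retrieves the answer stored for the password question rather than the account question.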
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer readable instructions, which is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division of the modules is only one logical functional division, and other divisions may be adopted in actual implementations.
The computer readable storage medium has computer readable instructions stored thereon, wherein the computer readable instructions when executed by the processor 13 are configured to implement the steps of:
when a question and answer request is received, extracting a sentence to be asked from the question and answer request, and acquiring a candidate question sentence from a preset corpus according to the question and answer request;
acquiring a first sentence matrix of the sentence to be asked based on a preset vector table, and acquiring a second sentence matrix of the candidate question sentence based on the preset vector table;
performing feature extraction on the first sentence matrix to obtain a first feature vector, and performing feature extraction on the second sentence matrix to obtain a second feature vector;
calculating the matching degree of the first feature vector and the second feature vector;
normalizing the matching degree to obtain the similarity probability of the sentence to be asked and the candidate question sentence;
determining the candidate question sentence with the maximum similarity probability as the target question sentence;
and acquiring an answer corresponding to the target question sentence as a reply sentence of the sentence to be asked.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or devices may also be implemented by one unit or device through software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An intelligent question-answering method is characterized by comprising the following steps:
when a question and answer request is received, extracting a sentence to be asked from the question and answer request, and acquiring a candidate question sentence from a preset corpus according to the question and answer request;
acquiring a first sentence matrix of the sentence to be asked based on a preset vector table, and acquiring a second sentence matrix of the candidate question sentence based on the preset vector table;
performing feature extraction on the first sentence matrix to obtain a first feature vector, and performing feature extraction on the second sentence matrix to obtain a second feature vector;
calculating the matching degree of the first feature vector and the second feature vector;
normalizing the matching degree to obtain the similarity probability of the sentence to be asked and the candidate question sentence;
determining the candidate question sentence with the maximum similarity probability as the target question sentence;
and acquiring an answer corresponding to the target question sentence as a reply sentence of the sentence to be asked.
2. The intelligent question-answering method according to claim 1, wherein the obtaining of candidate question sentences from a preset corpus according to the question-answering request comprises:
acquiring a field corresponding to the sentence to be asked from the question and answer request;
acquiring a target list from the preset corpus according to the field;
determining all sentences in the target list as the candidate question sentences.
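A minimal sketch of claim 2's lookup, assuming the preset corpus is a mapping from field names to lists of question sentences (the `CORPUS` contents and the request layout are hypothetical illustrations, not part of the disclosure):

```python
# Hypothetical corpus keyed by field: the field named in the question and
# answer request selects one target list of candidate question sentences.
CORPUS = {
    "banking": ["How do I reset my password?", "How do I open an account?"],
    "insurance": ["How do I file a claim?"],
}

def candidate_questions(request):
    """Return every sentence in the target list for the request's field."""
    field = request["field"]          # field corresponding to the sentence to be asked
    return list(CORPUS.get(field, []))  # all sentences in the target list
```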
3. The intelligent question answering method according to claim 1, wherein the obtaining of the first sentence matrix of the sentence to be asked based on a preset vector table comprises:
performing word segmentation processing on the sentence to be asked to obtain sentence word segmentation;
determining the word segmentation position of the sentence segmentation in the sentence to be asked according to the sentence to be asked;
acquiring word segmentation vectors of the sentence word segmentation from the preset vector table;
and merging the word segmentation vectors according to the word segmentation positions to obtain the first sentence matrix.
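Claim 3's construction — segment the sentence, determine each segment's position, look up its vector, and merge in position order — might look like the following, with whitespace splitting standing in for a real Chinese word segmenter and the toy `WORD_VECTORS` table standing in for the preset vector table:

```python
# Toy word-vector table; a real system would use trained embeddings.
WORD_VECTORS = {"how": [1.0, 0.0], "reset": [0.0, 1.0], "password": [0.5, 0.5]}

def first_sentence_matrix(sentence):
    tokens = sentence.lower().split()   # word segmentation (whitespace stand-in)
    positions = range(len(tokens))      # position of each segment in the sentence
    # Merge the word segmentation vectors in position order: row p of the
    # sentence matrix is the vector of the segment at position p.
    return [WORD_VECTORS[tokens[p]] for p in positions]
```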
4. The intelligent question-answering method according to claim 1, wherein the extracting the features of the first sentence matrix to obtain a first feature vector comprises:
coding the first statement matrix by using a first feature coder to obtain a first coding matrix, and coding the first statement matrix by using a second feature coder to obtain a second coding matrix;
splicing the first coding matrix and the second coding matrix to obtain a spliced matrix;
performing characteristic integration on the spliced matrix by using a preset full connection layer to obtain an integrated matrix;
and calculating the average value in each dimension in the integration matrix to obtain the first feature vector.
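A toy rendering of claim 4's pipeline. The two encoders, the fully connected layer, and its weight are placeholders (the real ones would be trained networks); only the encode–splice–integrate–mean structure follows the claim:

```python
def encode_a(matrix):   # stand-in for the first feature encoder
    return [[x * 2 for x in row] for row in matrix]

def encode_b(matrix):   # stand-in for the second feature encoder
    return [[x + 1 for x in row] for row in matrix]

def splice(m1, m2):
    """Concatenate the two coding matrices row-wise into a spliced matrix."""
    return [r1 + r2 for r1, r2 in zip(m1, m2)]

def fully_connected(matrix, weight=1.0):
    """Trivial stand-in for the preset fully connected layer."""
    return [[weight * x for x in row] for row in matrix]

def first_feature_vector(sentence_matrix):
    spliced = splice(encode_a(sentence_matrix), encode_b(sentence_matrix))
    integrated = fully_connected(spliced)
    # Average over each dimension (column) yields the feature vector.
    n = len(integrated)
    return [sum(row[i] for row in integrated) / n
            for i in range(len(integrated[0]))]
```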
5. The intelligent question-answering method according to claim 4, wherein the encoding the first sentence matrix by using the first feature encoder to obtain the first encoding matrix comprises:
acquiring information corresponding to the first statement matrix from a character vector table as a character matrix, and acquiring information corresponding to the first statement matrix from a position vector table as a position matrix;
calculating the sum of the character matrix and the position matrix to obtain a mapping matrix;
processing the mapping matrix by using a multi-head attention mechanism to obtain an attention matrix;
calculating the sum of the mapping matrix and the attention matrix to obtain a feature matrix;
normalizing the characteristic matrix to obtain a normalized matrix;
performing feature integration on the normalized matrix by using the preset full connection layer to obtain a target matrix;
calculating the sum of the normalization matrix and the target matrix, and performing normalization processing on the matrix obtained by calculation to obtain a calculation matrix;
and mapping the calculation matrix according to a preset feature vector table to obtain the first coding matrix.
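Claim 5 describes a transformer-style encoder block: sum the character and position matrices into a mapping matrix, apply multi-head self-attention with a residual connection, normalize, pass through a fully connected layer, add another residual, and normalize again. Below is a compact pure-Python sketch of that structure; the single attention head, the identity fully connected layer, and the unlearned layer normalization are simplifications for illustration only:

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def add(m1, m2):
    """Element-wise sum of two equally shaped matrices."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

def layer_norm(matrix):
    """Normalize each row to zero mean, unit variance (no learned scale/shift)."""
    out = []
    for row in matrix:
        mu = sum(row) / len(row)
        var = sum((x - mu) ** 2 for x in row) / len(row)
        sd = math.sqrt(var + 1e-6)
        out.append([(x - mu) / sd for x in row])
    return out

def self_attention(matrix):
    """Single-head scaled dot-product self-attention (the claim uses
    multi-head attention; one head keeps the sketch short)."""
    scale = math.sqrt(len(matrix[0]))
    out = []
    for qi in matrix:
        scores = softmax([sum(q * k for q, k in zip(qi, kj)) / scale
                          for kj in matrix])
        out.append([sum(w * vj[d] for w, vj in zip(scores, matrix))
                    for d in range(len(matrix[0]))])
    return out

def encoder_block(char_matrix, pos_matrix):
    mapping = add(char_matrix, pos_matrix)           # character + position matrices
    feature = add(mapping, self_attention(mapping))  # attention + residual
    normed = layer_norm(feature)                     # normalization matrix
    target = normed                                  # identity stand-in for the FC layer
    return layer_norm(add(normed, target))           # second residual + normalization
```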
6. The intelligent question-answering method according to claim 1, wherein the calculating the degree of matching of the first eigenvector and the second eigenvector comprises:
calculating values of the first eigenvector and the second eigenvector by using a cosine distance formula to obtain a first similarity;
calculating the variance of the first feature vector according to the value of each dimension in the first feature vector to obtain a first variance, and calculating the variance of the second feature vector according to the value of each dimension in the second feature vector to obtain a second variance;
calculating the sum of the first variance and the second variance to obtain a total variance;
calculating a difference value between the first feature vector and the second feature vector to obtain a difference value vector, and calculating a variance of the difference value vector according to a value of each dimension in the difference value vector to obtain a third variance;
dividing the third variance by the total variance to obtain a second similarity;
and carrying out weighted sum operation on the first similarity and the second similarity according to a preset weight coefficient to obtain the matching degree.
7. The intelligent question-answering method according to claim 1, wherein after obtaining the answer corresponding to the target question sentence as the reply sentence of the sentence to be asked, the method further comprises:
acquiring a request number of the question-answering request;
generating prompt information according to the request number and the reply statement;
encrypting the prompt message by adopting a symmetric encryption algorithm to obtain a ciphertext;
acquiring a trigger user of the question and answer request from the question and answer request;
and sending the ciphertext to the terminal equipment of the trigger user.
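Claim 7 only requires some symmetric algorithm; the sketch below uses a toy SHA-256-based XOR stream purely to show the shape of "build the prompt from the request number and reply sentence, encrypt with a shared key, send the ciphertext". It is not a secure cipher — a real deployment would use a vetted algorithm such as AES-GCM:

```python
import hashlib

def _keystream(key: bytes):
    """Derive an endless keystream from the key (toy construction only)."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def symmetric_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call also decrypts (symmetric)."""
    return bytes(b ^ k for b, k in zip(data, _keystream(key)))

def make_prompt(request_number: str, reply: str) -> bytes:
    # Claim 7: prompt information = request number + reply sentence.
    return f"[{request_number}] {reply}".encode("utf-8")
```

The ciphertext would then be sent to the terminal device of the triggering user, who decrypts it with the same shared key.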
8. An intelligent question answering device, characterized in that the intelligent question answering device comprises:
an acquisition unit, configured to, when a question and answer request is received, extract a sentence to be asked from the question and answer request, and acquire a candidate question sentence from a preset corpus according to the question and answer request;
the acquisition unit is further configured to acquire a first sentence matrix of the sentence to be asked based on a preset vector table, and acquire a second sentence matrix of the candidate question sentence based on the preset vector table;
an extraction unit, configured to perform feature extraction on the first sentence matrix to obtain a first feature vector, and perform feature extraction on the second sentence matrix to obtain a second feature vector;
a calculation unit, configured to calculate a matching degree between the first feature vector and the second feature vector;
a processing unit, configured to normalize the matching degree to obtain the similarity probability of the sentence to be asked and the candidate question sentence;
a determination unit, configured to determine the candidate question sentence with the highest similarity probability as the target question sentence;
the acquisition unit is further configured to acquire an answer corresponding to the target question sentence as a reply sentence of the sentence to be asked.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the intelligent question answering method according to any one of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer-readable storage medium stores therein computer-readable instructions which are executed by a processor in an electronic device to implement the intelligent question answering method according to any one of claims 1 to 7.
CN202110389804.5A 2021-04-12 2021-04-12 Intelligent question and answer method, device, equipment and storage medium Pending CN113064972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110389804.5A CN113064972A (en) 2021-04-12 2021-04-12 Intelligent question and answer method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110389804.5A CN113064972A (en) 2021-04-12 2021-04-12 Intelligent question and answer method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113064972A true CN113064972A (en) 2021-07-02

Family

ID=76566380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110389804.5A Pending CN113064972A (en) 2021-04-12 2021-04-12 Intelligent question and answer method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113064972A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114328908A (en) * 2021-11-08 2022-04-12 腾讯科技(深圳)有限公司 Question and answer sentence quality inspection method and device and related products
CN116340365A (en) * 2023-05-17 2023-06-27 北京创新乐知网络技术有限公司 Cache data matching method, cache data matching device and terminal equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180189385A1 (en) * 2016-12-29 2018-07-05 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for locating an answer based on question and answer
CN110969143A (en) * 2019-12-19 2020-04-07 深圳壹账通智能科技有限公司 Evidence obtaining method and system based on image recognition, computer equipment and storage medium
CN111881279A (en) * 2020-07-28 2020-11-03 平安科技(深圳)有限公司 Transformer model-based question answering method, question answering device and storage device
CN112287069A (en) * 2020-10-29 2021-01-29 平安科技(深圳)有限公司 Information retrieval method and device based on voice semantics and computer equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180189385A1 (en) * 2016-12-29 2018-07-05 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for locating an answer based on question and answer
CN110969143A (en) * 2019-12-19 2020-04-07 深圳壹账通智能科技有限公司 Evidence obtaining method and system based on image recognition, computer equipment and storage medium
CN111881279A (en) * 2020-07-28 2020-11-03 平安科技(深圳)有限公司 Transformer model-based question answering method, question answering device and storage device
CN112287069A (en) * 2020-10-29 2021-01-29 平安科技(深圳)有限公司 Information retrieval method and device based on voice semantics and computer equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114328908A (en) * 2021-11-08 2022-04-12 腾讯科技(深圳)有限公司 Question and answer sentence quality inspection method and device and related products
CN116340365A (en) * 2023-05-17 2023-06-27 北京创新乐知网络技术有限公司 Cache data matching method, cache data matching device and terminal equipment
CN116340365B (en) * 2023-05-17 2023-09-08 北京创新乐知网络技术有限公司 Cache data matching method, cache data matching device and terminal equipment

Similar Documents

Publication Publication Date Title
CN113032528B (en) Case analysis method, case analysis device, case analysis equipment and storage medium
CN112395886B (en) Similar text determination method and related equipment
CN113656547B (en) Text matching method, device, equipment and storage medium
CN113435196B (en) Intention recognition method, device, equipment and storage medium
CN113064972A (en) Intelligent question and answer method, device, equipment and storage medium
CN113408278B (en) Intention recognition method, device, equipment and storage medium
CN114090794A (en) Event map construction method based on artificial intelligence and related equipment
CN110298328B (en) Test data forming method, test data forming apparatus, electronic device, and medium
CN113793696B (en) Novel medicine side effect occurrence frequency prediction method, system, terminal and readable storage medium based on similarity
CN113268597A (en) Text classification method, device, equipment and storage medium
CN113705468A (en) Digital image identification method based on artificial intelligence and related equipment
CN113420545B (en) Abstract generation method, device, equipment and storage medium
CN111986771A (en) Medical prescription query method and device, electronic equipment and storage medium
CN112989044B (en) Text classification method, device, equipment and storage medium
CN116468043A (en) Nested entity identification method, device, equipment and storage medium
CN113420143B (en) Method, device, equipment and storage medium for generating document abstract
CN112949305B (en) Negative feedback information acquisition method, device, equipment and storage medium
CN113627186B (en) Entity relation detection method based on artificial intelligence and related equipment
CN113326365B (en) Reply sentence generation method, device, equipment and storage medium
CN113343700A (en) Data processing method, device, equipment and storage medium
CN113282218A (en) Multi-dimensional report generation method, device, equipment and storage medium
CN113962221A (en) Text abstract extraction method and device, terminal equipment and storage medium
CN113269179A (en) Data processing method, device, equipment and storage medium
CN113434895B (en) Text decryption method, device, equipment and storage medium
CN113283677A (en) Index data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination