CN115774996A - Question-following generation method and device for intelligent interview and electronic equipment - Google Patents

Question-following generation method and device for intelligent interview and electronic equipment Download PDF

Info

Publication number
CN115774996A
CN115774996A (application CN202211548972.5A)
Authority
CN
China
Prior art keywords
question
answer
candidate
knowledge
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211548972.5A
Other languages
Chinese (zh)
Other versions
CN115774996B (en)
Inventor
戴科彬
闻洪海
陈少波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Duomian Beijing Technology Co ltd
Tongdao Jingying Tianjin Information Technology Co ltd
Yingshi Internet Beijing Information Technology Co ltd
Original Assignee
Duomian Beijing Technology Co ltd
Tongdao Jingying Tianjin Information Technology Co ltd
Yingshi Internet Beijing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Duomian Beijing Technology Co ltd, Tongdao Jingying Tianjin Information Technology Co ltd, Yingshi Internet Beijing Information Technology Co ltd filed Critical Duomian Beijing Technology Co ltd
Priority to CN202211548972.5A priority Critical patent/CN115774996B/en
Publication of CN115774996A publication Critical patent/CN115774996A/en
Application granted granted Critical
Publication of CN115774996B publication Critical patent/CN115774996B/en
Legal status: Active (granted)

Landscapes

  • Machine Translation (AREA)

Abstract

The invention discloses a follow-up question generation method and device for intelligent interviews, and electronic equipment, relating to the technical field of business interview information data processing. The method comprises: parsing the candidate's answer and identifying a plurality of semantic entities in it; extracting the relations among the semantic entities to obtain entity relation information; acquiring the standard answer corresponding to the question and determining the candidate's answer result information based on the standard answer and the entity relation information; and determining a follow-up strategy based on the answer result information, locating follow-up knowledge points in the knowledge graph corresponding to the question according to that strategy, and generating follow-up questions for those knowledge points. The invention enables targeted follow-up questioning, so that the follow-up questions help the interviewer probe the depth and breadth of the candidate's knowledge and ultimately determine how well the candidate matches the post.

Description

Question-following generation method and device for intelligent interview and electronic equipment
Technical Field
The invention relates to the technical field of business interview information data processing, and in particular to a follow-up question generation method and device for intelligent interviews, and electronic equipment.
Background
In intelligent interview products currently on the market, several follow-up dimensions are usually preset for each source question; a model selects the target dimension to pursue, and follow-up questions under that dimension are recommended to the interviewer.
The inventors found that such schemes can only ask follow-up questions one-directionally along a fixed scheme during the interview. They lack an effective understanding of the candidate's answer content and therefore cannot ask targeted follow-ups, so the follow-up questions do not help the interviewer probe the depth and breadth of the candidate's knowledge or ultimately determine how well the candidate matches the post.
Disclosure of Invention
To solve, or at least partially solve, the above technical problems, embodiments of the present invention provide a follow-up question generation method, apparatus, and electronic device for intelligent interviews. A knowledge graph is used to establish connections between knowledge points; each follow-up round can choose a follow-up direction based on the candidate's answers and ask targeted questions about child knowledge points, sibling knowledge points, unmentioned knowledge points, and so on, so that the follow-up questions help the interviewer probe the depth and breadth of the candidate's knowledge and ultimately determine how well the candidate matches the post.
An embodiment of the invention provides a follow-up question generation method for an intelligent interview, comprising the following steps:
parsing the answer given by a candidate to a question and identifying a plurality of semantic entities in it; extracting the attribute relations among the semantic entities to obtain entity relation information; acquiring the standard answer corresponding to the question and determining the candidate's answer result information based on the standard answer and the entity relation information, the answer result information representing how well the candidate has mastered the knowledge points involved in the question; and determining at least one follow-up strategy based on the answer result information, locating follow-up knowledge points in the knowledge graph corresponding to the question according to the strategy, and generating follow-up questions for those knowledge points.
An embodiment of the invention also provides a follow-up question generation device for an intelligent interview, comprising:
an identification module for parsing the answer given by a candidate to a question and identifying a plurality of semantic entities in it; an extraction module for extracting the attribute relations among the semantic entities to obtain entity relation information; a determining module for acquiring the standard answer corresponding to the question and determining the candidate's answer result information based on the standard answer and the entity relation information, the answer result information representing how well the candidate has mastered the knowledge points involved in the question; and a generating module for determining at least one follow-up strategy based on the answer result information, locating follow-up knowledge points in the knowledge graph corresponding to the question according to the strategy, and generating follow-up questions for those knowledge points.
An embodiment of the present invention further provides an electronic device, comprising:
one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the follow-up question generation method described above.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the follow-up question generation method described above.
An embodiment of the present invention further provides a computer program product comprising a computer program or instructions which, when executed by a processor, implement the follow-up question generation method described above.
Compared with the prior art, the technical scheme provided by the embodiment of the invention has the following advantages:
(1) The connections between knowledge points are established in the form of a knowledge graph. Each question can determine a follow-up direction based on the candidate's answers, and targeted follow-ups are asked about child knowledge points, sibling knowledge points, unmentioned knowledge points, and so on, so that the follow-up questions help the interviewer probe the depth and breadth of the candidate's knowledge and ultimately determine how well the candidate matches the post.
(2) For the knowledge points involved in each question, prior knowledge about those knowledge points is used to improve semantic entity recognition, effectively adapting to the core words of the standard answer under a variety of conditions.
Drawings
The above and other features, advantages and aspects of various embodiments of the present invention will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flowchart of a follow-up question generation method for an intelligent interview in an embodiment of the present invention;
FIG. 2 is a flowchart of a method for enhancing dictionary-plus-rules semantic entity recognition with prior knowledge in an embodiment of the present invention;
FIG. 3 is a flowchart of a method for enhancing machine learning / deep learning semantic entity recognition with prior knowledge in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a follow-up question generation apparatus for an intelligent interview in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present invention. It should be understood that the drawings and the embodiments of the present invention are illustrative only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present invention are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in the present invention are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The main problem with intelligent interviews currently on the market is that the interview process is too mechanical: follow-up questions can only be asked one-directionally along preset assessment dimensions. This ignores the tree or network structure of a knowledge system and loses, to some extent, the connections between knowledge points, so the system lacks an effective understanding of the candidate's answers and cannot ask targeted follow-ups. The embodiment of the invention therefore discloses a knowledge-graph-based follow-up questioning method: the connections between knowledge points are established in the form of a knowledge graph, each question can examine one or more knowledge points, and each follow-up can target child knowledge points, sibling knowledge points, unmentioned knowledge points, and so on, so as to probe the depth and breadth of the candidate's knowledge.
Referring to FIG. 1, an embodiment of the present invention provides a flowchart of a follow-up question generation method for an intelligent interview.
Step S110, parsing the answer given by the candidate to the question and identifying a plurality of semantic entities in it.
Semantic entity recognition is performed on the candidate's answer to identify the semantic entities to be used. Semantic entities are noun phrases in a document or sentence that describe concrete real-world objects, such as names of people, organizations, and places, and any other entity identified by a name; broader entity types include numbers, dates, currencies, addresses, and so on. Entity types can differ greatly between fields: for example, important entity types in the medical field typically include gene names, protein structure attribute names, compound names, drug names, and disease names.
Semantic entity recognition can be implemented with a dictionary plus a bidirectional maximum matching rule, or with any sequence labeling algorithm, including but not limited to recurrent neural networks (e.g., RNN, LSTM) and pre-trained models (e.g., BERT).
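As an illustration of the dictionary-plus-rules option, the following is a minimal sketch of bidirectional maximum matching over a closed entity dictionary. The dictionary contents and English entity names below are our own illustrative choices, not taken from the patent:

```python
def max_match_forward(text, dictionary, max_len):
    """Forward maximum matching: at each position, take the longest dictionary hit."""
    found, i = [], 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + size] in dictionary:
                found.append(text[i:i + size])
                i += size
                break
        else:
            i += 1  # no entity starts here; advance one character
    return found

def max_match_backward(text, dictionary, max_len):
    """Backward maximum matching: scan from the end of the text."""
    found, j = [], len(text)
    while j > 0:
        for size in range(min(max_len, j), 0, -1):
            if text[j - size:j] in dictionary:
                found.append(text[j - size:j])
                j -= size
                break
        else:
            j -= 1
    return list(reversed(found))

def recognize_entities(answer, entity_dict):
    """Bidirectional rule: keep the union of both passes, in order of appearance."""
    max_len = max(len(e) for e in entity_dict)
    hits = set(max_match_forward(answer, entity_dict, max_len))
    hits |= set(max_match_backward(answer, entity_dict, max_len))
    return sorted(hits, key=answer.find)

entity_dict = {"transaction isolation level", "committed read", "repeatable read",
               "dirty read", "phantom read", "non-repeatable read"}
answer = ("committed read solves dirty read, repeatable read solves "
          "phantom read and non-repeatable read")
ents = recognize_entities(answer, entity_dict)
```

In practice the forward and backward passes can disagree; production segmenters usually prefer the result with fewer single-character leftovers, which is omitted here for brevity.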
For example, suppose the candidate answers: "Transaction isolation levels include committed read and repeatable read; the committed read level can solve the dirty read problem, and the repeatable read level can solve the phantom read and non-repeatable read problems." The following semantic entities are extracted: "transaction isolation level, committed read level, repeatable read level, dirty read, non-repeatable read, phantom read".
Step S120, extracting the attribute relations among the semantic entities to obtain entity relation information.
Based on the semantic entity recognition result, every two semantic entities form a semantic entity pair, and the relation of each pair is determined. Relation determination can be implemented with any text classification algorithm, including but not limited to traditional machine learning (e.g., LR, SVM), convolutional neural networks (e.g., TextCNN), recurrent neural networks (e.g., RNN, LSTM), and pre-trained models (e.g., BERT).
Following the example in step S110, the following relations are extracted for the identified semantic entities: "the committed read level belongs to the transaction isolation level", "the repeatable read level belongs to the transaction isolation level", "the committed read level can solve dirty reads", "the repeatable read level can solve phantom reads", and "the repeatable read level can solve non-repeatable reads".
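A toy sketch of the pairing step follows. The clause-splitting keyword heuristic stands in for the trained classifier the text lists (LR, SVM, TextCNN, RNN/LSTM, BERT), and the English relation labels are our own:

```python
def extract_relations(answer, entities):
    """Form entity pairs clause by clause and assign a relation label.
    A trained text classifier would replace the keyword tests below."""
    triples = []
    for clause in answer.replace(";", ",").split(","):
        present = sorted((e for e in entities if e in clause), key=clause.find)
        if len(present) < 2:
            continue
        if "solve" in clause:
            rel = "solves"
        elif "belong" in clause:
            rel = "belongs_to"
        else:
            continue
        subj = present[0]  # naive: first-mentioned entity is the subject
        triples += [(subj, rel, obj) for obj in present[1:]]
    return triples

entities = {"committed read", "repeatable read", "dirty read",
            "phantom read", "non-repeatable read"}
answer = ("committed read solves dirty read; repeatable read solves "
          "phantom read and non-repeatable read")
triples = extract_relations(answer, entities)
```

A real pair classifier scores every entity pair against the sentence context; the clause heuristic here only serves to make the data flow of step S120 concrete.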
Step S130, acquiring the standard answer corresponding to the question and determining the candidate's answer result information based on the standard answer and the entity relation information.
The answer result information represents how well the candidate has mastered the knowledge points involved in the question.
The standard answer contains all semantic entity relations. Generally, a question involves several knowledge points, and one or more semantic entity relation answers correspond to each knowledge point. The main task is to judge whether each extracted semantic entity relation is consistent with the semantic entity relations in the standard answer: the knowledge points corresponding to consistent relations are mastered well, those corresponding to inconsistent relations are mastered incorrectly, and those corresponding to missing relations are not mastered.
Continuing with the above example, the standard answer includes: "the uncommitted read level belongs to the transaction isolation level", "the committed read level belongs to the transaction isolation level", "the repeatable read level belongs to the transaction isolation level", "the serialization level belongs to the transaction isolation level", "the committed read level can solve dirty reads", "the repeatable read level can solve non-repeatable reads", "the serialization level can solve dirty reads", "the serialization level can solve non-repeatable reads", and "the serialization level can solve phantom reads". Comparing the standard answer with the semantic entity relations from step S120, there are 4 consistent relations ("the committed read level belongs to the transaction isolation level", "the repeatable read level belongs to the transaction isolation level", "the committed read level can solve dirty reads", "the repeatable read level can solve non-repeatable reads"), 1 inconsistent relation ("the repeatable read level can solve phantom reads"), and 5 unmentioned relations ("the uncommitted read level belongs to the transaction isolation level", "the serialization level belongs to the transaction isolation level", "the serialization level can solve dirty reads", "the serialization level can solve non-repeatable reads", and "the serialization level can solve phantom reads"). This shows that the candidate has gaps at the "uncommitted read level" and "serialization level" knowledge points under the transaction isolation level, and an error at the knowledge point of which problems the repeatable read level can solve.
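The comparison in this step reduces to set operations over relation triples. A minimal sketch, reusing the transaction-isolation example (the English triple encoding is ours):

```python
# Standard-answer relations for the transaction-isolation example.
STANDARD = {
    ("uncommitted read level", "belongs_to", "transaction isolation level"),
    ("committed read level", "belongs_to", "transaction isolation level"),
    ("repeatable read level", "belongs_to", "transaction isolation level"),
    ("serialization level", "belongs_to", "transaction isolation level"),
    ("committed read level", "solves", "dirty read"),
    ("repeatable read level", "solves", "non-repeatable read"),
    ("serialization level", "solves", "dirty read"),
    ("serialization level", "solves", "non-repeatable read"),
    ("serialization level", "solves", "phantom read"),
}

def grade_answer(extracted, standard=STANDARD):
    """Split relations into mastered-well / mastered-wrong / missed."""
    extracted = set(extracted)
    return {
        "consistent": extracted & standard,  # knowledge points mastered well
        "wrong": extracted - standard,       # mastered incorrectly
        "missing": standard - extracted,     # not mentioned by the candidate
    }

# Relations extracted from the candidate's answer in step S120.
candidate = {
    ("committed read level", "belongs_to", "transaction isolation level"),
    ("repeatable read level", "belongs_to", "transaction isolation level"),
    ("committed read level", "solves", "dirty read"),
    ("repeatable read level", "solves", "phantom read"),
    ("repeatable read level", "solves", "non-repeatable read"),
}
result = grade_answer(candidate)  # 4 consistent, 1 wrong, 5 missing
```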
Optionally, the entity relation information includes entity relation information of multiple attributes, the attributes being determined from the question. Generally, semantic entity relations can be classified by attribute, and the attributes are determined from the standard answer of the question. Continuing with the above example, the standard answer concerns which levels the transaction isolation level includes and which problems each level can solve, so two relation attributes can be identified: a "belongs-to" relation and a "solves" relation. Once the attributes are determined, the semantic entity relations are classified by attribute.
Further, this step may determine the answer result information as follows:
organizing the standard answer to obtain the semantic entity relation answers of each attribute; and, for each attribute, matching the entity relation information of that attribute against the semantic entity relation answers to obtain the candidate's answer result information under that attribute.
Continuing with the above example, the standard answer includes four "belongs-to" relation answers: "the uncommitted read level belongs to the transaction isolation level", "the committed read level belongs to the transaction isolation level", "the repeatable read level belongs to the transaction isolation level", and "the serialization level belongs to the transaction isolation level"; and five "solves" relation answers: "the committed read level can solve dirty reads", "the repeatable read level can solve non-repeatable reads", "the serialization level can solve dirty reads", "the serialization level can solve non-repeatable reads", and "the serialization level can solve phantom reads". The entity relation information extracted from the candidate's answer includes two "belongs-to" relations: "the committed read level belongs to the transaction isolation level" and "the repeatable read level belongs to the transaction isolation level"; and three "solves" relations: "the committed read level can solve dirty reads", "the repeatable read level can solve phantom reads", and "the repeatable read level can solve non-repeatable reads".
Comparing the two yields the candidate's answer result information under each attribute: under the "belongs-to" relation, the knowledge points "uncommitted read level" and "serialization level" are missed; under the "solves" relation, the knowledge point of which problems the serialization level can solve is missed, and the knowledge point of which problems the repeatable read level can solve is answered incorrectly.
Step S140, determining at least one follow-up strategy based on the answer result information, locating follow-up knowledge points in the knowledge graph corresponding to the question according to the strategy, and generating follow-up questions for those knowledge points.
The scheme provides multiple follow-up strategies to choose from; each strategy considers both consistency between the candidate's answer and the knowledge points of the standard answer, and divergence toward and expansion of associated knowledge points. The follow-up strategies include a sibling knowledge point strategy (breadth first: starting from knowledge points the candidate has missed, probe the associated sibling knowledge points), a child knowledge point strategy (depth first: starting from knowledge points the candidate has mastered well, probe the associated child knowledge points), and a wrong knowledge point strategy (error clarification: ask again about knowledge points the candidate has mastered incorrectly).
Specifically, the correspondence between the candidate's answer result information and the follow-up strategies can be established in advance: knowledge points mastered well correspond to the child knowledge point strategy, missed knowledge points correspond to the sibling knowledge point strategy, and incorrectly mastered knowledge points correspond to the wrong knowledge point strategy.
Furthermore, a tree-shaped knowledge graph of the knowledge points is pre-built in this scheme, containing the hierarchical relations between knowledge points and the recommended questions corresponding to each knowledge point. First, the knowledge points corresponding to the question are located in the knowledge graph; after the follow-up strategy is determined, the corresponding follow-up knowledge points are searched in the knowledge graph according to the strategy, and the follow-up question is selected from the recommended questions of those knowledge points.
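A minimal sketch of strategy-driven lookup in such a tree-shaped knowledge graph. The node names, recommended questions, and dictionary encoding below are illustrative assumptions, not the patent's data model:

```python
# Toy tree-shaped knowledge graph: parent/children links plus recommended questions.
KG = {
    "transaction isolation level": {"parent": None, "questions": [],
        "children": ["committed read level", "repeatable read level",
                     "serialization level"]},
    "committed read level": {"parent": "transaction isolation level",
        "children": ["dirty read"],
        "questions": ["How is the committed read level implemented?"]},
    "repeatable read level": {"parent": "transaction isolation level",
        "children": [],
        "questions": ["Which problems can the repeatable read level solve?"]},
    "serialization level": {"parent": "transaction isolation level",
        "children": [],
        "questions": ["Do you know the serialization level? What does it mean?"]},
    "dirty read": {"parent": "committed read level", "children": [],
        "questions": ["What is a dirty read?"]},
}

def follow_up_questions(point, mastery, kg=KG):
    """Map mastery status to a follow-up strategy and collect questions."""
    if mastery == "well":        # depth first: drill into child knowledge points
        targets = kg[point]["children"]
    elif mastery == "missing":   # breadth first: probe the missed sibling point itself
        targets = [point]
    else:                         # "wrong": error clarification, ask again
        targets = [point]
    return [q for t in targets for q in kg[t]["questions"]]
```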
Further, since step S140 is decoupled from step S130, the follow-up strategy can be flexibly adjusted to the actual business scenario. The follow-up knowledge points can be determined as follows:
determining a follow-up strategy for each attribute according to the answer result information under that attribute; for each attribute, locating follow-up knowledge points in the knowledge graph corresponding to the question according to the strategy for that attribute; and selecting follow-up questions from the question bank based on the follow-up knowledge points.
Continuing with the above example, since the candidate missed the knowledge points "uncommitted read level" and "serialization level", the sibling knowledge point strategies for "uncommitted read level" and "serialization level" are triggered to probe those sibling knowledge points. For example, because the candidate did not answer "the serialization level belongs to the transaction isolation level", the follow-up question "Do you know the serialization level? What does serialization mean?" may be triggered.
If the candidate mastered the knowledge point "problems the committed read level can solve" well, the child knowledge point strategy for "problems the committed read level can solve" is triggered. For example, because the candidate answered "the committed read level can solve dirty reads", the follow-up question "How is the committed read level implemented?" or "What is a dirty read?" may be triggered.
If the candidate mastered the knowledge point "problems the repeatable read level can solve" incorrectly, the wrong knowledge point strategy for "problems the repeatable read level can solve" is triggered. For example, because the candidate wrongly answered "the repeatable read level can solve phantom reads", the follow-up question "Which problems can the repeatable read level actually solve?" may be triggered.
The technical scheme provided by the embodiment of the invention establishes the connections between knowledge points in the form of a knowledge graph. Each question can determine a follow-up direction based on the candidate's answers, and targeted follow-ups are asked about child knowledge points, sibling knowledge points, unmentioned knowledge points, and so on, so that the follow-up questions help the interviewer probe the depth and breadth of the candidate's knowledge and ultimately determine how well the candidate matches the post.
Every question in the question bank carries a standard answer or examination points; this information may collectively be called prior knowledge. During the development of the invention, the inventors found that, for the knowledge points involved in each question, the prior knowledge of those knowledge points can be injected into the semantic entity recognition step, and the candidate's answer can be preprocessed (text correction, text completion, and replacement of near-synonyms), thereby improving the accuracy of semantic entity recognition.
As an optional implementation manner of the embodiment of the present invention, before performing step S110, the method further includes:
extracting the knowledge points involved in the question to obtain prior knowledge information, the prior knowledge information including the semantic entities in the standard answer, their common prefixes and common suffixes, and the intended senses of polysemous words.
For example, for the question "Which systems make up an automobile chassis?", the standard answer is "drive system, braking system, running system, steering system". All four semantic entities in the standard answer share the common suffix "system", and candidates often drop this suffix when answering, e.g., "drive system, braking system, running system, steering". Using the prior knowledge, the mention "steering" can be mapped directly to the semantic entity "steering system".
As another example, the question "Which of these plants belong to the Rosaceae family?" has the standard answer "apple, China rose, rose". Here "apple" is polysemous: it can refer to a plant or to a phone brand, and in this question it means the plant. If the mention "apple" appears in the candidate's answer, the prior knowledge can be used to map it directly to the plant sense of the semantic entity "apple".
As optional implementations of the embodiment of the present invention, there are two main approaches to conventional semantic entity recognition: the dictionary-plus-rules approach and the machine learning / deep learning approach. The prior knowledge obtained by analyzing the question can improve the accuracy of semantic entity recognition in both. FIG. 2 shows how prior knowledge enhances the dictionary-plus-rules approach, and FIG. 3 shows how it enhances the machine learning / deep learning approach.
As shown in FIG. 2, the method for semantic entity recognition includes:
step S210, determining core words in the standard answer based on the priori knowledge information, performing segmentation with the finest granularity on each core word to obtain a plurality of first segmentation words included in each core word, and calculating the weight of each first segmentation word.
Specifically, the core word is determined according to the semantic entity in the prior knowledge information. And for each core word, performing word segmentation by using a finest granularity word segmentation rule, and calculating the weight of each first word segmentation. For example, taking "which systems the chassis of the vehicle is composed of" as an example, the standard answer is "a transmission system, a braking system, a driving system, and a steering system", including four core words of "the transmission system", "the braking system", "the driving system", and "the steering system", taking "the braking system" as an example, performing the finest participle will obtain two first participles of "braking", and "steering", and calculate the weight of each first participle. The calculation of the weights uses the following formula,
[The weight formula appears only as an image in the original; it combines the quantities FreqD, FreqQ, BonusPrefix, and BonusSuffix defined below.]
the method comprises the steps that FreqD is the frequency of occurrence of a first participle in all documents, freqQ is the frequency of occurrence of the first participle in a standard answer, bonusPref ix is a reward parameter of the first participle in the standard answer at least as prefixes of two core words, bonusSuff ix is a reward parameter of the first participle in the standard answer at least as suffixes of the two core words, and the reward parameter is a preset constant value larger than 1. The above parameters can be obtained by a priori knowledge.
Step S220, segmenting the candidate's answer to obtain a plurality of second participles, the length bounds of the second participles being determined from the lengths of the first participles.
The candidate's answer is segmented with an N-Gram model to obtain the combinations of second participles in the answer. The lower bound of each second participle's length (i.e., the value of N) is the minimum length of all first participles minus 2, and the upper bound is the maximum length of all first participles plus 2. In the example above (segmenting in Chinese characters), the longest first participle, "braking", has length 2 and the shortest, "system", has length 1, so the upper bound of N is 2 + 2 = 4 and the lower bound is 1 - 2 = -1; the meaningful values of N in this example are therefore 1 to 4.
For example, if the candidate answers "transmission automatic driving", all character N-Gram combinations of lengths 1 to 4 are enumerated, including "transmission", "automatic", "driving", "transmission automatic", "automatic driving", and "transmission automatic driving".
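As a minimal sketch (not taken from the patent), the N-Gram enumeration described above can be written as follows; the 1-based character spans match the position lists used in the matching results.

```python
def ngram_segments(text, first_participles):
    """Enumerate character N-Grams of the candidate answer. N runs from the
    minimum first-participle length minus 2 (floored at 1, since shorter
    values of N are not meaningful) to the maximum length plus 2."""
    lengths = [len(p) for p in first_participles]
    low = max(1, min(lengths) - 2)
    high = min(len(text), max(lengths) + 2)
    grams = []
    for n in range(low, high + 1):
        for i in range(len(text) - n + 1):
            grams.append((text[i:i + n], i + 1, i + n))  # 1-based span
    return grams
```

With first participles of lengths 1 and 2, this yields exactly the N = 1 to 4 range of the worked example.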
Step S230, sorting the first participles in each core word by their weights, sequentially calculating, in weight order, the similarity between each first participle of the core word and each second participle, screening out candidate second participles whose similarity is higher than a first preset similarity threshold, and outputting a plurality of first matching results, where each first matching result includes the first participle, the candidate second participle, their similarity, the similarity type, and the position of the candidate second participle in the candidate's answer.
For each core word, its first participles are ranked by the weights calculated in step S210. Continuing the example above, for the core word "braking system", the first participle "braking" has weight 0.9 and the first participle "system" has weight 0.1, so the first participles of "braking system" are ranked as "braking", "system".
Furthermore, the similarity between each first participle and each second participle of the candidate's answer is calculated in weight order, and the second participles satisfying the condition are screened out by the similarity threshold. Preferably, the similarity includes edit-distance similarity, semantic similarity, and pronunciation similarity, where the edit distance is the minimum number of operations (insertion, deletion, replacement) needed to convert one character string into another; semantic similarity is calculated as vector similarity based on word embeddings; and pronunciation similarity judges how similar two Chinese characters are in initial consonant, final (vowel), and tone.
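Of the three similarity types, the edit-distance one is the easiest to make concrete. A minimal Python sketch follows; the patent only defines the distance itself, so the normalization of the distance into a [0, 1] similarity is an assumption.

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions and replacements turning a
    into b (single-row dynamic programming)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # replace / keep
    return dp[len(b)]

def edit_similarity(a, b):
    """Normalize the distance to a [0, 1] similarity (scaling is assumed)."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```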
More preferably, when the top-ranked first participle is processed, it is compared against all second participles of the candidate's answer; when the second-ranked first participle is processed, it is compared only against the second participles adjacent to the target second participle that already matched; and so on. For example, for "braking system", the top-ranked first participle "braking" is first compared with each second participle of the candidate's answer "transmission automatic driving"; the pronunciation similarity of "braking" and "automatic" is 0.99, above the threshold. The second-ranked first participle "system" is then compared only with the second participles adjacent to the target second participle "automatic", and each of those similarities is below the threshold. Thus, threshold comparison screens out the candidate second participle "automatic" for the first participle "braking", recorded as ("braking", "automatic", [3,4], 0.99, similar pronunciation), where [3,4] denotes the 3rd and 4th character positions of "automatic" in "transmission automatic driving". There is no candidate second participle for the first participle "system", recorded as ("system", [], 0, none). Since "system" has no candidate second participle, the candidate second participle "automatic" alone constitutes the matching result for the core word "braking system" in the candidate's answer.
Further, the five fields of the matching result in this step can be summarized as: the core-word participle of the standard answer; the N-Gram participle in the candidate's answer; the position of that N-Gram participle in the candidate's original answer text; the similarity between the two participles; and the similarity type (pronunciation similarity, edit-distance similarity, semantic similarity, participle missing, and the like).
Step S240, merging the first matching results into a second matching result, calculating a weighted similarity between the core word and the second matching result based on the weight of each first participle in the core word and the similarity between each first participle and the corresponding candidate second participle, and screening out the second matching result with the weighted similarity higher than a second preset similarity threshold.
For example, the matching results ("braking", "automatic", [3,4], 0.99, similar pronunciation) and ("system", [], 0, none) from step S230 are merged into the matching result corresponding to the core word. For each core word, the weight of each of its first participles is multiplied by the similarity of that participle's candidate second participle, and the products are summed to give the weighted similarity between the core word and the matching result. That is: the first participle "braking" of the core word "braking system" has weight 0.9 and its candidate second participle "automatic" has similarity 0.99, while the first participle "system" has weight 0.1 and no candidate second participle, so the weighted similarity of "braking system" with the matching result is 0.9 × 0.99 + 0.1 × 0 = 0.891. Assuming this is above the preset threshold of 0.85, the merged matching result ("braking system", "automatic", [3,4], 0.891, [similar pronunciation]) is retained.
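The weighted-similarity merge reduces to a dot product of participle weights and best-candidate similarities. A one-line Python sketch (names assumed) reproduces the 0.891 figure from the example:

```python
def weighted_similarity(weights, best_candidates):
    """weights: {first_participle: weight}; best_candidates maps each first
    participle to the similarity of its screened candidate second participle.
    Participles with no candidate contribute 0, as "system" does above."""
    return sum(w * best_candidates.get(p, 0.0) for p, w in weights.items())
```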
And step S250, determining a plurality of semantic entities in the answer based on the screened second matching result.
On the basis of the above steps S210 to S250, further, if candidate second participles at the same position in the answer appear in a plurality of second matching results, a matching conflict appears; optimizing the weighted similarity between the core word and the second matching result based on a conflict matching algorithm until the candidate second participle at the same position in the answer appears in only one second matching result, and taking the second matching results with the highest weighted similarity and the preset number as final matching results; and determining a plurality of semantic entities in the answered answers based on the screened final matching results.
In this step, a conflict means that a character at the same position in the answer appears in multiple second participles. For example, if the standard answer is "chassis system" and the candidate answers "the chassis is subdivided into a suspension system and a steering system", step S240 may output three matching results: ("chassis system", "chassis", [1,2], 0.9, [participle missing]), ("chassis system", "chassis sub", [1,2,3], 0.88, [edit distance similar, pronunciation similar]), and ("chassis system", "chassis subd", [1,2,3,4], 0.85, [edit distance similar, pronunciation similar]). All three attempt to match "chassis system" in the standard answer, and position conflicts exist; for instance, the position list [1,2] of the first result and [1,2,3] of the second share the elements [1,2]. In this step, the most suitable M matching results are selected from the N matching results such that: 1. no two of the M matching results conflict with each other; and 2. the average similarity score of the M matching results is the highest. Preferably, the optimum is solved by a matching-conflict algorithm using divide-and-conquer and dynamic programming.
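A brute-force Python sketch makes conditions 1 and 2 concrete; the patent prefers divide-and-conquer plus dynamic programming, so the exhaustive search over combinations here is purely illustrative, and the data layout is an assumption.

```python
from itertools import combinations

def conflict_free(combo):
    """True when no character position is claimed by two matches."""
    seen = set()
    for _, positions, _ in combo:
        if seen & positions:
            return False
        seen |= positions
    return True

def best_non_conflicting(matches, m):
    """matches: list of (core_word, positions_set, score). Return the m
    pairwise non-conflicting matches with the highest average score."""
    best, best_avg = None, -1.0
    for combo in combinations(matches, m):
        if conflict_free(combo):
            avg = sum(score for _, _, score in combo) / m
            if avg > best_avg:
                best, best_avg = list(combo), avg
    return best, best_avg
```

On the three conflicting "chassis system" matches above, only one can survive, and the highest-scoring one (0.9) wins.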
As shown in fig. 3, the method for semantic entity recognition includes:
step S310, based on the prior knowledge information and the similarity of the first participle and the second participle, performing text correction on the answer to obtain a corrected answer; wherein the text amendment comprises correction of similar pronunciation of the text, expansion of prefix and suffix of the text and replacement of similar words of the text.
In this step, the correction includes correction of similar pronunciations in the candidate's answer text, expansion of text prefixes and suffixes, and replacement of similar words. Specifically:
for example, if the standard answer is "brake system", the candidate answer is "automatic driving in gear", and step S240 outputs a matching result ("brake system", "automatic", "3, 4", 0.891, [ similar sound generation "), which is combined from the following 2 sub-matching results: the answer of ' transmission automatic driving ' is corrected to ' transmission braking driving ' according to the answer of ' braking ', ' automatic ', [3,4],0.99 and similar pronunciation ', (' tie ', ' split ', ' 0 ', ' none '), and the answer of ' transmission automatic driving ' is corrected to ' transmission braking driving '.
For example, if the standard answer contains "before destruction" and "after destruction" and the candidate answers "before destruction and after", step S240 outputs the matching results ("before destruction", [1,2,3], 1.0, [edit distance similar]) and ("after destruction", [1,2,4], 0.9, [edit distance similar]). The two matches share "destruction", so the candidate answer is expanded to "before destruction and after destruction".
For example, similar-word replacement is applied to the candidate's answer text. Similar words are a class of prior knowledge provided by the expert when entering the question; for instance, if the candidate answers with a synonymous expression such as "nameless pipe", it is replaced with the standard term "anonymous pipe".
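A minimal sketch of the similar-pronunciation correction from the first example: the matched span of the candidate answer is overwritten with the standard-answer participle. The data layout and the assumption that the span and participle have equal length (as in the example) are illustrative simplifications.

```python
def correct_answer(answer, matches):
    """Overwrite spans of the candidate answer that matched a standard-answer
    participle by pronunciation (the "automatic" -> "braking" style fix).
    matches: list of (first_participle, positions, similarity_type), with
    1-based character positions; only equal-length spans are corrected."""
    chars = list(answer)
    for participle, positions, kind in matches:
        if kind == "similar pronunciation" and len(positions) == len(participle):
            for ch, pos in zip(participle, positions):
                chars[pos - 1] = ch
    return "".join(chars)
```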
Step S320, inputting the modified answer to a deep learning model in a character form, inputting the final matching result to the deep learning model in a vocabulary form, and outputting a plurality of semantic entities in the identified answer.
Specifically, the deep learning model is described taking the FLAT algorithm model as an example. Suppose the standard answers are "braking system" and "driving system", the candidate answers "automatic driving", and conflict-matching optimization yields two matching results ("braking system", "automatic", [1,2], 0.891, [similar pronunciation]) and ("driving system", "driving", [3,4], 0.9, []); after similar-pronunciation error correction, the candidate's answer becomes "braking driving". The data input to the FLAT algorithm model comprises two parts. The first part is character-level information, i.e., the four characters of "braking driving" at positions 1 through 4, where the fields of each tuple mean (character, start position in the original text, end position in the original text). The second part is the word-level information of the matching results, i.e., the two words ("braking", 1, 2) and ("driving", 3, 4), with the fields meaning (word, start position in the original text, end position in the original text). The output of the FLAT algorithm model is a sequence label for the first, character-level part, e.g., a fourth field is added: (1st character, 1, "B-LOC"), (2nd character, 2, "E-LOC"), (3rd character, 3, "B-LOC"), (4th character, 4, "E-LOC"), where "B-LOC" marks the beginning of an entity and "E-LOC" marks the end of an entity, so the final output is the two entities "braking" and "driving".
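The two-part FLAT input described above can be sketched as follows; this only assembles the input tuples (characters with start == end spans, words with multi-character spans), not the model itself, and the function name is illustrative.

```python
def flat_inputs(corrected_answer, matches):
    """Assemble the two FLAT input parts.
    Part 1: character-level tuples (char, start, end) with start == end.
    Part 2: word-level tuples (word, start, end) from the final matches.
    Positions are 1-based, matching the examples in the text."""
    char_part = [(ch, i, i) for i, ch in enumerate(corrected_answer, 1)]
    word_part = [(word, start, end) for word, start, end in matches]
    return char_part, word_part
```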
In one embodiment, referring to fig. 4, a schematic structural diagram of a question-following generation device is provided. The device can be used to execute the question-following generation method shown in any one of figs. 1-3, and includes: an identification module 410, an extraction module 420, a determination module 430, and a generation module 440; wherein:
the identification module 410 is used for analyzing answers answered by the candidate aiming at the questions and identifying a plurality of semantic entities from the answers;
an extracting module 420, configured to perform relationship extraction on the multiple semantic entities to obtain entity relationship information of the multiple semantic entities;
a determining module 430, configured to obtain a standard answer corresponding to the question, and determine response result information of the candidate based on the standard answer and the entity relationship information; the answer result information is used for representing the mastery condition of the candidate on a plurality of knowledge points related to the question;
the generating module 440 is configured to determine at least one question-following strategy based on the answer result information, determine a question-following knowledge point in the knowledge graph corresponding to the question according to the question-following strategy, and generate a question-following question corresponding to the question-following knowledge point; the inquiry strategies comprise peer knowledge point inquiry strategies, sub knowledge point inquiry strategies and error knowledge point inquiry strategies.
Optionally, the apparatus further includes an extraction module, configured to extract knowledge points related to the questions to obtain prior knowledge information; the prior knowledge information comprises semantic entities, common prefixes, common suffixes and polysemous word meanings in the answers.
Optionally, the recognition module 410 is further configured to determine a core word in the standard answer based on the priori knowledge information, perform finest-granularity word segmentation on the core word to obtain a plurality of first word segments, and calculate a weight of each first word segment; performing word segmentation on the answer to obtain a plurality of second words, wherein the length of each second word is determined based on the length of the corresponding first word; sorting the first participles in the core words according to the weight of each first participle in the core words, sequentially calculating the similarity between each first participle in the core words and each second participle according to the weight, screening out candidate second participles with the similarity higher than a first preset similarity threshold, and outputting a plurality of first matching results, wherein the first matching results comprise the first participles, the candidate second participles, the similarity between the first participles and the candidate second participles, the similarity types and the positions of the candidate second participles in answers; merging the first matching results into second matching results, calculating the weighted similarity between the core word and the second matching results based on the weight of each first participle in the core word and the similarity between each first participle and the corresponding candidate second participle, and screening out the second matching results of which the weighted similarity is higher than a second preset similarity threshold; and determining a plurality of semantic entities in the answered answers based on the screened second matching results.
Optionally, the identifying module 410 is further configured to, if candidate second participles at the same position in the answered answer appear in a plurality of second matching results, generate a matching conflict; optimizing the weighted similarity between the core word and the second matching result based on a conflict matching algorithm until the candidate second participle at the same position in the answer appears in only one second matching result, and taking the second matching results with the highest weighted similarity and the preset number as final matching results; and determining a plurality of semantic entities in the answered answers based on the screened final matching results.
Optionally, the recognition module 410 is further configured to perform text correction on the answer based on the priori knowledge information and the similarity between the first participle and the second participle to obtain a corrected answer; wherein the text amendment comprises correction of similar pronunciation of the text, expansion of the front and back suffixes of the text and replacement of similar words of the text. And inputting the corrected answer to a deep learning model in a character form, inputting the final matching result to the deep learning model in a vocabulary form, and outputting a plurality of semantic entities in the identified answer.
Optionally, the generating module 440 is further configured to: trigger a sub-knowledge-point question-following strategy if the answer result information indicates that the knowledge points are well mastered, preferentially determine the question-following sub-knowledge point in the knowledge graph corresponding to the question, and generate a question-following question corresponding to that sub-knowledge point; trigger a re-ask question-following strategy if the answer result information indicates that a knowledge point is wrongly mastered, and preferentially re-ask the question-following knowledge point in the knowledge graph corresponding to the question; and trigger a peer-knowledge-point question-following strategy if the answer result information indicates that the knowledge points are mastered with gaps, preferentially determine the peer knowledge point to be asked in the knowledge graph corresponding to the question, and generate a question-following question corresponding to that peer knowledge point.
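The three-way strategy dispatch can be sketched as a simple lookup. The mastery codes "good" / "wrong" / "missing" are illustrative stand-ins, not the patent's internal representation of the answer result information.

```python
def select_follow_up(mastery):
    """Map the candidate's mastery of a question's knowledge points to a
    question-following strategy, mirroring the three branches above."""
    strategies = {
        "good": "sub-knowledge-point follow-up",               # dig deeper
        "wrong": "re-ask / error-knowledge-point follow-up",   # re-verify
        "missing": "peer-knowledge-point follow-up",           # probe siblings
    }
    return strategies[mastery]
```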
It should be noted that the question-following generation device provided in the embodiments of the present invention can be used to execute the above method embodiments; its implementation principle and technical effects are similar and are not described here again.
Fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present invention. Referring now specifically to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present invention is shown. The electronic device 500 in the embodiment of the present invention may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), a wearable electronic device, and the like, and a stationary terminal such as a digital TV, a desktop computer, a smart home device, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various suitable actions and processes to implement the methods of embodiments described herein in accordance with programs stored in Read Only Memory (ROM) 502 or programs loaded into Random Access Memory (RAM) 503 from storage 508. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
The foregoing description is only illustrative of the preferred embodiments of the invention and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the present invention.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Also, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the invention. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A question generating method for intelligent interview is characterized by comprising the following steps:
analyzing answers answered by candidates aiming at the questions, and identifying a plurality of semantic entities from the answered answers;
extracting attribute relations among the semantic entities to obtain entity relation information among the semantic entities;
acquiring a standard answer corresponding to the question, and determining response result information of the candidate based on the standard answer and the entity relation information; the answer result information is used for representing the mastery condition of the candidate on a plurality of knowledge points related to the question;
and determining at least one question-following strategy based on the answer result information, determining question-following knowledge points in a knowledge graph corresponding to the questions according to the question-following strategy, and generating question-following questions corresponding to the question-following knowledge points.
2. The method of claim 1, wherein prior to parsing an answer to a topic by a candidate, the method further comprises:
extracting knowledge points related to the questions to obtain priori knowledge information; the prior knowledge information comprises semantic entities, common prefixes, common suffixes and polysemous word meanings in the standard answers.
3. The method of claim 1, wherein the step of parsing an answer answered by a candidate for a topic, and identifying semantic entities from the answered answer comprises:
determining a core word in a standard answer based on prior knowledge information, performing finest-granularity word segmentation on the core word to obtain a plurality of first word segments, and calculating the weight of each first word segment;
performing word segmentation on the answer to obtain a plurality of second words, wherein the length of each second word is determined based on the length of the corresponding first word;
sorting the first participles in the core words according to the weight of each first participle in the core words, sequentially calculating the similarity between each first participle in the core words and each second participle according to the weight, screening out candidate second participles with the similarity higher than a first preset similarity threshold, and outputting a plurality of first matching results, wherein the first matching results comprise the first participles, the candidate second participles, the similarity between the first participles and the candidate second participles, the similarity types and the positions of the candidate second participles in answers;
combining the first matching results into second matching results, calculating the weighted similarity between the core word and the second matching results based on the weight of each first participle in the core word and the similarity between each first participle and the corresponding candidate second participle, and screening out the second matching results of which the weighted similarity is higher than a second preset similarity threshold;
and determining a plurality of semantic entities in the answered answers based on the screened second matching results.
4. The method of claim 3, further comprising:
if the candidate second participles at the same position in the answer appear in a plurality of second matching results, a matching conflict appears;
optimizing the weighted similarity between the core word and the second matching result based on a conflict matching algorithm until the candidate second participle at the same position in the answer appears in only one second matching result, and taking the second matching results with the highest weighted similarity and the preset number as final matching results;
and determining a plurality of semantic entities in the answer based on the screened final matching result.
5. The method of any one of claims 3 or 4, wherein the step of parsing an answer answered by a candidate for a topic, and identifying semantic entities from the answered answer comprises:
based on the prior knowledge information and the similarity of the first participle and the second participle, performing text correction on the answer to obtain a corrected answer; the text amendment comprises correction of similar pronunciation of the text, expansion of prefix and suffix of the text and replacement of similar words of the text;
and inputting the corrected answering answer to a deep learning model in a character form, inputting the final matching result to the deep learning model in a vocabulary form, and outputting a plurality of semantic entities in the recognized answering answer.
6. The method according to claim 1, wherein the step of determining at least one question-following strategy based on the answer result information, determining question-following knowledge points in the knowledge graph corresponding to the question according to the question-following strategy, and generating the question-following question corresponding to the question-following knowledge points comprises:
if the answer result information represents that the knowledge points are well mastered, triggering a sub-knowledge-point question-following strategy, preferentially determining the question-following sub-knowledge point in the knowledge graph corresponding to the question, and generating a question-following question corresponding to that sub-knowledge point;

if the answer result information represents that a knowledge point is wrongly mastered, triggering a re-ask question-following strategy, and preferentially re-asking the question-following knowledge point in the knowledge graph corresponding to the question;

if the answer result information represents that the knowledge points are mastered with gaps, triggering a peer-knowledge-point question-following strategy, preferentially determining the peer knowledge point to be asked in the knowledge graph corresponding to the question, and generating a question-following question corresponding to that peer knowledge point.
7. A question-following generation device for an intelligent interview, characterized by comprising:
the identification module is used for analyzing answers answered by candidates aiming at the questions and identifying a plurality of semantic entities from the answers;
the extraction module is used for extracting the attribute relation among the semantic entities to obtain entity relation information of the semantic entities;
the determining module is used for acquiring a standard answer corresponding to the question and determining response result information of the candidate based on the standard answer and the entity relationship information; the answer result information is used for representing the mastery condition of the candidate on a plurality of knowledge points related to the question;
and the generating module is used for determining at least one question-following strategy based on the answer result information, determining question-following knowledge points in a knowledge graph corresponding to the questions according to the question-following strategy, and generating question-following questions corresponding to the question-following knowledge points.
8. The apparatus of claim 7, wherein the identification module is further configured to:
determining a core word in a standard answer based on prior knowledge information, performing finest-granularity word segmentation on the core word to obtain a plurality of first word segments, and calculating the weight of each first word segment; performing word segmentation on the answer to obtain a plurality of second words, wherein the length of each second word is determined based on the length of the corresponding first word; sorting the first participles in the core words according to the weight of each first participle in the core words, sequentially calculating the similarity between each first participle in the core words and each second participle according to the weight, screening out candidate second participles with the similarity higher than a first preset similarity threshold, and outputting a plurality of first matching results, wherein the first matching results comprise the first participles, the candidate second participles, the similarity between the first participles and the candidate second participles, and the similarity types; combining the first matching results into second matching results, calculating the weighted similarity between the core word and the second matching results based on the weight of each first participle in the core word and the similarity between each first participle and the corresponding candidate second participle, and screening out the second matching results of which the weighted similarity is higher than a second preset similarity threshold; and determining a plurality of semantic entities in the answer based on the screened second matching result.
9. The apparatus of claim 8, wherein the recognition module is further configured to generate a matching conflict if a candidate second word at the same position in the answer is found in a plurality of second matching results; based on a conflict matching algorithm, optimizing the weighted similarity between the core word and the second matching result until the candidate second participles at the same position in the answer only appear in one second matching result, and taking the second matching results with the highest weighted similarity and preset number as final matching results; and determining a plurality of semantic entities in the answered answers based on the screened final matching results.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method recited in any of claims 1-6.
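The segment-matching flow recited in claim 8 can be sketched as follows. This is a minimal illustration only, not the patented implementation: the claim does not fix a concrete similarity function, so the use of `difflib.SequenceMatcher`, the segment dictionary structure, and the threshold values are all assumptions.

```python
from difflib import SequenceMatcher

def match_core_word(core_segments, answer_segments,
                    first_threshold=0.6, second_threshold=0.7):
    # Sort the core word's first segments by descending weight, as the
    # claim prescribes, then compare each against every answer segment.
    ordered = sorted(core_segments, key=lambda s: s["weight"], reverse=True)
    first_matches = []
    for seg in ordered:
        for cand in answer_segments:
            sim = SequenceMatcher(None, seg["text"], cand).ratio()
            if sim > first_threshold:
                first_matches.append({
                    "first": seg["text"], "weight": seg["weight"],
                    "candidate": cand, "similarity": sim,
                    "type": "string",  # placeholder for the claim's "similarity type"
                })
    # Combine the first matching results into one second matching result and
    # compute the weighted similarity between the core word and that result.
    total_weight = sum(s["weight"] for s in core_segments) or 1.0
    weighted = sum(m["weight"] * m["similarity"] for m in first_matches) / total_weight
    # Screen by the second preset similarity threshold.
    return first_matches, (weighted if weighted > second_threshold else None)
```

A core word whose segments all reappear in the answer yields a weighted similarity of 1.0 and passes both thresholds; partial or noisy overlaps are filtered out at either stage.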
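Claim 9 resolves conflicts where the same answer position appears in several second matching results, but names only a "conflict matching algorithm" without fixing one. A greedy keep-the-highest-weighted-similarity strategy is one plausible reading; the result structure below is an assumption for illustration.

```python
def resolve_conflicts(second_results, top_n=3):
    # Walk results from highest to lowest weighted similarity; a result is
    # kept only if none of its answer positions has already been claimed,
    # so each position ends up in at most one final matching result.
    taken = set()
    kept = []
    for res in sorted(second_results, key=lambda r: r["weighted_sim"], reverse=True):
        if all(pos not in taken for pos in res["positions"]):
            taken.update(res["positions"])
            kept.append(res)
    # Return the preset number of highest-scoring, conflict-free results.
    return kept[:top_n]
```

Because results are visited in descending score order, a lower-scoring result that overlaps an already-kept one is simply dropped, satisfying the claim's requirement that each position appear in only one final result.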
CN202211548972.5A 2022-12-05 2022-12-05 Follow-up question generation method and device for intelligent interview, and electronic equipment Active CN115774996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211548972.5A CN115774996B (en) 2022-12-05 2022-12-05 Follow-up question generation method and device for intelligent interview, and electronic equipment

Publications (2)

Publication Number Publication Date
CN115774996A true CN115774996A (en) 2023-03-10
CN115774996B CN115774996B (en) 2023-07-25

Family

ID=85391370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211548972.5A Active CN115774996B (en) 2022-12-05 2022-12-05 Follow-up question generation method and device for intelligent interview, and electronic equipment

Country Status (1)

Country Link
CN (1) CN115774996B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018010501A (en) * 2016-07-14 2018-01-18 株式会社ユニバーサルエンターテインメント Interview system
CN109978339A (en) * 2019-02-27 2019-07-05 平安科技(深圳)有限公司 AI interviews model training method, device, computer equipment and storage medium
CN111126553A (en) * 2019-12-25 2020-05-08 平安银行股份有限公司 Intelligent robot interviewing method, equipment, storage medium and device
CN111445200A (en) * 2020-02-25 2020-07-24 平安国际智慧城市科技股份有限公司 Interviewing method and device based on artificial intelligence, computer equipment and storage medium
CN113392187A (en) * 2021-06-17 2021-09-14 上海出版印刷高等专科学校 Automatic scoring and error correction recommendation method for subjective questions
CN113946651A (en) * 2021-09-27 2022-01-18 盛景智能科技(嘉兴)有限公司 Maintenance knowledge recommendation method and device, electronic equipment, medium and product
CN114048327A (en) * 2021-11-15 2022-02-15 浙江工商大学 Automatic subjective question scoring method and system based on knowledge graph
CN114528894A (en) * 2020-11-09 2022-05-24 无锡近屿智能科技有限公司 Training method of follow-up model and follow-up method of surface test questions
CN115048532A (en) * 2022-06-16 2022-09-13 中国第一汽车股份有限公司 Intelligent question-answering robot for automobile maintenance scene based on knowledge graph and design method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Jinju et al.: "Research on multi-turn automatic question answering based on a road-regulation knowledge graph", Modern Information (《现代情报》), pages 98-110 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116485597A (en) * 2023-04-17 2023-07-25 北京正曦科技有限公司 Standardized training method based on post capability model
CN116485597B (en) * 2023-04-17 2024-05-07 北京正曦科技有限公司 Standardized training method based on post capability model

Also Published As

Publication number Publication date
CN115774996B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN111353030B (en) Knowledge question and answer retrieval method and device based on knowledge graph in travel field
KR102194837B1 (en) Method and apparatus for answering knowledge-based question
US8983977B2 (en) Question answering device, question answering method, and question answering program
CN109271537B (en) Text-to-image generation method and system based on distillation learning
CN111708873A (en) Intelligent question answering method and device, computer equipment and storage medium
CN114064918B (en) Multi-modal event knowledge graph construction method
CN112035730B (en) Semantic retrieval method and device and electronic equipment
US20130159277A1 (en) Target based indexing of micro-blog content
CN110321537B (en) Method and device for generating file
CN109271524B (en) Entity linking method in knowledge base question-answering system
CN114780691B (en) Model pre-training and natural language processing method, device, equipment and storage medium
CN112685550B (en) Intelligent question-answering method, intelligent question-answering device, intelligent question-answering server and computer readable storage medium
CN109522396B (en) Knowledge processing method and system for national defense science and technology field
Amali et al. Classification of cyberbullying Sinhala language comments on social media
CN114817570A (en) News field multi-scene text error correction method based on knowledge graph
CN112434164A (en) Network public opinion analysis method and system considering topic discovery and emotion analysis
CN107844531B (en) Answer output method and device and computer equipment
CN112613293A (en) Abstract generation method and device, electronic equipment and storage medium
CN115774996B (en) Follow-up question generation method and device for intelligent interview, and electronic equipment
CN106570196B (en) Video program searching method and device
CN114722176A (en) Intelligent question answering method, device, medium and electronic equipment
CN112711944B (en) Word segmentation method and system, and word segmentation device generation method and system
CN117216214A (en) Question and answer extraction generation method, device, equipment and medium
CN112417174A (en) Data processing method and device
CN115828854A (en) Efficient table entity linking method based on context disambiguation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant