CN109271483A - Problem generation method based on progressive multi-discriminator - Google Patents

Problem generation method based on progressive multi-discriminator

Info

Publication number
CN109271483A
Authority
CN
China
Prior art keywords
answer
discriminator
vector
article
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811039231.8A
Other languages
Chinese (zh)
Other versions
CN109271483B (en)
Inventor
苏舒婷
潘嵘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201811039231.8A priority Critical patent/CN109271483B/en
Publication of CN109271483A publication Critical patent/CN109271483A/en
Application granted granted Critical
Publication of CN109271483B publication Critical patent/CN109271483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The present invention relates to the technical field of question generation, and more particularly to a question generation method based on progressive multiple discriminators. The invention uses a generative adversarial network: the generator generates questions and the discriminators evaluate them. Three discriminators are designed, where the true/false discriminator judges whether a question is fluent and reasonable, the attribute discriminator further judges whether the question belongs to the category corresponding to the answer, and the question-answering discriminator further judges whether the question can be answered by the corresponding answer. Aiming at the question-answer mismatch problem in text generation tasks, the invention adds answer attribute information to the encoder and decoder of the generator and designs progressive multiple discriminators that strengthen the answer constraint step by step from easy to hard: first guaranteeing the semantic quality of the generated question, then restricting the question type, and finally constraining the question's direct answer, thereby strengthening the question-answer match.

Description

Problem generation method based on progressive multi-discriminator
Technical field
The present invention relates to the technical field of question generation, and more particularly to a question generation method based on progressive multiple discriminators.
Background technique
Question generation is a kind of text generation task: given an article and a specified answer, a corresponding question is generated such that the question can be answered by that answer in the original text. It can be used in question-answering systems, tutoring systems, question asking over children's stories, generation of factual QA data, and so on. It can also serve as a data augmentation means, extending datasets for question-answering tasks. During training, question-answering data can be used; in actual use, named entity recognition is applied to the article to extract entities, which can then serve as answers to ask about.
Traditionally, key entities were extracted by rules based on syntax trees or knowledge bases and then filled into preset templates to generate questions of a specified format. The currently common method is the Encoder-Decoder framework for text generation: the Encoder encodes the article, another network encodes the answer, and the Decoder decodes the encodings of the article and the answer to generate the question.
Question generation can also be learned jointly with other tasks. Combined with question answering, a regularization term can be added to the loss function of each model for dual learning; alternatively, the question generation model can serve as the generator and a question-answering model as the discriminator for adversarial training. Combined with summarization, the two tasks can share the higher-level parameters of the Encoder and the lower-level parameters of the Decoder, i.e., the parameters of the middle network layers, while the layers close to the input and output keep their own parameters, enabling multi-task learning.
The evaluation of question generation can use the BLEU and ROUGE metrics of text generation, measuring the similarity between the generated questions and the real ones. In addition, some data need to be sampled for manual evaluation of the fluency, semantic reasonableness, answer match, and diversity of the generated questions.
In existing research, the fluency and semantic reasonableness of generated questions already reach fairly satisfactory levels, but the question-answer match still has large room for improvement. The current mainstream approach encodes the answer and adds it as a constraint to the Decoder's output to predict the word distribution. Adding an answer constraint on top of the encoder-decoder does substantially improve the question-answer match, but this constraint is not strong enough to fully solve the answer mismatch problem and needs to be further strengthened.
In research on generative adversarial networks, if a binary classifier is used as the discriminator, the discriminator is fairly simple and relatively easy to train; its accuracy usually exceeds the generator's, making it difficult to keep the generator and discriminator in balance. If a question-answering model is used as the discriminator, the discriminator is more complex and harder to tune.
Summary of the invention
To overcome at least one of the above drawbacks of the prior art, the present invention provides a question generation method based on progressive multiple discriminators, focusing on the difficult problem of question-answer mismatch in generation.
The technical scheme of the present invention is as follows: the generator adopts the pointer-generator model from summarization, using the copy mechanism to extract details from the original text and solve the OOV (out-of-vocabulary) problem, and using the coverage mechanism, with improvements, to solve the problem of repeated generation. The answer constraint is mainly reflected in the decoder, which uses the answer vector when predicting the word distribution; an answer constraint is also added in the encoder: after the article has been encoded, the encoding of the article is adjusted with the answer constraint so that attention is concentrated on the parts relevant to the answer;
For the discriminator, three successively progressive discriminators are designed: a true/false discriminator, an attribute discriminator, and a question-answering discriminator. First, the true/false discriminator judges the authenticity of the generated question; when its result reaches the standard, the attribute discriminator judges whether the type of the generated question matches the answer; when that result also reaches the standard, the question-answering discriminator judges whether the generated question can be answered with the given answer. The discriminators increase in difficulty from easy to hard and form a progressive relationship: only when the result of the preceding discriminator reaches a prescribed threshold is the next judgment carried out; otherwise, training of the preceding discriminator continues first. Training the discriminators in this hierarchically progressive order makes the generated questions gradually better: first achieving realism, then achieving question-answer type match, and finally achieving exact match between question and answer.
The present invention designs three discriminators: the true/false discriminator judges whether the question is fluent and reasonable, the attribute discriminator further judges whether the question belongs to the category corresponding to the answer, and the question-answering discriminator further judges whether the question can be answered by the corresponding answer.
Compared with the prior art, the beneficial effects are: aiming at the question-answer mismatch problem in text generation tasks, the present invention adds answer attribute information to the encoder and the decoder, and designs progressive multiple discriminators that strengthen the answer constraint step by step from easy to hard, first guaranteeing the semantic quality of the generated question, then restricting the question type, and finally constraining the question's direct answer.
1) Progressive discriminators are designed, which are progressive both in how they are used and in their functions.
2) Data augmentation is performed with a discriminator: the question-answering model can act as a discriminator to supervise the generator and can also provide augmented data for the generator, serving a dual purpose.
3) The answer type constraint in the generator is strengthened.
Detailed description of the invention
Fig. 1 is a diagram of the pointer-generator generator model.
Fig. 2 is a diagram of the FastText classification model.
Fig. 3 is a diagram of the R-NET question-answering model.
Fig. 4 is a diagram of the discriminator model.
Specific embodiment
The accompanying drawings are for illustrative purposes only and shall not be construed as limiting this patent. To better illustrate this embodiment, some components in the drawings may be omitted, enlarged, or reduced, and do not represent the size of the actual product. For those skilled in the art, the omission of some known structures and their descriptions in the drawings is understandable. The positional relationships depicted in the drawings are for illustration only and shall not be construed as limiting this patent.
As shown in Fig. 1, the generator uses the pointer-generator model. In the Decoder, the attention mechanism is used to attend to different source-text information, the copy mechanism is used to replicate details from the original text and generate OOV words, and the coverage mechanism is used to penalize repeated generation; the coverage mechanism is improved, refining the penalty for repeated generation;
The model structure includes an Encoder model and a Decoder model.
The Encoder model:
First, a Bi-LSTM network encodes the article word vectors. Named entity recognition is then applied to the answer to obtain the corresponding entity type, which is embedded to produce a low-dimensional answer type vector. The word vectors of the answer's occurrence in the article, the answer entity vector, and the original-text LSTM outputs are concatenated and encoded with another Bi-LSTM network; finally, equal-weight averaging yields the answer vector. Adding the answer entity vector here strengthens the answer type constraint;
Next, an attention vector over the article encoding is computed from the answer vector and normalized with softmax, and the article's encoding vectors are updated with this attention vector, improving the original-text encoding of the parts relevant to the answer;
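One plausible realization of this answer-guided update, with $h_i$ the article encodings and $a$ the answer vector (the exact parameterization is an assumption, not given in the text):

$$s_i = v^\top \tanh(W_h h_i + W_a a), \qquad \alpha_i = \frac{\exp(s_i)}{\sum_j \exp(s_j)}, \qquad \tilde{h}_i = \alpha_i h_i$$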
Decoder model:
The Decoder also uses a Bi-LSTM model. During training, at each step, the attention mechanism produces a context vector over the encoder outputs, and the ground-truth word of the previous step is input into the decoder together with this context vector;
The copy mechanism uses the predicted vocabulary probability distribution with a certain probability and reserves a share of probability for copying a word directly from the original text, taking the encoder's attention probability directly as the copy probability of that word. The final prediction is:
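Assuming the standard pointer-generator formulation, with generation probability $p_{\text{gen}}$, vocabulary distribution $P_{\text{vocab}}$, and encoder attention weights $a_i^t$, the final distribution takes the form:

$$P(w) = p_{\text{gen}}\, P_{\text{vocab}}(w) + (1 - p_{\text{gen}}) \sum_{i:\, w_i = w} a_i^t$$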
The coverage mechanism accumulates past attention information at each step and penalizes words that are attended to repeatedly:
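In the standard coverage formulation (the invention's modification of it is not spelled out here), the coverage vector accumulates past attention and the penalty is the elementwise minimum of the current attention and the coverage:

$$c^t = \sum_{t'=0}^{t-1} a^{t'}, \qquad \text{covloss}_t = \sum_i \min(a_i^t, c_i^t)$$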
The loss function of the final model is:
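Under the same standard formulation, the per-step loss combines the negative log-likelihood of the target word $w_t^*$ with the coverage penalty, weighted by a hyperparameter $\lambda$:

$$\mathcal{L}_t = -\log P(w_t^*) + \lambda \sum_i \min(a_i^t, c_i^t)$$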
At test time, there is no ground-truth word to serve as supervision, so the word generated from the previous step's probability vector is used directly, and its word vector serves as the input to the Decoder's LSTM.
Discriminators:
In generative adversarial networks, the common practice at each step is to predict a word probability distribution with the generator, take the most probable word as the generated word to form the generated text sequence, feed the generated text to the discriminator, and train the generator with the discriminator's result. This creates a gradient dispersion problem, since the argmax is discrete, so methods such as reinforcement learning must be used to compute gradients; and reinforcement learning is usually hard to train, with problems such as an overly large action space.
Therefore, the present invention attempts to return a continuous gradient to the generator, avoiding the training problems brought by discrete gradients. At each step of the generator's decoder, all word vectors in the vocabulary are summed, weighted by the vocabulary probability distribution, to obtain a weighted word vector; this word vector replaces the predicted word's vector as the input to the discriminator.
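A minimal PyTorch sketch of this weighted word vector (names and dimensions are illustrative assumptions):

```python
import torch

vocab_size, emb_dim, batch = 50, 16, 4
embedding = torch.nn.Embedding(vocab_size, emb_dim)
logits = torch.randn(batch, vocab_size, requires_grad=True)  # one decoder step

# Expected embedding under the predicted distribution: a continuous,
# differentiable surrogate for the embedding of the argmax word.
probs = torch.softmax(logits, dim=-1)
soft_emb = probs @ embedding.weight           # (batch, emb_dim)

# A downstream (discriminator) loss on soft_emb back-propagates to the
# logits, giving the generator a continuous gradient.
loss = soft_emb.pow(2).sum()
loss.backward()
print(logits.grad.shape)                      # torch.Size([4, 50])
```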
In this way, the gradient dispersion problem is solved; moreover, the weighted word vector also supervises the generator to produce a good word probability distribution, provides a better input for the discriminator and thus better adversarial training, and carries richer semantic information than a one-hot vector.
Three discriminators are designed, with hierarchically progressive functions.
The discriminators, in order, are as follows:
True/false discriminator:
As shown in Fig. 2, binary classification is performed on the question vectors with a simplified FastText classification model: the question vectors at each step are averaged with equal weights and combined linearly, and the positive-class probability is then predicted with the sigmoid function:
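Assuming the simplified FastText head described here, with per-step question vectors $q_t$, this would read:

$$\bar{q} = \frac{1}{T}\sum_{t=1}^{T} q_t, \qquad p = \sigma(w^\top \bar{q} + b)$$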
Its loss function is defined as the negative log-likelihood:
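With label $y \in \{0,1\}$ (1 for real questions), the negative log-likelihood is:

$$\mathcal{L}_{\text{true}} = -\left[ y \log p + (1-y)\log(1-p) \right]$$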
Attribute discriminator:
The entity class of the answer has already been obtained above. Multi-class classification is performed on the question, and the class corresponding to a question is exactly the entity class of its answer. The FastText classification model is again used to classify the question, this time with the hierarchical classification technique: the classes are coded with a Huffman tree, and the flat standard softmax is replaced by the tree's hierarchical structure, which speeds up training;
Its loss function is defined as the multi-class cross-entropy:
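With one-hot class label $y$ and predicted distribution $p$ over the $C$ entity classes, the multi-class cross-entropy is:

$$\mathcal{L}_{\text{attr}} = -\sum_{c=1}^{C} y_c \log p_c$$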
Question-answering discriminator:
The R-NET model is selected as the question-answering model. As shown in Fig. 3, the article and the question are first modeled separately with LSTMs; at each step of the article, attention probabilities over the question are computed to obtain the article's interaction vectors with respect to the question, and a gate mechanism is added to filter out unimportant information. After learning with another LSTM network, a self-attention pass is performed over the article, and finally two networks respectively predict the start position and the end position of the answer in the article;
Its loss function is defined as the cross-entropy of the answer's start and end positions in the original text:
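With $y_s$ and $y_e$ the true start and end positions and $p^s$, $p^e$ the predicted position distributions, this is:

$$\mathcal{L}_{\text{qa}} = -\left( \log p^s_{y_s} + \log p^e_{y_e} \right)$$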
Finally, the overall loss function of the discriminator is defined as:
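Given the three weights named just below, this is the weighted sum:

$$\mathcal{L}_D = \alpha\, \mathcal{L}_{\text{true}} + \beta\, \mathcal{L}_{\text{attr}} + \gamma\, \mathcal{L}_{\text{qa}}$$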
Here α, β, and γ are the weights of the three discriminator loss functions, and they are set to change from small to large: once the preceding discriminator's result reaches the standard, its training weight is reduced and the training weight of the following discriminator is raised, so that training focuses more on the later discriminator, improving its effect while guaranteeing the preceding ones.
The discriminator model is shown in Fig. 4.
Training method: the generator and the discriminators are first pre-trained separately, and then trained jointly.
Pre-training the generator: the pointer-generator model is pre-trained directly. The input is the article and the answer, the output is the generated question, and the loss function is defined as the cross-entropy between the generated question and the real question.
Pre-training the discriminators: the attribute discriminator and the question-answering discriminator are pre-trained. The attribute discriminator is trained with questions and answer attributes, its loss being the cross-entropy between the predicted attribute and the real attribute; the question-answering discriminator is trained with the articles, questions, and answers in the QA data, its loss being the cross-entropy between the predicted answer position and the true answer position.
Joint training: the generator is trained for n batches and the discriminators for m batches, alternately. If the discriminator accuracy is too high, the discriminators' training frequency is turned down, or the generator's training frequency is raised.
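A sketch of this alternating schedule (all helper callables and the accuracy ceiling are assumptions):

```python
def adversarial_schedule(gen_step, disc_step, disc_accuracy,
                         n=5, m=1, acc_ceiling=0.8, rounds=1000):
    """Alternate n generator batches with m discriminator batches,
    skipping discriminator updates while its accuracy is too high."""
    for _ in range(rounds):
        for _ in range(n):
            gen_step()                      # one generator batch
        if disc_accuracy() < acc_ceiling:   # throttle a too-strong discriminator
            for _ in range(m):
                disc_step()
```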
When training the generator, the discriminator parameters are fixed first. The input is the article and the answer. The question generation model first predicts the vocabulary probability distribution, and the probability of the true word is taken to compute the loss function; the probabilities are then multiplied by the vocabulary word vectors to obtain the weighted word vector, which is input into the discriminator model with the discriminator label set to 1. The discriminator's loss is computed, the two parts are added as the generator's loss function, and the generator's parameters are updated.
When training the discriminators, the generator parameters are likewise fixed and the discriminator label is set to 0 for generated questions. In the true/false discriminator, the input is a real question or a generated question, and the output class is 1 for real questions and 0 for generated ones; in the attribute discriminator, the input is a real question and the output class is the corresponding answer class; in the question-answering discriminator, the input is the article and a real question, and the output is the start and end positions of the answer in the article.
Data augmentation:
In addition, the question-answering model in the discriminator is used for data augmentation of the question generation task. The specific practice is as follows:
The question generation model generates questions: after the question generation model is first pre-trained, articles and answers are input and generated questions are output. The BLEU and ROUGE scores between each generated question and the real question are then computed and averaged to obtain a match score, and a threshold is set: if the match score is below the threshold, the generated question has low similarity to the real question and may therefore not match the answer. These articles and their unmatched questions are assembled into a new dataset, and the question-answering model is used to predict answers anew.
The question-answering model predicts answers: the article and the generated question are input, and the answer's position probabilities over the original text are output. The start-position probability multiplied by the end-position probability is taken as the prediction probability, and another threshold is set: if the prediction probability exceeds this threshold, the question can most likely find its answer in the article, and the article, the question, and the new answer are assembled into a new data item to serve as augmented data for the question generation model.
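A sketch of this two-threshold filter (field names and threshold values are illustrative assumptions):

```python
def select_augmented(samples, match_threshold=0.3, qa_threshold=0.5):
    """Keep items whose generated question diverges from the reference
    (low averaged BLEU/ROUGE match score) but whose re-predicted answer
    the QA model is confident about (start * end probability)."""
    kept = []
    for s in samples:
        if s["match_score"] < match_threshold and s["qa_confidence"] > qa_threshold:
            kept.append({"article": s["article"],
                         "question": s["question"],
                         "answer": s["predicted_answer"]})
    return kept
```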
Retraining the question generation model: the question generation model is trained with the original data and the augmented data together. Since part of the augmented data may be of poor quality, different weights are set for the original data and the augmented data, the weight of the original data being slightly larger than that of the augmented data. The final loss function is defined as the weighted sum of the losses over the two parts of the data:
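Consistent with the weights defined just below, the combined loss would be:

$$\mathcal{L} = \alpha \sum_{x \in S_1} \ell(x) + \beta \sum_{x \in S_2} \ell(x), \qquad \alpha > \beta$$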
where S1 denotes the original dataset, S2 denotes the augmented data, α is the weight of the original dataset, and β is the weight of the augmented data.
The advantage of augmented data is that existing data can be exploited, and a large amount of new data can be expanded without manual labeling, while thresholding the question-answering model's probability guarantees the reliability of the predicted answers. Although a small amount of noisy data may remain, a large amount of reliable data is obtained, and adding this batch of data to the training data can improve the robustness of the model.
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the present invention and are not intended to limit its embodiments. For those of ordinary skill in the art, other variations or changes in different forms may be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modifications, equivalent replacements, and improvements made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (4)

1. A question generation method based on progressive multiple discriminators, characterized by comprising the following steps:
the generator adopts the pointer-generator model from summarization, using the copy mechanism to extract details from the original text and solve the OOV problem, and using the coverage mechanism, with improvements, to solve the problem of repeated generation; the answer constraint is mainly reflected in the decoder, which uses the answer vector when predicting the word distribution; an answer constraint is also added in the encoder: after the article has been encoded, the encoding of the article is adjusted with the answer constraint so that attention is concentrated on the parts relevant to the answer;
for the discriminator, three successively progressive discriminators are designed: a true/false discriminator, an attribute discriminator, and a question-answering discriminator; first, the true/false discriminator judges the authenticity of the generated question; when its result reaches the standard, the attribute discriminator judges whether the type of the generated question matches the answer; when that result also reaches the standard, the question-answering discriminator judges whether the generated question can be answered with the given answer; the discriminators increase in difficulty from easy to hard and form a progressive relationship, and only when the result of the preceding discriminator reaches a prescribed threshold is the next judgment carried out; otherwise, training of the preceding discriminator continues first; the discriminators are thus trained in a hierarchically progressive order, so that the generated questions gradually become better, first achieving realism, then achieving question-answer type match, and finally achieving exact match between question and answer.
2. The question generation method based on progressive multiple discriminators according to claim 1, characterized in that: the generator uses the pointer-generator model; in the Decoder, the attention mechanism is used to attend to different source-text information, the copy mechanism is used to replicate details from the original text and generate OOV words, and the coverage mechanism is used to penalize repeated generation; the coverage mechanism is improved, refining the penalty for repeated generation;
the model structure includes an Encoder model and a Decoder model.
3. The question generation method based on progressive multiple discriminators according to claim 2, characterized in that the Encoder model:
first, a Bi-LSTM network encodes the article word vectors; named entity recognition is then applied to the answer to obtain the corresponding entity type, which is embedded to produce a low-dimensional answer type vector; the word vectors of the answer's occurrence in the article, the answer entity vector, and the original-text LSTM outputs are concatenated and encoded with another Bi-LSTM network, and finally equal-weight averaging yields the answer vector; adding the answer entity vector here strengthens the answer type constraint;
next, an attention vector over the article encoding is computed from the answer vector and normalized with softmax, and the article's encoding vectors are updated with this attention vector, improving the original-text encoding of the parts relevant to the answer;
the Decoder model:
the Decoder also uses a Bi-LSTM model; during training, at each step, the attention mechanism produces a context vector over the encoder outputs, and the ground-truth word of the previous step is input into the decoder together with this context vector;
the copy mechanism uses the predicted vocabulary probability distribution with a certain probability and reserves a share of probability for copying a word directly from the original text, taking the encoder's attention probability directly as the copy probability of that word; the final prediction is:
the coverage mechanism accumulates past attention information at each step and penalizes words that are attended to repeatedly:
the loss function of the final model is:
and at test time, there is no ground-truth word to serve as supervision, so the word generated from the previous step's probability vector is used directly, its word vector serving as the input to the Decoder's LSTM.
4. The question generation method based on progressive multiple discriminators according to claim 1, characterized in that the discriminators, in order, are as follows:
true/false discriminator:
binary classification is performed on the question vectors with a simplified FastText classification model: the question vectors at each step are averaged with equal weights and combined linearly, and the positive-class probability is then predicted with the sigmoid function:
its loss function is defined as the negative log-likelihood:
attribute discriminator:
the entity class of the answer has already been obtained; multi-class classification is performed on the question, and the class corresponding to a question is exactly the entity class of its answer; the FastText classification model is again used to classify the question, this time with the hierarchical classification technique: the classes are coded with a Huffman tree, and the flat standard softmax is replaced by the tree's hierarchical structure, which speeds up training;
its loss function is defined as the multi-class cross-entropy:
question-answering discriminator:
the R-NET model is selected as the question-answering model; the article and the question are first modeled separately with LSTMs; at each step of the article, attention probabilities over the question are computed to obtain the article's interaction vectors with respect to the question, and a gate mechanism is added to filter out unimportant information; after learning with another LSTM network, a self-attention pass is performed over the article, and finally two networks respectively predict the start position and the end position of the answer in the article;
its loss function is defined as the cross-entropy of the answer's start and end positions in the original text:
finally, the overall loss function of the discriminator is defined as:
where α, β, and γ are the weights of the three discriminator loss functions, set to change from small to large: once the preceding discriminator's result reaches the standard, its training weight is reduced and the training weight of the following discriminator is raised, so that training focuses more on the later discriminator, improving its effect while guaranteeing the preceding ones.
CN201811039231.8A 2018-09-06 2018-09-06 Problem generation method based on progressive multi-discriminator Active CN109271483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811039231.8A CN109271483B (en) 2018-09-06 2018-09-06 Problem generation method based on progressive multi-discriminator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811039231.8A CN109271483B (en) 2018-09-06 2018-09-06 Problem generation method based on progressive multi-discriminator

Publications (2)

Publication Number Publication Date
CN109271483A true CN109271483A (en) 2019-01-25
CN109271483B CN109271483B (en) 2022-03-15

Family

ID=65188554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811039231.8A Active CN109271483B (en) 2018-09-06 2018-09-06 Problem generation method based on progressive multi-discriminator

Country Status (1)

Country Link
CN (1) CN109271483B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947931A (en) * 2019-03-20 2019-06-28 华南理工大学 Text automatic abstracting method, system, equipment and medium based on unsupervised learning
CN110110060A (en) * 2019-04-24 2019-08-09 北京百度网讯科技有限公司 A kind of data creation method and device
CN110175332A (en) * 2019-06-03 2019-08-27 山东浪潮人工智能研究院有限公司 A kind of intelligence based on artificial neural network is set a question method and system
CN110347792A (en) * 2019-06-25 2019-10-18 腾讯科技(深圳)有限公司 Talk with generation method and device, storage medium, electronic equipment
CN110427461A (en) * 2019-08-06 2019-11-08 腾讯科技(深圳)有限公司 Intelligent answer information processing method, electronic equipment and computer readable storage medium
CN110781275A (en) * 2019-09-18 2020-02-11 中国电子科技集团公司第二十八研究所 Question answering distinguishing method based on multiple characteristics and computer storage medium
CN111125325A (en) * 2019-12-06 2020-05-08 山东浪潮人工智能研究院有限公司 FAQ generation system and method based on GAN network
CN111125333A (en) * 2019-06-06 2020-05-08 北京理工大学 Generation type knowledge question-answering method based on expression learning and multi-layer covering mechanism
CN111143454A (en) * 2019-12-26 2020-05-12 腾讯科技(深圳)有限公司 Text output method and device and readable storage medium
CN111460127A (en) * 2020-06-19 2020-07-28 支付宝(杭州)信息技术有限公司 Method and device for training machine reading model
WO2020224220A1 (en) * 2019-05-07 2020-11-12 平安科技(深圳)有限公司 Knowledge graph-based question answering method, electronic device, apparatus, and storage medium
CN112307773A (en) * 2020-12-02 2021-02-02 上海交通大学 Automatic generation method of custom problem data of machine reading understanding system
CN112487139A (en) * 2020-11-27 2021-03-12 平安科技(深圳)有限公司 Text-based automatic question setting method and device and computer equipment
CN112989007A (en) * 2021-04-20 2021-06-18 平安科技(深圳)有限公司 Knowledge base expansion method and device based on countermeasure network and computer equipment
CN113343645A (en) * 2020-03-03 2021-09-03 北京沃东天骏信息技术有限公司 Information extraction model establishing method and device, storage medium and electronic equipment
CN113743825A (en) * 2021-09-18 2021-12-03 无锡融合大数据创新中心有限公司 Education teaching level evaluation system and method based on big data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180392A (en) * 2017-05-18 2017-09-19 北京科技大学 A kind of electric power enterprise tariff recovery digital simulation method
US20180144208A1 (en) * 2016-11-18 2018-05-24 Salesforce.Com, Inc. Adaptive attention model for image captioning
SG11201804174TA (en) * 2015-11-18 2018-06-28 Alibaba Group Holding Ltd Order clustering and malicious information combating method and apparatus
CN108415977A (en) * 2018-02-09 2018-08-17 华南理工大学 One is read understanding method based on the production machine of deep neural network and intensified learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG11201804174TA (en) * 2015-11-18 2018-06-28 Alibaba Group Holding Ltd Order clustering and malicious information combating method and apparatus
US20180144208A1 (en) * 2016-11-18 2018-05-24 Salesforce.Com, Inc. Adaptive attention model for image captioning
CN107180392A (en) * 2017-05-18 2017-09-19 北京科技大学 A kind of electric power enterprise tariff recovery digital simulation method
CN108415977A (en) * 2018-02-09 2018-08-17 华南理工大学 One is read understanding method based on the production machine of deep neural network and intensified learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JUNWEI BAO ET AL.: "Question Generation With Doubly Adversarial Nets", IEEE/ACM Transactions on Audio, Speech, and Language Processing *
SUSHT: "A Survey on Question Generation" (问题生成调研), Zhihu, https://zhuanlan.zhihu.com/p/40505260 *
NING Dandan: "Research on Imitation Writing Technology Based on Semantic Unit Replacement" (基于语义单元替换的仿写技术研究), China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947931B (en) * 2019-03-20 2021-05-14 华南理工大学 Method, system, device and medium for automatically abstracting text based on unsupervised learning
CN109947931A (en) * 2019-03-20 2019-06-28 华南理工大学 Text automatic abstracting method, system, equipment and medium based on unsupervised learning
CN110110060A (en) * 2019-04-24 2019-08-09 北京百度网讯科技有限公司 A kind of data creation method and device
WO2020224220A1 (en) * 2019-05-07 2020-11-12 平安科技(深圳)有限公司 Knowledge graph-based question answering method, electronic device, apparatus, and storage medium
CN110175332A (en) * 2019-06-03 2019-08-27 山东浪潮人工智能研究院有限公司 A kind of intelligence based on artificial neural network is set a question method and system
CN111125333A (en) * 2019-06-06 2020-05-08 北京理工大学 Generation type knowledge question-answering method based on expression learning and multi-layer covering mechanism
CN111125333B (en) * 2019-06-06 2022-05-27 北京理工大学 Generation type knowledge question-answering method based on expression learning and multi-layer covering mechanism
CN110347792A (en) * 2019-06-25 2019-10-18 腾讯科技(深圳)有限公司 Talk with generation method and device, storage medium, electronic equipment
CN110347792B (en) * 2019-06-25 2022-12-20 腾讯科技(深圳)有限公司 Dialog generation method and device, storage medium and electronic equipment
CN110427461B (en) * 2019-08-06 2023-04-07 腾讯科技(深圳)有限公司 Intelligent question and answer information processing method, electronic equipment and computer readable storage medium
CN110427461A (en) * 2019-08-06 2019-11-08 腾讯科技(深圳)有限公司 Intelligent answer information processing method, electronic equipment and computer readable storage medium
CN110781275A (en) * 2019-09-18 2020-02-11 中国电子科技集团公司第二十八研究所 Question answering distinguishing method based on multiple characteristics and computer storage medium
CN110781275B (en) * 2019-09-18 2022-05-10 中国电子科技集团公司第二十八研究所 Question answering distinguishing method based on multiple characteristics and computer storage medium
CN111125325A (en) * 2019-12-06 2020-05-08 山东浪潮人工智能研究院有限公司 FAQ generation system and method based on GAN network
CN111125325B (en) * 2019-12-06 2024-01-30 山东浪潮科学研究院有限公司 FAQ generation system and method based on GAN network
CN111143454B (en) * 2019-12-26 2021-08-03 腾讯科技(深圳)有限公司 Text output method and device and readable storage medium
CN111143454A (en) * 2019-12-26 2020-05-12 腾讯科技(深圳)有限公司 Text output method and device and readable storage medium
CN113343645A (en) * 2020-03-03 2021-09-03 北京沃东天骏信息技术有限公司 Information extraction model establishing method and device, storage medium and electronic equipment
CN111460127A (en) * 2020-06-19 2020-07-28 支付宝(杭州)信息技术有限公司 Method and device for training machine reading model
CN112487139A (en) * 2020-11-27 2021-03-12 平安科技(深圳)有限公司 Text-based automatic question setting method and device and computer equipment
CN112487139B (en) * 2020-11-27 2023-07-14 平安科技(深圳)有限公司 Text-based automatic question setting method and device and computer equipment
CN112307773A (en) * 2020-12-02 2021-02-02 上海交通大学 Automatic generation method of custom problem data of machine reading understanding system
CN112989007A (en) * 2021-04-20 2021-06-18 平安科技(深圳)有限公司 Knowledge base expansion method and device based on countermeasure network and computer equipment
CN113743825A (en) * 2021-09-18 2021-12-03 无锡融合大数据创新中心有限公司 Education teaching level evaluation system and method based on big data

Also Published As

Publication number Publication date
CN109271483B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN109271483A (en) The problem of based on progressive more arbiters generation method
Wu et al. Are you talking to me? reasoned visual dialog generation through adversarial learning
CN110083705A (en) A kind of multi-hop attention depth model, method, storage medium and terminal for target emotional semantic classification
CN106354710A (en) Neural network relation extracting method
CN110472642A (en) Fine granularity Image Description Methods and system based on multistage attention
CN109003678A (en) A kind of generation method and system emulating text case history
CN110717843A (en) Reusable law strip recommendation framework
CN110390397A (en) A kind of text contains recognition methods and device
CN110826639B (en) Zero sample image classification method trained by full data
CN109829049A (en) The method for solving video question-answering task using the progressive space-time attention network of knowledge base
CN110347819A (en) A kind of text snippet generation method based on positive negative sample dual training
CN109223002A (en) Self-closing disease illness prediction technique, device, equipment and storage medium
CN108932517A (en) A kind of multi-tag clothes analytic method based on fining network model
CN109886072A (en) Face character categorizing system based on two-way Ladder structure
CN117390497B (en) Category prediction method, device and equipment based on large language model
CN114090815A (en) Training method and training device for image description model
CN112416956A (en) Question classification method based on BERT and independent cyclic neural network
Kleinberg et al. How humans impair automated deception detection performance
CN112215001A (en) Rumor identification method and system
CN110348516A (en) Data processing method, device, storage medium and electronic equipment
CN117390141A (en) Agricultural socialization service quality user evaluation data analysis method
CN117521012A (en) False information detection method based on multi-mode context hierarchical step alignment
CN112215346A (en) Implementation method of humanoid general artificial intelligence
CN116775860A (en) Unsupervised opinion abstract generation method and system based on antagonism framework
CN116956816A (en) Text processing method, model training method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant