CN113935481B - Countermeasure testing method for natural language processing model under condition of limited times - Google Patents

Info

Publication number
CN113935481B
CN113935481B (application CN202111188633.6A)
Authority
CN
China
Prior art keywords
local
model
countermeasure
test
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111188633.6A
Other languages
Chinese (zh)
Other versions
CN113935481A (en)
Inventor
杨俊安
张雨
邵堃
刘辉
呼鹏江
娄睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202111188633.6A priority Critical patent/CN113935481B/en
Publication of CN113935481A publication Critical patent/CN113935481A/en
Application granted granted Critical
Publication of CN113935481B publication Critical patent/CN113935481B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3334Selection or weighting of terms from queries, including natural language queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3338Query expansion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses an adversarial testing method for a natural language processing model under a limited number of queries, which mainly comprises three steps. A local adversarial testing step performs an adversarial test on a local white-box model and obtains a sufficient number of local adversarial test samples together with the corresponding replacement-word position information. A target adversarial testing step performs an adversarial test on the target black-box model, taking the local adversarial test samples obtained by the local adversarial test as the starting point of the test on the target model and making use of the replacement-word position information. A local model tuning step adjusts the local model in real time with the labeled output samples obtained from the target adversarial test, so that the local model approximates the target model more closely and the transferability of the local adversarial test samples improves. The invention reduces the cost of obtaining effective adversarial test samples for verifying the security and robustness of natural language processing models.

Description

Countermeasure testing method for natural language processing model under condition of limited times
[ technical field ]
The invention belongs to the technical field of artificial intelligence security, and particularly relates to an adversarial testing method for a natural language processing model under a limited number of queries.
[ background of the invention ]
Deep neural networks are widely applied in computer vision, natural language processing, speech recognition and other fields. Despite their excellent performance, recent research has shown that deep learning models expose considerable security risks when confronted with adversarial attacks; on the other hand, adversarial attacks can also be used to improve the robustness and interpretability of deep learning models. This makes the study of adversarial tests that simulate adversarial attacks particularly important. In black-box testing, however, for example when evaluating a deep learning model's ability to cope with network attacks or probing it for system vulnerabilities, the tester cannot access the internal structure or concrete parameters of the target model, so the adversarial test can only be guided through the model's inputs and outputs. Finding the required number of adversarial test samples therefore usually demands a large number of model queries, each of which takes time to execute, and the cost is often high.
Natural language processing models are now widely applied across many industries. For a natural language processing model, obtaining adversarial test samples efficiently and at low cost under a limited number of queries is therefore of great significance for verifying, optimizing and improving the model's security and robustness.
[ summary of the invention ]
Aiming at the defects of the prior art, the invention discloses an adversarial testing method for a natural language processing model under a limited number of queries in a black-box testing environment. The main idea of the invention is to make full use of the information carried by the adversarial test samples generated on a local model: part of the adversarial-test process aimed at the target model is transferred to the local model and completed in advance, thereby saving the adversarial-test cost on the target model and improving the efficiency of black-box testing.
The adversarial testing method disclosed by the invention for a natural language processing model under a limited number of queries comprises the following three steps:
local adversarial testing step: performing a local adversarial test on the local model and generating local adversarial test samples;
target adversarial testing step: taking the local adversarial test samples obtained in the local adversarial testing step as a starting point and continuing the adversarial test on the target model to obtain final adversarial test samples;
local model tuning step: adding the final adversarial test samples obtained in the target adversarial testing step to the training set of the local model, retraining the local model and thereby tuning it.
In the local adversarial testing step, the adversarial test is performed on a local white-box model, from which a sufficient number of local adversarial test samples and the corresponding replacement-word position information can be obtained. In the target adversarial testing step, the adversarial test is performed on the target black-box model, using the local adversarial test samples obtained by the local adversarial test as the starting point and exploiting their replacement-word position information. In the local model tuning step, the local model is tuned in real time with the labeled output samples obtained from the target adversarial test, so that the local model approximates the target model more closely and the transferability of the local adversarial test samples improves. Effective adversarial test samples for black-box testing can thus be obtained with fewer queries to the target model.
As a more preferable technical solution, in the local adversarial testing step, the local adversarial test that generates adversarial test samples for the local model comprises:
A sentence in a given dataset contains n words, i.e. x_i = [ω_0, ω_1, …, ω_m, …, ω_n], where n is a positive integer not less than 1, x_i denotes the sentence numbered i, and ω_k denotes the word at position k in the sentence, with 0 ≤ k ≤ n and k an integer.
In a selected search space, such as synonyms, sememes or a word-embedding space, each word typically has an unequal number of alternatives. When ω_m has multiple alternative words, its alternative-word space can be written as S_m = {ω_m^(1), ω_m^(2), …}.
The candidate replacement word with the highest prediction score for the target label is found in this alternative-word space, i.e. ŵ_m = argmax over ω ∈ S_m of the local model's target-label score, where ŵ_m denotes the optimal replacement word for the word at position m of sentence x_i, and W_max denotes the combination obtained by replacing every word that has alternatives with its optimal replacement word. A suitable optimal replacement-word combination is then screened out by a combinatorial optimization method against the local model; the combination replaces the words at the corresponding positions of the original sentence, generating a candidate adversarial test sample x_i′. These steps are repeated until the required number of candidate adversarial test samples is obtained. Meanwhile, the replacement-word position information of each candidate adversarial test sample is recorded, e.g. p_i = (j, …, k) is the set of position ordinals of the words in candidate adversarial test sample x_i′ that replace words of the original sample x_i.
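The local search just described can be sketched as follows. Every name here (`local_adversarial_search`, `toy_score`, the synonym table) is hypothetical, and the greedy position-by-position substitution and the toy scoring function are simplifying assumptions standing in for the patent's combinatorial optimization and white-box local model:

```python
def local_adversarial_search(words, synonyms, score_fn, true_label):
    """Greedily replace each word that has alternatives with the candidate
    that most lowers the local model's true-label score, and record the
    replaced positions p_i. (The patent uses a combinatorial optimizer;
    greedy search is a simplification.)"""
    words = list(words)
    replaced_positions = []
    for m in range(len(words)):
        candidates = synonyms.get(words[m], [])
        if not candidates:
            continue
        # candidate replacement word with the lowest true-label score
        best = min(candidates,
                   key=lambda c: score_fn(words[:m] + [c] + words[m+1:], true_label))
        if score_fn(words[:m] + [best] + words[m+1:], true_label) < score_fn(words, true_label):
            words[m] = best
            replaced_positions.append(m)
    return words, replaced_positions

def toy_score(words, label):
    # toy "local model": the true-label score drops with each negative word
    negatives = {"bad", "terrible"}
    n = sum(w in negatives for w in words)
    return 1.0 / (1.0 + n)

adv, positions = local_adversarial_search(
    ["good", "movie"], {"good": ["great", "bad"]}, toy_score, "pos")
# adv == ["bad", "movie"], positions == [0]
```

The returned `positions` list plays the role of p_i and is carried into the target adversarial testing step.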
As a more preferable technical solution, on the basis of the above technical solution, the target adversarial test in the target adversarial testing step comprises:
If a local adversarial test sample obtained in the local adversarial testing step acts directly on the target model and causes it to predict erroneously, that is, the adversarial test succeeds, a successful adversarial test sample is returned. If the target model still predicts correctly, that is, the adversarial test fails, the local adversarial test sample is taken as the starting point and optimization continues by the combinatorial optimization method until a successful adversarial test sample is found.
To facilitate understanding of this step, assume x_i′ is a local adversarial test sample for which the adversarial test failed. Using the replacement-word position information p_i = (j, …, k) obtained in the local adversarial testing step, the alternative-word spaces of the words at positions j through k of x_i′ are looked up. By querying the target model, the candidate replacement words with the highest prediction score on the target model, e.g. ŵ_j, …, ŵ_k, are obtained, and the words at positions j through k of x_i′ are directly replaced with them to obtain x_i″. If the adversarial test succeeds, a successful adversarial test sample is returned; if it fails, alternative words continue to be selected for replacement at the remaining positions of x_i″.
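A sketch of this target-side continuation, under the same toy assumptions as before: the recorded positions p_i are tried first, then the remaining positions with alternatives; `query_target` is a hypothetical black-box interface returning the predicted label and the true-label score:

```python
def target_adversarial_test(local_adv, positions, spaces, query_target, true_label):
    """Continue the attack on the black-box target model, starting from a
    local adversarial sample and re-optimizing the recorded positions first,
    then the remaining positions with alternatives."""
    words = list(local_adv)
    queries = 0
    def ask(ws):
        nonlocal queries
        queries += 1
        return query_target(ws)
    pred, _ = ask(words)
    if pred != true_label:
        return words, queries, True          # direct transfer succeeded
    order = positions + [m for m in sorted(spaces) if m not in positions]
    for m in order:
        best_cand, best_score = None, 1.0
        for cand in spaces.get(m, []):
            trial = words[:m] + [cand] + words[m+1:]
            pred, score = ask(trial)
            if pred != true_label:
                return trial, queries, True  # adversarial test succeeded
            if score < best_score:
                best_cand, best_score = cand, score
        if best_cand is not None:
            words[m] = best_cand             # commit and keep searching
    return words, queries, False

def toy_target(words):
    # toy black-box target: flips to "neg" once two negative words appear
    n = sum(w in {"bad", "awful", "terrible"} for w in words)
    return ("neg" if n >= 2 else "pos"), 1.0 / (1.0 + n)

result = target_adversarial_test(
    ["bad", "fine", "movie"], [0], {0: ["awful"], 1: ["terrible"]},
    toy_target, "pos")
```

In this toy run the sample does not transfer directly, so the recorded position 0 and then the remaining position 1 are re-optimized until the target's prediction flips; the query counter illustrates how restricting the search to recorded positions bounds the number of target-model queries.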
As a more preferable technical solution, on the basis of the above technical solution, tuning the local model in the local model tuning step comprises:
extracting the candidate adversarial test samples x_a″ obtained in the local adversarial testing step that directly cause the target model to predict erroneously, and the adversarial test samples x_i″ obtained in the target adversarial testing step;
extracting the target-model prediction scores obtained for the samples x_a″ and x_i″ during the search process;
adding these samples x_a″ and x_i″, labeled with the target-model prediction scores, to the training dataset of the local model and retraining the local model.
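A toy sketch of this feedback-tuning step; the bag-of-words regressor and the squared-error fit to the target's prediction scores are illustrative assumptions standing in for retraining the CNN/LSTM local model:

```python
class ToyLocalModel:
    """Bag-of-words score regressor standing in for the CNN/LSTM local model."""
    def __init__(self):
        self.weights = {}

    def score(self, words):
        return sum(self.weights.get(w, 0.0) for w in words) / max(len(words), 1)

    def fit(self, dataset, lr=0.5, epochs=50):
        # dataset: (words, target-model prediction score) pairs
        for _ in range(epochs):
            for words, target_score in dataset:
                err = target_score - self.score(words)
                for w in words:
                    self.weights[w] = self.weights.get(w, 0.0) + lr * err / len(words)

train_set = [(["good", "movie"], 1.0)]
model = ToyLocalModel()
model.fit(train_set)
# feedback tuning: append samples labeled with the target model's scores
train_set += [(["bad", "movie"], 0.0), (["awful", "movie"], 0.0)]
model.fit(train_set)
```

After retraining on the target-scored samples, the local model's scores move toward the target model's behavior, which is what makes subsequently generated local adversarial samples more transferable.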
As a more preferable technical solution, on the basis of the above technical solution, the local model may be a CNN or an LSTM.
As a more preferable technical solution, on the basis of the above technical solution, the target model implements the same task as the local model.
As a more preferable technical solution, on the basis of the above technical solution, the target model is a BERT model.
Compared with the prior art, the invention has the following remarkable advantages:
(1) High usability: the adversarial test is carried out under black-box conditions; the tester can initiate a black-box adversarial test knowing only the confidence information output by the model, without knowing the model's concrete structure or parameters, which is closer to a real scenario.
(2) High effectiveness: the advantages of white-box testing on the local model make the training of the model more targeted, so the generated adversarial test samples for black-box testing are more effective, which benefits the efficiency of verifying the security and robustness of natural language processing models.
(3) Low cost: because the information of the adversarial test samples generated by the local model is fully utilized and part of the adversarial-test process aimed at the target model is transferred to the local model and completed in advance, few queries to the target model are needed, saving the adversarial-test cost on the target model and reducing the black-box testing cost of natural language processing models.
[ description of the drawings ]
FIG. 1 is a diagram of a preferred embodiment of the countermeasure testing method under a limited number of conditions for a natural language processing model according to the present disclosure.
FIG. 2 is a diagram of a local model architecture used in a preferred embodiment of the present disclosure.
FIG. 3 is a graph showing how the proportion of directly transferring local adversarial test samples changes as the number of feedback tuning samples increases in a preferred embodiment of the present disclosure.
[ detailed description of the embodiments ]
For convenience of understanding, this embodiment is a preferred embodiment of the adversarial testing method disclosed by the invention for a natural language processing model under a limited number of queries; it describes the structure and the inventive points in detail, but does not limit the scope of the invention as claimed in the appended claims.
The invention is described in further detail below with reference to the figures and the embodiments.
A preferred embodiment of the present invention is shown in FIG. 1 and mainly comprises three steps. The first step is the local adversarial testing step ("local adversarial test" in the figure), a white-box adversarial test that applies input data to a local model; since the structure of the local model is known, a sufficient number of local adversarial test samples and the corresponding replacement-word position information can be obtained quickly. The second step is the target adversarial testing step ("target adversarial test" in the figure), a black-box adversarial test against the target model; it uses the local adversarial test samples obtained by the local adversarial test as the starting point and exploits their replacement-word position information. The third step is the local model tuning step ("feedback tuning" in the figure), which uses the labeled output samples obtained from the target adversarial test to tune the local model in real time, so that the local model approximates the target model more closely and the transferability of the local adversarial test samples improves. This embodiment can obtain high-quality adversarial test samples for black-box verification of the security and robustness of natural language processing models more efficiently, which helps to optimize a model's ability to cope with network attacks and to analyze possible vulnerabilities quickly.
A more preferred embodiment of the adversarial testing method for a natural language processing model under a limited number of queries comprises the following steps:
(1) Local adversarial testing step, i.e. the adversarial-test process that generates adversarial test samples for the local model. Specifically, a sentence in a given dataset contains n words, i.e. x_i = [ω_0, ω_1, …, ω_m, …, ω_n]. In a selected search space, such as a synonym dictionary, sememes or a word-embedding space, each word typically has an unequal number of alternative words; e.g. the alternative-word space of ω_m can be written as S_m = {ω_m^(1), ω_m^(2), …}. In each alternative-word space, the candidate replacement word with the highest prediction score for the target label is found, i.e. ŵ_m, the optimal replacement word for the word at position m of sentence x_i. A suitable optimal replacement-word combination is then screened out by a combinatorial optimization method against the local model; the combination replaces the words at the corresponding positions of the original sentence, generating a candidate adversarial test sample x_i′. These steps are repeated until the required number of candidate adversarial test samples is obtained.
Meanwhile, the replacement-word position information of each candidate adversarial test sample is recorded, e.g. p_i = (j, …, k) is the set of position ordinals of the words in candidate adversarial test sample x_i′ that replace words of the original sample x_i.
(2) Target adversarial testing step, i.e. the process of performing the adversarial test on the target model. Specifically, if a local adversarial test sample obtained in the local adversarial testing step acts directly on the target model and causes it to predict erroneously, that is, the adversarial test succeeds, a successful adversarial test sample is returned; if the target model predicts correctly, that is, the adversarial test fails, the local adversarial test sample is taken as the starting point and optimization continues by the combinatorial optimization method until a successful adversarial test sample is found. Let x_i′ be a local adversarial test sample for which the adversarial test failed. Using the replacement-word position information p_i = (j, …, k) obtained in the local adversarial testing step, the alternative-word spaces of the words at positions j through k of x_i′ are looked up; by querying the target model, the candidate replacement words with the highest target-model prediction score, e.g. ŵ_j, …, ŵ_k, are obtained, and the words at positions j through k of x_i′ are directly replaced with them to obtain x_i″. If the adversarial test succeeds, a successful adversarial test sample is returned; if it fails, alternative words continue to be selected for replacement at the remaining positions of x_i″.
(3) Local model tuning step: the final adversarial test samples obtained in the target adversarial testing step are added to the training set of the local model, and the local model is retrained. Specifically, if replacing words of an original input sample directly causes the target model's prediction to fail, a successful adversarial test sample x_a″ is returned; for the adversarial test samples obtained in the target adversarial testing step, whether the adversarial test succeeds or not, the target model's prediction scores are obtained during the search process. These samples, carrying the target model's prediction labels, are added to the training dataset of the local model and the local model is retrained.
In a more preferred embodiment, 200 local adversarial test samples are first generated on the local model; it is then detected which of these 200 samples can be migrated directly to the target model and make it predict erroneously; such samples are called directly transferable samples. The local adversarial test samples that cannot be migrated directly are used as starting points for the adversarial test on the target model, yielding final adversarial test samples. The adversarial test samples obtained against the target model are then added to the training set of the local model, and the local model is retrained. The above process is repeated until 1000 adversarial test samples have been generated.
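The loop of this embodiment can be sketched as a small orchestration function. All names and the tiny stub callbacks are illustrative (the patent's batch size is 200 and the required count 1000; small stubs are used here so the sketch is self-contained):

```python
def run_until(n_required, gen_local_batch, transfers_directly, continue_attack, feedback):
    """Generate local adversarial samples batch by batch; keep the directly
    transferable ones, continue the rest against the target model, and feed
    every sample back to tune the local model in real time."""
    collected = []
    while len(collected) < n_required:
        for sample in gen_local_batch():
            if transfers_directly(sample):
                collected.append(sample)          # direct migration
            else:
                final = continue_attack(sample)   # target-side continuation
                if final is not None:
                    collected.append(final)
            feedback(sample)                      # real-time local-model tuning
    return collected[:n_required]

log = []
result = run_until(
    4,
    gen_local_batch=lambda: [1, 2, 3],
    transfers_directly=lambda s: s % 2 == 1,
    continue_attack=lambda s: s * 10,
    feedback=log.append)
```

In a real instantiation the four callbacks would wrap the local search, the direct-transfer check, the target-side continuation and the retraining step described above.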
Taking a typical target model such as BERT as an example, the adversarial test was conducted through the above procedure; Table 1 compares the adversarial-test results of this embodiment with other methods.
TABLE 1 Comparison of the adversarial-test results of this embodiment with other methods
(Table 1 appears as an image in the original publication; it reports the adversarial-test success rate, number of target-model queries and modification rate on the IMDB and SST-2 datasets.)
As shown in Table 1, the highest adversarial-test success rate, the fewest queries and lower modification rates were obtained on the two datasets IMDB and SST-2, respectively. On the IMDB dataset in particular, the adversarial-test success rate reaches 99.3%, and the number of queries to the target model is far lower than that of other methods. Therefore, this embodiment of the invention can effectively improve the efficiency of the adversarial test in black-box testing and greatly reduce the cost of testing, evaluating and analyzing the target model's ability to cope with external attacks. Moreover, by analyzing the adversarial test samples that succeed, system vulnerabilities in the target model can be further explored and an effective security strategy for the target model can be formulated.
As a more preferred embodiment, FIG. 2 shows the structure of a local model based on an LSTM with a 128-dimensional hidden layer, using 300-dimensional pre-trained GloVe word embeddings.
To measure the transferability of the tuned local model, 1000 samples are randomly drawn from the test set, candidate adversarial test samples are searched for with the local model, and their transferability is tested on the target model in black-box mode. FIG. 3 shows how the proportion of directly transferring local adversarial test samples changes as the number of feedback tuning samples increases; the transfer rates on both datasets rise gradually after model tuning. On the SST dataset, when the number of feedback samples reaches 600, the transferability stabilizes at about 26%, and the number of queries to the target model falls. On the IMDB dataset, the transfer rate increases from the initial 10.6% to 13.4%. These results verify the effectiveness of tuning the local model with the adversarial samples.
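The direct-transfer rate plotted in FIG. 3 can be computed with a small helper like the following; the function name and the stub target predictor are hypothetical:

```python
def direct_transfer_rate(candidates, true_labels, target_predict):
    """Fraction of local adversarial candidates that already fool the target
    model, i.e. whose target prediction differs from the true label."""
    hits = sum(target_predict(x) != y for x, y in zip(candidates, true_labels))
    return hits / len(candidates)

# toy check: two of four candidates flip the stub target's prediction
rate = direct_transfer_rate(
    [["a"], ["b"], ["c"], ["d"]],
    ["pos", "pos", "pos", "pos"],
    lambda x: "neg" if x[0] in {"a", "b"} else "pos")
```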
The present invention is not limited to the above embodiments; modifications of its technical features are possible within the scope of the invention, and all equivalent changes or modifications of the structures, features and principles described in this patent application fall within the scope of protection of the present invention.

Claims (7)

1. An adversarial testing method for a natural language processing model under a limited number of queries, characterized by comprising the following steps:
a local adversarial testing step: performing a local adversarial test on a local model and generating local adversarial test samples;
a target adversarial testing step: taking the local adversarial test samples obtained in the local adversarial testing step as a starting point and continuing the adversarial test on the target model to obtain final adversarial test samples;
a local model tuning step: adding the final adversarial test samples obtained in the target adversarial testing step to the training set of the local model, retraining the local model and thereby tuning it;
wherein the local adversarial testing step comprises: processing a dataset of sentences containing n words to obtain a required number of candidate adversarial test samples;
and tuning the local model in the local model tuning step comprises:
extracting the candidate adversarial test samples x_a″ obtained in the local adversarial testing step that directly cause the target model to predict erroneously, and the adversarial test samples x_i″ obtained in the target adversarial testing step;
extracting the target-model prediction scores obtained for the samples x_a″ and x_i″ during the search process;
adding these samples x_a″ and x_i″, labeled with the target-model prediction scores, to the training dataset of the local model and retraining the local model.
2. The method of claim 1, wherein processing the dataset of sentences containing n words to obtain the required number of candidate adversarial test samples comprises:
representing a sentence in the given dataset, containing n words, as x_i = [ω_0, ω_1, …, ω_m, …, ω_n], where x_i denotes the sentence numbered i and ω_k denotes the word at position k of sentence x_i, with 0 ≤ k ≤ n;
if alternative words exist in the selected search space, finding, by querying the local model, the candidate replacement word with the highest target-label prediction score in the alternative-word space; screening out a suitable optimal replacement-word combination by a combinatorial optimization method against the local model, and replacing the words at the corresponding positions of the original sentence with the combination to generate a candidate adversarial test sample x_i′;
repeating the above steps to obtain the required number of candidate adversarial test samples, while recording the replacement-word position information of each candidate adversarial test sample.
3. The method according to claim 1 or 2, wherein the target adversarial test in the target adversarial testing step comprises:
if a local adversarial test sample obtained in the local adversarial testing step acts directly on the target model and causes it to predict erroneously, returning a successful adversarial test sample; if the adversarial test fails, taking the local adversarial test sample as the starting point and continuing optimization by the combinatorial optimization method until a successful adversarial test sample is found.
4. The method of claim 3, wherein the type of search space selected for a word comprises a synonym dictionary, sememes, or a word-embedding space.
5. The method of claim 4, wherein the local model is any one of:
(1)CNN;(2)LSTM。
6. The method of claim 5, wherein the target model implements the same task as the local model.
7. The method of claim 6, wherein the target model is a BERT model.
CN202111188633.6A 2021-10-12 2021-10-12 Countermeasure testing method for natural language processing model under condition of limited times Active CN113935481B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111188633.6A CN113935481B (en) 2021-10-12 2021-10-12 Countermeasure testing method for natural language processing model under condition of limited times

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111188633.6A CN113935481B (en) 2021-10-12 2021-10-12 Countermeasure testing method for natural language processing model under condition of limited times

Publications (2)

Publication Number Publication Date
CN113935481A CN113935481A (en) 2022-01-14
CN113935481B true CN113935481B (en) 2023-04-18

Family

ID=79278555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111188633.6A Active CN113935481B (en) 2021-10-12 2021-10-12 Countermeasure testing method for natural language processing model under condition of limited times

Country Status (1)

Country Link
CN (1) CN113935481B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114531283B (en) * 2022-01-27 2023-02-28 西安电子科技大学 Method, system, storage medium and terminal for measuring robustness of intrusion detection model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334497A (en) * 2018-02-06 2018-07-27 北京航空航天大学 The method and apparatus for automatically generating text
CN109117482A (en) * 2018-09-17 2019-01-01 武汉大学 A kind of confrontation sample generating method towards the detection of Chinese text emotion tendency
CN111652267A (en) * 2020-04-21 2020-09-11 清华大学 Method and device for generating countermeasure sample, electronic equipment and storage medium
CN111652290A (en) * 2020-05-15 2020-09-11 深圳前海微众银行股份有限公司 Detection method and device for confrontation sample
CN112765355A (en) * 2021-01-27 2021-05-07 江南大学 Text anti-attack method based on improved quantum behavior particle swarm optimization algorithm

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11275841B2 (en) * 2018-09-12 2022-03-15 Adversa Ai Ltd Combination of protection measures for artificial intelligence applications against artificial intelligence attacks
CN109492355B (en) * 2018-11-07 2021-09-07 Institute of Information Engineering, Chinese Academy of Sciences Software anti-analysis method and system based on deep learning
CN112016686B (en) * 2020-08-13 2023-07-21 Sun Yat-sen University Adversarial training method based on a deep learning model
CN112149609A (en) * 2020-10-09 2020-12-29 Air Force Engineering University of PLA Black-box adversarial example attack method for power quality signal neural network classification models

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tong Xin; Wang Luona; Wang Runzheng; Wang Jingya. Word-level adversarial example generation method for Chinese text classification. 信息网络安全 (Netinfo Security), 2020, No. 9, full text. *
Chen Jinyin; Ye Linhui; Zheng Haibin; Yang Yitao; Yu Shanqing. Black-box adversarial attack method for speech recognition ***. 小型微型计算机***, 2020, No. 5, full text. *

Also Published As

Publication number Publication date
CN113935481A (en) 2022-01-14

Similar Documents

Publication Publication Date Title
Sun et al. Joint type inference on entities and relations via graph convolutional networks
Pintas et al. Feature selection methods for text classification: a systematic literature review
Ravichandran et al. Learning surface text patterns for a question answering system
CN111062376A (en) Text recognition method based on optical character recognition and error correction tight coupling processing
CN110378489B (en) Knowledge representation learning model based on solid hyperplane projection
Wang et al. A comprehensive survey of grammar error correction
CN111062397A (en) Intelligent bill processing system
Murty et al. Characterizing intrinsic compositionality in transformers with tree projections
CN114237621B (en) Semantic code searching method based on fine granularity co-attention mechanism
CN113935481B (en) Countermeasure testing method for natural language processing model under condition of limited times
CN114429132A (en) Named entity identification method and device based on mixed lattice self-attention network
WO2021257160A1 (en) Model selection learning for knowledge distillation
CN113946687A (en) Text backdoor attack method with consistent labels
CN114153942B (en) Event time sequence relation extraction method based on dynamic attention mechanism
Hakimov et al. Evaluating architectural choices for deep learning approaches for question answering over knowledge bases
Huang et al. Pepc: A deep parallel convolutional neural network model with pre-trained embeddings for dga detection
CN112015760B (en) Automatic question-answering method and device based on candidate answer set reordering and storage medium
CN111581365B (en) Predicate extraction method
Liu et al. Improving cross-domain slot filling with common syntactic structure
CN114579605B (en) Table question-answer data processing method, electronic equipment and computer storage medium
Iori et al. The direction of technical change in AI and the trajectory effects of government funding
Zhang et al. Refsql: A retrieval-augmentation framework for text-to-sql generation
CN111767388B (en) Candidate pool generation method
Li Query spelling correction
Shahbazi et al. Joint neural entity disambiguation with output space search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant