CN110276377B - Adversarial sample generation method based on Bayesian optimization - Google Patents

Adversarial sample generation method based on Bayesian optimization

Info

Publication number
CN110276377B
CN110276377B (application CN201910414533.7A)
Authority
CN
China
Prior art keywords
value
disturbance
image
vector
delta
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910414533.7A
Other languages
Chinese (zh)
Other versions
CN110276377A (en
Inventor
刘林兴
冯建文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910414533.7A priority Critical patent/CN110276377B/en
Publication of CN110276377A publication Critical patent/CN110276377A/en
Application granted granted Critical
Publication of CN110276377B publication Critical patent/CN110276377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/554Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Probability & Statistics with Applications (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for generating adversarial samples based on Bayesian optimization; existing black-box attack methods must query the model a large number of times to obtain optimization information. The method takes an original image as input and determines the position to be optimized by computing the gradient of the structural similarity between the perturbed image and the original image; Bayesian optimization is then used to sample within the selected position to obtain a perturbation value that increases the loss function there; positions are selected and perturbation values optimized iteratively until the classification result of the perturbed image changes or the maximum number of iterations is reached. The method effectively reduces the number of queries to the target DNN model and perturbs only a small number of pixels.

Description

Adversarial sample generation method based on Bayesian optimization
Technical Field
The invention belongs to the field of computer digital image processing, and particularly relates to a method for generating adversarial samples.
Background
Deep learning has achieved major breakthroughs on complex problems that were previously hard to solve, and has been applied, for example, to reconstructing brain circuits, analyzing DNA mutations, predicting the active structure of potential drug molecules, and analyzing particle accelerator data. Deep neural networks (DNNs) have also become the method of choice for many challenging tasks in speech recognition and natural language understanding.
While DNNs perform various computer vision tasks with surprising accuracy, they are extremely vulnerable to adversarial attacks in the form of tiny image perturbations that are barely perceptible to the human visual system. Such an attack can cause a DNN classifier to completely change its prediction for an image, with the attacked model reporting high confidence in the wrong prediction. Moreover, the same image perturbation can fool multiple neural network classifiers. Perturbed images that can alter the prediction of a DNN classifier are referred to as adversarial samples.
Current methods for generating adversarial samples fall broadly into two categories: white-box attacks and black-box attacks. A white-box attack assumes full knowledge of the target model, including its parameter values, architecture, and training method, and sometimes even its training data, and uses this knowledge to generate adversarial samples that fool the target model. For example, FGSM computes the gradient information of the target model and constructs an adversarial sample by adding a small perturbation of equal magnitude to every pixel value; other white-box methods compute the forward derivative of the model and construct adversarial samples by perturbing a limited number of pixels. White-box attacks are computationally fast but require gradient information from the target network. Black-box attack methods need no knowledge of the network's gradients or parameters: they query the target model with candidate adversarial samples, observe the predicted labels, and use this information to generate adversarial samples that deceive the model. For example, the One Pixel Attack method uses differential evolution and generates adversarial samples by observing the predicted probability labels of the target model, misleading the target network by changing only one pixel, and the Boundary Attack method needs only the classification output of the network. However, the lack of gradient information makes evaluation expensive: the One Pixel Attack method requires about 3 million evaluations, and the Boundary Attack method requires millions of evaluations.
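For illustration, the FGSM construction mentioned above, which adds a perturbation of equal magnitude to every pixel in the direction of the loss gradient, can be sketched as follows. This is a minimal NumPy sketch, not part of the claimed method; the function name and the toy logistic model in the test are illustrative assumptions.

```python
import numpy as np

def fgsm(x, grad_loss_x, eps=0.05):
    """FGSM sketch: move every pixel by eps in the sign direction of the
    loss gradient, then clip back to the valid [0, 1] image range."""
    return np.clip(x + eps * np.sign(grad_loss_x), 0.0, 1.0)
```

In a white-box setting grad_loss_x would come from backpropagation through the target model; the black-box method of the invention never has access to it.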
Disclosure of Invention
The invention aims to address the heavy query overhead of existing black-box attack methods by providing a black-box attack that generates adversarial samples based on Bayesian optimization. The method searches the solution space with Bayesian optimization, iteratively finding a specific perturbation which, when added to the original image, changes the classifier's prediction for the perturbed image.
The black-box attack method of the invention comprises the following steps:
Step one: acquire the true class y_c of the original image x and its probability M_c
The original image x is fed to the target DNN classifier parameterized by θ to obtain its probability output vector M(x; θ). The class corresponding to the maximum of the probability output vector is taken as the class prediction y_c of the original image, and that maximum value is M_c.
Step two, determining an objective function to be optimized
An adversarial sample is generated by an iterative method; to reduce computational complexity, only one dimension of the image vector is perturbed in each iteration. Let the perturbation value be z, assigned to the corresponding dimension of Δx, where Δx is an all-zero perturbation vector with the same dimensions as x. The perturbation satisfies −ε < z < ε to preserve image quality, with ε a preset threshold. Inputting x + Δx into the deep neural network (DNN) classifier with parameter θ yields the prediction output vector M(x + Δx; θ). Let M_t denote the maximum probability in M(x + Δx; θ) among all classes other than y_c, with corresponding class y_t. The objective function is defined as B(z) = log(M_c) − log(M_t). The optimization target is B(z) ≤ 0, i.e., the classification result of the target DNN classifier on the perturbed image is changed.
step three, determining the coordinates and channels needing to be optimized in the iteration
In the T-th iteration, compute the gradient of the structural similarity between the current perturbed image x' = x + Δx and a random image x_G:

∇_{x'} SSIM(x', x_G)

and select the dimension s corresponding to the minimum gradient value as the dimension to optimize. x_G is a random vector sampled from a Gaussian distribution, with the same dimensions as x.
step four, Bayesian optimization is used in specific dimensionality
1) Use a Gaussian process as a surrogate for the objective function to be optimized, with the expected improvement (EI) strategy as the acquisition function. Set the maximum number of test points to I and the current number of test points i to 0. First, randomly select several perturbation values to evaluate, producing an initial observation data set D_{1:t} containing t observed data points.
2) From the posterior distribution obtained from the observation data set D_{1:t}, construct the EI acquisition function α_t(z; D_{1:t}):

α_t(z; D_{1:t}) = (v* − μ_t(z)) Φ(u) + σ_t(z) φ(u),  with u = (v* − μ_t(z)) / σ_t(z)

where v* is the current optimal function value, Φ(·) and φ(·) are the standard normal cumulative distribution and probability density functions, and μ_t(z) and σ_t(z) are the posterior mean and standard deviation at z given the data points in D_{1:t}.
3) Select the next evaluation point by maximizing the acquisition function:

z_{t+1} = argmax_z α_t(z; D_{1:t})
Assign z_{t+1} to the corresponding dimension s of Δx and evaluate the objective function value B(z_{t+1}); after evaluation, add z_{t+1} and its evaluation value to the observation data set D. Increment i; if i ≤ I, go to (2).
4) Output the minimum function value B(z) in the observation data set and the corresponding perturbation value z.
Step five: assign the optimal perturbation value z obtained in step four to the perturbation vector Δx. If B(z) ≤ 0, the attack is considered successful and the perturbed image x + Δx is output as an adversarial sample; if B(z) > 0, the attack is considered unsuccessful in this iteration, so return to step three and continue the next iteration from the current perturbation vector Δx.
The invention has the following beneficial effects:
By computing the gradient of the structural similarity and adding the perturbation at the pixel whose coordinate corresponds to the minimum gradient, the method reduces the impact of the added perturbation on image quality. Meanwhile, because the perturbation is computed with Bayesian optimization, the optimal perturbation value can be obtained with fewer queries.
Drawings
FIG. 1 is the original image;
FIG. 2 is the Gaussian random image;
FIG. 3 is the adversarial perturbation image;
FIG. 4 is the adversarial sample image.
Detailed Description
The method takes an original image as input, computes the structural similarity between the perturbed image and a random Gaussian image along with its gradient, and selects the dimension corresponding to the minimum gradient value. The best perturbation value is then obtained dimension by dimension using Bayesian optimization, and the perturbations from successive iterations are superimposed until the class prediction of the DNN classifier changes.
The whole process of the invention is illustrated below (the figures show the effect of each step):
Step one: acquire the true class y_c of the original image x and its probability M_c
x is the original image vector (FIG. 1), Δx is an all-zero perturbation vector with the same dimensions as x, and x_G is a random vector sampled from a Gaussian distribution with the same dimensions as x (FIG. 2). The original image x is fed to the target DNN classifier to obtain its probability output vector M(x; θ). The class corresponding to the maximum of the probability output vector is taken as the class prediction y_c, and that maximum value is M_c.
Step two, determining an objective function to be optimized
Since the image vector x has high dimensionality and a perturbation need not be added to every dimension to generate an adversarial sample, the method perturbs only one dimension at a time, leaving the other dimensions unchanged, to produce a trial perturbation Δx. Inputting x + Δx into the DNN classifier yields the prediction output vector M(x + Δx; θ). Let M_t denote the maximum probability in M(x + Δx; θ) among all classes other than y_c, with corresponding class y_t. The objective function is defined as B(z) = log(M_c) − log(M_t). The goal of the optimization is B(z) ≤ 0, which changes the classification result of the target DNN classifier on the perturbed image.
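The objective of step two can be written as a small function. In this sketch, model stands for black-box access to M(·; θ) returning a probability vector, and c is the true class index y_c; both names are assumptions for illustration.

```python
import numpy as np

def objective_B(model, x, delta_x, c):
    """Attack objective of step two: B(z) = log(M_c) - log(M_t).

    model(v) is assumed to return the probability vector M(v; theta) of the
    black-box classifier; c is the true class index y_c of x. B(z) <= 0
    means the perturbed image is no longer classified as c.
    """
    p = model(x + delta_x)
    M_c = p[c]
    # M_t: largest probability over all classes other than c
    M_t = max(p[i] for i in range(len(p)) if i != c)
    return float(np.log(M_c) - np.log(M_t))
```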
Step three: determining the coordinates and channels to be optimized in the iteration
In the T-th iteration, compute the structural similarity SSIM(x', x_G) between the current perturbed image x' = x + Δx and the random image x_G:

SSIM(x', x_G) = [(2 μ_{x'} μ_{x_G} + c_1)(2 σ_{x'x_G} + c_2)] / [(μ_{x'}^2 + μ_{x_G}^2 + c_1)(σ_{x'}^2 + σ_{x_G}^2 + c_2)]

where μ_{x'} and μ_{x_G} denote the means of x' and x_G, σ_{x'}^2 and σ_{x_G}^2 their variances, σ_{x'x_G} their covariance, and c_1 and c_2 are small scalars that keep the denominator from being zero. Then take the gradient of the structural similarity with respect to x', which yields a gradient vector with the same dimensions as the original image:

∇_{x'} SSIM(x', x_G)

and select the coordinate s and channel c corresponding to the minimum gradient value as the next coordinate to optimize:

(s, c) = argmin ∇_{x'} SSIM(x', x_G)
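The dimension selection of step three can be sketched as follows, assuming a single global SSIM window over the whole image vector. The patent derives the gradient analytically; this sketch approximates it with central finite differences, so it is illustrative only, and the constants c1 and c2 are assumed values.

```python
import numpy as np

def ssim(a, b, c1=1e-4, c2=9e-4):
    """Single-window structural similarity of two image vectors, following
    the formula in step three; c1 and c2 keep the denominator nonzero."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def select_dimension(x_pert, x_g, step=1e-4):
    """Step three sketch: the coordinate s with the minimum SSIM gradient,
    using central finite differences instead of the analytic gradient."""
    grad = np.empty_like(x_pert)
    for i in range(x_pert.size):
        e = np.zeros_like(x_pert)
        e[i] = step
        grad[i] = (ssim(x_pert + e, x_g) - ssim(x_pert - e, x_g)) / (2 * step)
    return int(np.argmin(grad))
```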
step four: using Bayesian optimization for particular pixels
1) Use a Gaussian process as a surrogate for the objective function to be optimized, with the EI strategy as the acquisition function. Set the maximum number of test points to I and the current number of test points i to 0. First, randomly select several perturbation values to evaluate, producing an initial observation data set D_{1:t} containing t observed data points.
2) From the posterior distribution obtained from the observation data set D_{1:t}, construct the EI acquisition function α_t(z; D_{1:t}):

α_t(z; D_{1:t}) = (v* − μ_t(z)) Φ(u) + σ_t(z) φ(u),  with u = (v* − μ_t(z)) / σ_t(z)

where v* is the current optimal function value, Φ(·) and φ(·) are the standard normal cumulative distribution and probability density functions, and μ_t(z) and σ_t(z) are the posterior mean and standard deviation at z given the data points in D_{1:t}.
3) Select the next evaluation point by maximizing the acquisition function:

z_{t+1} = argmax_z α_t(z; D_{1:t})
Assign z_{t+1} to the corresponding dimension s of Δx and evaluate the objective function value B(z_{t+1}); after evaluation, add z_{t+1} and its evaluation value to the observation data set D. Set i ← i + 1; if i ≤ I, go to (2).
4) Output the minimum function value B(z) in the observation data set and the corresponding perturbation value z.
Step five: assign the optimal perturbation value z obtained in step four to the perturbation vector Δx (the final perturbation image is shown in FIG. 3; in total, 36 pixels were perturbed using 891 evaluations). If B(z) ≤ 0, the attack is considered successful and the perturbed image x + Δx is output as an adversarial sample (the final adversarial sample image is shown in FIG. 4); if B(z) > 0, the attack is considered unsuccessful in this iteration, so return to step three and continue the next iteration from the current perturbation vector Δx.
Experimental results: 100 images were randomly selected from CIFAR-10 as experimental data. On average, 95.22 pixels were perturbed (median 78.5) and 2364.85 evaluations were needed (median 1944.5); this is significantly fewer evaluations than the One Pixel Attack and Boundary Attack methods require.
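Putting steps three to five together, the outer loop of the method can be sketched as below. Here select_dim and optimize_dim stand in for the step-three and step-four routines, and model for black-box access to the classifier; all names and the iteration budget are illustrative assumptions.

```python
import numpy as np

def attack(model, x, c, select_dim, optimize_dim, max_iter=50):
    """Outer loop sketch (steps three to five): in each iteration pick a
    dimension, optimize its perturbation value, and accumulate it into
    delta_x until B(z) <= 0 or the iteration budget is exhausted.

    model(v) returns a probability vector; select_dim(x') -> s and
    optimize_dim(B) -> (z, B(z)) are placeholders for steps three and four.
    """
    delta_x = np.zeros_like(x)
    for _ in range(max_iter):
        s = select_dim(x + delta_x)       # step three: dimension to perturb
        def B(z, s=s):                    # step two objective on this dimension
            d = delta_x.copy()
            d[s] = z
            p = model(x + d)
            m_t = max(p[i] for i in range(len(p)) if i != c)
            return float(np.log(p[c]) - np.log(m_t))
        z, best = optimize_dim(B)         # step four: best perturbation value
        delta_x[s] = z                    # step five: keep the perturbation
        if best <= 0:
            return x + delta_x            # attack succeeded
    return None                           # budget exhausted
```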

Claims (1)

1. An adversarial sample generation method based on Bayesian optimization, characterized by comprising the following steps:
Step one: acquire the true class y_c of the original image x and its probability M_c
The original image x is fed to the target DNN classifier parameterized by θ to obtain its probability output vector M(x; θ). The class corresponding to the maximum of the probability output vector is taken as the class prediction y_c of the original image, and that maximum value is M_c.
Step two, determining an objective function to be optimized
An adversarial sample is generated by an iterative method; to reduce computational complexity, only one dimension of the image vector is perturbed in each iteration. Let the perturbation value be z, assigned to the corresponding dimension of Δx, where Δx is an all-zero perturbation vector with the same dimensions as x. The perturbation satisfies −ε < z < ε to preserve image quality, with ε a preset threshold. Inputting x + Δx into the deep neural network (DNN) classifier with parameter θ yields the prediction output vector M(x + Δx; θ). Let M_t denote the maximum probability in M(x + Δx; θ) among all classes other than y_c, with corresponding class y_t. The objective function is defined as B(z) = log(M_c) − log(M_t). The optimization target is B(z) ≤ 0, i.e., the classification result of the target DNN classifier on the perturbed image is changed.
step three, determining the coordinates and channels needing to be optimized in the iteration
In the T-th iteration, compute the gradient of the structural similarity between the current perturbed image x' = x + Δx and a random image x_G:

∇_{x'} SSIM(x', x_G)

and select the dimension s corresponding to the minimum gradient value as the dimension to optimize. x_G is a random vector sampled from a Gaussian distribution, with the same dimensions as x.
step four, Bayesian optimization is used in specific dimensionality
1) Use a Gaussian process as a surrogate for the objective function to be optimized, with the EI strategy as the acquisition function. Set the maximum number of test points to I and the current number of test points i to 0. First, randomly select several perturbation values to evaluate, producing an initial observation data set D_{1:t} containing t observed data points.
2) From the posterior distribution obtained from the observation data set D_{1:t}, construct the EI acquisition function α_t(z; D_{1:t}):

α_t(z; D_{1:t}) = (v* − μ_t(z)) Φ(u) + σ_t(z) φ(u),  with u = (v* − μ_t(z)) / σ_t(z)

where v* is the current optimal function value, Φ(·) and φ(·) are the standard normal cumulative distribution and probability density functions, and μ_t(z) and σ_t(z) are the posterior mean and standard deviation at z given the data points in D_{1:t}.
3) Select the next evaluation point by maximizing the acquisition function:

z_{t+1} = argmax_z α_t(z; D_{1:t})
Assign z_{t+1} to the corresponding dimension s of Δx and evaluate the objective function value B(z_{t+1}); after evaluation, add z_{t+1} and its evaluation value to the observation data set D. Increment i; if i ≤ I, go to (2).
4) Output the minimum function value B(z) in the observation data set and the corresponding perturbation value z.
Step five: assign the optimal perturbation value z obtained in step four to the perturbation vector Δx. If B(z) ≤ 0, the attack is considered successful and the perturbed image x + Δx is output as an adversarial sample; if B(z) > 0, the attack is considered unsuccessful in this iteration, so return to step three and continue the next iteration from the current perturbation vector Δx.
CN201910414533.7A 2019-05-17 2019-05-17 Adversarial sample generation method based on Bayesian optimization Active CN110276377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910414533.7A CN110276377B (en) 2019-05-17 2019-05-17 Adversarial sample generation method based on Bayesian optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910414533.7A CN110276377B (en) 2019-05-17 2019-05-17 Adversarial sample generation method based on Bayesian optimization

Publications (2)

Publication Number Publication Date
CN110276377A CN110276377A (en) 2019-09-24
CN110276377B true CN110276377B (en) 2021-04-06

Family

ID=67960053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910414533.7A Active CN110276377B (en) 2019-05-17 2019-05-17 Adversarial sample generation method based on Bayesian optimization

Country Status (1)

Country Link
CN (1) CN110276377B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063398B (en) * 2019-12-20 2023-08-18 吉林大学 Molecular discovery method based on graph Bayesian optimization
CN111275106B (en) * 2020-01-19 2022-07-01 支付宝(杭州)信息技术有限公司 Countermeasure sample generation method and device and computer equipment
CN111507384B (en) * 2020-04-03 2022-05-31 厦门大学 Method for generating confrontation sample of black box depth model
CN111476228A (en) * 2020-04-07 2020-07-31 海南阿凡题科技有限公司 White-box confrontation sample generation method for scene character recognition model
CN111709435B (en) * 2020-05-18 2023-06-20 杭州电子科技大学 Discrete wavelet transform-based countermeasure sample generation method
CN111723864A (en) * 2020-06-19 2020-09-29 天津大学 Method and device for performing countermeasure training by using internet pictures based on active learning
CN111858345A (en) * 2020-07-23 2020-10-30 深圳慕智科技有限公司 Image sample generation capability multi-dimensional evaluation method based on antagonistic sample definition
CN112200243B (en) * 2020-10-09 2022-04-26 电子科技大学 Black box countermeasure sample generation method based on low query image data
CN112766430B (en) * 2021-01-08 2022-01-28 广州紫为云科技有限公司 Method, device and storage medium for resisting attack based on black box universal face detection
CN113158138A (en) * 2021-01-28 2021-07-23 浙江工业大学 Method for rapidly detecting contrast sensitivity threshold
CN113450271B (en) * 2021-06-10 2024-02-27 南京信息工程大学 Robust self-adaptive countermeasure sample generation method based on human visual model
CN113486736B (en) * 2021-06-21 2024-04-02 南京航空航天大学 Black box anti-attack method based on active subspace and low-rank evolution strategy
CN113704758B (en) * 2021-07-29 2022-12-09 西安交通大学 Black box attack countermeasure sample generation method and system
CN113420841B (en) * 2021-08-23 2021-12-14 北京邮电大学 Toxic sample data generation method and device
CN114444690B (en) * 2022-01-27 2024-06-07 厦门大学 Migration attack method based on task augmentation
CN115063654A (en) * 2022-06-08 2022-09-16 厦门大学 Black box attack method based on sequence element learning, storage medium and electronic equipment
CN114861893B (en) * 2022-07-07 2022-09-23 西南石油大学 Multi-channel aggregated countermeasure sample generation method, system and terminal
CN115271067B (en) * 2022-08-25 2024-02-23 天津大学 Android anti-sample attack method based on feature relation evaluation
CN116543268B (en) * 2023-07-04 2023-09-15 西南石油大学 Channel enhancement joint transformation-based countermeasure sample generation method and terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257116A (en) * 2017-12-30 2018-07-06 清华大学 A kind of method for generating confrontation image
CN108446765A (en) * 2018-02-11 2018-08-24 浙江工业大学 The multi-model composite defense method of sexual assault is fought towards deep learning
CN108520268A (en) * 2018-03-09 2018-09-11 浙江工业大学 The black box antagonism attack defense method evolved based on samples selection and model
CN109165735A (en) * 2018-07-12 2019-01-08 杭州电子科技大学 Based on the method for generating confrontation network and adaptive ratio generation new samples

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025284B (en) * 2017-04-06 2020-10-27 中南大学 Network comment text emotional tendency recognition method and convolutional neural network model
JP7023669B2 (en) * 2017-10-26 2022-02-22 株式会社Preferred Networks Image generation method, image generation device, and image generation program
US11741693B2 (en) * 2017-11-15 2023-08-29 Palo Alto Research Center Incorporated System and method for semi-supervised conditional generative modeling using adversarial networks
CN108491925A (en) * 2018-01-25 2018-09-04 杭州电子科技大学 The extensive method of deep learning feature based on latent variable model
CN108833401A (en) * 2018-06-11 2018-11-16 中国人民解放军战略支援部队信息工程大学 Network active defensive strategy choosing method and device based on Bayes's evolutionary Game

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257116A (en) * 2017-12-30 2018-07-06 清华大学 A kind of method for generating confrontation image
CN108446765A (en) * 2018-02-11 2018-08-24 浙江工业大学 The multi-model composite defense method of sexual assault is fought towards deep learning
CN108520268A (en) * 2018-03-09 2018-09-11 浙江工业大学 The black box antagonism attack defense method evolved based on samples selection and model
CN109165735A (en) * 2018-07-12 2019-01-08 杭州电子科技大学 Based on the method for generating confrontation network and adaptive ratio generation new samples

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Active Preference Learning for Generative Adversarial Networks; Masahiro Kazama et al; IEEE International Conference on Big Data; 2018-01-12; 4389-4393 *
Verifying Controllers Against Adversarial Examples with Bayesian Optimization; Shromona Ghosh et al; IEEE International Conference on Robotics and Automation; 2018-09-04; 7306-7313 *
Background subtraction algorithm based on Bayesian generative adversarial networks; Zheng Wenbo et al; Acta Automatica Sinica; 2018-05-31; vol. 44, no. 5; 878-890 *
The problem of adversarial examples in deep learning; Zhang Sisi et al; Chinese Journal of Computers; 2018-10-25; vol. 42, no. 8; 1886-1904 *
Generation of deep learning adversarial examples under a black-box threat model; Meng Dongyu; Electronic Design Engineering; 2018-12-31; vol. 26, no. 24; 164-173 *

Also Published As

Publication number Publication date
CN110276377A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110276377B (en) Adversarial sample generation method based on Bayesian optimization
Cui et al. Class-balanced loss based on effective number of samples
Ghosh et al. Structured variational learning of Bayesian neural networks with horseshoe priors
CN108073876B (en) Face analysis device and face analysis method
Liu et al. A chaotic quantum-behaved particle swarm optimization based on lateral inhibition for image matching
CN109961145B (en) Antagonistic sample generation method for image recognition model classification boundary sensitivity
CN110334806A (en) A kind of confrontation sample generating method based on production confrontation network
CN107578028B (en) Face recognition method, device and equipment and computer readable storage medium
CN111709435A (en) Countermeasure sample generation method based on discrete wavelet transform
Alqahtani et al. Pruning CNN filters via quantifying the importance of deep visual representations
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN111967573A (en) Data processing method, device, equipment and computer readable storage medium
CN114038055A (en) Image generation method based on contrast learning and generation countermeasure network
Striuk et al. Generative adversarial neural network for creating photorealistic images
Wiggers et al. Predictive sampling with forecasting autoregressive models
CN112560034B (en) Malicious code sample synthesis method and device based on feedback type deep countermeasure network
Zhang et al. Distribution-preserving-based automatic data augmentation for deep image steganalysis
Liu et al. A meaningful learning method for zero-shot semantic segmentation
Putra et al. Multilevel neural network for reducing expected inference time
CN117011508A (en) Countermeasure training method based on visual transformation and feature robustness
Yang et al. Pseudo-representation labeling semi-supervised learning
Shi et al. A scalable convolutional neural network for task-specified scenarios via knowledge distillation
WO2021248544A1 (en) Low resource computational block for trained neural network
CN111882563B (en) Semantic segmentation method based on directional full convolution network
CN114036503B (en) Migration attack method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant