CN116824232A - Data-filling adversarial training method for a deep neural network image classification model - Google Patents

Data-filling adversarial training method for a deep neural network image classification model

Info

Publication number
CN116824232A
CN116824232A (application CN202310697116.4A)
Authority
CN
China
Prior art keywords
model
data
adversarial
training
filling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310697116.4A
Other languages
Chinese (zh)
Inventor
Lu Guangyue
Wen Sulei
Sun Jiaze
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications
Priority to CN202310697116.4A
Publication of CN116824232A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

Aimed at the robustness optimization of deep neural networks, the invention discloses a data-filling adversarial training method for deep neural network image classification models, belonging to the fields of deep learning and artificial intelligence security. First, the target model is restructured: its output layer is enlarged and the newly added class is defined as the filling class. Second, a trap-type smoothing loss function is established to optimize the target manifold and improve the efficiency of adversarial example detection. Then a base adversarial training method is selected; in the data generation stage, filling-class data is generated from the adversarial examples by a filling-class data generation method, adversarial training is performed with the filling-class data together with the adversarial examples, and the model parameters are updated with the trap-type smoothing loss function. Finally, the method's parameters are tuned according to the combined test results, and the model is retrained several times to obtain the optimized target model. The invention combines adversarial training with adversarial example detection, can effectively defend against adversarial examples of different perturbation sizes, and improves model robustness.

Description

Data-filling adversarial training method for a deep neural network image classification model
Technical Field
The invention belongs to the fields of deep learning and artificial intelligence security, in particular to the expansion of neural network models and the optimization of their robustness, and provides a data-filling adversarial training method for deep neural network image classification models.
Background
The deep neural network (Deep Neural Network, DNN) is the most widely applied and intensively studied branch of Artificial Intelligence (AI). It has achieved great success in fields such as image classification and segmentation, image detection and tracking, and speech recognition, and, with further improvements in computing power, models, and data availability, it is developing rapidly in fields with extremely high safety requirements such as autonomous driving and medical diagnosis. Take Tesla, whose vehicles offer functions such as automatic gear shifting and autonomous driving, as an example: in 2021 Tesla held an electric-vehicle market share of up to 72% in the United States while steadily leading electric-vehicle sales in China. This illustrates the great appeal and commercial value of AI. However, AI security faces serious tests in high-risk fields such as autonomous driving, medical imaging, and the military industry.
In the field of AI image classification, following the invention of convolutional neural networks (Convolutional Neural Networks, CNNs), researchers successively proposed VGG, ResNet, Inception, and other deep neural network models, greatly improving image classification accuracy across many scenarios. However, research shows that even high-accuracy deep neural network models are extremely vulnerable to adversarial example attacks, which cause the model to misjudge. An adversarial example is a malicious input generated by an attacker by adding a carefully crafted, tiny adversarial perturbation to a clean sample; the clean sample and the adversarial example cannot be distinguished by human vision alone. Moreover, owing to the complexity of deep neural network models, insufficient training data, and other causes, an adversarial example can disrupt both the feature extraction performed on the data by the shallow layers of the DNN and the semantic analysis performed by its deep layers, changing the model's final classification decision and causing classification errors.
DNN robustness is an important component of trusted AI. In the field of DNN image classification it requires that the model not be affected by perturbations of the processed data samples and still predict the correct output class. Adversarial examples reveal the vulnerability and fragility of deep neural networks; they constitute a major security hole in the field and seriously jeopardize the robustness and security of deep neural networks.
To address this security vulnerability, researchers have proposed various adversarial defense methods to improve DNN robustness, which measures the model's immunity to perturbation noise added to its input data. Adversarial defense methods improve DNN robustness mainly from four angles: adversarial example detection, input preprocessing, directly improving the adversarial robustness of the DNN itself, and provably robust defenses. At present, owing to the unexplainability of DNNs and the endless stream of new attack methods, adversarial training, which directly improves DNN adversarial robustness, is regarded by the academic community as the strongest known DNN adversarial defense because of its stability and the generalization of its defense.
Disclosure of Invention
A data-filling adversarial training method for deep neural network image classification models: first, filling-class data is generated, by a filling-class data generation method, from the adversarial examples produced on the training set data, and the target model M is adversarially trained with the adversarial examples together with a certain proportion of filling-class data. Meanwhile, to add a filling-class output category to model M, its output layer is enlarged so that an output layer with k original categories becomes one with k+1. The filling class occupies the feature space that the low-dimensional training-set data manifold (the data distribution) does not cover in the high-dimensional space of the deep neural network model; when the model's output class is the filling class, the corresponding input is judged to be an anomalous adversarial input. Second, the loss function used to train the modified model is a trap-type smoothing loss function. It optimizes the distribution of the different classes on the training-set data manifold while establishing an induction relation between the training-set data manifold and the filling-class data manifold, thereby improving the efficiency of adversarial example detection.
The invention relates to a method of defense against adversarial examples, characterized by comprising the following steps:
step one: to improve the robustness of the target deep neural network image classification model, a trained deep neural network image classification model applied to a specific task is selected as the target model M; the target model M with k output categories is then modified by enlarging its output layer, yielding the final model M_plus with k+1 output-layer categories;
step two: creating a trap-type smoothing loss function for optimizing the target manifold distribution and creating an induction relation between the target manifold and the filling class distribution, as in formula (1), wherein q is the total class number of the model, alpha is the trap induction factor, y T And y S Output vectors belonging to the clean category and the filling category in the total output vector respectively, y q For the original one-hot type of output vector,and->Respectively represent y q Part of the original output vector belonging to the clean class and the fill class, size (y T ) Representing vector y T The number of elements of (2) is finally output as y T And y S The spliced output vector y, specifically: the trap-type smoothing loss function uses a default one-hot output vector for the label vector belonging to the filling class, and the formula (1) is given for y for the label vector belonging to the original target class T And y S Respectively processing, wherein y is as follows T The target class in the middle smoothes the a probability and distributes probabilities a/2size (y) to all the original classes in a uniform distribution manner T ) For y S Is assigned a probability a/2 (q-size (y) T ) And finally will be smoothedTreated y T And y S The label vector and the model output x are calculated by using a cross entropy loss function after being spliced;
step three: selecting a basic countermeasure training method, and generating corresponding filling class data by using a filling data generating method shown in a formula (2) for a countermeasure sample shown in the right side of fig. 3 generated by each small-batch training set, wherein x is adv To counter the samples, the random () function returns random numbers which are in the same format as the input data and conform to normal distribution, sign () sign function gives positive and negative values corresponding to disturbance according to positive and negative of parameters, l (Epoch) represents one-dimensional linear interpolation function interpolation () which dynamically adjusts the size of disturbance vector according to the current cycle number and total cycle number Epoch according to the cycle base Epoch, uses standard deviation std of training data as the minimum value of linear interpolation, regularizes the initial interpolation value size using super parameter beta, uses data normalization input upper limit x max As the maximum value of linear interpolation, and using the super parameter eta as the control parameter generated by filling type data;
step four: according to the pixel size and the model volume of the image dataset, manually and dynamically selecting the proportion parameters of the filling class data for the countermeasure training in each small batch, splicing the filling class data with the countermeasure sample by using the proportion and participating in the retraining process of the countermeasure training, in particular: the larger the image pixels, the lower the proportion parameters of the filling class data applied to the countermeasure training in each small batch, and the data class of the newly generated filling class data is k+1;
step five: using early-stop pairing model M plus Performing challenge training, specifically, updating the historical optimal accuracy of the model in challenge test each time, when the training is loopedWhen the model accuracy rate of the current cycle is lower than the historical optimal accuracy rate and exceeds a threshold value, model training is stopped and model parameters are stored, and specifically: the threshold default value is 0.1;
step six: testing the model parameters saved in the step five by using different anti-attack methods, adjusting trap smoothing factors in the step two, parameters beta and eta in the step three filling type data generation method and proportion parameters in the step four according to test results, and carrying out parameter optimization experiments by using a deep neural network image classification model of data filling type shown in fig. 2 to resist training method schematic diagrams, specifically: when model M plus Reducing the trap smoothing factor size, increasing beta in step three to increase the initial distance of the filling class data generation method, reducing eta in step three to reduce the intrusion of filling class data to a target manifold, and reducing the splicing proportion parameter in each cycle of filling class data in step four when the model M is used for plus When the challenge sample detection performance is poor in the high disturbance challenge environment, the trap type smoothing factor size should be properly increased, the beta in the third step should be reduced to enlarge the generation search space of the filling type data, and the splicing proportion parameter of the filling type data in each cycle in the fourth step should be increased, when the model M plus The robustness of the model M is improved to the expected effect of the algorithm, while the challenge robustness provided by the challenge training can be maintained under low-disturbance attack conditions and while the disturbance global can provide detection defenses against the samples.
Drawings
FIG. 1 is a flow chart of the data-filling adversarial training method for deep neural network image classification models.
Fig. 2 is a schematic diagram of the data-filling adversarial training method for deep neural network image classification models.
Fig. 3 is a schematic diagram of an adversarial example.
Detailed Description
Taking the deep neural network model ResNet-18 as an example, a specific embodiment of the data-filling adversarial training method for deep neural network image classification models provided by the invention is described.
Step one: resNet-18 is selected as a basic target model M, 50000 training set data in MNIST data sets are selected as an original data set D, and the number of output categories of M is 10. And carrying out +1 operation on the number of the output layers of M, wherein the number of the output layers of M is 11, and the number of the output layers of M comprises 10 target data categories and 1 filling data category.
Step two: modifying a cross entropy loss function according to a formula (1), performing trap type label smoothing on a label vector of a target data class according to the formula (1), using a default one-hot vector on a label vector of a filling class data class, finally splicing the label vector of the filling class data class in a default one-hot expression form by using the label vector of the smoothed target data class, and performing cross entropy loss calculation on the label vector and the output of a model. Specifically: setting trap type smoothing factor to 0.35
Step three: a gradient-based fast gradient notation (Fast Gradient Sign Method, FGSM) challenge training method is selected. In the process of generating the challenge sample data of each small batch, firstly, using an FGSM challenge attack method to generate the challenge sample based on the training data of the small batch in the current cycle, and secondly, using a formula (2) filling type data generating method to generate filling type data based on the challenge sample generated in the current cycle. Specifically: the FGSM disturbance parameter is set to 0.3.
Step four: splicing the generated filling type data with the countermeasure sample generated in the current circulation in a proportion of 10% -50%, marking the countermeasure sample with correct category and marking the filling type data with category 11, calculating the loss of the model by using the trap type loss function defined in the second step, and finally carrying out back propagation on the model and updating model parameters.
Step five: a projection gradient attack method (Project Gradient Descent, PGD) is used as an attack method for the model robustness test. Setting variable history optimal accuracy for recording each large cycleRobustness (accuracy) of the model M after parameter updating at the end of the loop in the face of PGD attack, and when the model accuracy obtained by the current cycle is lower than the historical optimal accuracy and is lower than the threshold value of 0.1, the countermeasure training is ended in advance and the model M is saved plus . Specifically: setting PGD disturbance resisting interval as [0.1,0.2,0.3,0.35,0.4,0.45,0.5 ]]The model M after parameter updating is used for testing the robustness of the model M in the face of different disturbance sizes against attacks. The single disturbance of PGD was set to 0.01 and the attack iteration number was 50.
Step six: FGSM adversarial training is performed on the original model M on the same training dataset with the same parameters, and the resulting comparison model M_compare is saved.
Step seven: using PGD, C&W and adaptive Attack method AA (Auto-attach) pair model M obtained in step five plus And M compare And performing final combination test of the model. When model M plus Accuracy in low disturbance space (setting of disturbance countermeasure less than 0.3) compared to model M obtained in step six compare Without decrease, and M plus The accuracy of the detected model can reach 80-100% in a high disturbance space (the disturbance resistance setting is more than 0.3), and a final model M is output plus . When model M plus Accuracy in low disturbance space (setting of disturbance countermeasure less than 0.3) compared to model M obtained in step six compare If the number of the parameters is obviously reduced, the following scheme is selected for parameter adjustment: 1. reducing the trap type smoothing factor value in the second step; 2. increasing initial interpolation of the filling type data generating function in the third step; 3. and (3) reducing the splicing proportion of the filling type data in the fourth step. When model M plus When the resistance sample detection capability of the high disturbance space (the resistance disturbance setting is more than 0.3) is not good, the following scheme is selected for parameter adjustment: 1. the trap type smoothing factor value in the second step is improved; 2. and (5) improving the splicing proportion of the filling type data in the fourth step. Repeating the second to seventh steps after adjusting the parameters until the model can be maintained under low disturbance anti-attack environment and M in the combined test compare Similar robustness provided by countermeasure training while having higher robustness in disturbance global, especially high disturbance countermeasure environmentsAgainst the sample detection capability.
Through the above process, the data-filling adversarial training method for deep neural network image classification models is realized; its flow chart is shown in fig. 1. The method augments adversarial training with data produced by the filling-class data generation method and uses the newly generated filling-class data as the basis for adversarial example detection. Meanwhile, the proposed trap-type smoothing loss function optimizes the data manifold of the original data while establishing an induction relation between the target data manifold and the filling-class data distribution, improving the strength of adversarial example detection. After tuning, the target model maintains the defense provided by adversarial training in the low-perturbation space while enjoying additional adversarial example detection defense across the whole perturbation space.
In the experiments based on the MNIST dataset, a convolutional neural network CNN_1 was used for the experiments and analysis. Its structure is Conv(16,4,4)+ReLU, Conv(32,4,4)+ReLU, FC(100), FC, where Conv denotes a convolutional layer and FC a fully connected layer. The experimental results were also migrated to a ResNet-18 model to further analyze their regularity and verify their effectiveness. In the specific experiments, the upper perturbation limit of adversarial training was set to epsilon=0.3, and the perturbation size of the adversarial examples was increased in steps of 0.1. In the PGD attack, the number of attack iterations was set to 40 and the single-step perturbation size to 0.01 (the ℓ∞ norm was chosen over the ℓ2 norm for PGD adversarial example generation, since during the experiments the aggressiveness of ℓ∞-norm PGD adversarial examples was found to be significantly higher than that of ℓ2-norm ones). In the C&W attack, the initial value of the constant term c was set to 1 and its upper limit to 1e10 for dynamic binary search; the number of binary-search steps was set to 5 and the maximum number of iterations to 1000.
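For reference, a sketch of the CNN_1 architecture as described; stride, padding, and the final FC width of 11 (10 classes plus the filling class) are assumptions where the text is silent.

```python
import torch.nn as nn

class CNN1(nn.Module):
    """CNN_1 from the MNIST experiments (sketch)."""
    def __init__(self, num_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4), nn.ReLU(),   # Conv(16,4,4) + ReLU
            nn.Conv2d(16, 32, kernel_size=4), nn.ReLU(),  # Conv(32,4,4) + ReLU
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),                # FC(100)
            nn.Linear(100, num_classes),                  # final FC
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```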
Table 1. MNIST: defensive robustness of each model in the white-box attack scenario
Table 1 shows the complete white-box test results. Nine different adversarial attack methods are included: PGD attacks with different perturbation sizes and 40 iterations (superscript T denotes a targeted attack with default target label 2, U an untargeted attack); C&W attacks with 5 restarts and 1000 iterations (the default upper limit of the constant term c is 1e10); CW_padding, an adaptive attack aimed at filled adversarial training; and the automatic attack AA (AutoAttack), a strong combined adversarial attack comprising APGD-CE, APGD-DLR, FAB, and the black-box attack Square. The default perturbation size of adversarial training is 0.3. Unlike the other adversarial training methods, filled adversarial training reports accuracy both before and after detection.
As can be seen from Table 1, filled adversarial training loses little of, or fully retains, the robustness provided by standard adversarial training under low perturbation (default epsilon=0.3, set in the adversarial training phase), while providing an extremely high adversarial example detection rate under high perturbation (epsilon=0.6). By contrast, in the targeted attack scenario, almost all adversarial training methods show good adversarial robustness under low perturbation, while under high perturbation all adversarial training defenses, including filled adversarial training, are very vulnerable. To explore the reason for this phenomenon, a comparative experiment was performed using a targeted FGSM attack under high perturbation.
Table 2. Adversarial robustness of different models in the CIFAR-10 white-box attack scenario
To further verify the effectiveness and generalization of filled adversarial training, the settings from the MNIST experiments were migrated to the CIFAR-10 and Tiny-ImageNet datasets and to more network models with different architectures. As shown in Table 2, the experimental results on CIFAR-10 show that the adversarial robustness of the model improves obviously in every perturbation interval, especially the high-perturbation interval. In addition, filled adversarial training provides extra adversarial example detection protection, and when the perturbation is large enough its detection rate peaks. For example, on CIFAR-10 with epsilon=64/255, the accuracy of GoogLeNet before adversarial example detection tends to 0, while the adversarial example detection rate reaches 92.4%.
Table 3. Tiny-ImageNet: generalization verification of the filled adversarial training defense
Table 3 shows the robustness of standard PGD adversarial training and of the PGD-based filled adversarial training method on the Tiny-ImageNet dataset when facing PGD attacks with 40 iterations. As shown in Table 3, the models exhibit defensive behavior similar to that on MNIST and CIFAR-10 under low-perturbation (epsilon=8/255) and high-perturbation (epsilon=16/255) conditions.
Table 4 shows the defensive performance of the method in the face of black-box attacks, where M_native denotes the normally trained CNN_1 network model; M_FGSM-AT and M_PGD-AT denote the CNN_1 model after FGSM and PGD adversarial training respectively; M_P-AT(FGSM) denotes the CNN_1 model after filled adversarial training built on FGSM-based adversarial training; and M_native and M_res_pgd denote the normally trained ResNet-18 model and the ResNet-18 model after PGD adversarial training. Filled adversarial training keeps adversarial robustness similar to standard adversarial training under low perturbation and has extremely high adversarial example detection efficiency under high perturbation. Furthermore, under the same experimental conditions, all pre-detection accuracies of filled adversarial training are lower than those of standard adversarial training. This suggests that, with the help of trap-type smoothing, the data distribution of each target class becomes more cohesive; the feature space thus freed is then occupied by the labels of the filling-class data. The phenomenon is particularly pronounced under high perturbation.
Table 4. Robustness of the various adversarial training methods in the MNIST black-box attack scenario
In summary, under both white-box and black-box attack scenarios, the method can provide the model with additional adversarial example detection protection without eroding, and even while enhancing, the robustness provided by conventional adversarial training.

Claims (1)

1. A data-filling adversarial training method for a deep neural network image classification model, characterized by comprising the following steps:
step one: to improve the robustness of the target deep neural network image classification model, a trained deep neural network image classification model applied to a specific task is selected as the target model M; the target model M with k output categories is then modified by enlarging its output layer, yielding the final model M_plus with k+1 output-layer categories;
step two: creating a trap-type smoothing loss function for optimizing the target manifold distribution and creating an induction relation between the target manifold and the filling class distribution, as in formula (1), wherein q is the total class number of the model, alpha is the trap induction factor, y T And y S Output vectors belonging to the clean category and the filling category in the total output vector respectively, y q For the original one-hot type of output vector,and->Respectively represent y q Part of the original output vector belonging to the clean class and the fill class, size (y T ) Representing vector y T The number of elements of (2) is finally output as y T And y S The spliced output vector y, specifically: the trap-type smoothing loss function uses a default one-hot output vector for the label vector belonging to the filling class, and the formula (1) is given for y for the label vector belonging to the original target class T And y S Respectively processing, wherein y is as follows T The target class in the middle smoothes the a probability and distributes probabilities a/2size (y) to all the original classes in a uniform distribution manner T ) For y S Is assigned a probability a/2 (q-size (y) T ) Finally, the smoothed y T And y S The label vector and the model output x are calculated by using a cross entropy loss function after being spliced;
step three: selecting a basic countermeasure training method, and generating corresponding filling type data by using a filling type data generating method shown in a formula (2) for the countermeasure samples generated by each small-batch training set, wherein x is adv To counter the samples, the random () function returns random numbers which are in the same format as the input data and conform to normal distribution, sign () sign function gives positive and negative values corresponding to disturbance according to positive and negative of parameters, l (Epoch) represents one-dimensional linear interpolation function interpolation () which dynamically adjusts the size of disturbance vector according to the current cycle number and total cycle number Epoch according to the cycle base Epoch, uses standard deviation std of training data as the minimum value of linear interpolation, regularizes the initial interpolation value size using super parameter beta, uses data normalization input upper limit x max As linear interpolationAnd using the super parameter eta as a control parameter generated by filling type data;
step four: according to the pixel size and the model volume of the image dataset, manually and dynamically selecting the proportion parameters of the filling class data for the countermeasure training in each small batch, splicing the filling class data with the countermeasure sample by using the proportion and participating in the retraining process of the countermeasure training, in particular: the larger the image pixels, the lower the proportion parameters of the filling class data applied to the countermeasure training in each small batch, and the data class of the newly generated filling class data is k+1;
step five: using early-stop pairing model M plus Performing countermeasure training, specifically, updating the historical optimal accuracy of the model in the countermeasure attack test each time, when the training cycle is finished, comparing the accuracy of the model updated in the cycle with the historical optimal accuracy when facing the same countermeasure attack test method, and when the model accuracy in the cycle is lower than the historical optimal accuracy and exceeds a threshold value, stopping model training and saving model parameters, specifically: the threshold default value is 0.1;
step six: testing the model parameters saved in the fifth step by using different anti-attack methods, and adjusting trap type smoothing factors in the second step, parameters beta and eta in the third filling type data generation method and proportion parameters in the fourth step according to test results to perform parameter optimization experiments, specifically: when model M plus Reducing the trap smoothing factor size, increasing beta in step three to increase the initial distance of the filling class data generation method, reducing eta in step three to reduce the intrusion of filling class data to a target manifold, and reducing the splicing proportion parameter in each cycle of filling class data in step four when the model M is used for plus When the challenge sample detection is not well performed in a high disturbance challenge environment, the method should be properly carried outIncreasing the trap type smoothing factor, reducing beta in the third step to enlarge the generation search space of the filling type data, and increasing the splicing proportion parameter of the filling type data in each cycle in the fourth step, when the model M plus The robustness of the model M is improved to the expected effect of the algorithm, while the challenge robustness provided by the challenge training can be maintained under low-disturbance attack conditions and while the disturbance global can provide detection defenses against the samples.
CN202310697116.4A 2023-06-13 2023-06-13 Data-filling adversarial training method for a deep neural network image classification model Pending CN116824232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310697116.4A CN116824232A (en) 2023-06-13 2023-06-13 Data-filling adversarial training method for a deep neural network image classification model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310697116.4A CN116824232A (en) 2023-06-13 2023-06-13 Data-filling adversarial training method for a deep neural network image classification model

Publications (1)

Publication Number Publication Date
CN116824232A 2023-09-29

Family

ID=88113836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310697116.4A Pending CN116824232A (en) 2023-06-13 2023-06-13 Data filling type deep neural network image classification model countermeasure training method

Country Status (1)

Country Link
CN (1) CN116824232A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876221A (en) * 2024-03-12 2024-04-12 大连理工大学 Robust image splicing method based on neural network structure search


Similar Documents

Publication Publication Date Title
CN109948658B (en) Feature diagram attention mechanism-oriented anti-attack defense method and application
CN108596258B (en) Image classification method based on convolutional neural network random pooling
CN109639710B (en) Network attack defense method based on countermeasure training
CN110334749B (en) Anti-attack defense model based on attention mechanism, construction method and application
CN111598210B (en) Anti-attack defense method for anti-attack based on artificial immune algorithm
CN113642717B (en) Convolutional neural network training method based on differential privacy
CN112115973A (en) Convolutional neural network based image identification method
CN112085050A (en) Antagonistic attack and defense method and system based on PID controller
CN111047054A (en) Two-stage countermeasure knowledge migration-based countermeasure sample defense method
CN112580728B (en) Dynamic link prediction model robustness enhancement method based on reinforcement learning
CN116824232A (en) Data filling type deep neural network image classification model countermeasure training method
CN115907029B (en) Method and system for defending against federal learning poisoning attack
CN112434213A (en) Network model training method, information pushing method and related device
CN113033822A (en) Antagonistic attack and defense method and system based on prediction correction and random step length optimization
CN114626042B (en) Face verification attack method and device
CN116996272A (en) Network security situation prediction method based on improved sparrow search algorithm
CN111881439A (en) Recognition model design method based on antagonism regularization
CN113935396A (en) Manifold theory-based method and related device for resisting sample attack
CN117408991A (en) Image anomaly detection method and device based on reconstruction resistance and storage medium
CN114444690B (en) Migration attack method based on task augmentation
Dai et al. A targeted universal attack on graph convolutional network
CN113554104B (en) Image classification method based on deep learning model
CN115294424A (en) Sample data enhancement method based on generation countermeasure network
Luo et al. Content-adaptive adversarial embedding for image steganography using deep reinforcement learning
CN115271067B (en) Android anti-sample attack method based on feature relation evaluation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination