CN112232434A - Attack-resisting cooperative defense method and device based on correlation analysis - Google Patents

Attack-resisting cooperative defense method and device based on correlation analysis

Info

Publication number
CN112232434A
CN112232434A (application CN202011180916.1A)
Authority
CN
China
Prior art keywords
defense
correlation
divergence
model
attack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011180916.1A
Other languages
Chinese (zh)
Other versions
CN112232434B (en)
Inventor
陈晋音
陈若曦
郑晓雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011180916.1A priority Critical patent/CN112232434B/en
Publication of CN112232434A publication Critical patent/CN112232434A/en
Application granted granted Critical
Publication of CN112232434B publication Critical patent/CN112232434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/24 - Character recognition characterised by the processing or recognition method
    • G06V30/248 - Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/24 - Character recognition characterised by the processing or recognition method
    • G06V30/248 - Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2552 - Combination of methods, e.g. classifiers, working on different input data, e.g. sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an anti-attack cooperative defense method and device based on correlation analysis, comprising the following steps: (1) acquire the adversarial images corresponding to each attack method; (2) apply a plurality of defense methods to the deep learning model respectively to obtain a plurality of defense models; (3) calculate a first divergence correlation between any two defense models according to their prediction confidences; (4) calculate a second divergence correlation between any one defense model and the deep learning model according to the prediction confidence of that defense model and of the deep learning model; (5) after determining the two defense models corresponding to the maximum first divergence correlation, select the two second divergence correlations corresponding to these two defense models, and let the defense model with the larger and then the one with the smaller second divergence correlation perform anti-attack defense recognition in sequence. A fast and efficient defense combination can thus be selected to resist adversarial attacks.

Description

Attack-resisting cooperative defense method and device based on correlation analysis
Technical Field
The invention belongs to the field of image recognition security, and particularly relates to an anti-attack cooperative defense method and device based on correlation analysis.
Background
Deep learning technology, as part of artificial intelligence, is widely applied to tasks such as human-computer interaction, safety protection and unmanned driving by virtue of its good performance in computer vision, natural language processing, complex network analysis and other fields. As it gradually replaces humans in autonomous decision-making, deep learning remains vulnerable to adversarial attacks, which brings risks to network security, data security and information security. Research on the safety and robustness of deep learning technology is therefore key to the reliable application of artificial intelligence.
At the same time, machine learning models are often susceptible to errors arising from adversarial manipulation of their inputs. Small perturbations imperceptible to the human eye can cause a deep learning model to misclassify the perturbed image. In computer vision (image classification and recognition), adversarial attacks are typified by FGSM, BIM, PGD, C&W, DeepFool and the like. Adversarial attacks also exist in autoencoders, reinforcement learning, semantic segmentation and object detection, and in real-world scenarios such as face recognition and guideboard recognition.
With the increasing application of artificial intelligence, the safety of deep learning models matters more and more in face recognition, self-driving and financial credit. The vulnerability of deep models poses a great potential threat to applications with strict security requirements, so studying defenses against adversarial attacks is of great significance. Szegedy, Goodfellow et al. proposed adversarial training, injecting adversarial samples into the training set to enhance the robustness of a neural network against specific adversarial attacks; but this leaves the target model with weak generalization when facing combined attacks of various types. For this reason, Miyato et al. and Zheng et al. proposed virtual adversarial training and stability training, respectively, to enhance the defense effect. Xie et al. found that randomly transforming the input dimensions can effectively reduce the performance of adversarial sample attacks. Papernot et al. designed a defense based on the concept of "distillation", extending the defensive distillation method by solving its numerical instability problem; this approach modifies the network parameters, but at a significant cost. Ross and Doshi-Velez improve the defense against attacks by gradient regularization, but the computational complexity is doubled. Meng and Chen proposed MagNet, a perturbation defense framework based on multiple external detectors that is effective against the most advanced black-box and gray-box attacks without sacrificing the false-alarm rate on benign samples.
As research continues, more defense methods will appear, but using them efficiently becomes a new problem: which defense method or methods should be selected for a given application, which application occasion suits each defense method, and how should different defense methods be combined to exert the maximum defense effect? There is currently no fast and efficient way to guide cooperative defense.
Disclosure of Invention
In order to overcome the defects that existing integrated defense methods are inefficient and cannot maximize the defense effect, the invention provides an anti-attack cooperative defense method and device based on correlation analysis, by which a fast and efficient defense combination can be selected to resist adversarial attacks.
The technical scheme adopted by the invention for solving the technical problems is as follows:
In a first aspect, an anti-attack cooperative defense method based on correlation analysis includes the following steps:
(1) in the process of recognizing original images with a deep learning model, attack the deep learning model with a plurality of attack methods respectively to obtain the adversarial image corresponding to each attack method;
(2) prepare a plurality of defense methods and apply them to the deep learning model respectively to obtain a plurality of defense models;
(3) calculate the first divergence correlation between any two defense models according to their prediction confidences for all types of adversarial images, forming a first divergence correlation set;
(4) calculate the second divergence correlation between any one defense model and the deep learning model according to the prediction confidences of that defense model and of the deep learning model for all types of adversarial images, forming a second divergence correlation set;
(5) select the maximum first divergence correlation from the first divergence correlation set and determine the two defense models corresponding to it; select the two second divergence correlations corresponding to these two defense models from the second divergence correlation set; let the defense model corresponding to the larger second divergence correlation perform the first round of anti-attack defense recognition, and, based on its result, let the defense model corresponding to the smaller one perform the second round.
Preferably, the first divergence correlation may be a first KL divergence correlation and/or a first JS divergence correlation; the second divergence correlation can be a second KL divergence correlation and/or a second JS divergence correlation.
Preferably, when the first divergence correlation is a first KL divergence correlation and the second divergence correlation is a second KL divergence correlation, the first KL divergence correlation between two defense models is calculated according to equations (1) and (2):

$$\mathrm{KL}_{ij}^{k}=\int F_i^{k}(x)\,\log\frac{F_i^{k}(x)}{F_j^{k}(x)}\,\mu(x)\,dx \tag{1}$$

$$\mathrm{KL}_{ij}=\mathrm{ave}\big(\mathrm{KL}_{ij}^{1},\ldots,\mathrm{KL}_{ij}^{N}\big) \tag{2}$$

wherein $\mathrm{KL}_{ij}^{k}$ represents the first KL divergence correlation between the ith and jth defense models for the kth type of attack, $F_i^{k}(x)$ represents the prediction confidence of the ith defense model for the kth type of adversarial image, $F_j^{k}(x)$ represents the prediction confidence of the jth defense model for the kth type of adversarial image, $\mu(x)$ represents the sampling density of the input image x, $\mathrm{KL}_{ij}$ represents the first KL divergence correlation between the ith and jth defense models, ave(·) represents the average value, log(·) represents the logarithmic function, ∫ represents the integral sign, and N is a natural number greater than or equal to 2;

the second KL divergence correlation between any one defense model and the deep learning model is calculated according to equations (3) and (4):

$$\mathrm{KL}_{i}^{k}=\int F_i^{k}(x)\,\log\frac{F_i^{k}(x)}{F_{ori}^{k}(x)}\,\mu(x)\,dx \tag{3}$$

$$\mathrm{KL}_{i}=\mathrm{ave}\big(\mathrm{KL}_{i}^{1},\ldots,\mathrm{KL}_{i}^{N}\big) \tag{4}$$

wherein $\mathrm{KL}_{i}^{k}$ represents the second KL divergence correlation between the ith defense model and the deep learning model for the kth type of attack, $F_{ori}^{k}(x)$ represents the prediction confidence of the deep learning model for the kth type of adversarial image, and $\mathrm{KL}_{i}$ represents the second KL divergence correlation between the ith defense model and the deep learning model.
Preferably, when the first divergence correlation is a first JS divergence correlation and the second divergence correlation is a second JS divergence correlation, the first JS divergence correlation between two defense models is calculated according to equations (5) and (6):

$$\mathrm{JS}_{ij}^{k}=\frac{1}{2}\,\mathrm{KL}\Big(F_i^{k}\,\Big\|\,\frac{F_i^{k}+F_j^{k}}{2}\Big)+\frac{1}{2}\,\mathrm{KL}\Big(F_j^{k}\,\Big\|\,\frac{F_i^{k}+F_j^{k}}{2}\Big) \tag{5}$$

$$\mathrm{JS}_{ij}=\mathrm{ave}\big(\mathrm{JS}_{ij}^{1},\ldots,\mathrm{JS}_{ij}^{N}\big) \tag{6}$$

wherein $\mathrm{JS}_{ij}^{k}$ represents the first JS divergence correlation between the ith and jth defense models for the kth type of attack, KL(·‖·) denotes the KL divergence of equation (1), and $\mathrm{JS}_{ij}$ represents the first JS divergence correlation between the ith and jth defense models;

the second JS divergence correlation between any one defense model and the deep learning model is calculated according to equations (7) and (8):

$$\mathrm{JS}_{i}^{k}=\frac{1}{2}\,\mathrm{KL}\Big(F_i^{k}\,\Big\|\,\frac{F_i^{k}+F_{ori}^{k}}{2}\Big)+\frac{1}{2}\,\mathrm{KL}\Big(F_{ori}^{k}\,\Big\|\,\frac{F_i^{k}+F_{ori}^{k}}{2}\Big) \tag{7}$$

$$\mathrm{JS}_{i}=\mathrm{ave}\big(\mathrm{JS}_{i}^{1},\ldots,\mathrm{JS}_{i}^{N}\big) \tag{8}$$

wherein $\mathrm{JS}_{i}^{k}$ represents the second JS divergence correlation between the ith defense model and the deep learning model for the kth type of attack, and $\mathrm{JS}_{i}$ represents the second JS divergence correlation between the ith defense model and the deep learning model.
In the invention, the attack methods include FGSM (Fast Gradient Sign Method), IGSM, C&W $l_2$, DeepFool, PGD (Projected Gradient Descent), GA (genetic algorithm), PSO (particle swarm optimization) and CS (cuckoo search).
The defense methods comprise three classes, namely data modification defense, model modification defense and additional network defense; the prepared plurality of defense methods each select at least one of these three classes.
Preferably, when the first KL divergence correlation and the first JS divergence correlation, and the second KL divergence correlation and the second JS divergence correlation, are calculated at the same time, and the defense effect of the defense model selected according to the KL divergence correlation is inconsistent with that of the defense model selected according to the JS divergence correlation, the defense model selected according to the KL divergence correlation prevails, and the anti-attack defense recognition of the adversarial image is performed with it.
In a second aspect, an anti-attack cooperative defense device based on correlation analysis includes a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor; the computer processor implements the above anti-attack cooperative defense method based on correlation analysis when executing the computer program.
Compared with the prior art, the invention has the beneficial effects that at least:
in the attack-resisting cooperative defense method for correlation analysis, the contribution of different defense methods to the model is evaluated by using divergence in a high-dimensional characteristic space, so that the cooperative optimization of the defense method is realized. The method is suitable for various models and various attacks, and experimental results on real images show that the method for defending against the attack in a cooperative mode has good applicability and precision, is fast and efficient, and achieves a good defense effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of the anti-attack cooperative defense method based on correlation analysis provided by an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
To overcome the defects that existing integrated defense methods are inefficient and cannot maximize the defense effect, the embodiment provides an anti-attack cooperative defense method and device based on correlation analysis. The method and device are applicable to various models and data sets and achieve the effect of defending against adversarial attacks.
Fig. 1 is a flowchart of the anti-attack cooperative defense method based on correlation analysis provided by the embodiment. As shown in Fig. 1, the method includes the following steps:
step 1, obtaining a counterimage corresponding to each attack method as a countersample.
In the embodiment, in the process of identifying the original image by taking the deep learning model M as a target model, the deep learning model M is attacked by using 4 attack methods, and 10000 confrontation samples are generated respectively. The 4 attack methods are FGSM, IGSM and C respectively&Wl2Deepfol. The following detailed description is made for the process of obtaining challenge samples for each attack method:
countersample for FGSM attack method
Figure BDA0002750154350000071
Comprises the following steps:
Figure BDA0002750154350000072
wherein the content of the first and second substances,
Figure BDA0002750154350000073
for the original image of the i-th type,
Figure BDA0002750154350000074
is the correct label for the original image,
Figure BDA0002750154350000075
for the prediction output of the deep learning model M, L (-) is a Loss function Loss,
Figure BDA0002750154350000076
denotes derivation of the original image x, sgn (-) denotes sign function, ε is the magnitude of the control disturbanceIs determined.
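As an illustration only, a minimal sketch of the FGSM step of formula (1) follows, written in PyTorch (an assumed framework; the patent does not prescribe an implementation). The names `model`, `x`, `y` and the [0, 1] pixel range are hypothetical stand-ins.

```python
# A minimal FGSM sketch (formula (1)), assuming PyTorch; `model`, `x`, `y`
# and the [0, 1] pixel range are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """x_adv = x + eps * sgn(grad_x L(f_M(x), y_true))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # L(f_M(x), y_true)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # single-step sign update
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```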
For the IGSM attack method, the adversarial sample is generated iteratively:

$$x_i^{adv,j}=\mathrm{clip}_{\varepsilon}\Big(x_i^{adv,j-1}+\alpha\cdot \mathrm{sgn}\big(\nabla_x L\big(f_M(x_i^{adv,j-1}),\,y_i^{true}\big)\big)\Big) \tag{2}$$

wherein $x_i^{adv,j}$ represents the adversarial sample generated at the jth iteration, $x_i^{adv,j-1}$ represents the adversarial sample generated at the (j-1)th iteration and input to the jth iteration (with $x_i^{adv,0}$ the original image of the ith type), $\mathrm{clip}_{\varepsilon}(\cdot)$ represents the clip function confining its value within the perturbation ε, and α represents the step size, typically set to 1.
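A corresponding IGSM sketch, iterating the single-step update under the clip_ε projection of formula (2), continuing the PyTorch conventions assumed above; the step size and iteration count here are illustrative assumptions (the text sets α = 1 on a 0-255 pixel scale).

```python
# IGSM (formula (2)): iterate the FGSM step and project back into the
# eps-ball around the original image (clip_eps). alpha and iters are
# illustrative choices, not values mandated by the text.
def igsm(model, x, y, eps, alpha=1.0 / 255, iters=10):
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # clip_eps projection
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```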
For the C&W $l_2$ attack method, the adversarial perturbation ρ is obtained by solving:

$$\min_{\rho}\; D\big(x_i^{ori},\,x_i^{ori}+\rho\big)+c\cdot L\big(x_i^{ori}+\rho\big) \tag{3}$$

wherein ρ is the incremental perturbation, D(·,·) is the distance metric under the two-norm, c is the weight parameter, and $L(\cdot)$ is the attack's loss term; the adversarial sample is $x_i^{adv}=x_i^{ori}+\rho$.
For the DeepFool attack method, the adversarial sample $x_i^{adv}$ is obtained by repeatedly iterating the disturbance until the picture is misclassified, giving $x_i^{adv}=x_i^{ori}+\sum_i r_i$. The modification of the image in each iteration is computed as:

$$r_i=-\frac{f(x_i)}{\|\nabla f(x_i)\|_2^{2}}\,\nabla f(x_i),\qquad x_{i+1}=x_i+r_i \tag{4}$$

wherein $r_i$ is the disturbance to be added to the image at the ith iteration and $f(\cdot)$ is the decision function of the model (the two-class form is shown).
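A sketch of the DeepFool update of formula (4) in its two-class form, again under the assumed PyTorch conventions; `decision_fn` is a hypothetical differentiable scalar decision function whose sign gives the class, and the overshoot factor is an implementation assumption.

```python
# DeepFool update of formula (4), two-class form; `decision_fn` is a
# hypothetical differentiable scalar decision function (sign = class).
def deepfool_binary(decision_fn, x, max_iters=50, overshoot=0.02):
    x_adv = x.clone().detach()
    orig_sign = torch.sign(decision_fn(x_adv)).item()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        out = decision_fn(x_adv)
        if torch.sign(out).item() != orig_sign:      # misclassified: stop
            break
        grad, = torch.autograd.grad(out, x_adv)
        r = -(out / grad.norm() ** 2) * grad         # r_i of formula (4)
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```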
and 2, modeling the defense method and obtaining a defense model.
Given a deep learning model M, for any input image $x\in R^n$ there is an output $y=f_M(x)$, where $y\in R^m$ and the function $f_M(\cdot)$ represents the mapping of the deep learning model from input to output. After a defense method is applied to the model, the output of the model is represented as a new mapping $y=f'(x)$ of the input.
Current defenses against deep learning adversarial attacks develop in three main directions: data modification defense, model modification defense and additional network defense. For the first class, data modification defense, the input data is re-represented as $x'=f_I(x)$, where x denotes the original input and $f_I(\cdot)$ the mapping function of the data modification method, such as adversarial training or data preprocessing by interpolation, fitting and the like. After adding the defense method $f_I$, the output of the model becomes:

$$y=f_M(x')=f_M(f_I(x))=F_I(x) \tag{5}$$

wherein $F_I(\cdot)$ is the equivalent representation of the original model after adding the data modification defense method.
For the second class, model modification defenses, such as gradient regularization and defensive distillation, and the third class, additional network defenses, such as defending with a GAN, $F_{II}(\cdot)$ denotes the equivalent form of the original model with a model modification defense added, and $F_{III}(\cdot)$ the equivalent form with an additional network defense added.
Define a defense method pool Defense = {A, B, …}, where every defense method in the pool is chosen from data modification defense, model modification defense and additional network defense. Select a deep learning model M with y = f(x) and copy its parameters several times to obtain multiple identical deep learning models; apply each defense method in the pool to one copy, so that each deep learning model undergoes exactly one defense. For example, applying the pool methods $f_A(\cdot)$ and $f_B(\cdot)$ to the deep learning models yields the defense models $F_A(\cdot)$ and $F_B(\cdot)$.
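A sketch of how the defense pool might be modeled in code: each defense becomes a wrapper producing the equivalent mapping F(·) of equation (5). The bit-depth-squeezing and noise-smoothing preprocessings used here are illustrative choices of data modification defense, not ones mandated by the text; `model` is a hypothetical stand-in.

```python
# Modeling the defense pool of step 2: each defense wraps the original
# model f_M into an equivalent mapping F(x) = f_M(f_I(x)) per equation (5).
def bit_depth_squeeze(x, bits=4):
    scale = 2 ** bits - 1
    return (x * scale).round() / scale

def make_defense_model(model, preprocess):
    def defended(x):
        return model(preprocess(x))          # F(x) = f_M(f_I(x))
    return defended

# Hypothetical pool Defense = {A, B, ...}; each entry applies exactly one
# defense to (a copy of) the same deep learning model.
defense_pool = {
    "A": lambda m: make_defense_model(m, bit_depth_squeeze),
    "B": lambda m: make_defense_model(
        m, lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0, 1)),
}
# defense_models = {name: build(model) for name, build in defense_pool.items()}
```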
Step 3: calculate the first divergence correlations among the different defense models.
In the embodiment, let $F_i(\cdot)$ and $F_j(\cdot)$ be the equivalent mappings obtained after two different defense methods act on the same deep learning model. Defense correlation indices between the different equivalent forms are defined under optimal transport theory as follows.

The defense correlation index based on KL divergence (KL-DC) is defined as:

$$\mathrm{KL\text{-}DC}(F_i\,\|\,F_j)=\int F_i(x)\,\log\frac{F_i(x)}{F_j(x)}\,\mu(x)\,dx \tag{6}$$

wherein log(·) represents the logarithmic function, ∫ the integral sign, x the elements on which $F_i(\cdot)$ and $F_j(\cdot)$ act, and μ(x) the sampling density. When there exists a point satisfying $F_j(\cdot)=0$ and $F_i(\cdot)>0$, the KL-DC index is asymmetric and may be infinite. The larger the KL-DC value, the greater the difference between the two defense methods.
The defense correlation index based on JS divergence (JS-DC) is defined as:

$$\mathrm{JS\text{-}DC}(F_i,F_j)=\frac{1}{2}\,\mathrm{KL\text{-}DC}\Big(F_i\,\Big\|\,\frac{F_i+F_j}{2}\Big)+\frac{1}{2}\,\mathrm{KL\text{-}DC}\Big(F_j\,\Big\|\,\frac{F_i+F_j}{2}\Big) \tag{7}$$

Unlike the KL-DC index, the JS-DC index is symmetric and its value lies between 0 and 1. The larger the JS-DC value, the more dissimilar the two defense methods.
Input the adversarial samples generated by each attack into the different defense models, process the output of the last layer of each defense model with softmax to obtain confidence matrices, and concatenate the confidence matrices corresponding to all adversarial samples of each attack. After the adversarial samples corresponding to IGSM, C&W $l_2$, FGSM and DeepFool are passed through defense model A, the outputs are recorded as $F_A^{IGSM}(x)$, $F_A^{C\&W}(x)$, $F_A^{FGSM}(x)$ and $F_A^{DeepFool}(x)$, and so on for the other defense models.
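A sketch of this confidence-matrix construction, reusing the PyTorch conventions assumed earlier; `defended` and `adv_loader` are hypothetical stand-ins for a defense model and a loader over one attack's adversarial samples.

```python
# Confidence matrices: softmax over last-layer outputs, concatenated over
# all of one attack's adversarial samples.
@torch.no_grad()
def confidence_matrix(defended, adv_loader):
    confs = [torch.softmax(defended(xb), dim=1) for xb, _ in adv_loader]
    return torch.cat(confs).cpu().numpy()    # shape: (num_samples, num_classes)

# e.g. conf["A"]["FGSM"] = confidence_matrix(defense_models["A"], fgsm_loader)
```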
Taking defense model A and defense model B as an example, the first KL divergence correlation $\mathrm{KL}_{AB}$ between them is calculated as:

$$\mathrm{KL}_{AB}^{k}=\int F_A^{k}(x)\,\log\frac{F_A^{k}(x)}{F_B^{k}(x)}\,\mu(x)\,dx,\qquad k\in\{\mathrm{FGSM},\,\mathrm{IGSM},\,\mathrm{C\&W}\ l_2,\,\mathrm{DeepFool}\}$$

$$\mathrm{KL}_{AB}=\mathrm{ave}\big(\mathrm{KL}_{AB}^{FGSM},\,\mathrm{KL}_{AB}^{IGSM},\,\mathrm{KL}_{AB}^{C\&W},\,\mathrm{KL}_{AB}^{DeepFool}\big)$$
Taking defense model A and defense model B as an example, the first JS divergence correlation $\mathrm{JS}_{AB}$ between them is calculated as:

$$\mathrm{JS}_{AB}^{k}=\frac{1}{2}\,\mathrm{KL\text{-}DC}\Big(F_A^{k}\,\Big\|\,\frac{F_A^{k}+F_B^{k}}{2}\Big)+\frac{1}{2}\,\mathrm{KL\text{-}DC}\Big(F_B^{k}\,\Big\|\,\frac{F_A^{k}+F_B^{k}}{2}\Big),\qquad k\in\{\mathrm{FGSM},\,\mathrm{IGSM},\,\mathrm{C\&W}\ l_2,\,\mathrm{DeepFool}\}$$

$$\mathrm{JS}_{AB}=\mathrm{ave}\big(\mathrm{JS}_{AB}^{FGSM},\,\mathrm{JS}_{AB}^{IGSM},\,\mathrm{JS}_{AB}^{C\&W},\,\mathrm{JS}_{AB}^{DeepFool}\big)$$
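Forming the first divergence correlation set over all defense-model pairs might then look as follows, reusing the `kl_dc` estimator sketched above; `conf[name][atk]` is an assumed nested dictionary of confidence matrices.

```python
# First divergence correlation set: average the per-attack KL-DC over the
# four attacks for every pair of defense models (as in the KL_AB example).
from itertools import combinations

ATTACKS = ["FGSM", "IGSM", "CW_l2", "DeepFool"]

def first_divergence_set(conf, names):
    return {(a, b): float(np.mean([kl_dc(conf[a][k], conf[b][k]) for k in ATTACKS]))
            for a, b in combinations(names, 2)}
```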
and 4, calculating a second divergence correlation between the defense model and the original deep learning model.
Input the adversarial samples generated by each attack into the original deep learning model, process the output of the last layer of the model with softmax to obtain confidence matrices, and concatenate the confidence matrices corresponding to all adversarial samples of each attack. After the adversarial samples corresponding to IGSM, C&W $l_2$, FGSM and DeepFool are passed through the original deep learning model, the outputs are recorded as $F_{ori}^{IGSM}(x)$, $F_{ori}^{C\&W}(x)$, $F_{ori}^{FGSM}(x)$ and $F_{ori}^{DeepFool}(x)$.
Taking defense model A and the original deep learning model as an example, the second KL divergence correlation $\mathrm{KL}_{A}$ between them is calculated as:

$$\mathrm{KL}_{A}^{k}=\int F_A^{k}(x)\,\log\frac{F_A^{k}(x)}{F_{ori}^{k}(x)}\,\mu(x)\,dx,\qquad k\in\{\mathrm{FGSM},\,\mathrm{IGSM},\,\mathrm{C\&W}\ l_2,\,\mathrm{DeepFool}\}$$

$$\mathrm{KL}_{A}=\mathrm{ave}\big(\mathrm{KL}_{A}^{FGSM},\,\mathrm{KL}_{A}^{IGSM},\,\mathrm{KL}_{A}^{C\&W},\,\mathrm{KL}_{A}^{DeepFool}\big)$$
Taking defense model A and the original deep learning model as an example, the second JS divergence correlation $\mathrm{JS}_{A}$ between them is calculated as:

$$\mathrm{JS}_{A}^{k}=\frac{1}{2}\,\mathrm{KL\text{-}DC}\Big(F_A^{k}\,\Big\|\,\frac{F_A^{k}+F_{ori}^{k}}{2}\Big)+\frac{1}{2}\,\mathrm{KL\text{-}DC}\Big(F_{ori}^{k}\,\Big\|\,\frac{F_A^{k}+F_{ori}^{k}}{2}\Big),\qquad k\in\{\mathrm{FGSM},\,\mathrm{IGSM},\,\mathrm{C\&W}\ l_2,\,\mathrm{DeepFool}\}$$

$$\mathrm{JS}_{A}=\mathrm{ave}\big(\mathrm{JS}_{A}^{FGSM},\,\mathrm{JS}_{A}^{IGSM},\,\mathrm{JS}_{A}^{C\&W},\,\mathrm{JS}_{A}^{DeepFool}\big)$$
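The second divergence correlation set can be formed the same way against the undefended model, reusing `kl_dc` and `ATTACKS` from the sketches above; `conf_ori[atk]` is the assumed confidence matrix of the original model under attack `atk`.

```python
# Second divergence correlation set: each defense model against the
# original (undefended) model, averaged over the same four attacks.
def second_divergence_set(conf, conf_ori, names):
    return {a: float(np.mean([kl_dc(conf[a][k], conf_ori[k]) for k in ATTACKS]))
            for a in names}
```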
and 5, selecting a defense model for cooperative defense based on the first divergence correlation and the second divergence correlation.
For the same deep learning model, at least two defense methods act on it at the same time; different types of defense methods and different orders of application lead to different optimal cooperative defense combinations and defense orders.
First, select the maximum first divergence correlation and determine the two defense models corresponding to it; then select the two second divergence correlations corresponding to these two defense models. The defense model corresponding to the larger of the two second divergence correlations performs the first round of anti-attack defense recognition, and, based on its result, the defense model corresponding to the smaller one performs the second round. This maximizes the defense effect.
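A sketch of this selection rule, assuming the `kl_set` / `kl_ori` dictionaries produced by the estimator sketches above:

```python
# Step 5 selection: pick the most dissimilar defense pair (largest first
# divergence correlation), then defend with the larger-second-divergence
# model first and the smaller one second.
def select_cooperative_defense(kl_set, kl_ori):
    a, b = max(kl_set, key=kl_set.get)        # most dissimilar pair
    return (a, b) if kl_ori[a] >= kl_ori[b] else (b, a)

# first, second = select_cooperative_defense(
#     first_divergence_set(conf, names),
#     second_divergence_set(conf, conf_ori, names))
```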
When the first KL divergence correlation and the first JS divergence correlation, and the second KL divergence correlation and the second JS divergence correlation, are calculated at the same time, and the defense effect of the defense model selected according to the KL divergence correlation is inconsistent with that of the defense model selected according to the JS divergence correlation, the defense model selected according to the KL divergence correlation prevails in the anti-attack defense recognition of the adversarial image.
The embodiments also provide an anti-attack cooperative defense device based on correlation analysis, which includes a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor; the computer processor implements the above anti-attack cooperative defense method based on correlation analysis when executing the computer program.
In practical applications, the computer memory may be near-end volatile memory, such as RAM, or non-volatile memory, such as ROM, FLASH, a floppy disk or a mechanical hard disk, or remote cloud storage. The computer processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP) or a field-programmable gate array (FPGA); the steps of the anti-attack cooperative defense method based on correlation analysis can be implemented by these processors.
The anti-attack cooperative defense method and device based on correlation analysis provided by the embodiment can be applied to fields such as face recognition, automatic driving and guideboard recognition. When applied to face recognition, the original images are face images and the deep learning model is used for face recognition; the adversarial samples and defense models can be constructed with the above attack and defense methods, and the optimal defense models are then selected, according to the anti-attack cooperative defense method based on correlation analysis, to perform face anti-attack defense recognition in sequence.
In the anti-attack cooperative defense method and device based on correlation analysis, the contributions of different defense methods to the model are evaluated with divergences in a high-dimensional feature space, realizing cooperative optimization of the defense methods. The approach is applicable to various models and various attacks; experimental results on real images show that it has good applicability and precision, is fast and efficient, and achieves a good defense effect.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. An anti-attack cooperative defense method based on correlation analysis, characterized by comprising the following steps:
(1) in the process of recognizing original images with a deep learning model, attacking the deep learning model with a plurality of attack methods respectively to obtain the adversarial image corresponding to each attack method;
(2) preparing a plurality of defense methods and applying them to the deep learning model respectively to obtain a plurality of defense models;
(3) calculating the first divergence correlation between any two defense models according to their prediction confidences for all types of adversarial images, forming a first divergence correlation set;
(4) calculating the second divergence correlation between any one defense model and the deep learning model according to the prediction confidences of that defense model and of the deep learning model for all types of adversarial images, forming a second divergence correlation set;
(5) selecting the maximum first divergence correlation from the first divergence correlation set and determining the two defense models corresponding to it; selecting the two second divergence correlations corresponding to these two defense models from the second divergence correlation set; letting the defense model corresponding to the larger second divergence correlation perform the first round of anti-attack defense recognition, and, based on its result, letting the defense model corresponding to the smaller one perform the second round.
2. The anti-attack cooperative defense method based on correlation analysis according to claim 1, characterized in that the first divergence correlation is a first KL divergence correlation and/or a first JS divergence correlation, and the second divergence correlation is a second KL divergence correlation and/or a second JS divergence correlation.
3. The anti-attack cooperative defense method based on correlation analysis according to claim 2, characterized in that when the first divergence correlation is a first KL divergence correlation and the second divergence correlation is a second KL divergence correlation, the first KL divergence correlation between two defense models is calculated according to equations (1) and (2):

$$\mathrm{KL}_{ij}^{k}=\int F_i^{k}(x)\,\log\frac{F_i^{k}(x)}{F_j^{k}(x)}\,\mu(x)\,dx \tag{1}$$

$$\mathrm{KL}_{ij}=\mathrm{ave}\big(\mathrm{KL}_{ij}^{1},\ldots,\mathrm{KL}_{ij}^{N}\big) \tag{2}$$

wherein $\mathrm{KL}_{ij}^{k}$ represents the first KL divergence correlation between the ith defense model and the jth defense model for the kth type of attack, $F_i^{k}(x)$ represents the prediction confidence of the ith defense model for the kth type of adversarial image, $F_j^{k}(x)$ represents the prediction confidence of the jth defense model for the kth type of adversarial image, μ(x) represents the sampling density of the input image x, $\mathrm{KL}_{ij}$ represents the first KL divergence correlation between the ith defense model and the jth defense model, ave(·) represents the average value, log(·) represents the logarithmic function, ∫ represents the integral sign, and N is a natural number greater than or equal to 2;

and the second KL divergence correlation between any one defense model and the deep learning model is calculated according to equations (3) and (4):

$$\mathrm{KL}_{i}^{k}=\int F_i^{k}(x)\,\log\frac{F_i^{k}(x)}{F_{ori}^{k}(x)}\,\mu(x)\,dx \tag{3}$$

$$\mathrm{KL}_{i}=\mathrm{ave}\big(\mathrm{KL}_{i}^{1},\ldots,\mathrm{KL}_{i}^{N}\big) \tag{4}$$

wherein $\mathrm{KL}_{i}^{k}$ represents the second KL divergence correlation between the ith defense model and the deep learning model for the kth type of attack, $F_{ori}^{k}(x)$ represents the prediction confidence of the deep learning model for the kth type of adversarial image, and $\mathrm{KL}_{i}$ represents the second KL divergence correlation between the ith defense model and the deep learning model.
4. The anti-attack cooperative defense method based on correlation analysis according to claim 3, characterized in that when the first divergence correlation is a first JS divergence correlation and the second divergence correlation is a second JS divergence correlation, the first JS divergence correlation between two defense models is calculated according to equations (5) and (6):

$$\mathrm{JS}_{ij}^{k}=\frac{1}{2}\,\mathrm{KL}\Big(F_i^{k}\,\Big\|\,\frac{F_i^{k}+F_j^{k}}{2}\Big)+\frac{1}{2}\,\mathrm{KL}\Big(F_j^{k}\,\Big\|\,\frac{F_i^{k}+F_j^{k}}{2}\Big) \tag{5}$$

$$\mathrm{JS}_{ij}=\mathrm{ave}\big(\mathrm{JS}_{ij}^{1},\ldots,\mathrm{JS}_{ij}^{N}\big) \tag{6}$$

wherein $\mathrm{JS}_{ij}^{k}$ represents the first JS divergence correlation between the ith defense model and the jth defense model for the kth type of attack, KL(·‖·) denotes the KL divergence of equation (1), and $\mathrm{JS}_{ij}$ represents the first JS divergence correlation between the ith defense model and the jth defense model;

and the second JS divergence correlation between any one defense model and the deep learning model is calculated according to equations (7) and (8):

$$\mathrm{JS}_{i}^{k}=\frac{1}{2}\,\mathrm{KL}\Big(F_i^{k}\,\Big\|\,\frac{F_i^{k}+F_{ori}^{k}}{2}\Big)+\frac{1}{2}\,\mathrm{KL}\Big(F_{ori}^{k}\,\Big\|\,\frac{F_i^{k}+F_{ori}^{k}}{2}\Big) \tag{7}$$

$$\mathrm{JS}_{i}=\mathrm{ave}\big(\mathrm{JS}_{i}^{1},\ldots,\mathrm{JS}_{i}^{N}\big) \tag{8}$$

wherein $\mathrm{JS}_{i}^{k}$ represents the second JS divergence correlation between the ith defense model and the deep learning model for the kth type of attack, and $\mathrm{JS}_{i}$ represents the second JS divergence correlation between the ith defense model and the deep learning model.
5. The anti-attack cooperative defense method based on correlation analysis according to claim 1, characterized in that the attack methods comprise FGSM, IGSM, C&W $l_2$, DeepFool, PGD, GA, PSO and CS.
6. The anti-attack cooperative defense method based on correlation analysis according to claim 1, characterized in that the defense methods comprise three classes, namely data modification defense, model modification defense and additional network defense, and the prepared plurality of defense methods each select at least one of these three classes.
7. The anti-attack cooperative defense method based on correlation analysis according to claim 2, characterized in that when the first KL divergence correlation and the first JS divergence correlation, and the second KL divergence correlation and the second JS divergence correlation, are calculated at the same time, and the defense effect of the defense model selected according to the KL divergence correlation is inconsistent with that of the defense model selected according to the JS divergence correlation, the defense model selected according to the KL divergence correlation prevails in the anti-attack defense recognition of the adversarial image.
8. An anti-attack cooperative defense device based on correlation analysis, comprising a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor, characterized in that the computer processor, when executing the computer program, implements the anti-attack cooperative defense method based on correlation analysis according to any one of claims 1 to 7.
CN202011180916.1A 2020-10-29 2020-10-29 Correlation analysis-based anti-attack cooperative defense method and device Active CN112232434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011180916.1A CN112232434B (en) 2020-10-29 2020-10-29 Correlation analysis-based anti-attack cooperative defense method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011180916.1A CN112232434B (en) 2020-10-29 2020-10-29 Correlation analysis-based anti-attack cooperative defense method and device

Publications (2)

Publication Number Publication Date
CN112232434A true CN112232434A (en) 2021-01-15
CN112232434B CN112232434B (en) 2024-02-20

Family

ID=74109859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011180916.1A Active CN112232434B (en) 2020-10-29 2020-10-29 Correlation analysis-based anti-attack cooperative defense method and device

Country Status (1)

Country Link
CN (1) CN112232434B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487889A (en) * 2021-07-19 2021-10-08 浙江工业大学 Traffic state anti-disturbance generation method based on single intersection signal control of rapid gradient descent
JP6971514B1 (en) * 2021-07-13 2021-11-24 望 窪田 Information processing equipment, information processing methods and programs
CN113936140A (en) * 2021-11-18 2022-01-14 上海电力大学 Evaluation method of sample attack resisting model based on incremental learning
CN116071787A (en) * 2023-01-06 2023-05-05 南京航空航天大学 Multispectral palmprint recognition method, multispectral palmprint recognition system, electronic equipment and multispectral palmprint recognition medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674938A (en) * 2019-08-21 2020-01-10 浙江工业大学 Anti-attack defense method based on cooperative multi-task training
CN110958263A (en) * 2019-12-13 2020-04-03 腾讯云计算(北京)有限责任公司 Network attack detection method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674938A (en) * 2019-08-21 2020-01-10 浙江工业大学 Anti-attack defense method based on cooperative multi-task training
CN110958263A (en) * 2019-12-13 2020-04-03 腾讯云计算(北京)有限责任公司 Network attack detection method, device, equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6971514B1 (en) * 2021-07-13 2021-11-24 望 窪田 Information processing equipment, information processing methods and programs
CN113487889A (en) * 2021-07-19 2021-10-08 浙江工业大学 Traffic state anti-disturbance generation method based on single intersection signal control of rapid gradient descent
CN113936140A (en) * 2021-11-18 2022-01-14 上海电力大学 Evaluation method of sample attack resisting model based on incremental learning
CN113936140B (en) * 2021-11-18 2024-06-18 上海电力大学 Incremental learning-based evaluation method for challenge sample attack model
CN116071787A (en) * 2023-01-06 2023-05-05 南京航空航天大学 Multispectral palmprint recognition method, multispectral palmprint recognition system, electronic equipment and multispectral palmprint recognition medium
CN116071787B (en) * 2023-01-06 2023-09-29 南京航空航天大学 Multispectral palmprint recognition method, multispectral palmprint recognition system, electronic equipment and multispectral palmprint recognition medium

Also Published As

Publication number Publication date
CN112232434B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN112232434A (en) Attack-resisting cooperative defense method and device based on correlation analysis
He et al. Towards security threats of deep learning systems: A survey
Chawla et al. Host based intrusion detection system with combined CNN/RNN model
CN112380319B (en) Model training method and related device
Sajeeda et al. Exploring generative adversarial networks and adversarial training
CN112884802B (en) Attack resistance method based on generation
WO2023070696A1 (en) Feature manipulation-based attack and defense method for continuous learning ability system
Enache et al. Enhanced intrusion detection system based on bat algorithm-support vector machine
Duan et al. Mask-guided noise restriction adversarial attacks for image classification
Sun et al. Adversarial robustness and attacks for multi-view deep models
CN115883261A (en) ATT and CK-based APT attack modeling method for power system
Chu et al. Visualization feature and CNN based homology classification of malicious code
Abbasi Automating behavior-based ransomware analysis, detection, and classification using machine learning
CN115129896B (en) Network security emergency response knowledge graph relation extraction method based on comparison learning
CN115719085A (en) Deep neural network model inversion attack defense method and equipment
CN115758337A (en) Back door real-time monitoring method based on timing diagram convolutional network, electronic equipment and medium
Tang et al. Multi-scale meta-learning-based networks for high-resolution remote sensing scene classification
Yao et al. RemovalNet: DNN Fingerprint Removal Attacks
CN113569081A (en) Image recognition method, device, equipment and storage medium
Gupta et al. A methodical study for the extraction of landscape traits using membrane computing technique
Shah et al. Data-Free Model Extraction Attacks in the Context of Object Detection
CN115878848B (en) Antagonistic video sample generation method, terminal equipment and medium
Zhang et al. Take CARE: Improving Inherent Robustness of Spiking Neural Networks with Channel-wise Activation Recalibration Module
Sun et al. When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-k Multi-Label Learning
Nedjah et al. Co-design dedicated system for efficient object tracking using swarm intelligence-oriented search strategies

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant