CN116578876B - Security improvement method for deep neural networks under adversarial attack - Google Patents

Security improvement method for deep neural networks under adversarial attack

Info

Publication number
CN116578876B
CN116578876B CN202310849556.7A
Authority
CN
China
Prior art keywords
refiner
data
input data
loss function
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310849556.7A
Other languages
Chinese (zh)
Other versions
CN116578876A (en
Inventor
潘裕庆
张苏宁
吴吉
王震宇
薛劲松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Suzhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority to CN202310849556.7A
Publication of CN116578876A
Application granted
Publication of CN116578876B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a security improvement method for a deep neural network under adversarial attack, which comprises the following steps: setting a plurality of refiners, each refiner having a corresponding encoder and decoder, different encoders mapping input data to different hidden spaces, and the corresponding decoders decoding so as to restore the data in the hidden spaces to clean versions of the data; randomly selecting a refiner to denoise the current input data, the refiner selected for the previous input data differing from the refiner selected for the next input data; and feeding the denoised data into the main convolutional neural network for adversarial training so as to provide the corresponding security defense. The security improvement method provided by the invention strengthens the defensive capability of the neural network model and improves its stability.

Description

Security improvement method for deep neural networks under adversarial attack
Technical Field
The invention relates to the technical field of defense methods against adversarial attacks on neural networks, and in particular to a security improvement method for deep neural networks under adversarial attack.
Background
In recent years the field of machine learning has developed rapidly; in particular, methods represented by convolutional neural networks (Convolutional Neural Network, CNN) have achieved good performance in a wide range of applications. However, these methods also have problems, such as being susceptible to interference, so that they present a safety risk when solving critical tasks. An attack that introduces samples carrying adversarial noise into the network as input, thereby disrupting network performance and causing misclassification or reduced confidence on sensitive tasks, is referred to as an adversarial attack. Ensuring the safety of the power network and its information is very important for safe electric power production: it concerns not only the continuity and reliability of the electricity supply but also social and economic stability and development. Because deep models are widely applied in the various fields of intelligent power systems, improving the robustness of the model and ensuring the safety of the power information system is essential.
Adversarial attacks can generally be defended against by training on adversarial samples, by defining a robust loss function, or by preprocessing the input.
The first class of defense methods trains on adversarial samples. Clean samples and adversarial samples are used together as the training set of the target convolutional neural network, so that the resulting network is robust to the adversarial samples present in the training data. But this approach only works against attacks defined during the training phase (the adversarial samples in the training set), so the performance of such models degrades when faced with undefined or rare attacks. The second class of defense methods defines effective loss functions that guide the training of the network, making the network inherently robust. Such methods are in principle the most effective, but how to design a robust loss function remains an open challenge, and no research has yet made breakthrough progress. The last class of methods preprocesses the input before it is passed to the model, thereby reducing the effect of noise in the input on the model's performance; the preprocessing may, for example, pass the input through an encoder-decoder network before it is fed to the model. Such methods are defense methods newly developed in recent years, and the present invention belongs to this class.
However, the input-preprocessing approach has a problem. The encoder-decoder (refiner) network learns on clean samples, i.e. samples with correct labels, by optimizing the sample reconstruction error, so that destructive noise in its input is eliminated and the input is converted into a clean sample for training of the neural network. But if an attacker has access to the model and to the refiner network, adversarial samples can be generated that fool both the model and the refiner network. To avoid this problem, the present invention proposes to use a plurality of encoder-decoder networks and to use one of them, chosen at random, as the refiner at each step. But if multiple encoder-decoder networks are trained with the same loss function they will resemble one another, meaning that an attacker only needs access to one of the refiner networks to defeat all of them. To address these challenges, the invention, on the basis of improving robustness, further exploits the concepts of randomness and diversity and provides an effective solution for improving security.
The above disclosure of the background is only intended to aid in understanding the inventive concept and technical solution of the present invention; it does not necessarily belong to the prior art of the present patent application, nor does it necessarily provide technical teaching. The above background should not be used to assess the novelty and inventiveness of the present application in the absence of explicit evidence that the above content was disclosed before the filing date of the present patent application.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a security improvement method for deep neural networks under adversarial attack, with the following specific technical scheme:
The security improvement method based on a deep neural network under adversarial attack comprises the following steps:
setting a plurality of refiners, each refiner having a corresponding encoder and decoder, different encoders mapping input data to different hidden spaces, and the corresponding decoders decoding so as to restore the data in the hidden spaces to clean versions of the data (data whose labels are all correctly identified);
randomly selecting a refiner to denoise the current input data, wherein the refiner selected for the previous input data differs from the refiner selected for the next input data;
and feeding the denoised data into the main convolutional neural network for adversarial training so as to provide the corresponding security defense (a minimal sketch of this pipeline is given below).
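For illustration only, the following minimal sketch (Python/PyTorch assumed) shows how the three steps above might be wired together. The Refiner architecture, the layer sizes, the defend helper and the selection policy are assumptions introduced here for clarity, not the patented implementation.

import random
import torch.nn as nn

class Refiner(nn.Module):
    """Encoder-decoder network: the encoder maps an input to a hidden space,
    the decoder restores a clean version of the input from that hidden space."""
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defend(x, refiners, main_net, last_idx=None):
    """Denoise x with a randomly chosen refiner (different from the one used
    for the previous input), then classify with the main convolutional network."""
    candidates = [i for i in range(len(refiners)) if i != last_idx]
    idx = random.choice(candidates)
    x_clean = refiners[idx](x)
    return main_net(x_clean), idx

The returned index would be remembered and passed back as last_idx for the next input, so that two consecutive inputs never pass through the same refiner.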
Further, each refiner corresponds to a different loss function; the types of loss function include a first loss function L1 and a second loss function L2.
The first loss function L1 optimizes a refiner network R_j by minimizing the reconstruction error on the training samples, and is computed as:
L1 = Σ_{x∈X} ‖x − R_j(x)‖²
where X represents the training sample set and R_j(x) is the corresponding output of the refiner network R_j for the input x;
the second class of loss functions L 2 The cosine similarity is obtained by calculating the cosine similarity, and the calculation formula is as follows:
in the method, in the process of the invention,I i representing data in the corresponding hidden space of the ith refiner,I j representing data in the corresponding hidden space of the jth refiner.
Further, the types of loss function also comprise a third loss function L3, computed as:
L3 = L1 + Σ_{i=1}^{k−1} (I_i · I_k) / (‖I_i‖ · ‖I_k‖)
where k is the index of the refiner network.
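For illustration, the three loss terms above could be computed as in the following sketch (Python/PyTorch assumed). Flattening the hidden representations and summing the cosine penalties without extra weights in L3 are assumptions; the text does not specify a weighting.

import torch.nn.functional as F

def loss_l1(x, x_hat):
    # First loss: reconstruction error between the input x and the refiner output x_hat.
    return F.mse_loss(x_hat, x)

def loss_l2(i_a, i_b):
    # Second loss: cosine similarity between two refiners' hidden representations.
    return F.cosine_similarity(i_a.flatten(1), i_b.flatten(1), dim=1).mean()

def loss_l3(x, x_hat, i_k, previous_hidden):
    # Third loss for the k-th refiner: reconstruct well while keeping its hidden
    # space dissimilar from every previously trained refiner's hidden space.
    diversity = sum(loss_l2(i_k, i_j) for i_j in previous_hidden)
    return loss_l1(x, x_hat) + diversity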
Further, the main convolutional neural network is trained adversarially; the overall training objective is to confuse the neural network as much as possible while minimizing the impact on its function.
The training objective is:
min_θ E_{(X,y)∼Z} [ max_{‖δ‖≤ε} L(f_θ(X+δ), y) ]
where X represents the training sample set, Z denotes the hidden variable (the distribution over which the expectation is taken), δ is the added perturbation, f_θ is the neural network function, y is the label of the sample, ε is a weight coefficient, and θ is the network parameter of the main convolutional neural network.
Further, the adversarial training is learned by mini-batch gradient descent; the parameter gradient and the input gradient computed in a single forward-backward pass are used to update the parameters and the perturbation respectively.
The parameters are updated as follows:
g ← ∇_θ L(f_θ(x+δ), y),   θ ← θ − τ·g
The perturbation is updated as follows:
g_adv ← ∇_x L(f_θ(x+δ), y),   δ ← δ + ε·sign(g_adv),   δ ← clip(δ, −ε, ε)
where g is the gradient, θ is the network parameter of the main convolutional neural network, x denotes sample data in the training sample set, τ is a weight coefficient, ε is another weight coefficient, sign(g_adv) is the result of applying the sign function to the input gradient (the parameter gradient and the input gradient are both obtained in the same pass), and clip(δ, −ε, ε) is the output after the update magnitude of the perturbation has been limited by the clip function.
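A minimal sketch of one such training step (Python/PyTorch and a cross-entropy loss assumed) follows; K, τ and ε are taken from the description above, while the default values, the manual parameter update and all names are illustrative assumptions rather than the patented procedure.

import torch
import torch.nn.functional as F

def adversarial_train_step(model, x, y, delta, K=4, tau=0.5, eps=8 / 255):
    """Repeat K forward-backward passes on the same mini-batch, using each pass
    to update both the model parameters and the perturbation delta."""
    delta = delta.detach().clone()
    for _ in range(K):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()  # one pass yields the parameter gradients and the input gradient
        with torch.no_grad():
            # Parameter update: theta <- theta - tau * g
            for p in model.parameters():
                if p.grad is not None:
                    p -= tau * p.grad
                    p.grad = None
            # Perturbation update: delta <- clip(delta + eps * sign(g_adv), -eps, eps)
            g_adv = delta.grad
            delta = (delta + eps * g_adv.sign()).clamp(-eps, eps)
    return delta.detach()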
Further, a loss function database is provided, in which a plurality of different pre-trained loss functions are stored. Before a refiner is selected to process input data, one loss function is randomly called from the loss function database to complete the corresponding configuration, and the loss functions called by the same refiner in two consecutive uses are different.
The loss function corresponding to the refiner selected for the previous input data differs from the loss function corresponding to the refiner selected for the next input data;
when the previous refiner has been selected to process the input data, the next refiner to be used is then determined and the corresponding loss function configuration is performed.
Further, a plurality of refiners are selected to denoise the same group of input data and their outputs are compared; if the deviation between the output of one refiner and the overall output exceeds a threshold, the loss function used by that refiner at that time is risk-marked, and the risk-marked loss function is deleted from the loss function database. At the same time, the input data corresponding to the risk marking is stored as a sample for subsequent training.
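As a rough illustration of this check (Python/PyTorch assumed), the sketch below flags any refiner whose output deviates too far from the ensemble mean; the RMS deviation measure and the threshold value are assumptions, since the text only requires some measure of difference from the overall output.

import torch

def risk_mark(refiners, x, threshold=0.1):
    """Return the indices of refiners whose output on batch x deviates from the
    overall (mean) output by more than the threshold."""
    with torch.no_grad():
        outputs = torch.stack([r(x) for r in refiners])  # (n_refiners, batch, ...)
        overall = outputs.mean(dim=0)                    # overall output = ensemble mean
        flagged = [i for i, out in enumerate(outputs)
                   if (out - overall).pow(2).mean().sqrt().item() > threshold]
    return flagged  # the loss functions these refiners used would then be risk-marked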
Compared with the prior art, the invention has the following advantages: adversarial noise is eliminated by refining the adversarial samples, which strengthens the defensive capability of the neural network model and improves its stability and security.
Drawings
FIG. 1 is a schematic diagram of the diversified characterization of the refiners in the security improvement method for deep neural networks under adversarial attack according to an embodiment of the present invention;
FIG. 2 is a flow chart of the security improvement method for deep neural networks under adversarial attack according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above figures are used to distinguish similar objects and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the invention described herein may be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion.
In one embodiment of the present invention, a security improvement method for a deep neural network under adversarial attack is provided, comprising the following steps:
setting a plurality of refiners, each refiner having a corresponding encoder and decoder, different encoders mapping input data (whose labels carry a risk of being incorrectly identified) to different hidden spaces, and the corresponding decoders decoding so as to restore the data in the hidden spaces to clean versions of the data (whose labels are all correctly identified);
randomly selecting a refiner to denoise the current input data, wherein the refiner selected for the previous input data differs from the refiner selected for the next input data;
and feeding the denoised data into the main convolutional neural network for adversarial training so as to provide the corresponding security defense.
It should be noted that studies have shown that a very small but highly targeted adversarial noise e can compromise a convolutional neural network. Contaminating an image with such noise can confuse the convolutional neural network M, causing it to misclassify its input; for example, M(X) ≠ M(X + e), where e ≠ 0.
The security improvement method provided by the invention defends against this targeted noise and can greatly improve the security of the convolutional neural network. The method provided by this embodiment mainly comprises the following two steps. First, starting from the original input, a set of refiner networks R is trained with the aim of removing this targeted noise from a sample before it is fed into the main network. Since an attacker A generates the targeted noise e through only one of the refiner networks, i.e. A(M, R, X) → e, and the other networks differ considerably from that network, the whole system is not jeopardized even if attacker A has access to M and to one of the R networks: M(R_?(X)) ≠ M(R_?(X + e)), where R_? is a randomly selected refiner network. Second, starting from the training of the model, a certain perturbation δ is applied against the loss function of the main network f_θ during training, endowing the network with natural robustness.
As previously mentioned, the previously proposed techniques based on optimizing over adversarial samples are not secure against an attacker who knows the network parameters of the refiner. This embodiment, inspired by cryptographic methods and adversarial training, adds diversity, randomness and robustness to the defense in order to make it more powerful. First, the input samples are refined by a set of refiners, and these refiners are forced to represent and reconstruct the samples over diverse characterizations (independent feature spaces); as shown in FIG. 1, three refiners R can each form a different sequence S with the main convolutional neural network M to process the input data. The resulting refiners are able to resist attacks because, by analysing the behaviour of a single refiner network, an attacker cannot simultaneously handle the other networks, and therefore cannot successfully attack both the refiner and the target network.
The security improvement method provided in this embodiment is implemented on a security defense model whose main structure comprises the main convolutional neural network M with a classifier, as the object to be protected against attack, and K refiners (i.e. encoder-decoder networks R_j).
The feature learning procedure for refiner diversification is as follows:
parameters of the refiner network, i.e. θ j The learning is performed by: input clean sample setX(all data)Tags all correctly identified) to the hidden spaceI j And recover the samples by decoding this hidden space. By combining eachI j From other hidden spaces, a refiner network R is realized j Independence between them. Each sample is then subjected to R before being sent to the convolutional neural network h (h∈[1,...,k]) (randomly selecting h) treatment. Due to R h Trained to map input samples to their clean versions (tags of all data are correctly identified), so it can act as a denoising network to refine samples that are subject to noise immunity.
The present embodiment learns the refiner network R_j by minimizing the reconstruction error on the available training samples. Such a network is trained by optimizing a first loss function L1, whose expression is as follows:
L1 = Σ_{x∈X} ‖x − R_j(x)‖²
where X represents the training sample set and R_j(x) is the corresponding output of the refiner network. But, as mentioned before, if the same loss function (i.e. L1) is used to learn multiple refiner networks, the result is a set of networks with the same vulnerability. This embodiment therefore uses different loss functions to help the R_j represent and reconstruct the samples. First, R_1 is learned by optimizing the first loss function L1. The next refiner, R_2, is then learned: R_2 also optimizes L1 so as to map adversarial samples to clean samples (whose labels are all correctly identified), but at the same time its hidden space for the same sample is forced to differ from the hidden space of R_1. For this purpose, the cosine similarity between the hidden spaces of R_1 and R_2 is taken as the second loss function of R_2, expressed as follows:
L2 = (I_1 · I_2) / (‖I_1‖ · ‖I_2‖)
where · denotes the dot product operation and ‖·‖ the L2 norm. To train a third refiner network, its hidden space must not be similar to the hidden spaces of the previously learned refiners. In general, to train the k-th network, in addition to minimizing L1, its hidden space for the same sample must differ from those of all preceding networks. Overall, the parameters θ_k of the k-th refiner network are learned by optimizing the loss function L3, expressed as follows:
L3 = L1 + Σ_{i=1}^{k−1} (I_i · I_k) / (‖I_i‖ · ‖I_k‖)
For different values of k, differently optimized loss functions L3 are obtained, corresponding to the different refiner networks; at least two, and preferably three, types of loss function are provided across all refiners to improve the defensive capability.
The adversarial training steps for improving the robustness of the main network are as follows:
the convolutional neural main network M adopts an countermeasure training mode during training, and can be summarized as the following maximum and minimum formulas:
wherein,Xrepresenting a training sample set, Z is represented as an hidden variable,δrepresenting the disturbance of the addition,f θ as a function of the neural network,ythe label of the sample is a label of the sample,L(f θ (X+δ),y) Represented as in the sampleXSuperimposed disturbanceδThen pass through a neural network to finally and labelyLoss after comparison. The objective of maximization is to find the disturbance that maximizes the loss function, minimizingThe goal of the visualization is to minimize the loss in training data, and the overall goal is to minimize the functional impact of the neural network while maximizing the confusion of the neural network.
The adversarial training also differs slightly from ordinary training: whereas ordinary training moves to a different batch for each computation, in this embodiment, referring to FIG. 2, the adversarial training repeatedly computes gradients on the same mini-batch of samples B. The point of this is to use the parameter gradient and the input gradient simultaneously within one forward-backward pass. K gradients are computed for each mini-batch of samples, and each computed gradient is used to update both the parameters and the perturbation. The parameters are updated as follows:
g ← ∇_θ L(f_θ(x+δ), y),   g_adv ← ∇_x L(f_θ(x+δ), y)    (5)
θ ← θ − τ·g    (6)
where g is the gradient, θ is the network parameter of the main convolutional neural network, and x denotes sample data; the parameter gradient and the input gradient can both be computed by formula (5), and τ in formula (6) is a weight coefficient, generally set to 0.5. In addition to updating the parameters, the perturbation also needs to be updated, as follows:
δ ← δ + ε·sign(g_adv)    (7)
δ ← clip(δ, −ε, ε)    (8)
where ε in formula (7) is another weight coefficient, reflecting the magnitude of the very small but highly targeted noise; sign(g_adv) is the result of applying the sign function to the input gradient obtained in the same pass; and formula (8) limits the update magnitude of the perturbation with the clip function.
In one embodiment of the invention, in order to further improve security, a loss function database is provided in which a plurality of different pre-trained loss functions are stored. When a refiner is selected to process input data, one loss function is randomly called from the loss function database to complete the corresponding configuration, and the loss functions called by the same refiner in two consecutive uses are different; the loss function corresponding to the refiner selected for the previous input data differs from the loss function corresponding to the refiner selected for the next input data; when the previous refiner has been selected to process the input data, the next refiner is then determined and the corresponding loss function configuration is performed. In a preferred embodiment, when a refiner has been selected to process the current input data, n refiners are randomly selected from the remaining refiners and given a random loss function configuration; when the next input is processed, one refiner is randomly chosen from those n refiners to do the work. This embodiment can greatly reduce the number of refiners required: through on-the-fly random configuration, a small number of refiners achieves the effect of a large number of refiners with fixed configurations, the randomness of the denoising process is strengthened, and it becomes difficult for an attacker to mount an effective attack.
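A rough sketch of this on-the-fly configuration (Python assumed) is given below; the list-like loss function database, the dictionary of each refiner's previously used loss, and all names are assumptions for illustration only.

import random

def preselect_and_configure(refiners, loss_db, current_idx, last_loss, n=3):
    """Pre-select n refiners (excluding the one just used), give each a random
    loss function different from its previous one, and pick the refiner that
    will process the next input."""
    pool = [i for i in range(len(refiners)) if i != current_idx]
    chosen = random.sample(pool, n)              # n candidates configured in advance
    configs = {}
    for i in chosen:
        options = [f for f in loss_db if f is not last_loss.get(i)]
        configs[i] = random.choice(options)      # avoid reusing the previous loss
    next_idx = random.choice(chosen)             # refiner actually used for the next input
    return next_idx, configs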
In order to improve the accuracy of the denoising, several refiners are selected to denoise the same group of input data and their outputs are compared; if the deviation between the output of one refiner and the overall output exceeds a threshold, the loss function used by that refiner at that time is risk-marked. The overall output is the mean of the outputs of the refiners involved, and the deviation can be expressed by the standard deviation, the variance, the mean absolute deviation, or the like.
The risk-marked loss function is deleted from the loss function database. At the same time, the input data corresponding to the risk marking is stored as a sample for subsequent training.
The security improvement method provided by the invention eliminates adversarial noise by refining the adversarial samples as its defense strategy, assisted by an adversarial training method, so as to resist potential attacks against the neural network.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the claims; all equivalent structures or equivalent process transformations made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present invention.

Claims (4)

1. A security improvement method based on a deep neural network under adversarial attack, characterized in that it is applied to guarantee the security of an electric power information system and comprises the following steps:
setting a plurality of refiners, each refiner having a corresponding encoder and decoder, different encoders mapping input data to different hidden spaces, and the corresponding decoders decoding so as to restore the data in the hidden spaces to clean versions of the data, wherein the labels of the input data carry a risk of being incorrectly identified and the labels of the clean-version data are correctly identified;
setting a loss function database, storing a plurality of different pre-trained loss functions in the loss function database, and randomly calling a loss function from the loss function database before the refiner is selected to process input data so as to complete the corresponding configuration, wherein the loss functions called by the same refiner in two consecutive uses are different;
each refiner corresponding to a different loss function, the types of loss function comprising a first loss function L1, a second loss function L2 and a third loss function L3; the first loss function L1 optimizes a refiner network R_j by minimizing the reconstruction error on the training samples, and is computed as: L1 = Σ_{x∈X} ‖x − R_j(x)‖², where X represents the training sample set and R_j(x) is the corresponding output of the refiner network R_j; the second loss function L2 is obtained by computing the cosine similarity: L2 = (I_i · I_j) / (‖I_i‖ · ‖I_j‖), where I_i represents the data in the hidden space corresponding to the i-th refiner and I_j represents the data in the hidden space corresponding to the j-th refiner; the third loss function L3 is computed as: L3 = L1 + Σ_{i=1}^{k−1} (I_i · I_k) / (‖I_i‖ · ‖I_k‖), where k is the index of the refiner network; using different loss functions to help the refiner networks R_j represent and reconstruct the samples comprises: the first refiner network R_1 is learned by optimizing the first loss function L1; the next refiner network R_2 is learned by optimizing the first loss function L1 so as to map adversarial samples to clean samples, while its hidden space for the same sample is forced to differ from the hidden space of the first refiner network R_1;
randomly selecting a refiner to denoise the current input data, wherein the refiner selected for the previous input data differs from the refiner selected for the next input data;
feeding the denoised data into a convolutional neural network for adversarial training so as to provide the corresponding security defense, the convolutional neural network being trained adversarially, the overall training objective being to confuse the neural network as much as possible while minimizing the impact on its function, the training objective being: min_θ E_{(X,y)∼Z} [ max_{‖δ‖≤ε} L(f_θ(X+δ), y) ], where X represents the training sample set, Z denotes the hidden variable, δ is the added perturbation, f_θ is the neural network function, and y is the label of the sample; the adversarial training is learned by mini-batch gradient descent, the parameter gradient and the input gradient obtained in one forward-backward pass being used to update the parameters and the perturbation, the perturbation δ being updated as follows: δ ← δ + ε·sign(g_adv), δ ← clip(δ, −ε, ε), where ε is a weight coefficient, sign(g_adv) is the result of applying the sign function to the gradient obtained at each step, and clip(δ, −ε, ε) is the output after the update magnitude of the perturbation has been limited by the clip function.
2. The security improvement method according to claim 1, wherein the loss function corresponding to the refiner selected for the previous input data differs from the loss function corresponding to the refiner selected for the next input data;
when the previous refiner has been selected to process the input data, the next refiner to be used is then determined and the corresponding loss function configuration is performed.
3. The security improvement method according to claim 1, wherein a plurality of refiners are selected to denoise the same group of input data and their outputs are compared; if the deviation between the output of one refiner and the overall output exceeds a threshold, the loss function used by that refiner at that time is risk-marked, and the risk-marked loss function is deleted from the loss function database.
4. The security improvement method according to claim 3, wherein the input data corresponding to the risk marking is stored as a sample for subsequent training.
CN202310849556.7A 2023-07-12 2023-07-12 Security improvement method for deep neural networks under adversarial attack Active CN116578876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310849556.7A CN116578876B (en) 2023-07-12 2023-07-12 Safety improvement method based on resistive attack deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310849556.7A CN116578876B (en) 2023-07-12 2023-07-12 Safety improvement method based on resistive attack deep neural network

Publications (2)

Publication Number Publication Date
CN116578876A (en) 2023-08-11
CN116578876B (en) 2024-02-13

Family

ID=87536306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310849556.7A Active CN116578876B (en) 2023-07-12 2023-07-12 Safety improvement method based on resistive attack deep neural network

Country Status (1)

Country Link
CN (1) CN116578876B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023312A (en) * 2016-05-13 2016-10-12 南京大学 Automatic 3D building model reconstruction method based on aviation LiDAR data
CN107358609A (en) * 2016-04-29 2017-11-17 成都理想境界科技有限公司 A kind of image superimposing method and device for augmented reality
CN111967502A (en) * 2020-07-23 2020-11-20 电子科技大学 Network intrusion detection method based on conditional variation self-encoder

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358609A (en) * 2016-04-29 2017-11-17 成都理想境界科技有限公司 A kind of image superimposing method and device for augmented reality
CN106023312A (en) * 2016-05-13 2016-10-12 南京大学 Automatic 3D building model reconstruction method based on aviation LiDAR data
CN111967502A (en) * 2020-07-23 2020-11-20 电子科技大学 Network intrusion detection method based on conditional variation self-encoder

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨浚宇 (Yang Junyu). Defense scheme for deep learning adversarial examples based on iterative autoencoders. 《信息安全学报》 (Journal of Cyber Security). 2019, pp. 35-44. *

Also Published As

Publication number Publication date
CN116578876A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
Vorobeychik et al. Adversarial machine learning
He et al. Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack
Song et al. Improving the generalization of adversarial training with domain adaptation
CN110490128A (en) A kind of hand-written recognition method based on encryption neural network
CN112313645B (en) Learning method and device for data embedded network and testing method and device thereof
CN111818101B (en) Network security detection method and device, computer equipment and storage medium
CN114758198A (en) Black box attack method and system for resisting disturbance based on meta-learning
CN112311733A (en) Method for preventing attack counterattack based on reinforcement learning optimization XSS detection model
CN114387449A (en) Image processing method and system for coping with adversarial attack of neural network
CN103873253B (en) Method for generating human fingerprint biometric key
CN112232434A (en) Attack-resisting cooperative defense method and device based on correlation analysis
CN116578876B (en) Safety improvement method based on resistive attack deep neural network
CN117332411B (en) Abnormal login detection method based on transducer model
Dharani et al. Detection of phishing websites using ensemble machine learning approach
CN113268990B (en) User personality privacy protection method based on anti-attack
Wang et al. Privacy-preserving adversarial facial features
Jia et al. Subnetwork-lossless robust watermarking for hostile theft attacks in deep transfer learning models
Tanay et al. Built-in vulnerabilities to imperceptible adversarial perturbations
CN111882037A (en) Deep learning model optimization method based on network addition/modification
CN115242539B (en) Network attack detection method and device for power grid information system based on feature fusion
US20230297823A1 (en) Method and system for training a neural network for improving adversarial robustness
CN113837253B (en) Single-step countermeasure training method, system, equipment, storage medium and product
CN114817937A (en) Keyboard encryption method, device, storage medium and computer program product
Liu et al. SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks
CN111967607A (en) Model training method and device, electronic equipment and machine-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant