CN114758187A - Universal countermeasure disturbance generation method based on steganography, medium and computer equipment - Google Patents

Universal countermeasure disturbance generation method based on steganography, medium and computer equipment

Info

Publication number
CN114758187A
Authority
CN
China
Prior art keywords
mask
disturbance
pattern
steganography
acc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210019738.7A
Other languages
Chinese (zh)
Inventor
高海昌
刘欢
王宇飞
高艺鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202210019738.7A priority Critical patent/CN114758187A/en
Publication of CN114758187A publication Critical patent/CN114758187A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of deep learning security, and discloses a universal countermeasure disturbance generation method based on steganography, a medium and computer equipment. The method comprises: obtaining an original sample set, a target model and a steganography model; initializing a mask and a pattern; calculating the disturbance and steganographically writing it into the original samples to obtain confrontation samples; calculating the loss and optimizing the mask and pattern; judging whether the termination condition is met; checking whether the early-stop mechanism is triggered, and otherwise starting a new iteration; and calculating and saving the final disturbance. The method can be applied to image classification scenarios such as image recognition, automatic driving and biometric recognition. By steganographically writing local adversarial noise over the whole of the target sample through a deep steganography model, an attack can be launched on an artificial intelligence system without being perceived, affecting the normal function of the system; this promotes defense through attack and drives artificial intelligence systems toward greater robustness.

Description

Universal countermeasure disturbance generation method based on steganography, medium and computer equipment
Technical Field
The invention belongs to the technical field of deep learning security, and particularly relates to a universal countermeasure disturbance generation method based on steganography, a medium and computer equipment.
Background
At present, deep neural networks are widely applied in fields such as image recognition, intelligent speech recognition, spam filtering, machine translation and automatic driving systems. While deep neural networks play a great role in these fields, their security problems have gradually been exposed. A number of studies have found that deep neural networks are vulnerable to confrontation samples, i.e. they are easily fooled by perturbations imperceptible to humans, causing the network to output a wrong prediction with high confidence.
Existing methods for generating countermeasure disturbances produce a specific disturbance for each sample; such a disturbance is only effective for that particular sample and cannot be transferred between samples. These generation methods are time-consuming and difficult to deploy in practical application scenarios. Recent research shows that a universal countermeasure disturbance achieves generality of the disturbance: no sample-specific disturbance needs to be generated, and in the testing stage the attack can be executed without access to the information of the target model, so the threat to neural networks is greater. Current universal countermeasure disturbance generation methods have two main goals: (1) generality of the disturbance: in the testing stage, applying the generated universal countermeasure disturbance to samples from the same data set as the training samples should achieve a high attack success rate, and the disturbance should also transfer well to different data sets or models; (2) concealment of the disturbance: the generated universal countermeasure disturbance should be imperceptible to humans.
Existing universal countermeasure disturbance generation methods generally find it difficult to achieve both generality and concealment of the disturbance. For example, the invention patent application with application publication number CN111680292A, entitled 'a countermeasure sample generation method based on high-concealment universal disturbance', discloses a method which first maximizes the loss of a batch of training samples to obtain a universal loss function for basic universal disturbance generation, then adds a correction for samples outside the target attack class into the universal loss function to construct loss functions for both non-targeted and targeted universal disturbance generation, optimizes these loss functions by gradient descent to obtain a primary universal disturbance, and finally filters the primary universal disturbance with low-pass filtering to remove high-frequency noise and obtain the final universal countermeasure disturbance. The defect of this method is that, after the primary universal disturbance is obtained, low-pass filtering is used to improve its concealment, but removing the high-frequency noise significantly reduces the attack success rate of the universal countermeasure disturbance, so the generality of the disturbance cannot be guaranteed; that is, the method cannot achieve both generality and concealment of the disturbance.
Through the above analysis, the problems and defects of the prior art are as follows: the existing universal countermeasure disturbance generation method cannot take account of the universality of disturbance and the concealment of the disturbance.
The difficulty in solving the above problems and defects is: the generality of the disturbance and the concealment of the disturbance constrain each other. Improving the generality of the disturbance comes at the cost of its concealment, and improving the concealment inevitably sacrifices the generality, so existing methods find it difficult to balance the two. To guarantee generality, i.e. a high attack success rate for confrontation samples with the disturbance added, most existing disturbances are added to samples in a way that is visible to the human eye and easily perceived, while existing highly concealed disturbances cannot meet the requirement of generality.
The significance of solving the above problems and defects is: a confrontation sample that achieves both generality and concealment of the disturbance can execute an attack without being perceived, posing a serious potential safety hazard to artificial intelligence systems. Solving these problems promotes the development of the field of countermeasure sample attacks, so that attack drives defense and more robust artificial intelligence systems can be built.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a universal countermeasure disturbance generation method based on steganography, a medium and computer equipment.
The invention is realized as follows: a universal countermeasure disturbance generation method based on steganography comprises the following steps:
Step one, obtaining an original sample set, a target model and a steganography model: preparing the data set and the models required for generating the disturbance.
Step two, initializing a mask and a pattern: providing the initial values of the mask and the pattern used when the disturbance is calculated for the first time in step three.
Step three, calculating the disturbance from the mask and pattern, and steganographically writing the disturbance into the original samples to obtain confrontation samples: the confrontation samples are used for training to optimize the mask and pattern in step four.
Step four, calculating the loss and optimizing the mask and pattern: the attack performance and the concealment of the disturbance are optimized with two loss functions respectively, and the steganography of the disturbance is incorporated into the training process, so that a disturbance with higher generality and concealment is obtained.
Step five, incrementing epoch; if epoch is less than the total number of iterations epochs, executing step six, otherwise executing step seven, wherein epoch denotes the current iteration number, the initial value of epoch is 0, and epochs ≥ 500. This controls the number of iterations, stopping after a certain number is reached.
Step six, checking whether the early-stop mechanism is triggered; if the early-stop condition is met, executing step seven, otherwise repeating step three to step six for a new iteration. The early-stop mechanism gives better generalization performance and saves training time.
Step seven, calculating and saving the final disturbance, completing the scheme.
Further, in step one, the original sample set, the target model and the steganography model are obtained as follows:
firstly, selecting samples from a target data set containing M categories: N non-repeated samples are randomly selected for each category in the data set, and the selected samples are resized to the same size W×H×C, forming an original sample set X of size M×N; the ImageNet data set is taken as the target data set, where M is 1000, N is 10, W is 224, and H is 224;
secondly, acquiring the network structure and weights of a pre-trained target model to obtain the target model, with VGG16 taken as the target model;
thirdly, acquiring the network structure and weights of a pre-trained steganography model to obtain the steganography model; the steganography model HNet(·) converts the data formats of the original sample and the universal countermeasure disturbance image into the [C, H, W] format using the transpose() function, where C denotes the channel, H denotes the height of the image and W denotes the width of the image; the original image and the disturbance image are spliced along the C dimension using the concat() function, and the spliced image is used as the input of the steganography model.
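As an illustration of this step, the following minimal sketch shows how a pre-trained target model could be restored. PyTorch/torchvision and the checkpoint path are assumptions of the sketch; the patent only states that the network structures and weights of pre-trained VGG16 and HNet models are loaded.

```python
import torch
import torchvision.models as models

# Pre-trained target model: network structure and weights of VGG16.
target_model = models.vgg16(pretrained=True).eval()

# The pre-trained steganography model HNet would be restored in the same way
# from its own saved structure and weights (hypothetical checkpoint path):
# hnet = torch.load("hnet_pretrained.pth").eval()
```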
Further, in step two, the mask and the pattern are initialized as follows:
a. randomly generating, with the random() function, a single-channel mask image mask of the same size W×H as the original samples, and limiting the value of each element in the mask within the given upper and lower bounds, where the upper bound MASK_MAX is 1 and the lower bound MASK_MIN is 0;
b. randomly generating, with the random() function, an RGB pattern image pattern of the same size W×H as the original samples, and limiting the value of each element in the pattern within the given upper and lower bounds, where the upper bound PATTERN_MAX is 255 and the lower bound PATTERN_MIN is 0.
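A minimal NumPy sketch of this initialization is given below; the bounds follow the values stated above, while the array layout (H, W, channels), the scaling of the pattern to the [0, 255] range and the variable names are assumptions of the sketch.

```python
import numpy as np

MASK_MIN, MASK_MAX = 0.0, 1.0
PATTERN_MIN, PATTERN_MAX = 0.0, 255.0
H, W = 224, 224

# Single-channel mask of the same spatial size as the samples, limited to [MASK_MIN, MASK_MAX].
mask = np.clip(np.random.random((H, W, 1)), MASK_MIN, MASK_MAX)

# RGB pattern image of the same spatial size, limited to [PATTERN_MIN, PATTERN_MAX].
pattern = np.clip(np.random.random((H, W, 3)) * PATTERN_MAX, PATTERN_MIN, PATTERN_MAX)
```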
Further, in step three, the disturbance is calculated from the mask and the pattern and steganographically written into the original samples to obtain the confrontation samples, as follows:
A. multiplying the corresponding position elements of the mask and the pattern, denoted mask ⊙ pattern, to obtain the disturbance image;
B. steganographically writing the universal countermeasure disturbance into an original sample using the pre-trained steganography model to obtain a confrontation sample; the steganography is implemented as x_adv = HNet(mask ⊙ pattern, x), i.e. the original sample x and the universal countermeasure disturbance mask ⊙ pattern are spliced as described in the third part of step one, the spliced image is sent to the steganography model for steganography, and the universal countermeasure disturbance mask ⊙ pattern is steganographically written into the original sample x to obtain the confrontation sample after steganographic disturbance;
C. performing disturbance steganography on each sample of the sample set in the way of step B, i.e. step B needs to be executed M×N times.
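The following sketch illustrates this step, assuming image arrays in H×W×C layout and a PyTorch steganography network hnet that takes the cover image spliced with the disturbance image along the channel dimension and returns the stego image; these interface details and all names are assumptions of the sketch rather than the patent's own code.

```python
import numpy as np
import torch

def embed_perturbation(hnet, samples, mask, pattern):
    """Steganographically write the universal disturbance mask * pattern into
    every original sample and return the confrontation samples.

    samples: array of shape [M*N, H, W, C]; mask: [H, W, 1]; pattern: [H, W, C].
    """
    perturbation = mask * pattern                               # element-wise product mask ⊙ pattern
    pert_chw = torch.from_numpy(np.transpose(perturbation, (2, 0, 1))).float()
    adversarial = []
    for x in samples:                                           # executed M x N times, once per sample
        x_chw = torch.from_numpy(np.transpose(x, (2, 0, 1))).float()
        stego_in = torch.cat([x_chw, pert_chw], dim=0)          # splice along the channel dimension C
        x_adv = hnet(stego_in.unsqueeze(0)).squeeze(0)          # x_adv = HNet(mask ⊙ pattern, x)
        adversarial.append(x_adv)
    return torch.stack(adversarial)
```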
Further, in step four, the loss is calculated and the mask and pattern are optimized as follows:
1) inputting a batch of samples with the disturbance added into the pre-trained target model to obtain the predicted values, the batch size being M×N, and calculating the average attack success rate acc of the batch of samples;
2) calculating the regularization loss_reg of the original samples and the disturbed samples using the L2 regularization method, and substituting loss_reg, the predicted values of the target model for the disturbed samples and the real labels of the samples into the no-target loss function to obtain the total loss;
3) minimizing the loss and optimizing the mask and pattern with an optimizer, the Adam optimizer being used for optimization;
4) when the two conditions acc < ε and loss_reg < reg_best are met at the same time, setting mask_best = mask, pattern_best = pattern and reg_best = loss_reg, where ε is the attack threshold with 0 < ε ≤ 1, reg_best is the best regularization value with an infinite initial value, and mask_best and pattern_best are respectively the current best mask and pattern, the initial values of their elements both being 0; ε is taken as 0.9999.
Further, the no-target attack loss function is:

loss = -CE(f(x_adv), y_true) + L_p(x, x_adv)

where x is the original sample, x_adv is the sample with the universal disturbance added, y_true is the real category of the sample in one-hot vector form, f(·) is the pre-trained target model, f(x_adv) denotes the classification result of the target model for the disturbed sample x_adv, CE(·) is the cross-entropy loss, and L_p(·) denotes the L_p distance between the original sample x and the disturbed sample x_adv, where p takes the value 2.
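A minimal sketch of this loss, following the combined form above (attack term plus L2 regularization term), is given below; PyTorch is assumed, y_true is passed as class indices rather than one-hot vectors, and the equal weighting of the two terms is an assumption of the sketch.

```python
import torch
import torch.nn.functional as F

def no_target_loss(model, x, x_adv, y_true):
    """loss = -CE(f(x_adv), y_true) + L2(x, x_adv).

    Minimizing this pushes the target model away from the real labels
    (raising the attack success rate) while the L2 term keeps the
    steganographically disturbed samples close to the originals.
    """
    ce = F.cross_entropy(model(x_adv), y_true)   # attack term CE(f(x_adv), y_true)
    loss_reg = torch.norm(x - x_adv, p=2)        # L2 regularization term loss_reg
    return -ce + loss_reg, loss_reg
```

The mask and pattern would then be optimized by minimizing this loss, for example with torch.optim.Adam([mask, pattern]) over tensors created with requires_grad=True; the learning rate is not specified in the text.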
Further, in step six, it is checked whether the early-stop mechanism is triggered; if the early-stop condition is met, step seven is executed, otherwise steps three to six are repeated for a new iteration, as follows:
the initial values of acc_up, acc_down and early_stop_counter are all 0, the initial value of early_stop_reg_best is infinite, the initial values of acc_up_flag and acc_down_flag are both False, and the chosen patience is 10 and early_stop_patience is 20;
when the average attack success rate acc is greater than the attack threshold ε, acc_up is incremented and acc_down is set to zero; otherwise acc_down is incremented and acc_up is set to zero; if acc_up is greater than the set constant patience, acc_up is set to zero and acc_up_flag is set to True;
if acc_down is greater than patience, acc_down is set to zero and acc_down_flag is set to True; if reg_best is greater than or equal to early_stop_reg_best, early_stop_counter is incremented, otherwise early_stop_counter is set to zero;
the smaller of reg_best and early_stop_reg_best is obtained with the min() function and assigned to early_stop_reg_best; if acc_up_flag and acc_down_flag are both True and early_stop_counter is greater than early_stop_patience, the early-stop condition is met.
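Since the bookkeeping above is dense, a plain-Python sketch of one check of the early-stop mechanism is given below; the variable names follow the text, and the constant names patience and early_stop_patience reflect the editorial reading used above.

```python
import math

def init_early_stop_state():
    """Initial values of the early-stop bookkeeping variables."""
    return {"acc_up": 0, "acc_down": 0, "early_stop_counter": 0,
            "early_stop_reg_best": math.inf,
            "acc_up_flag": False, "acc_down_flag": False}

def early_stop_check(state, acc, reg_best, eps=0.9999,
                     patience=10, early_stop_patience=20):
    """Update the counters for one iteration; return True if the early-stop condition is met."""
    if acc > eps:                      # average attack success rate above the attack threshold
        state["acc_up"] += 1
        state["acc_down"] = 0
    else:
        state["acc_down"] += 1
        state["acc_up"] = 0

    if state["acc_up"] > patience:
        state["acc_up"] = 0
        state["acc_up_flag"] = True
    if state["acc_down"] > patience:
        state["acc_down"] = 0
        state["acc_down_flag"] = True

    if reg_best >= state["early_stop_reg_best"]:
        state["early_stop_counter"] += 1
    else:
        state["early_stop_counter"] = 0
    state["early_stop_reg_best"] = min(reg_best, state["early_stop_reg_best"])

    return (state["acc_up_flag"] and state["acc_down_flag"]
            and state["early_stop_counter"] > early_stop_patience)
```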
Further, in step seven, the final disturbance is calculated and saved as follows:
(1) judging whether the values of mask_best and pattern_best are None; if so, executing step (2), and if not, executing step (3);
(2) acquiring the mask and pattern of the last iteration, assigning the value of the mask to mask_best and the value of the pattern to pattern_best;
(3) multiplying the corresponding position elements of mask_best and pattern_best to calculate the final universal countermeasure disturbance;
(4) saving the universal countermeasure disturbance as an image using an image processing method: the array_to_img() function converts the universal countermeasure disturbance from array format into picture form, and the save() function saves the disturbance picture.
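A short sketch of this save step is shown below; the Keras array_to_img utility is assumed to correspond to the function named in the text, and the output file name is hypothetical.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import array_to_img

def save_perturbation(mask_best, pattern_best, path="universal_perturbation.png"):
    """Compute the final disturbance mask_best * pattern_best and save it as an image."""
    perturbation = np.asarray(mask_best) * np.asarray(pattern_best)  # element-wise product
    array_to_img(perturbation).save(path)                            # array format -> picture, then save
```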
Another object of the present invention is to provide a program storage medium for receiving user input, the stored computer program causing an electronic device to execute the steganographic-based universal countermeasure disturbance generating method, comprising the steps of:
step one, obtaining an original sample set, a target model and a steganography model;
step two, initializing a mask and a pattern;
step three, calculating the disturbance from the mask and pattern, and steganographically writing the disturbance into the original samples to obtain confrontation samples;
step four, calculating the loss and optimizing the mask and pattern;
step five, incrementing epoch; if epoch is less than the total number of iterations epochs, executing step six, otherwise executing step seven, where epoch denotes the current iteration number, the initial value of epoch is 0, and epochs ≥ 500;
step six, checking whether the early-stop mechanism is triggered; if the early-stop condition is met, executing step seven, otherwise repeating step three to step six for a new iteration;
and step seven, calculating and saving the final disturbance.
Another object of the present invention is to provide a computer apparatus, which includes a memory and a processor, the memory storing a computer program, which when executed by the processor, causes the processor to execute the steps of the steganographic-based universal countermeasure disturbance generation method.
By combining all the above technical schemes, the invention has the following advantages and positive effects. In the invention, a disturbance that would easily be perceived by humans is steganographically written into the target sample using a deep steganography model, and the disturbance after steganography is difficult to perceive, so the concealment of the disturbance is achieved. By adding the deep steganography module into the optimization process of the universal countermeasure disturbance and optimizing on the samples after disturbance steganography, the generated disturbance can reach a high attack success rate and good transferability, so the generality of the disturbance is achieved. The invention overcomes the defect that existing disturbance generation methods cannot achieve the generality and the concealment of the disturbance at the same time, and can realize countermeasure sample attacks with a high attack success rate without being perceived. The method fills the gap of disturbance generation methods that take both generality and concealment into account, and uncovers hidden characteristics of deep neural networks, namely characteristics related to the data distribution in a non-obvious way, thereby exposing the vulnerability of machine learning and deep learning algorithms and the serious potential threat they pose to artificial intelligence systems.
Drawings
Fig. 1 is a flowchart of a steganography-based universal countermeasure disturbance generation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a steganography-based general countermeasure disturbance generation process provided in an embodiment of the present invention.
Fig. 3 is a schematic diagram of a generated universal countermeasure disturbance provided by an embodiment of the present invention.
fig. 4 is a diagram of a sample effect after steganographic perturbation according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a universal countermeasure disturbance generation method, medium, and computer device based on steganography, and the present invention is described in detail below with reference to the accompanying drawings.
The steganography-based universal countermeasure disturbance generation method provided by the invention can also be implemented by adopting other steps by persons of ordinary skill in the art, and the steganography-based universal countermeasure disturbance generation method provided by the invention in fig. 1 is only a specific embodiment.
As shown in fig. 1, a method for generating a universal countermeasure disturbance based on steganography according to an embodiment of the present invention includes:
S101: obtaining an original sample set, a target model and a steganography model;
S102: initializing a mask and a pattern;
S103: calculating the disturbance from the mask and pattern, and steganographically writing the disturbance into the original samples to obtain confrontation samples;
S104: calculating the loss and optimizing the mask and pattern;
S105: incrementing epoch; if epoch is less than the total number of iterations epochs, executing S106, otherwise executing S107, where epoch denotes the current iteration number, the initial value of epoch is 0, and epochs ≥ 500;
S106: checking whether the early-stop mechanism is triggered; if the early-stop condition is met, executing S107, otherwise repeating S103-S106 for a new iteration;
S107: calculating and saving the final disturbance.
In S101 provided by the embodiment of the present invention, the original sample set, the target model and the steganography model are obtained as follows:
firstly, selecting samples from a target data set containing M categories: N non-repeated samples are randomly selected for each category in the data set, and the selected samples are resized to the same size W×H×C, forming an original sample set X of size M×N; the ImageNet data set is taken as the target data set, where M is 1000, N is 10, W is 224, and H is 224;
secondly, acquiring the network structure and weights of a pre-trained target model to obtain the target model; VGG16 is taken as the target model;
thirdly, acquiring the network structure and weights of a pre-trained steganography model to obtain the steganography model; the steganography model HNet(·) converts the data formats of the original sample and the universal countermeasure disturbance image into the [C, H, W] format using the transpose() function, where C denotes the channel, H denotes the height of the image and W denotes the width of the image; the original image and the disturbance image are spliced along the C dimension using the concat() function, and the spliced image is used as the input of the steganography model.
In S102 provided by the embodiment of the present invention, the mask and the pattern are initialized as follows:
a. randomly generating, with the random() function, a single-channel mask image mask of the same size W×H as the original samples, and limiting the value of each element in the mask within the given upper and lower bounds, where the upper bound MASK_MAX is 1 and the lower bound MASK_MIN is 0;
b. randomly generating, with the random() function, an RGB pattern image pattern of the same size W×H as the original samples, and limiting the value of each element in the pattern within the given upper and lower bounds, where the upper bound PATTERN_MAX is 255 and the lower bound PATTERN_MIN is 0.
In S103 provided by the embodiment of the present invention, the disturbance is calculated from the mask and the pattern and steganographically written into the original samples to obtain the confrontation samples, as follows:
A. multiplying the corresponding position elements of the mask and the pattern, denoted mask ⊙ pattern, to obtain the disturbance image;
B. steganographically writing the universal countermeasure disturbance into an original sample using the pre-trained steganography model to obtain a confrontation sample; the steganography is implemented as x_adv = HNet(mask ⊙ pattern, x), i.e. the original sample x and the universal countermeasure disturbance mask ⊙ pattern are spliced as described in the third part of S101, the spliced image is sent to the steganography model for steganography, and the universal countermeasure disturbance mask ⊙ pattern is steganographically written into the original sample x to obtain the confrontation sample after steganographic disturbance;
C. performing disturbance steganography on each sample of the sample set in the way of step B, i.e. step B needs to be executed M×N times.
In S104 provided by the embodiment of the present invention, the loss is calculated and the mask and pattern are optimized as follows:
1) inputting a batch of samples with the disturbance added into the pre-trained target model to obtain the predicted values, the batch size being M×N, and calculating the average attack success rate acc of the batch of samples;
2) calculating the regularization loss_reg of the original samples and the disturbed samples using the L2 regularization method, and substituting loss_reg, the predicted values of the target model for the disturbed samples and the real labels of the samples into the no-target loss function to obtain the total loss; the no-target attack loss function is:

loss = -CE(f(x_adv), y_true) + L_p(x, x_adv)

where x is the original sample, x_adv is the sample with the universal disturbance added, y_true is the real category of the sample in one-hot vector form, f(·) is the pre-trained target model, f(x_adv) denotes the classification result of the target model for the disturbed sample x_adv, CE(·) is the cross-entropy loss, and L_p(·) denotes the L_p distance between the original sample x and the disturbed sample x_adv, where p takes the value 2;
3) minimizing the loss and optimizing the mask and pattern with an optimizer, the Adam optimizer being used for optimization;
4) when the two conditions acc < ε and loss_reg < reg_best are met at the same time, setting mask_best = mask, pattern_best = pattern and reg_best = loss_reg, where ε is the attack threshold with 0 < ε ≤ 1, reg_best is the best regularization value with an infinite initial value, and mask_best and pattern_best are respectively the current best mask and pattern, the initial values of their elements both being 0; ε is taken as 0.9999.
In S106 provided by the embodiment of the present invention, it is checked whether the early-stop mechanism is triggered; if the early-stop condition is met, S107 is executed, otherwise S103-S106 are repeated for a new iteration, as follows:
the initial values of acc_up, acc_down and early_stop_counter are all 0, the initial value of early_stop_reg_best is infinite, the initial values of acc_up_flag and acc_down_flag are both False, and the chosen patience is 10 and early_stop_patience is 20;
when the average attack success rate acc is greater than the attack threshold ε, acc_up is incremented and acc_down is set to zero; otherwise acc_down is incremented and acc_up is set to zero; if acc_up is greater than the set constant patience, acc_up is set to zero and acc_up_flag is set to True;
if acc_down is greater than patience, acc_down is set to zero and acc_down_flag is set to True; if reg_best is greater than or equal to early_stop_reg_best, early_stop_counter is incremented, otherwise early_stop_counter is set to zero;
the smaller of reg_best and early_stop_reg_best is obtained with the min() function and assigned to early_stop_reg_best; if acc_up_flag and acc_down_flag are both True and early_stop_counter is greater than early_stop_patience, the early-stop condition is met.
In S107 provided by the embodiment of the present invention, the final disturbance is calculated and saved as follows:
(1) judging whether the values of mask_best and pattern_best are None; if so, executing step (2), and if not, executing step (3);
(2) acquiring the mask and pattern of the last iteration, assigning the value of the mask to mask_best and the value of the pattern to pattern_best;
(3) multiplying the corresponding position elements of mask_best and pattern_best to calculate the final universal countermeasure disturbance;
(4) saving the universal countermeasure disturbance as an image using an image processing method: the array_to_img() function converts the universal countermeasure disturbance from array format into picture form, and the save() function saves the disturbance picture.
The technical solution of the present invention will be described in detail with reference to the following specific examples.
As shown in fig. 2, a specific process for generating a universal countermeasure disturbance based on steganography provided by the embodiment of the present invention is as follows:
step (1), obtaining an original sample set, a target model and a steganography model:
and (1a) selecting samples from a target data set containing M categories, randomly selecting N non-repeated samples for each category in the data set, and converting the selected samples into an original sample set X with the same size W X H X C to form an original sample set X with the size of M X N. In this example, the target dataset is preferably an ImageNet dataset, where M is 1000, N is 10, W is 224, and H is 224.
And (1b) acquiring a network structure and weight of a pre-trained target model to obtain the target model. VGG16 is preferred as the target model in this example.
And (1c) acquiring the network structure and weights of a pre-trained steganography model to obtain the steganography model. In this example the steganography model HNet(·) is preferred, which requires that the data formats of the original sample and the universal countermeasure disturbance image are converted into the [C, H, W] format using the transpose() function, where C denotes the channel, H denotes the height of the image and W denotes the width of the image; the original image and the disturbance image are spliced along the C dimension using the concat() function, and the spliced image is used as the input of the steganography model.
Step (2), initializing mask and pattern:
and (2a) randomly generating a single-channel MASK image MASK with the same size W × H as the original sample by using a random () function, and limiting the value of each element in the MASK within a given upper and lower bounds, wherein the upper bound MASK _ MAX is 1, and the lower bound MASK _ MIN is 0.
And (2b) randomly generating an RGB PATTERN image PATTERN with the same size W x H as the original sample by using a random () function, and limiting the value of each element in the PATTERN within a given upper and lower bound, wherein the upper bound PATTERN _ MAX is 255 and the lower bound PATTERN _ MIN is 0.
Step (3), calculating the disturbance from the mask and the pattern, and steganographically writing the disturbance into the original samples to obtain confrontation samples:
And (3a) multiplying the corresponding position elements of the mask and the pattern, denoted mask ⊙ pattern, to obtain the disturbance image.
And (3b) steganographically writing the universal countermeasure disturbance into the original sample using the pre-trained steganography model to obtain a confrontation sample. The steganography in this example is implemented as x_adv = HNet(mask ⊙ pattern, x), i.e. the original sample x and the universal countermeasure disturbance mask ⊙ pattern are spliced as required in step (1c), the spliced image is sent to the steganography model for steganography, and the universal countermeasure disturbance mask ⊙ pattern is steganographically written into the original sample x to obtain the confrontation sample after steganographic disturbance;
And (3c) performing disturbance steganography on each sample of the sample set in the way of step (3b), i.e. step (3b) needs to be executed M×N times.
And (4) calculating the loss and optimizing the mask and pattern:
And (4a) inputting a batch of samples with the disturbance added into the pre-trained target model to obtain the predicted values, the batch size being M×N, and calculating the average attack success rate acc of the batch of samples.
Step (4b), calculating the regularization loss_reg of the original samples and the disturbed samples using the L2 regularization method, and substituting loss_reg, the predicted values of the target model for the disturbed samples and the real labels of the samples into the no-target loss function to obtain the total loss. In this example, the no-target attack loss function is:

loss = -CE(f(x_adv), y_true) + L_p(x, x_adv)

where x is the original sample, x_adv is the sample with the universal disturbance added, y_true is the real category of the sample in one-hot vector form, f(·) is the pre-trained target model, f(x_adv) denotes the classification result of the target model for the disturbed sample x_adv, CE(·) is the cross-entropy loss, and L_p(·) denotes the L_p distance between the original sample x and the disturbed sample x_adv, where p takes the value 2.
And (4c), minimizing the loss and optimizing the mask and the pattern with an optimizer. The Adam optimizer is preferred in this example.
And (4d), when the two conditions acc < ε and loss_reg < reg_best are met at the same time, setting mask_best = mask, pattern_best = pattern and reg_best = loss_reg, where ε is the attack threshold with 0 < ε ≤ 1, reg_best is the best regularization value with an infinite initial value, and mask_best and pattern_best are respectively the current best mask and pattern, the initial values of their elements both being 0. In this example, ε = 0.9999 is preferred.
And (5), incrementing epoch; if epoch is less than the total number of iterations epochs, executing step (6), otherwise executing step (7), where epoch denotes the current iteration number, the initial value of epoch is 0, and epochs ≥ 500. In this example, epochs is preferably 500.
And (6), checking whether the early-stop mechanism is triggered; if the early-stop condition is met, executing step (7), otherwise repeating steps (3) to (6) for a new iteration. In this example, the initial values of acc_up, acc_down and early_stop_counter are all 0, the initial value of early_stop_reg_best is infinite, the initial values of acc_up_flag and acc_down_flag are both False, and preferably patience is 10 and early_stop_patience is 20. When the average attack success rate acc is greater than the attack threshold ε, acc_up is incremented and acc_down is set to zero; otherwise acc_down is incremented and acc_up is set to zero. If acc_up is greater than the set constant patience, acc_up is set to zero and acc_up_flag is set to True. If acc_down is greater than patience, acc_down is set to zero and acc_down_flag is set to True. If reg_best is greater than or equal to early_stop_reg_best, early_stop_counter is incremented, otherwise early_stop_counter is set to zero. The min() function is used to obtain the smaller of reg_best and early_stop_reg_best and assign it to early_stop_reg_best. If acc_up_flag and acc_down_flag are both True and early_stop_counter is greater than early_stop_patience, the early-stop condition is met.
And (7) calculating and storing the final disturbance:
and (7a) judging whether the values of the mask _ best and the pattern _ best are None, if so, executing the step (7b), and if not, executing the step (7 c).
And (7b) obtaining the mask and the pattern of the last iteration, assigning the value of the mask to a mask _ best, and assigning the value of the pattern to a pattern _ best.
And (7c) multiplying the position elements corresponding to the mask _ best and the pattern _ best, and calculating to obtain the final universal countermeasure disturbance.
And (7d) storing the universal countermeasure disturbance as an image by using an image processing method. In the embodiment, the array _ to _ img () function preferably converts the universal countermeasure disturbance in the array format into a picture form, and the save () function is used for saving the disturbed picture.
By combining the application of the method in the field of automatic driving and traffic sign recognition, the method provided by the embodiment of the invention comprises the following specific steps of:
step (1), obtaining an original sample set, a target model and a steganography model:
and (1a) selecting samples from a target data set containing M categories, randomly selecting N non-repeated samples for each category in the data set, and converting the selected samples into an original sample set X with the same size W X H X C to form an original sample set X with the size of M X N. In the present example, the german traffic sign data set is preferred as the target data set, where M is 43, N is 1000, W is 32, and H is 32.
And (1b) acquiring a network structure and weight of a pre-trained target model to obtain the target model. The VGG19 is preferred in this example as the target model, i.e., the deep neural network model of the automatic driving recognition system.
And (1c) acquiring the network structure and weights of a pre-trained steganography model to obtain the steganography model. In this example the steganography model HNet(·) is preferred, which requires that the data formats of the original sample and the universal countermeasure disturbance image are converted into the [C, H, W] format using the transpose() function, where C denotes the channel, H denotes the height of the image and W denotes the width of the image; the original image and the disturbance image are spliced along the C dimension using the concat() function, and the spliced image is used as the input of the steganography model.
Step (2), initializing mask and pattern:
and (2a) randomly generating a single-channel MASK image MASK with the same size W x H as the original sample by using a random () function, and limiting the value of each element in the MASK within a given upper and lower bounds, wherein the upper bound MASK _ MAX is 1 and the lower bound MASK _ MIN is.
And (2b) randomly generating an RGB PATTERN image PATTERN with the same size W x H as the original sample by using a random () function, and limiting the value of each element in the PATTERN within a given upper and lower bound, wherein the upper bound PATTERN _ MAX is 255 and the lower bound PATTERN _ MIN is 0.
Step (3), calculating disturbance by the mask and pattern, and steganographically writing the disturbance into the original sample to obtain a confrontation sample:
and (3a) multiplying the mask and the position element corresponding to the pattern, which indicates that the mask is mask [ < pattern ], to obtain a disturbed image.
And (3b) steganography of the general countermeasure disturbance into the original sample by using a pre-trained steganography model to obtain a countermeasure sample. The implementation of steganography in this example is xadvH, splicing the German traffic sign sample x and the universal anti-disturbance mask x according to the requirement in the step (1c), sending the spliced image into an steganography model for steganography, steganography the universal anti-disturbance mask x into the original sample x, and obtaining an anti-sample after steganography disturbance;
and (3c) performing disturbance steganography on each sample of the German traffic sign data set according to the mode of the step (3b), namely the step (3b) needs to be repeatedly executed M multiplied by N times.
And (4) calculating loss, optimizing mask and pattern:
and (4a) inputting a batch of samples added with disturbance into a pre-trained target model to obtain a prediction value, wherein the number of the batch of samples is M multiplied by N, and calculating the average attack success rate acc of the batch of samples.
Step (4b), calculating the regularization loss_reg of the original samples and the disturbed samples using the L2 regularization method, and substituting loss_reg, the predicted values of the target model for the disturbed samples and the real labels of the samples into the targeted loss function to obtain the total loss. In this example, a targeted attack loss function is chosen so that the traffic signs are recognized as a specified category:

loss = CE(f(x_adv), y_target) + L_p(x, x_adv)

where x is the original sample, x_adv is the sample with the universal disturbance added, y_target is the target category in one-hot vector form, preferably y_target = 0, f(·) is the pre-trained target model, f(x_adv) denotes the classification result of the target model for the disturbed sample x_adv, CE(·) is the cross-entropy loss, and L_p(·) denotes the L_p distance between the original sample x and the disturbed sample x_adv, where p is chosen as 2 in this example. A sketch of this targeted loss is given after step (4d) below.
And (4c), minimizing loss, and optimizing the mask and the pattern by using an optimizer. The Adam optimizer is preferred for this example.
And (4d), when the two conditions acc < ε and loss_reg < reg_best are met at the same time, setting mask_best = mask, pattern_best = pattern and reg_best = loss_reg, where ε is the attack threshold with 0 < ε ≤ 1, reg_best is the best regularization value with an infinite initial value, and mask_best and pattern_best are respectively the current best mask and pattern, the initial values of their elements both being 0. In this example, ε = 0.9999 is preferred.
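Mirroring the no-target loss sketch given earlier, a minimal sketch of the targeted variant used in step (4b) follows; PyTorch, the class-index form of y_target and the equal weighting of the two terms are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def targeted_loss(model, x, x_adv, target_class=0):
    """loss = CE(f(x_adv), y_target) + L2(x, x_adv), with y_target = 0 here,
    so minimizing it pushes every traffic-sign sample toward the chosen category."""
    y_target = torch.full((x_adv.shape[0],), target_class, dtype=torch.long)
    ce = F.cross_entropy(model(x_adv), y_target)  # targeted attack term
    loss_reg = torch.norm(x - x_adv, p=2)         # L2 regularization term
    return ce + loss_reg, loss_reg
```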
And (5), incrementing epoch; if epoch is less than the total number of iterations epochs, executing step (6), otherwise executing step (7), where epoch denotes the current iteration number, the initial value of epoch is 0, and epochs ≥ 500. In this example, epochs is preferably 500.
And (6), checking whether the early-stop mechanism is triggered; if the early-stop condition is met, executing step (7), otherwise repeating steps (3) to (6) for a new iteration. In this example, the initial values of acc_up, acc_down and early_stop_counter are all 0, the initial value of early_stop_reg_best is infinite, the initial values of acc_up_flag and acc_down_flag are both False, and preferably patience is 10 and early_stop_patience is 20. When the average attack success rate acc is greater than the attack threshold ε, acc_up is incremented and acc_down is set to zero; otherwise acc_down is incremented and acc_up is set to zero. If acc_up is greater than the set constant patience, acc_up is set to zero and acc_up_flag is set to True. If acc_down is greater than patience, acc_down is set to zero and acc_down_flag is set to True. If reg_best is greater than or equal to early_stop_reg_best, early_stop_counter is incremented, otherwise early_stop_counter is set to zero. The min() function is used to obtain the smaller of reg_best and early_stop_reg_best and assign it to early_stop_reg_best. If acc_up_flag and acc_down_flag are both True and early_stop_counter is greater than early_stop_patience, the early-stop condition is met.
And (7) calculating and storing the final disturbance:
and (7a) judging whether the values of the mask _ best and the pattern _ best are None, if so, executing the step (7b), and if not, executing the step (7 c).
And (7b) obtaining the mask and the pattern of the last iteration, assigning the value of the mask to a mask _ best, and assigning the value of the pattern to a pattern _ best.
And (7c) multiplying the position elements corresponding to the mask _ best and the pattern _ best, and calculating to obtain the final universal countermeasure disturbance.
And (7d) storing the universal countermeasure disturbance as an image by using an image processing method. In the embodiment, the array _ to _ img () function is preferably used for converting the general countermeasure disturbance in the array format into a picture form, and the save () function is used for saving the disturbed picture.
The technical effects of the present invention will be described in detail with reference to experiments.
The universal disturbance generation method of the invention was applied to carry out no-target attacks on four deep neural network models, and the attack success rate and transferability were tested and compared with the existing SUAP universal countermeasure disturbance generation method. SUAP was tested on 5 different secret images and the four models. The results show that for the ResNet50 and Inception-v3 models, the proposed method almost reaches the best result of SUAP, differing by only about 2%, while the attack success rate for the remaining four secret images is better than that of SUAP. Moreover, the SUAP method has considerable limitations: its attack effect depends heavily on the steganographic content. The disturbance generated by the invention has a weak dependence on the steganographic content and can achieve a high attack success rate while guaranteeing concealment, which is clearly superior to the existing method.
TABLE 1: attack success rate comparison with SUAP (the table images of the original publication are not reproduced here).
It should be noted that embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portions may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. It will be appreciated by those skilled in the art that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, for example such code provided on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware) or a data carrier such as an optical or electronic signal carrier. The apparatus of the present invention and its modules may be implemented by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, or software executed by various types of processors, or a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the embodiments of the present invention, and the scope of the present invention should not be limited thereto, and any modifications, equivalents and improvements made by those skilled in the art within the technical scope of the present invention as disclosed in the present invention should be covered by the scope of the present invention.

Claims (10)

1. A universal countermeasure disturbance generation method based on steganography is characterized by comprising the following steps:
step one, acquiring an original sample set, a target model and a steganography model;
step two, initializing a mask and a pattern;
step three, calculating the disturbance from the mask and the pattern, and steganographically writing the disturbance into the original samples to obtain confrontation samples;
step four, calculating the loss and optimizing the mask and pattern;
step five, incrementing epoch; if epoch is less than the total number of iterations epochs, executing step six, otherwise executing step seven, wherein epoch denotes the current iteration number, the initial value of epoch is 0, and epochs ≥ 500;
step six, checking whether an early stop mechanism is triggered, if the early stop mechanism meets an early stop condition, executing step seven, and if the early stop mechanism does not meet the early stop condition, repeating the step three to the step six to perform a new iteration;
and step seven, calculating and storing the final disturbance.
2. The universal countermeasure disturbance generation method based on steganography according to claim 1, wherein in the first step the original sample set, the target model and the steganography model are obtained as follows:
firstly, selecting samples from a target data set containing M categories: N non-repeated samples are randomly selected for each category in the data set, and the selected samples are resized to the same size W×H×C, forming an original sample set X of size M×N; the ImageNet data set is taken as the target data set, where M is 1000, N is 10, W is 224, and H is 224;
Secondly, acquiring a network structure and weight of a pre-trained target model to obtain the target model, and taking VGG16 as the target model;
thirdly, acquiring a network structure and weight of a pre-trained steganographic model to obtain the steganographic model; a steganographic model HNet (-) converts the data formats of the original sample and the universal anti-disturbance image into a [ C, H, W ] format by using a transpose () function, wherein C represents a channel, H represents the height of the image, and W represents the width of the image; and splicing the original image and the disturbed image on the dimension of C by using a concat () function, wherein the spliced image is used as the input of the steganographic model.
3. The steganography-based universal countermeasure disturbance generation method according to claim 1, wherein the initialization mask and pattern processes in the second step are as follows:
a. randomly generating, with the random() function, a single-channel mask image mask of the same size W×H as the original samples, and limiting the value of each element in the mask within the given upper and lower bounds, where the upper bound MASK_MAX is 1 and the lower bound MASK_MIN is 0;
b. randomly generating, with the random() function, an RGB pattern image pattern of the same size W×H as the original samples, and limiting the value of each element in the pattern within the given upper and lower bounds, where the upper bound PATTERN_MAX is 255 and the lower bound PATTERN_MIN is 0.
4. The method for generating universal antagonistic perturbation based on steganography according to claim 1, wherein the process of calculating perturbation from mask and pattern in the third step and steganography of perturbation to original sample to obtain antagonistic sample comprises:
A. multiplying the mask and the pattern element-wise at corresponding positions, the result being the universal perturbation mask ⊙ pattern, thereby obtaining the perturbation image;
B. steganographically embedding the universal adversarial perturbation into an original sample using the pre-trained steganography model to obtain an adversarial sample; the embedding is implemented as x_adv = HNet(concat(x, mask ⊙ pattern)): the original sample x and the universal perturbation mask ⊙ pattern are concatenated as in the third step of step S101, the concatenated image is fed into the steganography model, and the universal perturbation mask ⊙ pattern is hidden in the original sample x, yielding the adversarial sample carrying the steganographic perturbation;
C. performing the perturbation embedding of step B on every sample of the sample set, i.e. step B is executed M×N times in total (see the illustrative sketch after this claim).
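A non-authoritative sketch of steps A to C, assuming NumPy arrays and a hypothetical callable stego_model standing in for the pre-trained HNet; shapes and names are illustrative.

```python
import numpy as np

def embed_perturbation(x_batch, mask, pattern, stego_model):
    """Sketch of steps A-C of claim 4.

    x_batch      original samples, shape [M*N, C, H, W]
    mask         single-channel mask, shape [1, H, W]
    pattern      RGB pattern, shape [C, H, W]
    stego_model  stand-in for the pre-trained steganography network HNet;
                 maps a [2C, H, W] array to a [C, H, W] stego image
    """
    # A. element-wise product of mask and pattern gives the universal perturbation image
    perturbation = mask * pattern                      # broadcasts to [C, H, W]

    adversarial = []
    for x in x_batch:                                  # C. repeated for each of the M*N samples
        # B. concatenate sample and perturbation on the channel axis and let the
        #    steganography model hide the perturbation inside the sample
        stego_input = np.concatenate([x, perturbation], axis=0)
        adversarial.append(stego_model(stego_input))
    return np.stack(adversarial), perturbation
```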
5. The steganography-based universal adversarial perturbation generation method according to claim 1, wherein the process in step four of calculating the loss and optimizing the mask and the pattern is as follows:
1) inputting a batch of perturbed samples into the pre-trained target model to obtain predictions, wherein the batch contains M×N samples, and calculating the average attack success rate acc over the batch;
2) calculating the regularization term loss_reg between the original samples and the perturbed samples using L2 regularization, and substituting loss_reg, the target model's predictions for the perturbed samples and the samples' true labels into the non-targeted loss function to obtain the total loss;
3) minimizing the loss and optimizing the mask and the pattern with an optimizer; the Adam optimizer is used;
4) when the two conditions acc ≥ ε and loss_reg < reg_best are met simultaneously, setting mask_best = mask, pattern_best = pattern and reg_best = loss_reg, wherein ε is the attack threshold with 0 < ε ≤ 1, reg_best is the best regularization value so far with initial value infinity, mask_best and pattern_best are the best mask and pattern so far with all of their elements initialized to 0, and ε = 0.9999.
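One way such an optimization step could look, sketched with PyTorch and Adam; it assumes x_adv was produced from mask and pattern as in claim 4 (so gradients flow back to them), integer class labels, and a plain dictionary state for the best-so-far values. Names are illustrative, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def training_step(x, x_adv, y_true, target_model, optimizer, mask, pattern,
                  state, epsilon=0.9999):
    """One optimization step in the spirit of claim 5 (illustrative only)."""
    logits = target_model(x_adv)                                     # 1) predictions on the perturbed batch
    acc = (logits.argmax(dim=1) != y_true).float().mean().item()     #    average attack success rate

    loss_reg = torch.norm((x_adv - x).flatten(1), dim=1).mean()      # 2) L2 regularization term
    loss = -F.cross_entropy(logits, y_true) + loss_reg               #    non-targeted total loss

    optimizer.zero_grad()                                            # 3) Adam step on mask and pattern
    loss.backward()
    optimizer.step()

    if acc >= epsilon and loss_reg.item() < state["reg_best"]:       # 4) keep the best mask/pattern so far
        state["mask_best"] = mask.detach().clone()
        state["pattern_best"] = pattern.detach().clone()
        state["reg_best"] = loss_reg.item()
    return loss.item(), acc
```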
6. The steganography-based universal adversarial perturbation generation method of claim 5, wherein the non-targeted attack loss function is:
loss = −CE(f(x_adv), y_true) + L_p(x, x_adv)
where x is the original sample, x_adv is the sample with the universal perturbation added, y_true is the true class of the sample in one-hot vector form, f(·) is the pre-trained target model, f(x_adv) denotes the target model's prediction for the perturbed sample x_adv, CE(·) is the cross-entropy loss, and L_p(·) denotes the L_p distance between the original sample x and the perturbed sample x_adv, where p takes the value 2.
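Written out with the standard definitions of cross entropy (for a one-hot y_true over M classes) and the L2 distance, the two terms are as follows; this expansion is standard material, not quoted from the patent.

```latex
\mathrm{CE}\bigl(f(x_{adv}),\, y_{true}\bigr) = -\sum_{i=1}^{M} y_{true}^{(i)} \log f(x_{adv})^{(i)},
\qquad
L_{2}(x, x_{adv}) = \lVert x - x_{adv} \rVert_{2}
```

Minimizing the total loss therefore drives the prediction f(x_adv) away from y_true (the negated cross-entropy term) while keeping the perturbed sample close to the original (the L2 term).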
7. The steganography-based universal adversarial perturbation generation method as claimed in claim 1, wherein the process in step six of checking whether the early-stop mechanism is triggered, executing step seven if the early-stop condition is satisfied, and otherwise repeating step three to step six for a new iteration, is as follows:
the initial values of acc_up, acc_down and early_stop_counter are all 0, the initial value of early_stop_reg_best is infinity, the initial values of acc_up_flag and acc_down_flag are both False, the chosen patience is 10, and early_stop_patience is 20;
when the average attack success rate acc is larger than the attack threshold ε, acc_up is incremented and acc_down is set to zero; otherwise acc_down is incremented and acc_up is set to zero; if acc_up exceeds the constant patience, acc_up is set to zero and acc_up_flag is set to True;
if acc_down exceeds patience, acc_down is set to zero and acc_down_flag is set to True; if reg_best is greater than or equal to early_stop_reg_best, early_stop_counter is incremented, otherwise early_stop_counter is set to zero;
the smaller of reg_best and early_stop_reg_best is obtained with the min() function and assigned to early_stop_reg_best; if acc_up_flag and acc_down_flag are both True and early_stop_counter is greater than early_stop_patience, the early-stop condition is satisfied (a sketch of this bookkeeping follows this claim).
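Sketched in Python, with the counters and flags kept in a plain dictionary; the state structure and the function name are illustrative, not taken from the patent.

```python
def check_early_stop(acc, reg_best, state, epsilon=0.9999,
                     patience=10, early_stop_patience=20):
    """Early-stop bookkeeping following claim 7 (illustrative sketch)."""
    # Count consecutive periods above / below the attack threshold.
    if acc > epsilon:
        state["acc_up"] += 1
        state["acc_down"] = 0
    else:
        state["acc_down"] += 1
        state["acc_up"] = 0

    if state["acc_up"] > patience:
        state["acc_up"] = 0
        state["acc_up_flag"] = True
    if state["acc_down"] > patience:
        state["acc_down"] = 0
        state["acc_down_flag"] = True

    # Count how long the best regularization value has stagnated.
    if reg_best >= state["early_stop_reg_best"]:
        state["early_stop_counter"] += 1
    else:
        state["early_stop_counter"] = 0
    state["early_stop_reg_best"] = min(reg_best, state["early_stop_reg_best"])

    # Early-stop condition of claim 7.
    return (state["acc_up_flag"] and state["acc_down_flag"]
            and state["early_stop_counter"] > early_stop_patience)
```

An initial state would be, for example, {"acc_up": 0, "acc_down": 0, "acc_up_flag": False, "acc_down_flag": False, "early_stop_counter": 0, "early_stop_reg_best": float("inf")}.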
8. The steganography-based universal adversarial perturbation generation method according to claim 1, wherein the process in step seven of calculating and saving the final perturbation is:
(1) judging whether mask_best and pattern_best are None; if so, executing step (2), otherwise executing step (3);
(2) obtaining the mask and pattern of the last iteration, assigning the mask to mask_best and the pattern to pattern_best;
(3) multiplying mask_best and pattern_best element-wise at corresponding positions to obtain the final universal adversarial perturbation;
(4) saving the universal adversarial perturbation as an image: the array_to_img() function converts the perturbation from array format into an image, and the resulting image is saved with the save() function.
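An illustrative sketch of these four steps; it assumes the Keras array_to_img utility (the exact import path depends on the installed Keras/TensorFlow version) and a hypothetical output file name.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import array_to_img  # Keras image utility

def save_final_perturbation(mask_best, pattern_best, last_mask, last_pattern,
                            path="universal_perturbation.png"):
    """Steps (1)-(4) of claim 8 in sketch form.

    mask_best / last_mask        shape [H, W, 1]
    pattern_best / last_pattern  shape [H, W, C]
    """
    # (1)/(2) fall back to the last iteration's mask and pattern if no best was recorded
    if mask_best is None or pattern_best is None:
        mask_best, pattern_best = last_mask, last_pattern

    # (3) element-wise product gives the final universal adversarial perturbation
    perturbation = mask_best * pattern_best            # array of shape [H, W, C]

    # (4) convert the array to a PIL image and save it
    array_to_img(perturbation).save(path)
    return perturbation
```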
9. A program storage medium for receiving user input, the stored computer program causing an electronic device to execute the steganography-based universal adversarial perturbation generation method of any one of claims 1 to 8, comprising the steps of:
step one, obtaining an original sample set, a target model and a steganography model;
step two, initializing a mask and a pattern;
step three, calculating a perturbation from the mask and the pattern, and steganographically embedding the perturbation into the original samples to obtain adversarial samples;
step four, calculating the loss and optimizing the mask and the pattern;
step five, incrementing epoch; if epoch is smaller than the total number of iterations epochs, executing step six, otherwise executing step seven, wherein epoch denotes the current iteration count, its initial value is 0, and epochs ≥ 500;
step six, checking whether the early-stop mechanism is triggered; if the early-stop condition is satisfied, executing step seven, otherwise repeating step three to step six for a new iteration;
and step seven, calculating and saving the final perturbation.
10. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the steganography-based universal adversarial perturbation generation method according to any one of claims 1 to 8.
CN202210019738.7A 2022-01-10 2022-01-10 Universal countermeasure disturbance generation method based on steganography, medium and computer equipment Pending CN114758187A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210019738.7A CN114758187A (en) 2022-01-10 2022-01-10 Universal countermeasure disturbance generation method based on steganography, medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210019738.7A CN114758187A (en) 2022-01-10 2022-01-10 Universal countermeasure disturbance generation method based on steganography, medium and computer equipment

Publications (1)

Publication Number Publication Date
CN114758187A true CN114758187A (en) 2022-07-15

Family

ID=82325603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210019738.7A Pending CN114758187A (en) 2022-01-10 2022-01-10 Universal countermeasure disturbance generation method based on steganography, medium and computer equipment

Country Status (1)

Country Link
CN (1) CN114758187A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018208763A1 (en) * 2018-06-04 2019-12-05 Robert Bosch Gmbh Method, apparatus and computer program for operating a machine learning system
CN111680292A (en) * 2020-06-10 2020-09-18 北京计算机技术及应用研究所 Confrontation sample generation method based on high-concealment universal disturbance
CN112183671A (en) * 2020-11-05 2021-01-05 四川大学 Target attack counterattack sample generation method for deep learning model
CN112287323A (en) * 2020-10-27 2021-01-29 西安电子科技大学 Voice verification code generation method based on generation of countermeasure network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018208763A1 (en) * 2018-06-04 2019-12-05 Robert Bosch Gmbh Method, apparatus and computer program for operating a machine learning system
CN111680292A (en) * 2020-06-10 2020-09-18 北京计算机技术及应用研究所 Confrontation sample generation method based on high-concealment universal disturbance
CN112287323A (en) * 2020-10-27 2021-01-29 西安电子科技大学 Voice verification code generation method based on generation of countermeasure network
CN112183671A (en) * 2020-11-05 2021-01-05 四川大学 Target attack counterattack sample generation method for deep learning model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SALAH UD DIN ET AL: "Steganographic universal adversarial perturbations", PATTERN RECOGNITION LETTERS, vol. 135, 25 April 2020 (2020-04-25), pages 146-152, XP086190378, DOI: 10.1016/j.patrec.2020.04.025 *
付章杰 et al.: "Research on image steganography methods based on deep learning" (基于深度学习的图像隐写方法研究), 计算机学报 (Chinese Journal of Computers), vol. 49, no. 03, 15 September 2020 (2020-09-15), pages 1656-1672 *

Similar Documents

Publication Publication Date Title
CN111310802B (en) Anti-attack defense training method based on generation of anti-network
CN113554089B (en) Image classification countermeasure sample defense method and system and data processing terminal
CN109902617B (en) Picture identification method and device, computer equipment and medium
CN111598182B (en) Method, device, equipment and medium for training neural network and image recognition
CN112633280B (en) Countermeasure sample generation method and system
CN111783551A (en) Confrontation sample defense method based on Bayes convolutional neural network
CN110991568A (en) Target identification method, device, equipment and storage medium
CN115439719B (en) Deep learning model defense method and model for resisting attack
CN112329832B (en) Passive positioning target track data enhancement method and system based on deep convolution generation countermeasure network
CN113254927B (en) Model processing method and device based on network defense and storage medium
CN112926661A (en) Method for enhancing image classification robustness
CN114648675A (en) Countermeasure training method, image processing method, apparatus, device, and medium
CN114529890A (en) State detection method and device, electronic equipment and storage medium
CN113935396A (en) Manifold theory-based method and related device for resisting sample attack
CN114758187A (en) Universal countermeasure disturbance generation method based on steganography, medium and computer equipment
CN111144243B (en) Household pattern recognition method and device based on counterstudy
CN116343007A (en) Target detection method, device, equipment and storage medium
CN115620100A (en) Active learning-based neural network black box attack method
CN115641584A (en) Foggy day image identification method and device
CN112785478B (en) Hidden information detection method and system based on generation of embedded probability map
CN110009579B (en) Image restoration method and system based on brain storm optimization algorithm
CN114090968A (en) Ownership verification method and device for data set
CN114332982A (en) Face recognition model attack defense method, device, equipment and storage medium
CN113537463A (en) Countermeasure sample defense method and device based on data disturbance
CN113507466A (en) Method and system for defending backdoor attack by knowledge distillation based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination