CN114283341B - High-transferability adversarial sample generation method, system and terminal - Google Patents

High-transferability adversarial sample generation method, system and terminal

Info

Publication number
CN114283341B
CN114283341B · CN202210206305.2A · CN114283341A
Authority
CN
China
Prior art keywords
sample
model
gradient
target image
frequency information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210206305.2A
Other languages
Chinese (zh)
Other versions
CN114283341A (en)
Inventor
郑德生
陈继鑫
周永
柯武平
张秀荣
李政禹
温冬
吴欣隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202210206305.2A priority Critical patent/CN114283341B/en
Publication of CN114283341A publication Critical patent/CN114283341A/en
Application granted granted Critical
Publication of CN114283341B publication Critical patent/CN114283341B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method, a system and a terminal for generating highly transferable adversarial samples, belonging to the technical field of deep learning. A first sample is obtained by adding low-frequency information of a first target image to a randomly selected original image; a first-stage sampling model is randomly selected from a model pool; the first sample is input into the first-stage sampling model; the aggregation gradient of the sampling model is iteratively calculated according to the low-frequency information of the first target image added to the original image and then normalized; the adversarial sample generated by the sampling model is updated based on the gradient sign, its values are constrained, and a first-stage temporary adversarial sample is output as the input of the next-stage sampling model; the steps are repeated to obtain the final generated sample. The multi-level sampling models improve the transferability of the finally generated adversarial sample. Meanwhile, target low-frequency information of a class different from that of the sample is superimposed on the sample input into each sampling model, which further improves the transferability of the adversarial sample.

Description

High-transferability adversarial sample generation method, system and terminal
Technical Field
The invention relates to the technical field of deep learning, and in particular to a method, a system and a terminal for generating highly transferable adversarial samples.
Background
In recent years, with the rapid development of deep learning, intelligent applications in image recognition, face recognition, autonomous driving, object detection, medicine and other fields have become a reality. Deep learning achieves far higher performance than traditional methods in many fields; however, research has shown that deep learning is fragile and vulnerable to adversarial attacks because adversarial samples exist. In today's highly informatized and intelligent world, guaranteeing the security of deep learning algorithms is an unprecedented difficulty and pain point.
An adversarial sample is generated by carefully designing a perturbation; to the human eye it is indistinguishable from a real sample, yet it makes a deep learning model output, with high confidence, a result completely unrelated to the true information. The process of generating adversarial samples to attack a deep learning model is called an adversarial attack, and adversarial samples with a high attack success rate facilitate current research on the security of neural network models.
Adversarial attacks can be classified into white-box attacks and black-box attacks according to how much model information the attacker has. For white-box attacks, the attacker needs to know all the information of the attacked neural network model; black-box attacks are further divided into query-based black-box attacks and transfer-based black-box attacks according to whether the attacker can query the attacked model. The following problems exist in promoting the security research of current neural network models:
1. White-box attacks require complete access to the attacked model: its architecture, parameters, weights and so on must be known, which is generally not feasible in real application scenarios, and the generated adversarial samples often have low transferability because they overfit the attacked model.
2. Query-based black-box attacks require a large number of queries to the black-box model; such massive querying takes a great deal of time and may cause the attacked model to detect the anomaly.
3. For existing adversarial attack methods, the generated adversarial samples have low transferability to black-box models and a low attack success rate against most black-box models, so a single model cannot be trained effectively to learn the characteristics of adversarial samples and classify them accurately, i.e., a classification model with strong defensive capability cannot be obtained.
Disclosure of Invention
The invention aims to overcome the problem in the prior art that generated adversarial samples have low transferability in model attacks, and provides a method, a system and a terminal for generating adversarial samples with high transferability.
The purpose of the invention is achieved by the following technical scheme: a method for generating a highly transferable adversarial sample, comprising the following steps:
S1: adding low-frequency information of a first target image to a randomly selected original image to obtain a first sample, wherein the first target image and the original image belong to different categories;
S2: randomly selecting a first-stage sampling model from the model pool;
S3: inputting the first sample into the first-stage sampling model;
S4: iteratively calculating the aggregation gradient of the sampling model according to the low-frequency information of the first target image added to the original image, and normalizing the aggregation gradient;
S5: iteratively updating the adversarial sample generated by the sampling model based on the gradient sign, constraining the adversarial sample values, and outputting a first-stage temporary adversarial sample as the input of the next-stage sampling model;
S6: repeating steps S2-S5 for multiple cycles to obtain the final generated sample.
In one example, the obtaining of the low frequency information of the first target image includes:
selecting a first target image which is different from the original image;
and performing a two-dimensional discrete Fourier transform on the first target image to obtain its frequency-domain low-frequency information, then performing an inverse Fourier transform to obtain the low-frequency information of the first target image.
In an example, the adding the low-frequency information of the first target image to the randomly selected original image specifically includes:
and performing weight superposition on the low-frequency information of the first target image and the low-frequency information of the original image, wherein the weight coefficient of the original image is more than or equal to 0.8.
In one example, the calculation formula of the aggregation gradient is:

ḡ_{t+1} = (1 / (s·k)) · Σ_{i=1..s} Σ_{j=1..k} ∇ J(x_{i,j}^{source}, y_{one-hot})

wherein ḡ_{t+1} denotes the aggregation gradient obtained at the (t+1)-th iteration; s denotes the number of source maps x^{source} used to calculate the aggregation gradient; k denotes the number of scalings of the original image; ∇ denotes the gradient operator; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; J denotes the loss function; x^{source} denotes a sample input into the model (here x_{i,j}^{source} is the j-th scaled copy of the i-th source map derived from x_t^{adv}); and y_{one-hot} denotes the one-hot encoding of the true label corresponding to the first sample.
In one example, the calculation formula of the normalized aggregation gradient is:

g_{t+1} = ḡ_{t+1} / ||ḡ_{t+1}||

wherein ḡ_{t+1} denotes the aggregation gradient; t denotes the number of iterations; and ||·|| denotes the norm of ḡ_{t+1}.
In one example, the formula for iteratively updating the adversarial sample generated by the sampling model based on the gradient sign is:

x_{t+1}^{adv} = x_t^{adv} + σ · sign(g_{t+1})

wherein t denotes the number of iterations; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; x_{t+1}^{adv} denotes the adversarial sample obtained at the (t+1)-th iteration; σ denotes the perturbation step size; and g_{t+1} denotes the normalized aggregation gradient.
In one example, the calculation formula for constraining the adversarial sample values is:

x_{t+1}^{adv} = clip(x_{t+1}^{adv}, -1, 1)

wherein t denotes the number of iterations; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; x_{t+1}^{adv} denotes the adversarial sample obtained at the (t+1)-th iteration; and clip denotes the constraint range.
It should be further noted that the technical features corresponding to the above examples can be combined with each other or replaced to form a new technical solution.
The present application further includes a high-transferability adversarial sample generation system, the system comprising:
the model pool sampling module is used for randomly selecting a multi-level sampling model from the model pool, and each level of sampling model comprises a single model or a plurality of parallel models;
the aggregation-gradient adversarial sample generation module, which comprises an aggregation-gradient adversarial sample generation sub-module for each stage of sampling model; each sub-module is used for calculating the aggregation gradient of the corresponding stage's sampling model and normalizing it, updating the adversarial sample generated by the current-stage sampling model based on the gradient sign, constraining the adversarial sample values, and outputting a temporary adversarial sample or the final generated sample; the final generated sample is produced by the last-stage sampling model.
In an example, the system further comprises a model pool module to store a classification model of the ImageNet dataset.
The present invention also includes a storage medium having stored thereon computer instructions which, when executed, perform the steps of the method for generating highly transferable adversarial samples formed by any one or more of the above examples.
The present invention also includes a terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, the processor executing the computer instructions to perform the steps of the method for generating highly transferable adversarial samples formed by any one or more of the above examples.
Compared with the prior art, the invention has the beneficial effects that:
the more the integrated sampling model series are, the stronger universality is achieved in the disturbance effect of the confrontation sample when the confrontation sample is used for attacking the black box model, the transferability of the finally generated confrontation sample is improved, so that the attacking power of the confrontation sample to different models is improved, the finally generated confrontation sample is used for training the attacked model, and the model with high defense power can be obtained. Meanwhile, target low-frequency information different from the type of the sample is superposed on the sample input into the sampling model, and the gradient obtained in the process of calculating the aggregate gradient contains other types of target low-frequency information, so that when the gradient of the attacked model is calculated, the gradient contains image information of another target type, and the transferability of the resisting sample is further improved. Meanwhile, polymerization gradient calculation is introduced, overfitting of the white box model is reduced, and the generated confrontation sample has higher transferability.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention.
FIG. 1 is a flow chart of a method in an example of the invention;
FIG. 2 is a diagram of an adversarial sample generation model in accordance with an example of the present invention;
FIG. 3 is a flow chart of a method in another example of the invention;
FIG. 4 is a schematic diagram of an adversarial sample generation system in accordance with an example of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like are directions or positional relationships based on the drawings, and are only for convenience of description and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly and may denote, for example, a fixed connection, a detachable connection, or an integral connection; a mechanical or electrical connection; a direct connection, an indirect connection through an intervening medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific circumstances.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In one example, as shown in fig. 1, a method for generating a highly transferable adversarial sample specifically includes the following steps:
S1: adding low-frequency information of a first target image to a randomly selected original image to obtain a first sample, wherein the first target image and the original image belong to different categories;
S2: randomly selecting a first-stage sampling model from the model pool. In this example, a plurality of models are randomly drawn from the model pool as the sampling models at each level.
S3: inputting the first sample into the first-stage sampling model;
S4: iteratively calculating the aggregation gradient of the sampling model according to the low-frequency information of the first target image added to the original image, and normalizing the aggregation gradient. Specifically, in this step, the current iteration count of the model and the number of scalings of the sample need to be initialized, after which the model starts iterating to compute the adversarial sample and the aggregation gradient during the iterative process.
S5: iteratively updating the adversarial sample generated by the sampling model based on the gradient sign, constraining the adversarial sample values, and outputting a first-stage temporary adversarial sample as the input of the next-stage sampling model;
S6: repeating steps S2-S5 for m cycles to obtain the final generated sample. The larger m is, the higher the transferability of the finally generated adversarial sample; m is 3 in this example.
As an option, step S2 may be performed before step S1, and preferably, steps S1 and S2 can be performed simultaneously.
Further, the adversarial sample generation model formed by the multi-level sampling models of this example is shown in fig. 2 and comprises a first-level sampling model, a second-level sampling model and a third-level sampling model. The first-level sampling model comprises a model A, a model B, a model C and a model G randomly extracted from the model pool and arranged in parallel; the second-level sampling model comprises a model A, a model B, a model E and a model G randomly extracted from the model pool and arranged in parallel; the third-level sampling model comprises a model B, a model C, a model E and a model F randomly extracted from the model pool and arranged in parallel.
Further, the aggregation gradient calculation is divided into two cases. If a single model is extracted from the model pool, the sample is input into that single model; several sample predictions are obtained for the different iteration counts and scaling counts of the sample, the loss between each prediction and the sample's true class is calculated to obtain several loss values, several gradients are computed from these loss values, and they are summed and averaged to obtain the average gradient. If multiple models are extracted from the model pool, the sample is input into each of the models; several sample predictions are again obtained for the different iteration counts and scaling counts, the losses between the predictions and the true class are calculated to obtain several loss values, several gradients are computed from these loss values and averaged, and a single temporary adversarial sample that fits the perturbations of multiple models is generated based on the low-frequency information of multiple target images, so the transferability of the temporary adversarial sample is greatly improved.
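To make the multi-stage procedure concrete, the following Python/PyTorch sketch outlines the outer loop: m stages, each drawing n models from the pool and running T inner iterations. It is a minimal illustration rather than the patented implementation; the helper functions make_source_maps, aggregation_gradient and update_adversarial are assumed to behave as sketched in the later sections, and the defaults m = 3, T = 10, n = 4 merely follow the examples in this description.

import random

def generate_adversarial_sample(x, y, target_images, model_pool, sigma, m=3, T=10, n=4):
    # x: original image tensor, y: its true label, target_images: images of other classes
    x_adv = x.clone()
    for _ in range(m):                                 # m levels of sampling models
        models = random.sample(model_pool, n)          # randomly draw this stage's sampling models
        for _ in range(T):                             # T inner iterations within one stage
            sources = make_source_maps(x_adv, target_images)   # superimpose target low-frequency info
            g = aggregation_gradient(models, sources, y)       # average gradient over sources and scales
            x_adv = update_adversarial(x_adv, g, sigma)        # sign step + value constraint
        # the stage's temporary adversarial sample becomes the next stage's input
    return x_adv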
According to the method, each stage's sampling model uses several randomly extracted models to generate the adversarial sample, which strengthens the generalization of the adversarial sample and therefore its transferability. Meanwhile, the more stages of sampling models there are, the more universal the perturbation of the adversarial sample becomes when attacking black-box models, which improves the transferability of the finally generated adversarial sample and thus its attacking power against different models; training the attacked model with the finally generated adversarial samples then yields a model with strong defensive capability. In addition, low-frequency information of a target image belonging to a different class is superimposed on the sample input into each sampling model, so that when the gradient of the attacked model is calculated it contains image information of another target class, further improving the transferability of the adversarial sample. The aggregation gradient calculation automatically aggregates the gradients of the models into an average gradient, which reduces overfitting to the models and improves the transferability of the generated adversarial sample. The method does not need to access or query the black-box model; it can attack the black-box model directly, which makes the attack more robust and less likely to be detected as anomalous.
In an example, the present application further includes a model training method based on the highly transferable adversarial samples, which shares the same inventive concept as the above adversarial sample generation method and specifically includes:
training the neural network model with the adversarial samples so that the model learns the characteristics of the adversarial samples and classifies them accurately, thereby obtaining a model with strong defensive capability. Learning the characteristics of the adversarial samples means learning the perturbation features that distinguish them from the original samples, so the classification result is corrected, accurate classification is achieved, and the security of the neural network model is improved.
According to the method, the finally generated adversarial sample with high transferability is used to attack the black-box model, and the generated adversarial sample also achieves a high attack success rate against the black-box model.
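A minimal sketch of this adversarial-training idea is given below, assuming PyTorch; the optimiser, batching and loss choice are illustrative assumptions rather than details disclosed by the patent.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x_clean, x_adv, y):
    # Mix clean and generated adversarial samples in one batch so the model
    # learns the adversarial perturbation pattern while keeping the true labels.
    optimizer.zero_grad()
    inputs = torch.cat([x_clean, x_adv], dim=0)
    labels = torch.cat([y, y], dim=0)
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()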
In one example, the obtaining of the low frequency information of the first target image includes:
selecting a first target image of a class different from that of the original image; specifically, let the original image be X with true label (object class) Y, and arbitrarily extract a target image X_i with true label Y_i from the dataset such that Y ≠ Y_i;
and performing a two-dimensional discrete Fourier transform on the first target image to obtain its frequency-domain low-frequency information, then performing an inverse Fourier transform to obtain the low-frequency information of the first target image. Specifically, the target image X_i is transformed to the frequency domain with a two-dimensional discrete Fourier transform, low-pass filtering is used to obtain the low-frequency components of X_i, and the inverse Fourier transform is then used to obtain the low-frequency information map low(X_i). The transform is written as:
low(X_i) ← FFT(X_i)
wherein FFT denotes the Fourier transform. In this example, processing the input picture with the discrete Fourier transform reduces overfitting of the attack algorithm to the model and enhances the transferability of the generated adversarial sample.
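A possible implementation of this low-frequency extraction, written in Python with PyTorch's FFT routines, is sketched below. The circular low-pass mask and the cutoff radius r are assumptions; the patent only states that low-pass filtering is applied in the frequency domain.

import torch

def low_frequency(image: torch.Tensor, r: int = 32) -> torch.Tensor:
    # Return the low-frequency map low(X_i) of an image tensor shaped (..., H, W).
    h, w = image.shape[-2:]
    yy = torch.arange(h, device=image.device).view(-1, 1) - h // 2
    xx = torch.arange(w, device=image.device).view(1, -1) - w // 2
    mask = ((yy ** 2 + xx ** 2) <= r ** 2).float()                        # keep a disc of low frequencies
    spectrum = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))    # 2-D DFT, centred
    spectrum = spectrum * mask                                            # low-pass filtering in the frequency domain
    return torch.fft.ifft2(torch.fft.ifftshift(spectrum, dim=(-2, -1))).real   # back to the image domain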
In an example, adding the low-frequency information of the first target image to the randomly selected original image specifically includes:
performing weighted superposition of the low-frequency information of the first target image and the low-frequency information of the original image, wherein the weight coefficient of the original image is greater than or equal to 0.8. Specifically, in order to keep the original image dominant, only a small weight is assigned to the low-frequency information, which yields the first source map x_1^{source} used for calculating the gradient; the source maps x^{source} are the samples input into the model. The specific weighted superposition is calculated as:

x_1^{source} = X^{adv} + 0.2 · low(X_i)

where X^{adv} = X at the first iteration. A second source map x_2^{source}, a third source map x_3^{source}, and so on, are obtained from the original image in the same way. The more source maps there are, the more gradients are calculated, the higher the gradient quality, and the higher the transferability of the generated adversarial sample. It should be further noted that only a small proportion of the target image's low-frequency information is added to the original image; the original image plus the target low-frequency information is then input into the model as the input sample, and the recognition accuracy of the model on this input does not yet decrease, so the input sample cannot itself serve as an adversarial sample. The input sample therefore has to be turned into a temporary adversarial sample through a white-box attack on the integrated models, and the adversarial sample generated in this way has better transferability than existing white-box attacks. Meanwhile, the temporary adversarial sample generated by the first-stage sampling model is used as the input of the second-stage sampling model; this loop can be stopped at any time, and the more cycles are run, the higher the transferability of the adversarial sample.
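The superposition itself is a one-line operation once low_frequency is available; the sketch below builds one source map per supplied target image, which is an assumed reading of how the second and third source maps are formed, and the 0.2 weight follows the formula above.

def make_source_maps(x_adv, target_images, eta=0.2):
    # One source map per target image: x_source = x_adv + eta * low(X_i)
    return [x_adv + eta * low_frequency(t) for t in target_images]

For example, with three target images the call make_source_maps(x_adv, [X1, X2, X3]) returns the first, second and third source maps used for the gradient calculation.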
In an example, the process of generating adversarial samples with the sampling models at each level is shown in fig. 3, and the model and algorithm parameters need to be defined. In this application the perturbation size is ε = 16, i.e. the maximum infinity-norm difference between the generated adversarial sample and the original image is 16; the number of iterations is T = 10; and the step size is σ = ε/T = 1.6. On this basis, the loss of the sampling model on the input sample is further calculated; the specific calculation formula is:

J(x, y_{one-hot}) = (1/n) · Σ_{i=1..n} CE(M_i(x), y_{one-hot})

wherein J is the loss function; n is the number of models sampled from the model pool, i.e. the number of models in the current-stage sampling model; CE is the cross-entropy loss; M_i is the i-th model extracted from the model pool; x is the input value of the model, i.e. the input sample; and y_{one-hot} is the one-hot encoding of the true label corresponding to the input sample x. On this basis, the aggregation gradient of the sampling model is further calculated; the calculation formula is as follows:
ḡ_{t+1} = (1 / (s·k)) · Σ_{i=1..s} Σ_{j=1..k} ∇ J(x_{i,j}^{source}, y_{one-hot})

wherein ḡ_{t+1} denotes the aggregation gradient obtained at the (t+1)-th iteration; s denotes the number of source maps x^{source} used to calculate the aggregation gradient; k denotes the number of scalings of the original image; ∇ denotes the gradient operator; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; and x^{source} denotes the sample input into the model, where each source map is formed by adding the low-frequency information of the target image to the adversarial sample generated in the current iteration.
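The following sketch shows one way to realise the ensemble loss J and the aggregation gradient in PyTorch. It assumes batched (N, C, H, W) inputs, class-index labels (equivalent in effect to the one-hot encoding used above), and a dyadic scaling scheme x / 2^j for the k scaled copies; the exact scaling rule is an assumption, since the patent does not spell it out.

import torch
import torch.nn.functional as F

def ensemble_loss(models, x, y):
    # J(x, y): mean cross-entropy over the n models sampled at this stage
    return sum(F.cross_entropy(m(x), y) for m in models) / len(models)

def aggregation_gradient(models, source_maps, y, k=5):
    # Average the gradient over the s source maps and k scaled copies of each
    grads = []
    for x_src in source_maps:
        for j in range(k):
            x_in = (x_src / (2 ** j)).detach().requires_grad_(True)   # j-th scaled copy of the source map
            loss = ensemble_loss(models, x_in, y)
            grads.append(torch.autograd.grad(loss, x_in)[0])
    return torch.stack(grads).mean(dim=0)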
In one example, the calculation formula for the normalized aggregation gradient is:

g_{t+1} = ḡ_{t+1} / ||ḡ_{t+1}||

wherein t denotes the number of iterations and ||·|| denotes the norm of ḡ_{t+1}.
In one example, the calculation formula for iteratively updating the adversarial sample generated by the sampling model based on the gradient sign is:

x_{t+1}^{adv} = x_t^{adv} + σ · sign(g_{t+1})

wherein σ denotes the perturbation step size; the larger σ is, the stronger the attack of the resulting adversarial sample and the more blurred the picture.
In one example, the adversarial sample values are constrained to lie between (-1, 1); the specific calculation formula is:

x_{t+1}^{adv} = clip(x_{t+1}^{adv}, -1, 1)

wherein clip denotes the constraint range. When t = T = 10, the iteration ends, the temporary adversarial sample is output, and the calculation of the next-stage sampling model is ready to begin.
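Putting the normalisation, the signed update and the value constraint together gives the per-iteration sketch below. The norm used for normalisation is not specified above, so the L1-style normalisation here is an assumption (it does not change the sign step itself); the (-1, 1) clipping follows the constraint just described.

def update_adversarial(x_adv, agg_grad, sigma):
    g = agg_grad / (agg_grad.abs().sum() + 1e-12)   # normalise the aggregation gradient (norm order assumed)
    x_next = x_adv + sigma * g.sign()               # signed step of size sigma
    return x_next.clamp(-1.0, 1.0)                  # constrain adversarial sample values to (-1, 1)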
The present application further includes a high-transferability adversarial sample generation system, the system comprising:
the model pool sampling module is used for randomly selecting a multi-level sampling model from the model pool, and each level of sampling model comprises a single model or a plurality of parallel models;
the aggregation-gradient adversarial sample generation module, which comprises an aggregation-gradient adversarial sample generation sub-module for each stage of sampling model; each sub-module is used for calculating the aggregation gradient of the corresponding stage's sampling model and normalizing it, updating the adversarial sample generated by the current-stage sampling model based on the gradient sign, constraining the adversarial sample values, and outputting a temporary adversarial sample or the final generated sample; the final generated sample is produced by the last-stage sampling model.
In an example, the system further includes a model pool module (model pool), as shown in FIG. 4, for storing classification models trained on the ImageNet dataset, including but not limited to InceptionV3, InceptionV4, InceptionResNetV2, Xception and ResNetV2-101.
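A model-pool module could be backed by pretrained ImageNet classifiers, for example as in the sketch below, which assumes a recent torchvision; the patent's own list (InceptionV3, InceptionV4, InceptionResNetV2, Xception, ResNetV2-101) partly relies on architectures only available through third-party packages such as timm, so the models chosen here are merely illustrative stand-ins.

import torchvision.models as tvm

def build_model_pool():
    # Illustrative pool of pretrained ImageNet classifiers kept in evaluation mode.
    pool = [
        tvm.inception_v3(weights="IMAGENET1K_V1"),
        tvm.resnet101(weights="IMAGENET1K_V1"),
        tvm.densenet121(weights="IMAGENET1K_V1"),
        tvm.vgg16(weights="IMAGENET1K_V1"),
    ]
    for model in pool:
        model.eval()
    return pool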
The present application further includes a storage medium, which shares the same inventive concept as embodiment 1 and has stored thereon computer instructions which, when executed, perform the steps of the above method for generating highly transferable adversarial samples.
Based on such understanding, the technical solution of the present embodiment or parts of the technical solution may be essentially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The application also includes a terminal, which shares the same inventive concept as embodiment 1 and comprises a memory and a processor, the memory having stored thereon computer instructions executable on the processor; the processor, when executing the computer instructions, performs the steps of the above method for generating highly transferable adversarial samples. The processor may be a single-core or multi-core central processing unit, a specific integrated circuit, or one or more integrated circuits configured to implement the present invention.
Each functional unit in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above detailed description is for the purpose of describing the invention in detail, and it should not be construed that the detailed description is limited to the description, and it will be apparent to those skilled in the art that various modifications and substitutions can be made without departing from the spirit of the invention.

Claims (10)

1. A method for generating a highly transferable adversarial sample, characterized by comprising the following steps:
S1: adding low-frequency information of a first target image to a randomly selected original image to obtain a first sample, wherein the first target image and the original image belong to different categories;
S2: randomly selecting a first-stage sampling model from the model pool;
S3: inputting the first sample into the first-stage sampling model;
S4: iteratively calculating the aggregation gradient of the sampling model according to the low-frequency information of the first target image added to the original image, and normalizing the aggregation gradient;
S5: iteratively updating the adversarial sample generated by the sampling model based on the gradient sign, constraining the adversarial sample values, and outputting a first-stage temporary adversarial sample as the input of the next-stage sampling model;
S6: repeating steps S2-S5 for multiple cycles to obtain the final generated sample.
2. The method for generating a highly transferable adversarial sample according to claim 1, wherein the obtaining of the low-frequency information of the first target image comprises:
selecting a first target image which is different from the original image;
and performing a two-dimensional discrete Fourier transform on the first target image to obtain its frequency-domain low-frequency information, then performing an inverse Fourier transform to obtain the low-frequency information of the first target image.
3. The method for generating a highly transferable adversarial sample according to claim 1, wherein the adding of the low-frequency information of the first target image to the randomly selected original image specifically comprises:
and performing weight superposition on the low-frequency information of the first target image and the low-frequency information of the original image, wherein the weight coefficient of the original image is more than or equal to 0.8.
4. The method for generating a highly transferable adversarial sample according to claim 1, wherein the calculation formula of the aggregation gradient is:

ḡ_{t+1} = (1 / (s·k)) · Σ_{i=1..s} Σ_{j=1..k} ∇ J(x_{i,j}^{source}, y_{one-hot})

wherein ḡ_{t+1} denotes the aggregation gradient obtained at the (t+1)-th iteration; s denotes the number of source maps x^{source} used to calculate the aggregation gradient; k denotes the number of scalings of the original image; ∇ denotes the gradient operator; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; J denotes the loss function; x^{source} denotes a sample input into the model; and y_{one-hot} denotes the one-hot encoding of the true label corresponding to the first sample.
5. The method for generating a highly transferable adversarial sample according to claim 1, wherein the calculation formula of the normalized aggregation gradient is:

g_{t+1} = ḡ_{t+1} / ||ḡ_{t+1}||

wherein ḡ_{t+1} denotes the aggregation gradient; t denotes the number of iterations; and ||·|| denotes the norm of ḡ_{t+1}.
6. The method for generating a highly transferable adversarial sample according to claim 1, wherein the calculation formula for iteratively updating the adversarial sample generated by the sampling model based on the gradient sign is:

x_{t+1}^{adv} = x_t^{adv} + σ · sign(g_{t+1})

wherein t denotes the number of iterations; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; x_{t+1}^{adv} denotes the adversarial sample obtained at the (t+1)-th iteration; σ denotes the perturbation step size; and g_{t+1} denotes the normalized aggregation gradient.
7. The method for generating a highly transferable adversarial sample according to claim 1, wherein the calculation formula for constraining the adversarial sample values is:

x_{t+1}^{adv} = clip(x_{t+1}^{adv}, -1, 1)

wherein t denotes the number of iterations; x_t^{adv} denotes the adversarial sample obtained at the t-th iteration; x_{t+1}^{adv} denotes the adversarial sample obtained at the (t+1)-th iteration; and clip denotes the constraint range.
8. A high-transferability adversarial sample generation system, characterized in that the system comprises:
the model pool sampling module is used for randomly selecting a multi-level sampling model from the model pool, and each level of sampling model comprises a single model or a plurality of parallel models;
the aggregation-gradient adversarial sample generation module, which comprises an aggregation-gradient adversarial sample generation sub-module for each stage of sampling model; each sub-module is used for calculating the aggregation gradient of the corresponding stage's sampling model and normalizing it, updating the adversarial sample generated by the current-stage sampling model based on the gradient sign, constraining the adversarial sample values, and outputting a temporary adversarial sample or the final generated sample; the final generated sample is produced by the last-stage sampling model.
9. The system of claim 8, wherein the system further includes a model pool module for storing classification models of the ImageNet dataset.
10. A terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, characterized in that the processor, when executing the computer instructions, performs the steps of the method for generating a highly transferable adversarial sample according to any one of claims 1-7.
CN202210206305.2A 2022-03-04 2022-03-04 High-transferability adversarial sample generation method, system and terminal Active CN114283341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210206305.2A CN114283341B (en) 2022-03-04 2022-03-04 High-transferability adversarial sample generation method, system and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210206305.2A CN114283341B (en) 2022-03-04 2022-03-04 High-transferability adversarial sample generation method, system and terminal

Publications (2)

Publication Number Publication Date
CN114283341A CN114283341A (en) 2022-04-05
CN114283341B true CN114283341B (en) 2022-05-17

Family

ID=80882199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210206305.2A Active CN114283341B (en) 2022-03-04 2022-03-04 High-transferability confrontation sample generation method, system and terminal

Country Status (1)

Country Link
CN (1) CN114283341B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115905B (en) * 2022-06-13 2023-06-27 苏州大学 High-mobility image countermeasure sample generation method based on generation model
CN114861893B (en) * 2022-07-07 2022-09-23 西南石油大学 Multi-channel aggregated countermeasure sample generation method, system and terminal
CN115439377B (en) * 2022-11-08 2023-03-24 电子科技大学 Method for enhancing resistance to image sample migration attack
CN116543268B (en) * 2023-07-04 2023-09-15 西南石油大学 Channel enhancement joint transformation-based countermeasure sample generation method and terminal
CN117523342B (en) * 2024-01-04 2024-04-16 南京信息工程大学 High-mobility countermeasure sample generation method, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625820A (en) * 2020-05-29 2020-09-04 华东师范大学 Federal defense method based on AIoT-oriented security
CN112116026A (en) * 2020-09-28 2020-12-22 西南石油大学 Countermeasure sample generation method, system, storage medium and device
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Generation method of countermeasure sample, training method of neural network, training device of neural network and equipment
CN113407939A (en) * 2021-06-17 2021-09-17 电子科技大学 Substitution model automatic selection method facing black box attack, storage medium and terminal
CN113673324A (en) * 2021-07-13 2021-11-19 复旦大学 Video identification model attack method based on time sequence movement

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11568282B2 (en) * 2019-09-24 2023-01-31 International Business Machines Corporation Mitigating adversarial effects in machine learning systems
US20210256422A1 (en) * 2020-02-19 2021-08-19 Google Llc Predicting Machine-Learned Model Performance from the Parameter Values of the Model
EP3926553A1 (en) * 2020-06-19 2021-12-22 Siemens Aktiengesellschaft Post-processing output data of a classifier

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625820A (en) * 2020-05-29 2020-09-04 华东师范大学 Federal defense method based on AIoT-oriented security
CN112116026A (en) * 2020-09-28 2020-12-22 西南石油大学 Countermeasure sample generation method, system, storage medium and device
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Generation method of countermeasure sample, training method of neural network, training device of neural network and equipment
CN113407939A (en) * 2021-06-17 2021-09-17 电子科技大学 Substitution model automatic selection method facing black box attack, storage medium and terminal
CN113673324A (en) * 2021-07-13 2021-11-19 复旦大学 Video identification model attack method based on time sequence movement

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Biying Deng et al., "Adversarial Examples Generation Algorithm through DCGAN", Intelligent Automation & Soft Computing, 2021-12-31, full text *
Yinpeng Dong et al., "Boosting Adversarial Attacks with Momentum", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018-12-17, full text *
Lai Yanling, "Adversarial Example Defense Model Based on U-Net" (基于U-Net的对抗样本防御模型), Computer Engineering (《计算机工程》), 2021-12-21, full text *
Ren Kui et al., "Adversarial Attacks and Defenses in Deep Learning" (深度学习中的对抗性攻击和防御), Engineering, 2020-03-15, full text *

Also Published As

Publication number Publication date
CN114283341A (en) 2022-04-05

Similar Documents

Publication Publication Date Title
CN114283341B (en) High-transferability adversarial sample generation method, system and terminal
Van Lierde et al. Scalable spectral clustering for overlapping community detection in large-scale networks
CN111950408B (en) Finger vein image recognition method and device based on rule diagram and storage medium
Van Hieu et al. Automatic plant image identification of Vietnamese species using deep learning models
CN113066002A (en) Generation method of countermeasure sample, training method of neural network, training device of neural network and equipment
CN110135681A (en) Risk subscribers recognition methods, device, readable storage medium storing program for executing and terminal device
CN111047054A (en) Two-stage countermeasure knowledge migration-based countermeasure sample defense method
CN115115905A (en) High-mobility image countermeasure sample generation method based on generation model
CN114861893B (en) Multi-channel aggregated countermeasure sample generation method, system and terminal
CN114399630A (en) Countercheck sample generation method based on belief attack and significant area disturbance limitation
CN113111963A (en) Method for re-identifying pedestrian by black box attack
CN116743493A (en) Network intrusion detection model construction method and network intrusion detection method
CN116545764B (en) Abnormal data detection method, system and equipment of industrial Internet
Shi et al. Deep message passing on sets
Putra et al. Multilevel neural network for reducing expected inference time
CN110852102B (en) Chinese part-of-speech tagging method and device, storage medium and electronic equipment
Saeed Visual similarity-based phishing detection using deep learning
CN105205487B (en) A kind of image processing method and device
CN115952493A (en) Reverse attack method and attack device for black box model and storage medium
CN115758337A (en) Back door real-time monitoring method based on timing diagram convolutional network, electronic equipment and medium
CN114925765A (en) Construction method, device, equipment and storage medium of antagonism integrated classification model
CN112528068A (en) Voiceprint feature storage method, voiceprint feature matching method and device and electronic equipment
CN113378985A (en) Countermeasure sample detection method and device based on layer-by-layer correlation propagation
CN112257677A (en) Method and device for processing deep learning task in big data cluster
Chemmakha et al. A Novel Hybrid Architecture of Conditional Tabular Generative Adversarial Network and 1D Convolution Neural Network for Enhanced Attack Detection in IoT Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant