CN111414964A - Image security identification method based on defense sample - Google Patents

Image security identification method based on defense sample

Info

Publication number
CN111414964A
CN111414964A CN202010206429.1A
Authority
CN
China
Prior art keywords
image
disturbance
pictures
sample
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010206429.1A
Other languages
Chinese (zh)
Inventor
汪昕
金鑫
黄横
时超
陈力
蒋尚秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Golden Bridge Info Tech Co ltd
Original Assignee
Shanghai Golden Bridge Info Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Golden Bridge Info Tech Co ltd filed Critical Shanghai Golden Bridge Info Tech Co ltd
Priority to CN202010206429.1A priority Critical patent/CN111414964A/en
Publication of CN111414964A publication Critical patent/CN111414964A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image security identification method based on adversarial sample defense, which comprises the following steps: step 1, collecting an image data set; step 2, generating adversarial samples by a one-pixel attack method, in which a differential evolution algorithm iteratively modifies pixels of each test-set image to generate sub-images, the attack effect of each sub-image is tested, and the sub-image with the best attack effect is taken as the adversarial sample; step 3, generating adversarial samples by a universal perturbation generation method; step 4, generating an adversarial test set based on the adversarial samples; step 5, using the training-set image data as training data to fine-tune the original pre-trained model; and step 6, performing image recognition on the test set and checking the recognition effect. The method has good defense capability against adversarial perturbations generated by the one-pixel attack and excellent defense capability against adversarial samples generated by universal perturbations, so that universal perturbations no longer affect the resulting image recognition model; it can be used for recognition and classification of electronic files and the like.

Description

Image security identification method based on defense sample
Technical Field
The invention discloses an image security identification method based on adversarial sample defense, and belongs to the field of machine vision.
Background
In recent years, with the increasingly widespread application of deep learning in computer vision and its excellent performance on various tasks, deep learning has attracted a large number of scholars to further research. In 2014, Szegedy et al. first showed that deep neural networks are not perfect: although they perform well in computer vision, they are easily disturbed by a small vector that is difficult for the human eye to detect. When such a vector is added to an image, the image shows no obvious difference, yet the deep neural network produces an erroneous result on it. Such small, imperceptible vectors that can perturb a deep neural network are called adversarial perturbations, and pictures with an added adversarial perturbation are called adversarial samples.
There are currently three main approaches to defending against adversarial perturbations; some do not change the design of the network, while others require analysis of the deep neural network itself. Because adversarial sample attacks are developing rapidly and related research on defending against adversarial perturbations lags behind, the invention provides a solution that fills this gap and targets the defense against various kinds of adversarial samples.
Disclosure of Invention
The technical problem solved by the invention: the image security identification method based on adversarial sample defense overcomes the shortcomings of the prior art in image recognition. Adversarial samples produced by the mainstream one-pixel and universal-perturbation generation methods are used to fine-tune the image recognition model, so that the method defends against adversarial sample attacks better than other methods, and users can more accurately and securely recognize images containing adversarial features during image recognition.
The technical scheme of the invention is as follows: an image security identification method based on adversarial sample defense comprises the following steps:
step 1, firstly, collecting an image data set;
step 2, generating adversarial samples by a one-pixel attack method; in this attack, a differential evolution algorithm iteratively modifies pixels of each test-set image to generate sub-images, the attack effect of each sub-image is then tested, and the sub-image with the best attack effect is taken as the adversarial sample;
step 3, generating adversarial samples by a universal perturbation generation method;
step 4, generating an adversarial test set based on the adversarial samples;
step 5, using the training-set image data as training data to fine-tune the original pre-trained model;
and step 6, performing image recognition on the test set and checking the image recognition effect.
Furthermore, the image data set is collected by crawling a large number of pictures from the network with a crawler; inappropriate pictures are filtered out, finally yielding 100 new classes with 20 pictures per class; classes with fewer than 20 pictures are topped up by randomly cropping and flipping existing pictures to construct new ones until 20 pictures are obtained; the original data set has 152 classes, each with 20 pictures; the collected pictures and the original data set together form a new data set, new-ImageDataset, which has 252 classes of 20 pictures each.
Further, in step 2, the generation of adversarial samples by the one-pixel attack method is an optimization problem with a constraint; let the input image be X = (x_1, ..., x_n); f is a classifier, v(x) = (v_1, ..., v_n) is the adversarial perturbation vector, e(x) represents the additive perturbation generated for x, t represents the class label, f_t(X) represents the probability that image X belongs to class t, and d is the maximum number of pixels allowed to be modified;
s.t. denotes the constraint; adversarial sample generation is turned into the constrained optimization problem:
max_{v(x)} f_t(X + v(X))
s.t. ||v(x)||_0 ≤ d
for a single-pixel attack, d is set to 1.
Further, step 3 is as follows:
let μ be a distribution over the picture space R^d and p ∈ [1, ∞); v is the universal perturbation used for the defense, and the sampled picture set is X = {x_1, x_2, ..., x_m}, where m is the number of pictures and i ∈ {1, 2, ..., m};
k̂(·) is the classifier function, and the perturbation vector v ∈ R^d;
the perturbation vector v satisfies the following constraints:
constraint 1: ||v||_p ≤ ξ
constraint 2: P_{x~μ}( k̂(x + v) ≠ k̂(x) ) ≥ 1 - δ
where ξ controls the norm of the perturbation and δ quantifies the desired fooling rate; the generation algorithm iterates starting from the initial condition v = 0 and finally produces the perturbation vector v with the best attack effect; during the iterative computation, if the current perturbation vector v does not yet fool a sample x_i, the extra perturbation Δv_i with minimal norm is sought:
Δv_i = argmin_r ||r||_2  s.t.  k̂(x_i + v + r) ≠ k̂(x_i)
where r is the corresponding extra perturbation that makes the perturbation effective;
the projection operation is recorded as:
P_{p,ξ}(v) = argmin_{v'} ||v - v'||_2  s.t.  ||v'||_p ≤ ξ
the update rule of the perturbation vector v is:
v = P_{p,ξ}(v + Δv_i)
writing X_v = {x_1 + v, x_2 + v, ..., x_m + v}, the iteration stop condition is:
Err(X_v) = (1/m) Σ_{i=1..m} 1[ k̂(x_i + v) ≠ k̂(x_i) ] ≥ 1 - δ
where Err(X_v) is the fooling rate of v on X; one universal perturbation is generated according to this perturbation-generation algorithm.
Further, step 4 is as follows:
adversarial perturbation samples are generated on the original data set, which contains 152 classes of 20 pictures each, 3040 pictures in total; the original data set is divided into two sets, a clean test set d_1 with no added perturbation and a perturbation test set d_2; both sets contain 152 classes, with 10 images per class in d_1 and 10 images per class in d_2;
adversarial samples v_1 and v_2^1 are generated on d_2 for the pre-trained model 20180408-102900 by the two methods above, forming a new perturbation test set d_2'; in addition, since the generated universal perturbation transfers across models, an adversarial sample v_2^2 generated from the universal perturbation of another model, such as the Inception-ResNet-v2 model, is added to test the model; the finally obtained perturbation test set consists of three sets: the perturbation data set d_2-1' generated by the one-pixel attack method, the perturbation data set d_2-2-1' generated by the universal perturbation method for model 20180408-102900, and the perturbation data set d_2-2-2' formed from universal perturbations generated by other deep neural network models.
Further, step 5 is as follows:
the learning rate needs to be reset before fine-tuning starts and is set to decrease as the number of iterations increases; the learning rate is reduced from the original initial learning rate to 0.01, and the value of batch_size is set to 90 in view of GPU performance limits.
Further, step 6 is as follows:
SVM classifier 1 and SVM classifier 2 are trained on the validation set using the original model and the fine-tuned model 20181108-102900 respectively, and the image recognition effect of the two classifiers is checked on the test set.
Advantageous effects
The method has good defense capability against adversarial perturbations generated by the one-pixel attack and extremely good defense capability against adversarial samples generated by universal perturbations; universal perturbations can no longer affect the resulting image recognition model.
The invention provides secure identification of images containing adversarial features. Following mainstream deep neural network attack methods, adversarial samples are generated against the algorithm model, and the security and recognition accuracy of the whole model are analyzed and verified through these adversarial samples. The model is then fine-tuned according to the verification result and the updated training data set, which improves the model's ability to defend against adversarial samples, raises its security, and realizes secure image recognition.
Drawings
FIG. 1 is a block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, rather than all embodiments, and all other embodiments obtained by a person skilled in the art based on the embodiments of the present invention belong to the protection scope of the present invention without creative efforts.
For a better understanding of the invention, some basic concepts will be explained below.
Adversarial sample: a sample to which a slight perturbation has been added and which causes the model to ultimately produce an erroneous result.
Image recognition algorithm: a distinctive image recognition algorithm proposed by Google in 2015, which makes full use of the fact that images of the same class are highly aggregated while images of different classes are loosely coupled. Unlike previous deep neural network algorithms, the softmax layer is no longer used; instead, a normalized embedding layer is attached. The embedding layer maps the feature values produced by the fully connected layer at the end of the original neural network from one space to another, in effect mapping them onto a hypersphere.
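By way of illustration only, the following minimal sketch shows what such a normalized embedding head could look like. Python and PyTorch, the class name EmbeddingHead and the embedding size of 128 are assumptions of this example rather than elements of the invention; the point is that the fully connected features are mapped into an embedding space and L2-normalized so that every embedding lies on the unit hypersphere.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingHead(nn.Module):
    # Hypothetical replacement of a softmax head: map the features of the
    # final fully connected layer into an embedding space and L2-normalize
    # them so that each embedding lies on the unit hypersphere.
    def __init__(self, in_features: int, embedding_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(in_features, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.fc(x), p=2, dim=1)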
fine-tuning: one of the commonly used methods in transfer learning. Instead of training a network from scratch, a pre-trained network is further trained according to the user's requirements, so that the new model obtained after fine-tuning is better suited to the user's data set.
Overfitting (over-fitting): in machine learning or deep learning, the situation where a model fits an overly small training data set too well, so that it learns useless features.
The whole implementation process of the invention is as follows:
Step 1, an image data set is first collected.
The collection process mainly crawls a large number of pictures from the internet with a crawler. Inappropriate pictures are filtered out, finally yielding 100 new classes with 20 pictures per class. Classes with fewer than 20 pictures are topped up by constructing new pictures with random cropping, flipping and similar techniques until 20 pictures are obtained. The original data set has 152 classes, each with 20 pictures. These data are aggregated with the original data to form a new data set, new-ImageDataset, which has 252 classes of 20 pictures each.
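As an illustration of the padding step, the sketch below tops up a class folder to 20 pictures by random cropping and flipping; it assumes JPEG files on disk and the Pillow library, and pad_class_to_n is a hypothetical helper name, not part of the invention.

import random
from pathlib import Path
from PIL import Image

def pad_class_to_n(class_dir: str, n: int = 20) -> None:
    # Top up a class folder to n pictures by randomly cropping and flipping
    # existing ones, as described for classes with fewer than 20 pictures.
    files = sorted(Path(class_dir).glob("*.jpg"))
    idx = 0
    while files and len(list(Path(class_dir).glob("*.jpg"))) < n:
        img = Image.open(files[idx % len(files)]).convert("RGB")
        w, h = img.size
        # Random crop that keeps at least 80% of each side, then an optional flip.
        cw = int(w * random.uniform(0.8, 1.0))
        ch = int(h * random.uniform(0.8, 1.0))
        left, top = random.randint(0, w - cw), random.randint(0, h - ch)
        aug = img.crop((left, top, left + cw, top + ch)).resize((w, h))
        if random.random() < 0.5:
            aug = aug.transpose(Image.FLIP_LEFT_RIGHT)
        aug.save(Path(class_dir) / f"aug_{idx:03d}.jpg")
        idx += 1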
Step 2, generate adversarial samples by the one-pixel attack method.
The attack mainly uses a differential evolution algorithm to iteratively modify pixels of each test-set image and generate sub-images, then tests the attack effect of each sub-image and takes the sub-image with the best attack effect as the adversarial sample. This method can generate usable adversarial samples with minimal change to the image; in the best case, a good attack effect is achieved by modifying only one pixel. In addition, unlike other attack modes, the one-pixel attack measures attack strength by the number of changed pixels and keeps the perturbation imperceptible by limiting how many pixels are modified, whereas other attacks limit the modification amplitude over the whole image.
The generation of adversarial samples by the one-pixel attack method is in fact an optimization problem with a constraint. Let the input image be X = (x_1, ..., x_n); f is a classifier, v(x) = (v_1, ..., v_n) is the adversarial perturbation vector, e(x) represents the additive perturbation generated for x, t represents the class label, f_t(X) represents the probability that image X belongs to class t, and d is the maximum number of pixels allowed to be modified. Adversarial sample generation is turned into the constrained optimization problem:
max_{v(x)} f_t(X + v(X))
subject to ||v(x)||_0 ≤ d
For a single-pixel attack, d is set to 1. The advantages of the method are that the global optimum can be found with high probability and that no gradients need to be computed, which reduces the amount of computation. It is also a semi-black-box attack: only the class probabilities returned by the black box are needed, and the internal parameters of the network need not be known. Adversarial perturbation samples are generated for the original data set according to the one-pixel attack method. The disadvantage is that a specific adversarial sample must be generated for each image, with no generalization capability across the data set.
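The sketch below illustrates this optimization with SciPy's differential evolution. predict_proba is an assumed callback that returns class probabilities for an H x W x 3 uint8 image, and the population and iteration settings are illustrative values, not parameters fixed by the invention.

import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image: np.ndarray, target_class: int, predict_proba, d: int = 1) -> np.ndarray:
    # Each candidate encodes (row, col, r, g, b) for every one of the d
    # modified pixels, so the search space has 5 * d dimensions.
    h, w, _ = image.shape
    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)] * d

    def apply(candidate: np.ndarray) -> np.ndarray:
        adv = image.copy()
        for px in candidate.reshape(d, 5):
            adv[int(px[0]), int(px[1])] = px[2:5].astype(np.uint8)
        return adv

    def fitness(candidate: np.ndarray) -> float:
        # Differential evolution minimizes, so return the negative probability
        # of the target class in order to maximize f_t(X + v(X)).
        return -float(predict_proba(apply(candidate))[target_class])

    result = differential_evolution(fitness, bounds, maxiter=30, popsize=10, tol=1e-5)
    return apply(result.x)

In practice predict_proba would wrap the recognition model under attack, and the best candidate returned by the evolution plays the role of the sub-image with the best attack effect.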
Step 3, generate adversarial samples by the universal perturbation generation method.
The universal perturbation is not unique; many different universal perturbations can be generated to disturb one deep neural network model.
Let μ be a distribution over the picture space R^d and p ∈ [1, ∞), usually p = 2; v is the universal perturbation used for the defense, and the sampled picture set is X = {x_1, x_2, ..., x_m}, where m is the number of pictures in the set;
k̂(·) is the classifier function, and the perturbation vector v ∈ R^d. v satisfies the following constraints:
constraint 1: ||v||_p ≤ ξ
constraint 2: P_{x~μ}( k̂(x + v) ≠ k̂(x) ) ≥ 1 - δ
where ξ controls the norm of the perturbation and δ quantifies the desired fooling rate. The whole generation algorithm iterates from the initial condition v = 0 and finally generates the v with the best attack effect. During the iteration, if the current v does not yet fool a sample x_i, the extra perturbation Δv_i with minimal norm is sought:
Δv_i = argmin_r ||r||_2  s.t.  k̂(x_i + v + r) ≠ k̂(x_i)
where r is the corresponding extra perturbation that makes the perturbation effective.
The projection operation is recorded as:
P_{p,ξ}(v) = argmin_{v'} ||v - v'||_2  s.t.  ||v'||_p ≤ ξ
The update rule of v is:
v = P_{p,ξ}(v + Δv_i)
Writing X_v = {x_1 + v, x_2 + v, ..., x_m + v}, the iteration stop condition is:
Err(X_v) = (1/m) Σ_{i=1..m} 1[ k̂(x_i + v) ≠ k̂(x_i) ] ≥ 1 - δ
Here Err(X_v) is the fooling rate of v on X; one universal perturbation is generated according to this perturbation-generation algorithm. The pictures of the test set with the universal perturbation added then form adversarial samples that are verified later.
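The sketch below illustrates the generation loop under the constraints above. classify and min_perturbation are assumed callbacks (the latter stands in for a minimal-norm step such as DeepFool), and the default values of ξ, p and δ are illustrative only.

import numpy as np

def project_lp(v: np.ndarray, xi: float, p: float) -> np.ndarray:
    # Projection onto the ||.||_p <= xi ball (constraint 1); p = 2 or p = inf.
    if p == np.inf:
        return np.clip(v, -xi, xi)
    norm = np.linalg.norm(v.ravel(), ord=p)
    return v if norm <= xi else v * (xi / norm)

def universal_perturbation(X, classify, min_perturbation,
                           xi=10.0, p=2, delta=0.2, max_iters=10):
    # classify(x) returns a label; min_perturbation(x) returns the smallest
    # extra perturbation r that changes the label of x. Both are assumed
    # callbacks supplied by the caller.
    v = np.zeros_like(X[0])                      # initial condition v = 0
    for _ in range(max_iters):
        for x in X:
            if classify(x + v) == classify(x):   # v does not yet fool x_i
                r = min_perturbation(x + v)      # Δv_i with minimal norm
                v = project_lp(v + r, xi, p)     # update rule v = P_{p,ξ}(v + Δv_i)
        fooled = np.mean([classify(x + v) != classify(x) for x in X])
        if fooled >= 1.0 - delta:                # stop once Err(X_v) ≥ 1 - δ
            break
    return v

The returned v is then added to the test-set pictures to build the perturbed splits used in step 4.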
Step 4, generate the adversarial test set based on the adversarial samples.
Adversarial perturbation samples are generated on the Faces94 data set, which contains 152 classes of 20 pictures each, 3040 pictures in total. The Faces94 data set is divided into two sets: a clean test set d_1 (no added perturbation) and a perturbation test set d_2. Both sets contain 152 classes, with 10 images per class in d_1 and 10 images per class in d_2.
Adversarial samples v_1 and v_2^1 are generated on d_2 for the pre-trained model 20180408-102900 by the two methods above, forming a new perturbation test set d_2'. In addition, since the generated universal perturbation transfers across models, an adversarial sample v_2^2 generated from the universal perturbation of another model, such as the Inception-ResNet-v2 model, is added to test the model. The finally obtained perturbation test set consists of three sets: the perturbation data set d_2-1' generated by the one-pixel attack method, the perturbation data set d_2-2-1' generated by the universal perturbation method for model 20180408-102900, and the perturbation data set d_2-2-2' formed from universal perturbations generated by other deep neural network models.
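For example, a split such as d_2-2-1' could be assembled by adding the universal perturbation to every picture of d_2 and clipping back to the valid pixel range; the short sketch below assumes uint8 images and is only illustrative.

import numpy as np

def build_perturbed_test_set(clean_images, v: np.ndarray):
    # Add the universal perturbation v to every clean picture and clip the
    # result back into the 0..255 range expected by the recognition model.
    return [np.clip(img.astype(np.float32) + v, 0, 255).astype(np.uint8)
            for img in clean_images]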
Step 5, use the training-set image data as training data and perform fine-tuning on the original pre-trained model 20180408-102900. The learning rate needs to be reset before fine-tuning begins and is set to decrease as the number of iterations increases; it is reduced from the original initial learning rate to 0.01. The value of batch_size (the amount of data per training batch) is set to 90 in view of GPU performance limits.
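A hedged sketch of the fine-tuning loop is given below. It assumes a PyTorch model with a classification head and uses an exponential schedule as one possible way to let the learning rate shrink with the iterations; the patent's pre-trained model ends in an embedding layer, so the actual loss function and checkpoint handling may differ.

import torch
from torch import nn, optim
from torch.optim.lr_scheduler import ExponentialLR
from torch.utils.data import DataLoader

def fine_tune(model: nn.Module, train_set, epochs: int = 10, device: str = "cuda") -> nn.Module:
    # batch_size = 90 as stated in step 5; the learning rate is reset to 0.01
    # and decays further as training proceeds (the exact schedule is an
    # assumption of this sketch).
    loader = DataLoader(train_set, batch_size=90, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    scheduler = ExponentialLR(optimizer, gamma=0.95)
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model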
Step 6, perform image recognition on the test set and check the image recognition effect.
SVM classifier 1 and SVM classifier 2 are trained on the validation set using the original model and the fine-tuned model 20181108-102900 respectively, and the recognition effect of the classifiers is checked on the test set; a sketch of this evaluation is given after the results discussion below. The test-set data consists of four parts: test set 1, clean images; test set 2, adversarial samples generated by the first attack; test set 3, adversarial samples formed by the second attack; and test set 4, adversarial samples formed by another universal perturbation generated with the second attack. The image recognition effect of the classifiers is examined on these four test sets. The test results are shown in the following table:
TABLE 1 image recognition accuracy
(Table 1 is provided as an image in the original publication.)
As can be seen from the data in the table, the model 20181108-102900 obtained after fine-tuning recognizes normal images well; compared with the original model, the image recognition accuracy does not decrease on the newly constructed data set containing new image data. On the adversarial sample data set generated by the first attack method, the image recognition accuracy is 94.65%, lower than on the normal data set but greatly improved compared with the accuracy of the non-fine-tuned recognition model on that data set. The accuracy of model 20181108-102900 on adversarial samples formed by universal perturbations is consistent with its accuracy on the ordinary data set: universal perturbations can no longer successfully affect this model. These data show that, compared with the original model, model 20181108-102900 achieves higher accuracy and a better recognition effect on new images, further improves the defense against adversarial sample attacks, and realizes secure image recognition.
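As a recap of step 6, the sketch below trains a linear SVM on embeddings extracted from the validation set and scores it on each of the four test splits; embed is an assumed callback returning the model's embedding vectors, and the kernel choice is illustrative rather than specified by the invention.

from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_and_evaluate_svm(embed, val_images, val_labels, test_splits):
    # test_splits maps a split name (e.g. "clean", "one_pixel", "universal_1",
    # "universal_2") to a pair (images, labels); the classifier is trained on
    # validation embeddings and evaluated on every split.
    clf = SVC(kernel="linear")
    clf.fit(embed(val_images), val_labels)
    return {name: accuracy_score(labels, clf.predict(embed(images)))
            for name, (images, labels) in test_splits.items()}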
Although illustrative embodiments of the present invention have been described above to facilitate understanding by those skilled in the art, it should be understood that the present invention is not limited to the scope of those embodiments; various changes will be apparent to those skilled in the art, and all inventive concepts making use of the concepts set forth herein are intended to be protected, provided they do not depart from the spirit and scope of the present invention as defined by the appended claims.

Claims (7)

1. An image security identification method based on adversarial sample defense, characterized by comprising the following steps:
step 1, firstly, collecting an image data set;
step 2, generating adversarial samples by a one-pixel attack method; in this attack, a differential evolution algorithm iteratively modifies pixels of each test-set image to generate sub-images, the attack effect of each sub-image is then tested, and the sub-image with the best attack effect is taken as the adversarial sample;
step 3, generating adversarial samples by a universal perturbation generation method;
step 4, generating an adversarial test set based on the adversarial samples;
step 5, using the training-set image data as training data to fine-tune the original pre-trained model;
and step 6, performing image recognition on the test set and checking the image recognition effect.
2. The image security identification method based on adversarial sample defense according to claim 1, characterized in that:
the image data set is collected by crawling a large number of pictures from the network with a crawler; inappropriate pictures are filtered out, finally yielding 100 new classes with 20 pictures per class; classes with fewer than 20 pictures are topped up by randomly cropping and flipping existing pictures to construct new ones until 20 pictures are obtained; the original data set has 152 classes, each with 20 pictures; the collected pictures and the original data set together form a new data set, new-ImageDataset, which has 252 classes of 20 pictures each.
3. The image security identification method based on adversarial sample defense according to claim 1, characterized in that:
in step 2, the generation of adversarial samples by the one-pixel attack method is an optimization problem with a constraint; let the input image be X = (x_1, ..., x_n); f is a classifier, v(x) = (v_1, ..., v_n) is the adversarial perturbation vector, e(x) represents the additive perturbation generated for x, t represents the class label, f_t(X) represents the probability that image X belongs to class t, and d is the maximum number of pixels allowed to be modified;
s.t. denotes the constraint; adversarial sample generation is turned into the constrained optimization problem:
max_{v(x)} f_t(X + v(X))
s.t. ||v(x)||_0 ≤ d
for a single-pixel attack, d is set to 1.
4. The image security identification method based on adversarial sample defense according to claim 1, characterized in that step 3 is as follows:
let μ be a distribution over the picture space R^d and p ∈ [1, ∞); v is the universal perturbation used for the defense, and the sampled picture set is X = {x_1, x_2, ..., x_m}, where m is the number of pictures and i ∈ {1, 2, ..., m};
k̂(·) is the classifier function, and the perturbation vector v ∈ R^d;
the perturbation vector v satisfies the following constraints:
constraint 1: ||v||_p ≤ ξ
constraint 2: P_{x~μ}( k̂(x + v) ≠ k̂(x) ) ≥ 1 - δ
where ξ controls the norm of the perturbation and δ quantifies the desired fooling rate; the generation algorithm iterates starting from the initial condition v = 0 and finally produces the perturbation vector v with the best attack effect; during the iterative computation, if the current perturbation vector v does not yet fool a sample x_i, the extra perturbation Δv_i with minimal norm is sought:
Δv_i = argmin_r ||r||_2  s.t.  k̂(x_i + v + r) ≠ k̂(x_i)
where r is the corresponding extra perturbation that makes the perturbation effective;
the projection operation is recorded as:
P_{p,ξ}(v) = argmin_{v'} ||v - v'||_2  s.t.  ||v'||_p ≤ ξ
the update rule of the perturbation vector v is:
v = P_{p,ξ}(v + Δv_i)
writing X_v = {x_1 + v, x_2 + v, ..., x_m + v}, the iteration stop condition is:
Err(X_v) = (1/m) Σ_{i=1..m} 1[ k̂(x_i + v) ≠ k̂(x_i) ] ≥ 1 - δ
where Err(X_v) is the fooling rate of v on X; one universal perturbation is generated according to this perturbation-generation algorithm.
5. The image security identification method based on adversarial sample defense according to claim 1, characterized in that step 4 is as follows:
adversarial perturbation samples are generated on the original data set, which contains 152 classes of 20 pictures each, 3040 pictures in total; the original data set is divided into two sets, a clean test set d_1 with no added perturbation and a perturbation test set d_2; both sets contain 152 classes, with 10 images per class in d_1 and 10 images per class in d_2;
adversarial samples v_1 and v_2^1 are generated on d_2 for the pre-trained model 20180408-102900 by the two methods above, forming a new perturbation test set d_2'; in addition, since the generated universal perturbation transfers across models, an adversarial sample v_2^2 generated from the universal perturbations of other models is added to test the model; the finally obtained perturbation test set consists of three sets: the perturbation data set d_2-1' generated by the one-pixel attack method, the perturbation data set d_2-2-1' generated by the universal perturbation method for model 20180408-102900, and the perturbation data set d_2-2-2' formed from universal perturbations generated by other deep neural network models.
6. The image security identification method based on adversarial sample defense according to claim 1, characterized in that step 5 is as follows:
the learning rate needs to be reset before fine-tuning starts and is set to decrease as the number of iterations increases; the learning rate is reduced from the original initial learning rate to 0.01, and the value of batch_size is set to 90 in view of GPU performance limits.
7. The image security identification method based on adversarial sample defense according to claim 1, characterized in that step 6 is as follows:
SVM classifier 1 and SVM classifier 2 are trained on the validation set using the original model and the fine-tuned model 20181108-102900 respectively, and the image recognition effect of the two classifiers is checked on the test set.
CN202010206429.1A 2020-03-23 2020-03-23 Image security identification method based on defense sample Pending CN111414964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010206429.1A CN111414964A (en) 2020-03-23 2020-03-23 Image security identification method based on defense sample

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010206429.1A CN111414964A (en) 2020-03-23 2020-03-23 Image security identification method based on defense sample

Publications (1)

Publication Number Publication Date
CN111414964A true CN111414964A (en) 2020-07-14

Family

ID=71493190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010206429.1A Pending CN111414964A (en) 2020-03-23 2020-03-23 Image security identification method based on defense sample

Country Status (1)

Country Link
CN (1) CN111414964A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190244103A1 (en) * 2018-02-07 2019-08-08 Royal Bank Of Canada Robust pruned neural networks via adversarial training
CN109948658A (en) * 2019-02-25 2019-06-28 浙江工业大学 The confrontation attack defense method of Feature Oriented figure attention mechanism and application

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄横等 (Huang Heng et al.): "基于对抗样本防御的人脸安全识别方法" [Face security identification method based on adversarial sample defense] *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000578A (en) * 2020-08-26 2020-11-27 支付宝(杭州)信息技术有限公司 Test method and device of artificial intelligence system
CN112270700A (en) * 2020-10-30 2021-01-26 浙江大学 Attack judgment method capable of interpreting algorithm by fooling deep neural network
CN112270700B (en) * 2020-10-30 2022-06-28 浙江大学 Attack judgment method capable of interpreting algorithm by using deep neural network
CN112257053B (en) * 2020-11-17 2024-03-15 上海大学 Image verification code generation method and system based on general disturbance countermeasure
CN112257053A (en) * 2020-11-17 2021-01-22 上海大学 Image verification code generation method and system based on universal anti-disturbance
CN112488172A (en) * 2020-11-25 2021-03-12 北京有竹居网络技术有限公司 Method, device, readable medium and electronic equipment for resisting attack
CN114220097A (en) * 2021-12-17 2022-03-22 中国人民解放军国防科技大学 Anti-attack-based image semantic information sensitive pixel domain screening method and application method and system
CN114220097B (en) * 2021-12-17 2024-04-12 中国人民解放军国防科技大学 Screening method, application method and system of image semantic information sensitive pixel domain based on attack resistance
CN114722407A (en) * 2022-03-03 2022-07-08 中国人民解放军战略支援部队信息工程大学 Image protection method based on endogenous countermeasure sample
CN114722407B (en) * 2022-03-03 2024-05-24 中国人民解放军战略支援部队信息工程大学 Image protection method based on endogenic type countermeasure sample
CN114707589A (en) * 2022-03-25 2022-07-05 腾讯科技(深圳)有限公司 Method, device, storage medium, equipment and program product for generating countermeasure sample
CN115439719A (en) * 2022-10-27 2022-12-06 泉州装备制造研究所 Deep learning model defense method and model for resisting attack
US11783037B1 (en) 2022-10-27 2023-10-10 Quanzhou equipment manufacturing research institute Defense method of deep learning model aiming at adversarial attacks
CN115439719B (en) * 2022-10-27 2023-03-28 泉州装备制造研究所 Deep learning model defense method and model for resisting attack
CN117197589B (en) * 2023-11-03 2024-01-30 武汉大学 Target classification model countermeasure training method and system
CN117197589A (en) * 2023-11-03 2023-12-08 武汉大学 Target classification model countermeasure training method and system

Similar Documents

Publication Publication Date Title
CN111414964A (en) Image security identification method based on defense sample
Carlini et al. Evading deepfake-image detectors with white-and black-box attacks
Tuor et al. Overcoming noisy and irrelevant data in federated learning
Zhang et al. The secret revealer: Generative model-inversion attacks against deep neural networks
Zhang et al. Variational few-shot learning
Warde-Farley et al. 11 adversarial perturbations of deep neural networks
Rozsa et al. Towards robust deep neural networks with BANG
Yin et al. Semi-supervised clustering with metric learning: An adaptive kernel method
CN112364915B (en) Imperceptible countermeasure patch generation method and application
CN113627543B (en) Anti-attack detection method
CN110647645A (en) Attack image retrieval method based on general disturbance
He et al. Locality-aware channel-wise dropout for occluded face recognition
Yu et al. Deep metric learning with dynamic margin hard sampling loss for face verification
CN113378620B (en) Cross-camera pedestrian re-identification method in surveillance video noise environment
CN115048983A (en) Counterforce sample defense method of artificial intelligence system based on data manifold topology perception
CN105809200B (en) Method and device for autonomously extracting image semantic information in bioauthentication mode
CN113343123B (en) Training method and detection method for generating confrontation multiple relation graph network
Ma et al. Learning deep face representation with long-tail data: An aggregate-and-disperse approach
Zhu et al. LIGAA: Generative adversarial attack method based on low-frequency information
CN104573728B (en) A kind of texture classifying method based on ExtremeLearningMachine
Sun et al. A Deep Model for Partial Multi-label Image Classification with Curriculum-based Disambiguation
CN108510080A (en) A kind of multi-angle metric learning method based on DWH model many-many relationship type data
CN113487506B (en) Attention denoising-based countermeasure sample defense method, device and system
Dhar et al. On measuring the iconicity of a face
Wu et al. 3D-guided frontal face generation for pose-invariant recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200714

WD01 Invention patent application deemed withdrawn after publication