CN116259098B - Feature-attention-based transferable adversarial attack method and device for face recognition

Feature-attention-based transferable adversarial attack method and device for face recognition

Info

Publication number
CN116259098B
CN116259098B (application CN202310518841.0A)
Authority
CN
China
Prior art keywords
feature
layer
face
attention
model
Prior art date
Legal status
Active
Application number
CN202310518841.0A
Other languages
Chinese (zh)
Other versions
CN116259098A (en)
Inventor
练智超
吕重仪
王玲
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202310518841.0A
Publication of CN116259098A
Application granted
Publication of CN116259098B
Legal status: Active

Classifications

    • G06V40/168: Feature extraction; Face representation
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G06V10/806: Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G07C9/37: Individual registration on entry or exit, not involving the use of a pass, in combination with an identity check using biometric data
    • Y02T10/40: Engine management systems


Abstract

The invention discloses a feature-attention-based transferable adversarial attack method and device for face recognition, belonging to the field of artificial-intelligence security. The method comprises: obtaining an aligned face image; selecting several transferable intermediate layers of a face recognition model; obtaining an integrated feature map and integrated feature attention through feature fusion; and computing the L2-norm distance between feature maps and feature attentions to obtain the final adversarial example. The invention uses feature attention to extract the features shared by different face recognition models, and uses feature fusion to further widen the search for those shared features. By attacking the model's feature space in this way, the original white-box attack capability is preserved while face adversarial examples become more transferable across models, raising the black-box success rate of face adversarial attacks.

Description

Feature-attention-based transferable adversarial attack method and device for face recognition
Technical Field
The invention relates to adversarial attacks against face recognition models, and in particular to a transferability-oriented adversarial attack method based on feature attention.
Background
In recent years, with the rapid development of deep convolutional neural networks, the accuracy of face recognition has improved greatly, and face recognition systems have penetrated many aspects of daily life, such as phone unlocking, payment, and secure access. Nevertheless, researchers have found that deep convolutional neural networks are vulnerable: adding a deliberately crafted perturbation to an original input at test time yields images, known as adversarial examples, that easily mislead the network into wrong recognition results. Face recognition models, which build on deep convolutional neural networks, are subject to the same attacks. This phenomenon raises serious concerns about the security of face recognition applications.
Many works have demonstrated the vulnerability of face recognition models, but most assume the attacker has full access to the target model, which is classified as a white-box attack. Knowing the target model's parameters and internal structure, a white-box attack tunes the perturbation applied to the clean sample accordingly. In contrast, a black-box attack generates adversarial examples using only the relationship between the network's inputs and outputs. Query-based black-box methods must interact with the target model through a large number of queries, which is impractical for attacks in real environments. Much current work therefore aims to improve the transferability of adversarial examples, for example by using loss-preserving transformations or by increasing the diversity of surrogate models. The transferability obtained by these methods is still unsatisfactory, however, because face adversarial examples struggle to escape local optima, and these methods ignore the importance of the feature space. A highly transferable method for generating face adversarial examples therefore needs to be designed around the feature space.
Disclosure of Invention
The invention solves the following technical problem: designing a highly transferable adversarial example generation method for face recognition models, exploiting the fact that feature attention can select the features shared by multiple models.
Technical scheme: to solve the above technical problem, the invention adopts the following technical scheme:
the feature-attention-based transferable adversarial attack method for face recognition mainly comprises the following steps:
step 1: obtaining an aligned face image;
step 2: selecting several transferable intermediate layers of the face recognition model;
step 3: computing feature maps and feature attentions, and obtaining the integrated feature map and integrated feature attention by feature fusion;
step 4: computing the L2-norm distance between feature maps and feature attentions to obtain the final adversarial example.
Further, in step 1, the aligned face image is obtained as follows:
first, MTCNN is used to obtain the face bounding-box coordinates and 5 facial key points of each face image; the standard facial key-point positions of a source frontal face are fixed, a similarity transformation between them and the extracted key points of each face is estimated to obtain a transformation matrix m, and an affine transformation with m as parameter is applied to the face, yielding the aligned face image.
Further, in step 2, several transferable intermediate layers of the face recognition model are selected as follows:
step 2.1: set the number of iterations to n, 10 ≤ n ≤ 20; for an input face picture x, use MI-FGSM on the output layer of the face recognition model F to generate a reference adversarial example x'; meanwhile, decompose the structure of the face recognition model and select a certain number of layers from its middle part;
step 2.2: for each selected layer l, use MI-FGSM in the feature space for n iterations to generate a face adversarial example x'';
step 2.3: compute the model feature-map scale and plot it against layer depth; the layer at which the latest peak appears in the plot is the best transfer layer of the face recognition model, and this peak layer together with the two layers before it are selected as the transferable intermediate layers obtained in step 2.
Further, the model feature-map scale is computed as

r_l = ||Δ'_l||_2 / ||Δ_l||_2,   Δ_l = F_l(x') - F_l(x),   Δ'_l = F_l(x'') - F_l(x)

where Δ_l is the adversarial perturbation direction at model layer l induced by the output-layer attack, Δ'_l is the adversarial perturbation direction at model layer l induced by the feature-layer attack, F_l(x'') is the feature map of the feature-layer attack sample x'' at model layer l, F_l(x') is the feature map of the output-layer attack sample x' at model layer l, and F_l(x) is the feature map of the input sample x at model layer l.
Further, in step 3, the feature maps and feature attentions are obtained and integrated by feature fusion as follows:
step 3.1: let the peak layer selected in step 2 be l and the face recognition model be F; for layer l and the two layers before it, compute the model feature map F_l(x) and the feature attention α_l(x) = ∂F(x)/∂F_l(x), where F(x) is the feature map of the input sample x at the last layer of the model and ∂ denotes the partial derivative;
step 3.2: since different layers differ in the number and size of their feature channels, the feature maps and feature attentions of the shallow and deep layers among the three selected layers are downsampled and upsampled respectively so that all three have the same size, which facilitates the subsequent feature fusion;
step 3.3: the feature images of the three layers are fused to obtain a multi-layer feature image integration result, and the fusion method comprises the following steps:
wherein,,representing an integrated feature map, l being the peak value appearance layer selected in step 2, ++>And->Then the first two layers of the peak occurrence layer, DS denotes a downsampling operation, UP denotes an upsampling operation, < +.>Representing input sample x at model +.>Layer profile, < >>Representing input sample x at model +.>The feature map of the layer, x represents a model input sample, concat is a classical feature fusion mode, and fusion is carried out in the channel dimension;
step 3.4: the feature attentions of the three layers are fused to obtain the multi-layer integrated feature attention:

α̂(x) = Concat(DS(α_{l-2}(x)), α_{l-1}(x), UP(α_l(x)))

where α̂(x) is the integrated feature attention, l is the peak layer selected in step 2, l-2 and l-1 are the two layers before the peak layer, DS denotes the downsampling operation, UP denotes the upsampling operation, α_{l-2}(x) and α_{l-1}(x) are the feature attentions of the input sample x at model layers l-2 and l-1, x is a model input sample, and Concat is the classical feature-fusion operation of concatenation along the channel dimension.
Further, in step 4, the final adversarial example is obtained by computing the distance loss between feature maps and feature attentions, as follows: using the integrated feature map F̂ and integrated feature attention α̂ obtained in step 3, the differences between the feature maps and the feature attentions of the attacking face and the target face are computed with the L2-norm distance to obtain the feature-integration loss; in each iteration the loss is back-propagated to obtain noise, and the gradient update direction is optimized with a momentum-based update method.
The feature-integration loss is

L(x_adv, x_t) = ||F̂(x_adv) - F̂(x_t)||_2 + ||α̂(x_adv) - α̂(x_t)||_2

where x_adv is the adversarial face obtained at each iteration, x_t is the target face under attack, F̂(x_adv) is the integrated adversarial-face feature map obtained as in step 3, F̂(x_t) is the integrated target-face feature map obtained as in step 3, α̂(x_adv) is the integrated adversarial-face feature attention obtained as in step 3, and α̂(x_t) is the integrated target-face feature attention obtained as in step 3.
The momentum-based update method that optimizes the gradient update direction is

g_{t+1} = u · g_t + ∇L(x_t^adv, x_t) / ||∇L(x_t^adv, x_t)||_1

where g_{t+1} is the gradient accumulated at iteration t+1, g_t is the gradient accumulated at iteration t, u is the momentum factor, x_t^adv is the adversarial example generated at iteration t, and L is the feature-integration loss obtained at each update; noise is added to the adversarial example at each iteration and a clipping operation is applied, and when the iterations finish the final face adversarial example is obtained.
Beneficial effects: compared with the prior art, the invention has the following advantages:
(1) The invention proposes an attack method based on feature attention. When the adversarial example is generated, the perturbation is sought in the feature space, which improves the transfer performance of the adversarial example. At the same time, using feature attention avoids overfitting to the surrogate model while preserving white-box attack capability.
(2) The invention proposes a multi-layer model attack strategy and uses feature fusion to capture the features shared by different face recognition models more comprehensively, raising the black-box attack success rate.
(3) Compared with other transferability-oriented adversarial example generation methods, the method achieves stronger transfer performance under the same perturbation budget.
(4) The invention helps improve the security of places, such as access-control machines and gates, where face recognition systems are deployed. The adversarial example phenomenon exposes security holes in machine learning models, and having models learn from adversarial examples to close those holes is an extremely effective precaution, so a new attack method can drive improvements in the defensive robustness of models.
Drawings
Fig. 1 is a flow chart of the feature-attention-based transferable adversarial attack method for face recognition of the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples, which are carried out on the basis of the technical solution of the invention. It should be understood that these examples only illustrate the invention and do not limit its scope.
The feature-attention-based transferable adversarial attack method for face recognition first obtains an aligned face image; then selects several transferable intermediate layers of the face recognition model; obtains an integrated feature map and integrated feature attention through feature fusion; and computes the L2-norm distance between feature maps and feature attentions to obtain the final adversarial example. Specifically, the method comprises the following steps 1 to 4:
step 1: obtain the aligned face image, specifically as follows:
First, the MTCNN algorithm (Multi-task Cascaded Convolutional Networks) is used to obtain the face bounding-box coordinates and 5 facial key points of each face image. The standard facial key-point positions of a source frontal face are fixed using the 112 × 112 alignment coordinate template provided by the mainstream face recognition model ArcFace. The coordinates of the 5 facial key points of each face image to be aligned are extracted, and a similarity transformation against the standard key points is estimated, yielding a transformation matrix m. Finally, an affine transformation with m as parameter is applied to the face image, yielding the aligned face image.
Step 2: a plurality of mobility middle layers of the face recognition model are selected, and the specific mode is as follows:
step 2.1: setting iteration turns as n, n is more than or equal to 10 and less than or equal to 20, generating a reference countermeasure sample for an output layer of a face recognition model F by utilizing MI-FGSM for one input face picture xMeanwhile, decomposing the structure of the face recognition model and selecting a certain number of layers at the middle part of the model;
step 2.2: for each layer l selected, generating face challenge samples in feature space iteration 10 rounds using MI-FGSM
Step 2.3: calculating the proportion of the model feature diagram, drawing a line diagram of the level and the proportion value, wherein the level of the peak value appearing in the diagram beyond the back is the optimal mobility layer of the face recognition model, and selecting the peak value appearing layer and the first two layers thereof as a plurality of mobility intermediate layers obtained in the step 2.
The model feature-map scale is computed as

r_l = ||Δ'_l||_2 / ||Δ_l||_2,   Δ_l = F_l(x') - F_l(x),   Δ'_l = F_l(x'') - F_l(x)

where Δ_l is the adversarial perturbation direction at model layer l induced by the output-layer attack, Δ'_l is the adversarial perturbation direction at model layer l induced by the feature-layer attack, F_l(x'') is the feature map of the feature-layer attack sample x'' at model layer l, F_l(x') is the feature map of the output-layer attack sample x' at model layer l, and F_l(x) is the feature map of the input sample x at model layer l.
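Step 2 can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: the candidate layers are submodules read out through forward hooks, `adv_feature_fn` is a hypothetical stand-in for the feature-space MI-FGSM attack of step 2.2, and the scale is the norm ratio of the two layer-wise perturbation directions defined above.

```python
import torch

def feature_map(model, x, layer):
    """F_l(x): capture the given layer's output with a forward hook."""
    out = {}
    def hook(module, inputs, output):
        out["f"] = output.detach()
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return out["f"]

def select_transfer_layers(model, candidates, x, x_out_adv, adv_feature_fn):
    """Score every candidate layer and keep the latest-peaking layer plus
    the two layers before it (step 2.3)."""
    scales = []
    for layer in candidates:
        x_feat_adv = adv_feature_fn(model, layer, x)          # x'': feature-space MI-FGSM
        f_x = feature_map(model, x, layer)                    # F_l(x)
        d_out = feature_map(model, x_out_adv, layer) - f_x    # Delta_l: output-layer attack
        d_feat = feature_map(model, x_feat_adv, layer) - f_x  # Delta'_l: feature-layer attack
        scales.append((d_feat.norm() / d_out.norm()).item())  # r_l
    peak = max(range(len(scales)), key=lambda i: (scales[i], i))  # prefer later peaks
    return candidates[max(peak - 2, 0): peak + 1]
```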
Step 3: obtaining a feature map and feature attention, and obtaining an integrated result by utilizing a feature fusion means, wherein the specific mode is as follows:
step 3.1: assuming that the layer is l, the face recognition model is F, and calculating a model feature map for the peak value appearance layer selected in the step 2 and the front two layers thereofAnd feature attention->;/>Characteristic diagram representing input sample x at last layer of model, +.>For partial derivatives.
Step 3.2: considering the difference of the number and the size of characteristic channels of different layers, carrying out UP-sampling DS and down-sampling UP operations on the shallow layer and the deep layer of the selected three layers and the characteristic attention, so that the sizes of the shallow layer and the deep layer of the selected three layers are kept the same, and the subsequent characteristic fusion is facilitated;
step 3.3: the feature maps of the three layers are fused to obtain the multi-layer integrated feature map:

F̂(x) = Concat(DS(F_{l-2}(x)), F_{l-1}(x), UP(F_l(x)))

where F̂(x) is the integrated feature map, l is the peak layer selected in step 2, l-2 and l-1 are the two layers before the peak layer, DS denotes the downsampling operation, UP denotes the upsampling operation, F_{l-2}(x) and F_{l-1}(x) are the feature maps of the input sample x at model layers l-2 and l-1, x is a model input sample, and Concat is the classical feature-fusion operation of concatenation along the channel dimension;
step 3.4: the feature attentions of the three layers are fused to obtain the multi-layer integrated feature attention:

α̂(x) = Concat(DS(α_{l-2}(x)), α_{l-1}(x), UP(α_l(x)))

where α̂(x) is the integrated feature attention, l is the peak layer selected in step 2, l-2 and l-1 are the two layers before the peak layer, DS denotes the downsampling operation, UP denotes the upsampling operation, α_{l-2}(x) and α_{l-1}(x) are the feature attentions of the input sample x at model layers l-2 and l-1, x is a model input sample, and Concat is the classical feature-fusion operation of concatenation along the channel dimension.
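A PyTorch sketch of steps 3.1 to 3.4 follows, under stated assumptions: the three layer outputs are captured by hooks without detaching so that gradients can flow, all maps are resized to the middle layer's spatial size, Concat joins the channel dimension, and summing F(x) to a scalar before differentiation is an illustrative scalarization that the patent does not spell out.

```python
import torch
import torch.nn.functional as nnf

def attention(final_feat, mid_feat):
    """alpha_l(x) = dF(x)/dF_l(x): gradient of the last-layer feature map
    with respect to an intermediate feature map; create_graph=True lets
    the attack of step 4 later differentiate through the attention itself."""
    return torch.autograd.grad(final_feat.sum(), mid_feat, create_graph=True)[0]

def fuse(f_shallow, f_mid, f_deep):
    """Concat(DS(.), ., UP(.)): resize the shallow and deep maps to the
    middle layer's spatial size, then concatenate along channels."""
    size = f_mid.shape[-2:]
    ds = nnf.interpolate(f_shallow, size=size, mode="bilinear", align_corners=False)
    up = nnf.interpolate(f_deep, size=size, mode="bilinear", align_corners=False)
    return torch.cat([ds, f_mid, up], dim=1)
```

With hooked features f2, f1, f0 at layers l-2, l-1, l and the last-layer feature Fx, the integrated map is fuse(f2, f1, f0) and the integrated attention is fuse(attention(Fx, f2), attention(Fx, f1), attention(Fx, f0)).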
Step 4: computing feature maps and feature attentivenessThe norm distance loss, the final challenge sample is obtained in the following manner: according to the integrated feature map obtained in step 3 +.>And Integrated feature attention->Adopts->Calculating the feature graphs and feature attention differences of the attack face and the target face by using the norm distance to obtain loss, and integrating the loss based on the features
Calculating loss counter-propagation in each iteration process to obtain noise, and optimizing gradient update direction by using momentum-based update method
Wherein,,representing the antagonism face obtained by each round of updating, < > about>Representing the target attack face->Representing input opposing faces->In step 3->Obtained->Layer confrontation face integration feature map, +.>Representing input target face +.>In step 3->The obtained first/>Layer target face integrated feature map, +.>Representing input opposing faces->In step 3->Obtained->Layer-opposing face integration feature attention, +.>Representing input target face +.>In step 3->Obtained->The layer target face integrates feature attention.
Wherein,,indicate->Gradient of round collection iteration, ++>Indicate->Gradient of round collection iteration, ++>Is a momentum factor->Challenge samples generated for iteration of round t, < >>And (3) representing the feature integration loss obtained by updating each round, adding noise to the countermeasure sample of each round, and performing clipping operation until the iteration is finished to obtain a final face countermeasure sample.
The effectiveness and efficiency of the method of the invention were verified by the following experiments.
The evaluation index is the attack success rate against black-box models.
First, the LFW face dataset is selected and 1k pairs of faces are randomly drawn from it. IR152 is then chosen as the white-box model, and FaceNet, IRSE50 and MobileFace as the black-box models under attack. The baseline is the current mainstream MI-DI-FGSM attack, which combines momentum with random input transformation.
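The success-rate metric can be computed as sketched below; the cosine-similarity threshold stands in for each model's own verification threshold and is not a value from the patent.

```python
import torch
import torch.nn.functional as nnf

@torch.no_grad()
def attack_success_rate(blackbox, adv_faces, target_faces, threshold=0.3):
    """Fraction of adversarial faces that the black-box embedding model
    verifies as the target identity (cosine similarity above threshold)."""
    e_adv = nnf.normalize(blackbox(adv_faces), dim=1)
    e_tgt = nnf.normalize(blackbox(target_faces), dim=1)
    cos = (e_adv * e_tgt).sum(dim=1)
    return (cos > threshold).float().mean().item()
```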
Table 1. Attack success rate of the invention against different black-box models
The results in Table 1 show that the method of the invention greatly improves the attack success rate against different black-box models, further improving the transferability of face adversarial examples. In summary, the invention proposes a feature-attention-based attack method: when generating the adversarial example, it combines the derivative of the forward feature map to improve transfer performance while preserving white-box attack performance, and it uses feature fusion to capture the features shared by different face models more comprehensively, avoiding overfitting to the surrogate model.
The invention uses feature attention to extract the features shared by different face recognition models, and uses feature fusion to further widen the search for those shared features. By attacking the model's feature space in this way, the original white-box attack capability is preserved while face adversarial examples become more transferable across models, raising the black-box success rate of face adversarial attacks.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications are also intended to fall within the scope of the invention.

Claims (7)

1. A feature-attention-based transferable adversarial attack method for face recognition, characterized by mainly comprising the following steps:
step 1: obtaining an aligned face image;
step 2: selecting several transferable intermediate layers of the face recognition model, as follows:
step 2.1: setting the number of iterations to n, 10 ≤ n ≤ 20; for an input face picture x, generating a reference adversarial example x' for the output layer of the face recognition model F using MI-FGSM; meanwhile, decomposing the structure of the face recognition model and selecting a certain number of layers from its middle part;
step 2.2: for each selected layer l, generating a face adversarial example x'' using MI-FGSM in the feature space for n iterations;
step 2.3: computing the model feature-map scale and plotting it against layer depth; the layer at which the latest peak appears in the plot is the best transfer layer of the face recognition model, and the peak layer together with the two layers before it are selected as the transferable intermediate layers obtained in step 2;
step 3: computing the feature maps and feature attentions, and obtaining the integrated feature map and integrated feature attention by feature fusion;
step 4: computing the L2-norm distance between feature maps and feature attentions to obtain the final adversarial example.
2. The feature-attention-based transferable adversarial attack method for face recognition of claim 1, wherein in step 1 the aligned face image is obtained as follows:
first, MTCNN is used to obtain the face bounding-box coordinates and 5 facial key points of each face image; the standard facial key-point positions of a source frontal face are fixed, a similarity transformation between them and the extracted key points of each face is estimated to obtain a transformation matrix m, and an affine transformation with m as parameter is applied to the face, yielding the aligned face image.
3. The feature-attention-based transferable adversarial attack method for face recognition of claim 1, wherein the model feature-map scale is computed as

r_l = ||Δ'_l||_2 / ||Δ_l||_2,   Δ_l = F_l(x') - F_l(x),   Δ'_l = F_l(x'') - F_l(x)

wherein Δ_l denotes the adversarial perturbation direction at model layer l induced by the output-layer attack, Δ'_l denotes the adversarial perturbation direction at model layer l induced by the feature-layer attack, F_l(x'') denotes the feature map of the feature-layer attack sample x'' at model layer l, F_l(x') denotes the feature map of the output-layer attack sample x' at model layer l, and F_l(x) denotes the feature map of the input sample x at model layer l.
4. The feature-attention-based transferable adversarial attack method for face recognition of claim 1, wherein in step 3 the feature maps and feature attentions are obtained and integrated by feature fusion as follows:
step 3.1: letting the peak layer selected in step 2 be l and the face recognition model be F, computing, for layer l and the two layers before it, the model feature map F_l(x) and the feature attention α_l(x) = ∂F(x)/∂F_l(x), wherein F(x) denotes the feature map of the input sample x at the last layer of the model and ∂ denotes the partial derivative;
step 3.2: since different layers differ in the number and size of their feature channels, downsampling and upsampling the feature maps and feature attentions of the shallow and deep layers among the three selected layers so that all three have the same size, facilitating the subsequent feature fusion;
step 3.3: fusing the feature maps of the three layers to obtain the multi-layer integrated feature map:

F̂(x) = Concat(DS(F_{l-2}(x)), F_{l-1}(x), UP(F_l(x)))

wherein F̂(x) denotes the integrated feature map, l is the peak layer selected in step 2, l-2 and l-1 are the two layers before the peak layer, DS denotes the downsampling operation, UP denotes the upsampling operation, F_{l-2}(x) denotes the feature map of the input sample x at model layer l-2, F_{l-1}(x) denotes the feature map of the input sample x at model layer l-1, x denotes a model input sample, and Concat is the classical feature-fusion operation of concatenation along the channel dimension;
step 3.4: fusing the feature attentions of the three layers to obtain the multi-layer integrated feature attention:

α̂(x) = Concat(DS(α_{l-2}(x)), α_{l-1}(x), UP(α_l(x)))

wherein α̂(x) denotes the integrated feature attention, l is the peak layer selected in step 2, l-2 and l-1 are the two layers before the peak layer, DS denotes the downsampling operation, UP denotes the upsampling operation, α_{l-2}(x) denotes the feature attention of the input sample x at model layer l-2, α_{l-1}(x) denotes the feature attention of the input sample x at model layer l-1, x denotes a model input sample, and Concat is the classical feature-fusion operation of concatenation along the channel dimension.
5. The feature-attention-based transferable adversarial attack method for face recognition of claim 1, wherein in step 4 the final adversarial example is obtained by computing the distance loss between feature maps and feature attentions as follows: using the integrated feature map F̂ and integrated feature attention α̂ obtained in step 3, the differences between the feature maps and the feature attentions of the attacking face and the target face are computed with the L2-norm distance to obtain the feature-integration loss; in each iteration the loss is back-propagated to obtain noise, and the gradient update direction is optimized with a momentum-based update method.
6. The feature-attention-based transferable adversarial attack method for face recognition of claim 5, wherein the feature-integration loss is

L(x_adv, x_t) = ||F̂(x_adv) - F̂(x_t)||_2 + ||α̂(x_adv) - α̂(x_t)||_2

wherein x_adv denotes the adversarial face obtained at each iteration, x_t denotes the target face under attack, F̂(x_adv) denotes the integrated adversarial-face feature map obtained as in step 3, F̂(x_t) denotes the integrated target-face feature map obtained as in step 3, α̂(x_adv) denotes the integrated adversarial-face feature attention obtained as in step 3, and α̂(x_t) denotes the integrated target-face feature attention obtained as in step 3;
the momentum-based update method that optimizes the gradient update direction is

g_{t+1} = u · g_t + ∇L(x_t^adv, x_t) / ||∇L(x_t^adv, x_t)||_1

wherein g_{t+1} denotes the gradient accumulated at iteration t+1, g_t denotes the gradient accumulated at iteration t, u is the momentum factor, x_t^adv denotes the adversarial example generated at iteration t, and L denotes the feature-integration loss obtained at each update; noise is added to the adversarial example at each iteration and a clipping operation is applied, and when the iterations finish the final face adversarial example is obtained.
7. A computing device comprising a memory having executable code stored therein and a processor which, when executing the executable code, implements the method of any of claims 1-6.
CN202310518841.0A 2023-05-10 2023-05-10 Feature-attention-based transferable adversarial attack method and device for face recognition Active CN116259098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310518841.0A CN116259098B (en) Feature-attention-based transferable adversarial attack method and device for face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310518841.0A CN116259098B (en) Feature-attention-based transferable adversarial attack method and device for face recognition

Publications (2)

Publication Number Publication Date
CN116259098A 2023-06-13
CN116259098B 2023-07-25

Family

ID=86679640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310518841.0A Active CN116259098B (en) Feature-attention-based transferable adversarial attack method and device for face recognition

Country Status (1)

Country Link
CN (1) CN116259098B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114626042A (en) * 2022-03-18 2022-06-14 杭州师范大学 Face verification attack method and device
CN115798056A (en) * 2022-10-20 2023-03-14 招商银行股份有限公司 Face confrontation sample generation method, device and system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10339367B2 (en) * 2016-03-29 2019-07-02 Microsoft Technology Licensing, Llc Recognizing a face and providing feedback on the face-recognition process
CN110633747A (en) * 2019-09-12 2019-12-31 网易(杭州)网络有限公司 Compression method, device, medium and electronic device for target detector
CN113435264A (en) * 2021-06-08 2021-09-24 广州紫为云科技有限公司 Face recognition attack resisting method and device based on black box substitution model searching
CN115147682B (en) * 2022-07-04 2024-07-05 内蒙古科技大学 Method and device for generating hidden white box countermeasure sample with mobility


Also Published As

Publication number Publication date
CN116259098A (en) 2023-06-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant