CN114241569B - Face recognition attack sample generation method, model training method and related equipment - Google Patents


Info

Publication number
CN114241569B
Authority
CN
China
Prior art keywords
face image
training
loss function
model
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111571253.0A
Other languages
Chinese (zh)
Other versions
CN114241569A (en)
Inventor
于志刚
白亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202111571253.0A priority Critical patent/CN114241569B/en
Publication of CN114241569A publication Critical patent/CN114241569A/en
Application granted granted Critical
Publication of CN114241569B publication Critical patent/CN114241569B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a method for generating a face recognition attack sample, a model training method, and related equipment. The method comprises the following steps: performing implicit vector initialization and noise input on a generated countermeasure network model to generate a virtual initial face image; optimizing model parameters based on the similarity between a reference face image and the initial face image to obtain an optimized implicit vector and noise; generating a training face image based on the optimized implicit vector and noise; calculating the classification loss of the training face image and a target face image through a face recognition model, calculating the similarity loss of the training face image and the reference face image, and updating the model parameters of the countermeasure network model through a back propagation algorithm until the face image generated by the updated countermeasure network model is misjudged as the target face image by the face recognition model. The method can generate more realistic face images that cause the model to misjudge; the generation method is simple, the disturbance is small, and the concealment and stability of the attack sample are improved.

Description

Face recognition attack sample generation method, model training method and related equipment
Technical Field
The disclosure relates to the technical field of face recognition, in particular to a generation method of a face recognition attack sample, a model training method and related equipment.
Background
Face recognition is a biometric technology that performs identity recognition based on facial feature information, and is realized by extracting facial features with deep neural networks. Deep neural networks are susceptible to attacks by "adversarial samples", which can make a model produce incorrect predictions by adding small perturbations that are not noticeable to the human eye. Adversarial attacks (Adversarial Attack) can be used to evaluate the robustness (Robustness) of a face recognition system in practical applications, identify "weaknesses" of a deep neural network model, and help the deep neural network model improve its robustness.
GAN, short for Generative Adversarial Network and rendered in this disclosure as "generated countermeasure network", was originally proposed by Goodfellow et al. and is now widely used for a series of unsupervised and semi-supervised image generation tasks. A GAN network mainly includes a Generator and a Discriminator. The generator is responsible for generating content, which may be pictures, text, or music, from a random vector, and aims to "cheat" the discriminator. The discriminator is responsible for judging whether the received content is authentic or machine-generated, in order to find the "false data" produced by the generator; it typically outputs a probability representing the authenticity of the content. The main idea of the GAN network is to learn the data distribution through an adversarial game, in which the generator tries to fool the discriminator, whose aim is to distinguish whether an image comes from the real data distribution.
Since the generated countermeasure network was proposed, related research and applications have produced various types of variants, which are mainly applied to fields such as image data generation and text and speech generation. At present, most adversarial-sample generation based on GAN networks adds perturbations to a real image or its shallow features; a face image, a face feature code, and a target face feature code must all be input, so the operation steps are complex and manual interference is heavy.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
It is an object of the present disclosure to provide a method of generating a face recognition attack sample, a model training method, and related devices, which overcome, at least in part, one or more of the problems due to the limitations and disadvantages of the related art.
According to a first aspect of an embodiment of the present disclosure, there is provided a method for generating a face recognition attack sample, including:
initializing a hidden vector and inputting noise to the generated countermeasure network model to generate a virtual initial face image;
optimizing model parameters of the generated countermeasure network based on the similarity of a reference face image and the initial face image to obtain optimized implicit vectors and noise, wherein the reference face image is a real face image;
Training the generated countermeasure network model based on the optimized implicit vector and noise to generate a training face image;
calculating a classification loss function of the training face image and a target face image through a face recognition model, wherein the target face image is a target face image to be attacked;
calculating a similarity loss function based on the similarity of the training face image and the reference face image;
and updating model parameters of the countermeasure network model through a back propagation algorithm based on the classification loss function and the similarity loss function until the face image generated by the updated countermeasure network model is misjudged as a target face image by the face recognition model.
In an exemplary embodiment of the present disclosure, the step of optimizing the model parameters of the generated countermeasure network based on the similarity between the reference face image and the initial face image, and obtaining the optimized implicit vector and noise includes:
determining a similarity loss function based on the similarity of the reference face image and the initial face image;
optimizing the model parameters of the generated countermeasure network through a back propagation algorithm based on the similarity loss function, so that the similarity between the face image generated by the optimized generated countermeasure network model and the reference image reaches a preset value;
And taking the implicit vector and noise in the optimized generation countermeasure network model as the optimized implicit vector and noise.
In one exemplary embodiment of the present disclosure, the step of updating model parameters of the countermeasure network model by a back propagation algorithm based on the classification loss function and the similarity loss function includes:
determining a total loss function based on the classification loss function, the similarity loss function, and a loss function of a generative countermeasure network model;
model parameters of the countermeasure network model are updated by a back propagation algorithm based on the total loss function.
In an exemplary embodiment of the present disclosure, the total loss function is based on the formula:
L_total = L_GAN + α·L_adv + β·L_sim
wherein L_total represents the total loss function, L_GAN represents the loss function of the generated countermeasure network model, L_adv represents the classification loss function of the training face image and the target face image, L_sim represents the similarity loss function of the training face image and the reference face image, and α and β are hyper-parameters controlling the weights of the loss terms.
In one exemplary embodiment of the present disclosure, the classification loss function of the training face image and the target face image is determined by calculating a cross entropy loss function; and the similarity loss function of the training face image and the reference face image is determined by calculating the structural similarity index, cosine distance, KL distance, or JS distance of the training face image and the reference face image.
In one exemplary embodiment of the present disclosure, the generative countermeasure network model is a StyleGAN network model, a ProGAN network model, or a BigGAN network model, the generative countermeasure network model being pre-trained from a public dataset.
According to a second aspect of the embodiments of the present disclosure, there is provided a training method of a face recognition model, including:
acquiring a real face image;
generating an attack sample corresponding to the real face image, wherein the attack sample is generated according to the generation method of the face recognition attack sample;
fusing the real face image and the corresponding attack sample to obtain a training sample;
and training the face recognition model according to the training sample.
According to a third aspect of the embodiments of the present disclosure, there is provided a generating apparatus for a face recognition attack sample, including:
the initial face image generation module is used for carrying out implicit vector initialization and noise input on the generated countermeasure network model to generate a virtual initial face image;
the parameter optimization module is used for optimizing the model parameters of the generated countermeasure network based on the similarity between the reference face image and the initial face image to obtain optimized hidden vectors and noise, wherein the reference face image is a real face image;
The training face image generation module is used for training the generated countermeasure network model based on the optimized implicit vector and noise to generate a training face image;
the classification loss function determining module is used for calculating the classification loss function of the training face image and the target face image based on the face recognition model, wherein the target face image is an oriented target face image to be attacked;
the similarity loss function determining module is used for calculating a similarity loss function based on the similarity between the training face image and the reference face image;
and the model parameter updating module is used for updating the model parameters of the countermeasure network model through a back propagation algorithm based on the classification loss function and the similarity loss function until the face image generated by the updated countermeasure network model is misjudged as a target face image by the face recognition model.
According to a fourth aspect of embodiments of the present disclosure, there is provided a training apparatus for a face recognition model, including:
the real face acquisition module is used for acquiring a real face image;
and the attack sample generation module is used for generating an attack sample corresponding to the real face image, and the attack sample is generated according to the generation method of the face recognition attack sample.
The image fusion module is used for fusing the real face image and the corresponding attack sample to obtain a training sample;
and the training module is used for training the face recognition model according to the training sample.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of generating a face recognition attack sample as claimed in any one of the preceding claims or the method of training a face recognition model as described above via execution of the executable instructions.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of generating a face recognition attack sample as set forth in any one of the above or the method of training a face recognition model as set forth above.
According to the method for generating a face recognition attack sample of the present disclosure, the similarity between the reference face image and the virtual face image is compared to find an optimized implicit vector and noise input, and a training face image with high visual quality is generated in a directed manner. Parameters are then updated through back propagation using the classification loss between the training face image and the target face image and the similarity loss between the training face image and the reference face image, until the face recognition model to be attacked misjudges the training face image as the target face image. The misjudged face image is the directed attack sample for the target face image. The generation method turns an uncontrollable face image generation process into a controllable one. The generated high-visual-quality directed face countermeasure sample does not destroy the original picture information and restores the original image information to the greatest extent; the method is simple to implement, reduces operation steps and human interference, saves manpower and material resources, and introduces only small disturbance, further improving the concealment and stability of the sample. The generated attack sample image can effectively mount an adversarial attack on the face recognition model to be attacked. By fusing the real face image with the corresponding attack sample image, the target face model can be effectively trained to defend against such attacks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a flowchart of a method for generating a face recognition attack sample in an exemplary embodiment of the disclosure.
Fig. 2 schematically illustrates a block diagram of a StyleGAN network model in an exemplary embodiment of the present disclosure.
Fig. 3 schematically illustrates a flowchart for optimizing model parameters of the generated countermeasure network based on similarity of a reference face image and the initial face image in an exemplary embodiment of the present disclosure.
Fig. 4 schematically illustrates a flow chart of updating model parameters of the countermeasure network model by a back propagation algorithm in an exemplary embodiment of the present disclosure.
Fig. 5 schematically illustrates a flowchart of a training method of a face recognition model in an exemplary embodiment of the present disclosure.
Fig. 6 schematically illustrates a block diagram of a generating apparatus of a face recognition attack sample in an exemplary embodiment of the present disclosure.
Fig. 7 schematically illustrates a block diagram of a training apparatus of a face recognition model in an exemplary embodiment of the present disclosure.
Fig. 8 schematically illustrates a block diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are only schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The method for generating a face recognition attack sample of the present disclosure aims at generating directed attack samples for face recognition models. For ease of understanding, several terms referred to in this application are first explained below.
An attack sample, also called an adversarial sample, is an image that can fool the target classifier after small modifications. Given a target model f and a clean image X with its real label y, both belonging to a label set S, f(X) = y indicates that the classification performance of the target model is good and normal classification is performed. The objective of a directed attack is to obtain a new image X_0 such that the target model predicts f(X_0) = y_0, where y_0 ∈ S, y_0 is one specific label, and y_0 ≠ y.
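Expressed as a minimal Python sketch (the `model` interface mapping an image directly to a predicted label is an illustrative assumption, not part of this disclosure), the success condition of a directed attack is:

```python
# Hypothetical sketch of the directed-attack objective above; `model` is
# assumed to map an image directly to a predicted label from the label set S.
def is_successful_directed_attack(model, x_new, y_true, y_target):
    """True if the target model predicts the attacker's chosen label
    y_target for the new image, with y_target != y_true."""
    assert y_target != y_true, "a directed attack requires y_target != y_true"
    return model(x_new) == y_target
```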
The following describes example embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 schematically illustrates a flowchart of a method of generating a face recognition attack sample in an exemplary embodiment of the present disclosure. The method for generating the face recognition attack sample can be realized by a server. Referring to fig. 1, a method 100 for generating a face recognition attack sample includes:
step S101, carrying out implicit vector initialization and noise input on the generated countermeasure network model to generate a virtual initial face image;
step S102, optimizing model parameters of the generated countermeasure network based on the similarity of a reference face image and the initial face image to obtain optimized hidden vectors and noise, wherein the reference face image is a real face image;
step S103, training the generated countermeasure network model based on the optimized implicit vector and noise to generate a training face image;
step S104, calculating a classification loss function of the training face image and a target face image through a face recognition model, wherein the target face image is an oriented target face image to be attacked;
Step S105, calculating a similarity loss function based on the similarity between the training face image and the reference face image;
and step S106, updating model parameters of the countermeasure network model through a back propagation algorithm based on the classification loss function and the similarity loss function until the face image generated by the updated countermeasure network model is misjudged as a target face image by the face recognition model.
By comparing the similarity between the reference face image and the virtual face image, an optimized implicit vector and noise input are found, and a training face image with high visual quality is generated in a directed manner. Parameters are then updated through back propagation using the classification loss between the training face image and the target face image and the similarity loss between the training face image and the reference face image, until the face recognition model to be attacked misjudges the training face image as the target face image. The misjudged face image is the directed attack sample for the target face image. The generation method turns an uncontrollable face image generation process into a controllable one. The generated high-visual-quality directed face countermeasure sample does not destroy the original picture information and restores the original image information to the greatest extent; the method is simple to implement, reduces operation steps and human interference, saves manpower and material resources, and introduces only small disturbance, further improving the concealment and stability of the sample. The generated attack sample image can effectively mount an adversarial attack on the face recognition model to be attacked.
Next, each step of the method 100 for generating a face recognition attack sample will be described in detail.
Step S101, performing implicit Vector (Vector) initialization and Noise (Noise) input on the generated countermeasure network model, and generating a virtual initial face image.
In one exemplary embodiment of the present disclosure, the generated countermeasure network model may be, for example, a StyleGAN network model, a ProGAN network model, or a BigGAN network model. In the present disclosure, the generated countermeasure network model is used to build a mapping from noise to images. The generated countermeasure network model is obtained through pre-training on a public data set, for example the high-definition face data set FFHQ. In one embodiment of the present disclosure, a StyleGAN network model pre-trained on the FFHQ dataset is used as the generated countermeasure network model of the present disclosure.
Fig. 2 shows a structural diagram of a StyleGAN network model according to an embodiment of the present invention. Referring to fig. 2, the StyleGAN network model structure includes a mapping network f (Mapping network) and a generator g (Synthesis network). The mapping network f, shown on the left side of fig. 2, controls the style of the generated image. It maps a latent vector z (latent code z) to an intermediate style space to obtain a style vector w, which is used to control the style of the generated image.
The generator g, shown on the right side of fig. 2, generates the image. Each layer of the generator g's sub-network takes a style vector w and a random noise input to gradually generate the image. In fig. 2, A is the affine transformation obtained by converting the style vector w and controls the style of the generated image, while B is the transformed random noise and enriches the details of the generated image; that is, each convolution layer can adjust the "style" according to the input A. The latent vector z influences global attributes such as the pose and identity of the face, while the noise influences detail parts such as hair, wrinkles, and skin color. StyleGAN transforms z to w through the separate mapping network and feeds w to each layer of the synthesis network, so the initial input of the synthesis network becomes a constant tensor, see Const 4×4×512 in fig. 2.
In one embodiment of the present disclosure, the implicit Vector (Vector) and Noise (Noise) inputs of this step are sampled from a standard Gaussian distribution. Different image appearances, such as gender, beard, and haircut, may be generated by varying the noise at different layers of the generator g, while varying the noise at higher layers affects more general features of the image, such as color or texture. Global attributes of the generated image, such as pose and identity, can be controlled by changing the styles of the generator g.
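A minimal PyTorch-style sketch of this sampling is given below; the `generator.mapping` / `generator.synthesis` interface and the per-layer noise shapes are illustrative assumptions standing in for a concrete pre-trained StyleGAN implementation:

```python
import torch

# Sketch of step S101: sample the implicit vector z and the per-layer noise
# from a standard Gaussian distribution and generate a virtual initial face.
# The generator interface below is assumed, not mandated by this disclosure.
def generate_initial_face(generator, latent_dim=512, num_layers=9, device="cpu"):
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    # One noise map per resolution level (4x4 up to 1024x1024 here).
    noise = [torch.randn(1, 1, 4 * 2**i, 4 * 2**i, device=device,
                         requires_grad=True) for i in range(num_layers)]
    w = generator.mapping(z)             # style vector: pose, identity, ...
    img = generator.synthesis(w, noise)  # noise enriches detail: hair, skin, ...
    return img, z, noise
```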
Referring to fig. 3, in step S102, model parameters of the generated countermeasure network are optimized based on the similarity between the reference face image and the initial face image, so as to obtain optimized implicit vectors and noise, which specifically includes:
step S301, determining a similarity loss function based on the similarity of the reference face image and the initial face image;
step S302, optimizing the model parameters of the generated countermeasure network through a back propagation algorithm based on the similarity loss function, so that the similarity between the face image generated by the optimized generated countermeasure network model and the reference image reaches a preset value;
and step S303, taking the implicit vector and noise in the optimized generated countermeasure network model as the optimized implicit vector and noise.
In the above steps, a real face image from the real world is taken as the reference face image, while the initial face image obtained from the initial implicit vector and noise input is an unknown high-definition face image that may differ completely from the reference face image in attributes such as ethnicity and skin color. The purpose of step S102 is to find the latent vector of the reference face image in this StyleGAN network model. For example, let the reference face image be Im_r, the initial implicit vector be z, and the generated initial face image be Im_g1. A loss function is set on the similarity between Im_g1 and Im_r, and the implicit vector z is iterated continuously through this loss, finally obtaining the most suitable latent vector and noise for the reference face image.
In one embodiment of the present disclosure, in step S301, the similarity loss function of the reference face image and the initial face image is determined by calculating the structural similarity (SSIM) index, cosine distance, KL distance, or JS distance of the initial face image and the reference face image.
In one embodiment of the present disclosure, the similarity of the initial face image and the reference face image is measured by the SSIM structural similarity theory. SSIM consists of three comparisons: luminance, contrast, and structure. The mean gray level, gray-level standard deviation, and structure measurements sampled from the initial face image and the reference face image are fed into the corresponding comparison functions to obtain the final similarity index. The specific operation flow may comprise: (1) resizing the initial face image and the reference face image and converting them to grayscale; (2) using a Gaussian weighting function with a standard deviation of 1.5 as the weighting window and computing, at each step, a local SSIM index from the pixels within the window, yielding an SSIM index map composed of local SSIM indices; (3) taking the average SSIM index as the final result for evaluating the similarity of the initial face image and the reference face image. The similarity loss function of the initial face image and the reference face image is as follows:
L_sim = 1 - SSIM(Im_g1, Im_r)
wherein Im_g1 represents the initial face image and Im_r represents the reference face image.
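A minimal PyTorch sketch of this SSIM-based similarity loss is given below, assuming grayscale images of shape (1, 1, H, W) normalized to [0, 1]; the 11×11 window size and the stability constants c1, c2 are conventional SSIM choices rather than values fixed by this disclosure:

```python
import torch
import torch.nn.functional as F

def gaussian_window(size=11, sigma=1.5):
    # 2-D Gaussian weighting window with standard deviation 1.5, as above.
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    g = g / g.sum()
    return torch.outer(g, g).view(1, 1, size, size)

def ssim_loss(img_a, img_b):
    """L_sim = 1 - mean(local SSIM map) for grayscale images of shape
    (1, 1, H, W) normalized to [0, 1]."""
    c1, c2 = 0.01**2, 0.03**2  # conventional SSIM stability constants
    w = gaussian_window().to(img_a.device)
    mu_a = F.conv2d(img_a, w, padding=5)
    mu_b = F.conv2d(img_b, w, padding=5)
    var_a = F.conv2d(img_a * img_a, w, padding=5) - mu_a * mu_a
    var_b = F.conv2d(img_b * img_b, w, padding=5) - mu_b * mu_b
    cov = F.conv2d(img_a * img_b, w, padding=5) - mu_a * mu_b
    ssim_map = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
               ((mu_a * mu_a + mu_b * mu_b + c1) * (var_a + var_b + c2))
    return 1 - ssim_map.mean()
```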
In another embodiment of the present disclosure, the similarity of the initial face image and the reference face image is measured by the cosine distance cos(a, b) between them:
L_sim = L_cos(Im_g1, Im_r) = 1 - cos(a, b)
wherein Im_g1 represents the initial face image, Im_r represents the reference face image, a is the feature vector of the initial face image, and b is the feature vector of the reference face image. For the cosine distance cos(a, b), a cosine value approaching 1 means the angle between the vectors tends to 0 and the two vectors are more similar; a cosine value approaching 0 means the angle tends to 90 degrees and the two vectors are less similar.
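The cosine-distance variant is correspondingly simple; a sketch assuming 1-D feature vectors a and b have been extracted beforehand:

```python
import torch.nn.functional as F

def cosine_loss(a, b):
    """L_sim = 1 - cos(a, b) for two 1-D feature vectors: approaches 0 as the
    vectors align (angle -> 0) and 1 as they become orthogonal (angle -> 90)."""
    return 1 - F.cosine_similarity(a, b, dim=0)
```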
The initial face image Im_g1 obtained from a random implicit vector and noise input is an unknown high-definition face image, possibly completely different from the reference face image Im_r in attributes such as ethnicity and skin color. In the first few steps of the iterative process, the initial face image Im_g1 remains substantially unchanged. As the number of iterations increases, the texture of the synthesized image deepens, gradually changing the appearance of the output image after more iteration steps and bringing different contrast intensities. As the iterative process deepens, the generated image becomes increasingly realistic. Gradient descent is performed on the noise and the implicit vector, updating them in the direction that maximizes the target model's prediction loss; the aim is to find the correct direction so that the generated image better deceives the model.
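Combining the above, step S102 can be sketched as a short gradient-descent loop over the implicit vector and noise (a hedged illustration: the optimizer, learning rate, step count, and stopping threshold are assumptions, and the images are assumed to be in the grayscale form expected by the `ssim_loss` sketch above):

```python
import torch

# Sketch of step S102: iterate z and noise by gradient descent until the
# generated image is sufficiently similar to the reference face. Optimizer,
# learning rate, step count, and threshold are illustrative assumptions.
def invert_reference(generator, img_ref, z, noise, steps=500, lr=0.01,
                     threshold=0.05):
    opt = torch.optim.Adam([z, *noise], lr=lr)
    for _ in range(steps):
        img_g1 = generator.synthesis(generator.mapping(z), noise)
        loss = ssim_loss(img_g1, img_ref)  # L_sim = 1 - SSIM(Im_g1, Im_r)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if loss.item() < threshold:  # similarity has reached the preset value
            break
    return z, noise  # the optimized implicit vector and noise
```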
Step S103, training the generated countermeasure network model based on the optimized implicit vector and noise to generate a training face image. A suitable latent vector and noise are obtained through step S102; step S103 fine-tunes parameters based on the latent vector and noise obtained in step S102 to obtain a training face image Im_g. The training face image Im_g is very similar to the reference face image Im_r.
Step S104, calculating a classification loss function of the training face image and the target face image through a face recognition model, wherein the target face image is the directed target face image to be attacked. Specifically, the face recognition model may be, for example, FaceNet or InsightFace. The face recognition model is the face recognition algorithm to be attacked in the end, and the classification loss of the training face image and the target face image is calculated through it.
In one exemplary embodiment of the present disclosure, the classification loss function of the training face image and the target face image is determined by calculating a cross entropy loss function. Specifically, the classification loss function L_adv of the training face image and the target face image is expressed as follows:
L_adv = L_soft(Im_g, Im_tl)
wherein L_soft(Im_g, Im_tl) is the cross entropy loss function, Im_g is the generated training face image, and Im_tl is the identity label of the specific target face image Im_t to be attacked.
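A corresponding sketch, assuming the face recognition model outputs identity logits and `target_label` is the integer identity label corresponding to Im_tl (names are illustrative):

```python
import torch
import torch.nn.functional as F

def classification_loss(face_model, img_g, target_label):
    """L_adv = L_soft(Im_g, Im_tl): cross entropy between the recognizer's
    prediction for the generated face and the target identity label."""
    logits = face_model(img_g)  # assumed shape: (1, num_identities)
    target = torch.tensor([target_label], device=logits.device)
    return F.cross_entropy(logits, target)
```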
Step S105, calculating a similarity loss function based on the similarity between the training face image and the reference face image. In this step, the similarity loss function of the training face image Im_g and the reference face image Im_r is determined by calculating their structural similarity (SSIM) index, cosine distance, KL distance, or JS distance.
In one embodiment of the present disclosure, the similarity of the training face image Im_g and the reference face image Im_r is measured by the SSIM structural similarity theory. The specific procedure is described above. The similarity loss function of the training face image and the reference face image is as follows:
L_sim = 1 - SSIM(Im_g, Im_r)
wherein Im_g represents the training face image and Im_r represents the reference face image.
In another embodiment of the present disclosure, the similarity of the training face image and the reference face image is measured by the cosine distance cos(a, b) between them. The similarity loss function of the training face image and the reference face image is as follows:
L_sim = L_cos(Im_g, Im_r) = 1 - cos(a, b)
wherein Im_g represents the training face image, Im_r represents the reference face image, a is the feature vector of the training face image, and b is the feature vector of the reference face image.
In step S106, the step of updating the model parameters of the countermeasure network model by a back propagation algorithm based on the classification loss function and the similarity loss function specifically includes:
step S1061, determining a total loss function based on the classification loss function, the similarity loss function, and a loss function of the generated countermeasure network model;
step S1062, updating the model parameters of the countermeasure network model by a back propagation algorithm based on the total loss function.
In an exemplary embodiment of the present disclosure, the total loss function is based on the formula:
L_total = L_GAN + α·L_adv + β·L_sim
wherein L_total represents the total loss function, L_GAN represents the loss function of the generated countermeasure network model, L_adv represents the classification loss function of the training face image and the target face image, L_sim represents the similarity loss function of the training face image and the reference face image, and α and β are hyper-parameters controlling the weights of the loss terms. In one embodiment of the present disclosure, α may take a value of, for example, 0.01, and β may take a value of, for example, 0.02.
wherein L_adv and L_sim are obtained from the classification loss function and the similarity loss function described above, respectively. The loss function L_GAN of the generated countermeasure network model is as follows:
L_GAN = E_x[log D(x^(i))] + E_z[log(1 - D(G(z^(i))))]
where E denotes the expectation of the bracketed quantity, D denotes the discriminator of the GAN network, and G denotes the generator of the GAN network. The GAN network is trained so that, as far as possible, D(x^(i)) = 1 and D(G(z^(i))) = 0 for the discriminator, while the generator strives to make the generated pictures realistic enough to pass for genuine ones.
Fig. 4 schematically shows a flowchart of step S106 in an exemplary embodiment of the present disclosure. Optimizing the similarity loss makes the generated training face image visually similar to the reference face image, while optimizing the classification loss makes the generated training image fool the face recognition model, yielding a high attack success rate. Iterative updates driven by the loss function continue until the generated face image is successfully misjudged as the target face image by the face recognition model. If the face recognition model to be attacked misclassifies the generated face image as the label of the target face image, a face recognition attack sample based on a directed attack has been successfully obtained.
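The overall iteration of step S106 can be sketched as follows (a hedged illustration reusing `classification_loss` and `ssim_loss` from above: here the implicit vector and noise are the quantities updated by back propagation, the generator-side GAN term is a simplified form assuming the discriminator outputs a probability of the image being real, and α = 0.01, β = 0.02 follow the embodiment above):

```python
import torch

# Sketch of step S106: minimize L_total = L_GAN + alpha*L_adv + beta*L_sim
# until the recognizer misjudges the generated face as the target identity.
def generate_attack_sample(generator, discriminator, face_model, img_ref,
                           target_label, z, noise, alpha=0.01, beta=0.02,
                           max_steps=1000, lr=0.01):
    opt = torch.optim.Adam([z, *noise], lr=lr)
    for _ in range(max_steps):
        img_g = generator.synthesis(generator.mapping(z), noise)
        l_gan = -torch.log(discriminator(img_g) + 1e-8).mean()
        l_adv = classification_loss(face_model, img_g, target_label)
        l_sim = ssim_loss(img_g, img_ref)
        opt.zero_grad()
        (l_gan + alpha * l_adv + beta * l_sim).backward()
        opt.step()
        # Success: the face recognition model misjudges the generated image
        # as the target face image, yielding a directed attack sample.
        if face_model(img_g.detach()).argmax(dim=1).item() == target_label:
            return img_g.detach()
    return None
```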
Fig. 5 schematically illustrates a flowchart of a training method of a face recognition model in an exemplary embodiment of the present disclosure. Referring to fig. 5, a training method 500 of a face recognition model includes:
step S501, acquiring a real face image. The real face image is the target face image.
Step S502, generating an attack sample corresponding to the real face image, where the attack sample is generated according to the method for generating a face recognition attack sample described above; see the attack sample obtained by the face recognition attack sample generation method 100. The attack sample is sufficiently similar to the reference face image, yet can be misclassified as the true face image by the face recognition model.
Step S503, fusing the real face image and the corresponding attack sample to obtain a training sample. Specifically, image fusion combines two images into a new image using a specific algorithm; for example, fusion may be performed by a logic filter method, HIS transform, PCA transform, high-pass filtering, pyramid decomposition, or wavelet transform.
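As one concrete instance (a sketch assuming aligned images of equal size and dtype, with an illustrative 0.5 blending weight; the transform-based fusion methods above are equally applicable alternatives):

```python
import cv2

def fuse_images(img_real, img_attack, weight=0.5):
    # Weighted-sum fusion of two aligned images of equal size and dtype;
    # the 0.5 weight is an illustrative assumption.
    return cv2.addWeighted(img_real, weight, img_attack, 1.0 - weight, 0)
```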
Step S504, training the face recognition model according to the training samples. Training the face recognition model with the fused training samples improves its generalization capability and robustness, enabling it to defend against the corresponding adversarial attacks.
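A minimal fine-tuning sketch (illustrative assumptions: the fused samples arrive as batched tensors with integer identity labels, and a cross-entropy objective is used):

```python
import torch
import torch.nn.functional as F

def adversarial_finetune(face_model, loader, epochs=5, lr=1e-4):
    """Fine-tune the face recognition model on fused training samples so it
    learns to assign attack-contaminated images their true identity labels.
    `loader` is assumed to yield (image batch, integer label batch) pairs."""
    opt = torch.optim.Adam(face_model.parameters(), lr=lr)
    face_model.train()
    for _ in range(epochs):
        for imgs, labels in loader:
            loss = F.cross_entropy(face_model(imgs), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
```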
Fig. 6 schematically illustrates a schematic diagram of a generating apparatus of a face recognition attack sample in an exemplary embodiment of the present disclosure. Referring to fig. 6, a generating apparatus 600 of a face recognition attack sample includes:
the initial face image generating module 610 is configured to perform implicit vector initialization and noise input on the generated countermeasure network model, and generate a virtual initial face image;
the parameter optimization module 620 is configured to optimize the model parameters of the generated countermeasure network based on the similarity between a reference face image and the initial face image, so as to obtain an optimized implicit vector and noise, where the reference face image is a real face image;
A training face image generating module 630, configured to train the generating countermeasure network model based on the optimized implicit vector and noise, and generate a training face image;
the classification loss function determining module 640 is configured to calculate a classification loss function of the training face image and a target face image based on a face recognition model, where the target face image is a target face image to be attacked;
a similarity loss function determination module 650, configured to calculate a similarity loss function based on a similarity between the training face image and the reference face image;
and a model parameter updating module 660, configured to update model parameters of the countermeasure network model by a back propagation algorithm based on the classification loss function and the similarity loss function until a face image generated by the updated countermeasure network model is misjudged as a target face image by the face recognition model.
In an embodiment of the present disclosure, the apparatus 600 for generating a face recognition attack sample may further include modules for implementing other flow steps of the above-described processing method embodiments. For example, the specific principles of the respective modules and sub-modules may refer to the description of the embodiment of the method 100 for generating the face recognition attack sample described above, and the description thereof will not be repeated here.
Fig. 7 schematically illustrates a schematic diagram of a training apparatus of a face recognition model in an exemplary embodiment of the present disclosure. Referring to fig. 7, a training apparatus 700 of a face recognition model includes:
a real face acquisition module 710, configured to acquire a real face image;
and an attack sample generation module 720, configured to generate an attack sample corresponding to the real face image, where the attack sample is generated according to the method for generating the face recognition attack sample as described above.
an image fusion module 730, configured to fuse the real face image and the corresponding attack sample to obtain a training sample;
and a training module 740 for training the face recognition model according to the training sample.
Since each function of the training device 700 of the face recognition model is described in detail in the corresponding method embodiment, the disclosure is not repeated here.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit", "module", or "system".
An electronic device 800 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, and a bus 830 connecting the various system components, including the memory unit 820 and the processing unit 810.
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present invention described in the above section of the "exemplary method" of the present specification.
In one embodiment of the disclosure, the processing unit 810 may perform step S101 shown in fig. 1, perform implicit vector initialization and noise input on the generated countermeasure network model, and generate a virtual initial face image; step S102, optimizing model parameters of the generated countermeasure network based on the similarity of a reference face image and the initial face image to obtain optimized hidden vectors and noise, wherein the reference face image is a real face image; step S103, training the generated countermeasure network model based on the optimized implicit vector and noise to generate a training face image; step S104, calculating a classification loss function of the training face image and a target face image through a face recognition model, wherein the target face image is an oriented target face image to be attacked; step S105, calculating a similarity loss function based on the similarity between the training face image and the reference face image; and step S106, updating model parameters of the countermeasure network model through a back propagation algorithm based on the classification loss function and the similarity loss function until the face image generated by the updated countermeasure network model is misjudged as a target face image by the face recognition model.
In another embodiment of the present disclosure, the processing unit 810 may perform step S501 shown in fig. 5 to obtain a real face image; step S502, generating an attack sample corresponding to the real face image; step S503, fusing the real face image and the corresponding attack sample to obtain a training sample; step S504, training the face recognition model according to the training sample.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 880. As shown, network adapter 880 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. The method for generating the face recognition attack sample is characterized by comprising the following steps of:
initializing a hidden vector and inputting noise to the generated countermeasure network model to generate a virtual initial face image;
optimizing model parameters of the generated countermeasure network model based on the similarity of a reference face image and the initial face image to obtain optimized implicit vectors and noise, wherein the reference face image is a real face image;
Training the generated countermeasure network model based on the optimized implicit vector and noise to generate a training face image;
calculating a classification loss function of the training face image and a target face image through a face recognition model, wherein the target face image is a target face image to be attacked;
calculating a similarity loss function based on the similarity of the training face image and the reference face image;
and updating model parameters of the generated countermeasure network model through a back propagation algorithm based on the classification loss function and the similarity loss function until the face image generated by the updated generated countermeasure network model is misjudged as a target face image by the face recognition model.
2. The method for generating a face recognition attack sample according to claim 1, wherein optimizing the model parameters of the generative adversarial network model based on the similarity between the reference face image and the initial face image to obtain the optimized latent vector and noise comprises:
determining a similarity loss function based on the similarity between the reference face image and the initial face image;
optimizing the model parameters of the generative adversarial network model through a back-propagation algorithm based on the similarity loss function, so that the similarity between a face image generated by the optimized generative adversarial network model and the reference face image reaches a preset value; and
taking the latent vector and noise of the optimized generative adversarial network model as the optimized latent vector and noise.
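A sketch of the latent/noise fitting described in claim 2, in the same PyTorch style. The 512-dimensional latent size, the `generator(latent, noise)` call, and cosine similarity as the similarity measure are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def optimize_latent_and_noise(generator, reference_img,
                              steps=300, lr=0.01, preset_similarity=0.95):
    """Fit the latent vector and input noise so the generated face
    approaches the real reference face (claim 2)."""
    latent = torch.randn(1, 512, requires_grad=True)  # assumed latent size
    noise = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([latent, noise], lr=lr)
    for _ in range(steps):
        img = generator(latent, noise)
        sim = F.cosine_similarity(img.flatten(1),
                                  reference_img.flatten(1)).mean()
        loss = 1.0 - sim                     # similarity loss of claim 2
        opt.zero_grad()
        loss.backward()                      # back-propagation optimization
        opt.step()
        if sim.item() >= preset_similarity:  # "reaches a preset value"
            break
    return latent.detach(), noise.detach()
```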
3. The method of claim 1, wherein updating the model parameters of the generative adversarial network model through a back-propagation algorithm based on the classification loss function and the similarity loss function comprises:
determining a total loss function based on the classification loss function, the similarity loss function, and a loss function of the generative adversarial network model; and
updating the model parameters of the generative adversarial network model through a back-propagation algorithm based on the total loss function.
4. The method for generating a face recognition attack sample according to claim 3, wherein the total loss function is given by:

L_total = L_GAN + α·L_cls + β·L_sim

wherein L_total represents the total loss function; L_GAN represents the loss function of the generative adversarial network model; L_cls represents the classification loss function between the training face image and the target face image; L_sim represents the similarity loss function between the training face image and the reference face image; and α and β are hyper-parameters controlling the weights of the loss terms.
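Read as code, the claim-4 total loss is a weighted sum. A one-function transcription, assuming the three component losses have already been computed (the symbol names L_GAN, L_cls, L_sim above are likewise reconstructions, since the original formula images did not survive extraction):

```python
def total_loss(gan_loss, cls_loss, sim_loss, alpha=1.0, beta=1.0):
    # alpha and beta are the claim-4 hyper-parameters weighting the
    # classification and similarity terms; 1.0 is a placeholder value.
    return gan_loss + alpha * cls_loss + beta * sim_loss
```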
5. The method for generating a face recognition attack sample according to claim 3, wherein the classification loss function between the training face image and the target face image is determined by calculating a cross-entropy loss; and the similarity loss function between the training face image and the reference face image is determined by calculating the structural similarity index (SSIM), cosine distance, KL divergence, or JS divergence between the training face image and the reference face image.
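Claim 5 names four interchangeable similarity measures. Hedged PyTorch sketches of three follow; treating softmax-normalized pixel intensities as probability distributions for the KL and JS variants is an assumption made here for illustration, and SSIM is omitted for brevity (libraries such as scikit-image provide an implementation).

```python
import torch
import torch.nn.functional as F

def cosine_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # 1 minus cosine similarity over flattened images.
    return 1.0 - F.cosine_similarity(a.flatten(1), b.flatten(1)).mean()

def _as_distribution(x: torch.Tensor) -> torch.Tensor:
    # Softmax over pixels turns an image into a distribution (an assumption).
    return x.flatten(1).softmax(dim=1)

def _kl(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # KL divergence between two batched distributions.
    return (p * ((p + eps) / (q + eps)).log()).sum(dim=1).mean()

def kl_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return _kl(_as_distribution(a), _as_distribution(b))

def js_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Symmetrized, smoothed variant of the KL distance.
    p, q = _as_distribution(a), _as_distribution(b)
    m = 0.5 * (p + q)
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)
```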
6. The method for generating a face recognition attack sample according to claim 1, wherein the generative adversarial network model is a StyleGAN, ProGAN, or BigGAN network model and is pre-trained on a public dataset.
7. A method for training a face recognition model, comprising:
acquiring a real face image;
generating an attack sample corresponding to the real face image, wherein the attack sample is generated according to the method for generating a face recognition attack sample of any one of claims 1 to 6;
fusing the real face image and the corresponding attack sample to obtain a training sample;
and training the face recognition model according to the training sample.
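One plausible reading of the training method in claim 7, sketched in PyTorch. The claim does not pin down what "fusing" means, so this sketch simply assumes it combines each real face and its attack sample into one labeled training set; every name here is illustrative rather than the patent's implementation.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def build_training_loader(real_imgs: torch.Tensor,
                          attack_imgs: torch.Tensor,
                          labels: torch.Tensor,
                          batch_size: int = 32) -> DataLoader:
    """Fuse real faces with their attack samples into one training set."""
    # Attack samples keep the identity label of the face they were built
    # from, so the recognizer learns to resist the targeted attack.
    images = torch.cat([real_imgs, attack_imgs], dim=0)
    targets = torch.cat([labels, labels], dim=0)
    return DataLoader(TensorDataset(images, targets),
                      batch_size=batch_size, shuffle=True)
```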
8. A device for generating a face recognition attack sample, comprising:
an initial face image generation module, configured to initialize a latent vector and input noise for a generative adversarial network model to generate a virtual initial face image;
a parameter optimization module, configured to optimize model parameters of the generative adversarial network model based on the similarity between a reference face image and the initial face image to obtain an optimized latent vector and noise, wherein the reference face image is a real face image;
a training face image generation module, configured to generate a training face image with the generative adversarial network model based on the optimized latent vector and noise;
a classification loss function determination module, configured to calculate a classification loss function between the training face image and a target face image based on the face recognition model, wherein the target face image is the face image targeted by the attack;
a similarity loss function determination module, configured to calculate a similarity loss function based on the similarity between the training face image and the reference face image; and
a model parameter update module, configured to update the model parameters of the generative adversarial network model through a back-propagation algorithm based on the classification loss function and the similarity loss function, until a face image generated by the updated generative adversarial network model is misclassified by the face recognition model as the target face image.
9. A device for training a face recognition model, comprising:
a real face acquisition module, configured to acquire a real face image;
an attack sample generation module, configured to generate an attack sample corresponding to the real face image, wherein the attack sample is generated according to the method for generating a face recognition attack sample of any one of claims 1 to 6;
an image fusion module, configured to fuse the real face image and the corresponding attack sample to obtain a training sample; and
a training module, configured to train the face recognition model according to the training sample.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform, via execution of the executable instructions, the method for generating a face recognition attack sample of any one of claims 1 to 6 or the method for training a face recognition model of claim 7.
11. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the method for generating a face recognition attack sample of any one of claims 1 to 6 or the method for training a face recognition model of claim 7.
CN202111571253.0A 2021-12-21 2021-12-21 Face recognition attack sample generation method, model training method and related equipment Active CN114241569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111571253.0A CN114241569B (en) 2021-12-21 2021-12-21 Face recognition attack sample generation method, model training method and related equipment


Publications (2)

Publication Number Publication Date
CN114241569A CN114241569A (en) 2022-03-25
CN114241569B (en) 2024-01-02 (granted)

Family

ID=80760297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111571253.0A Active CN114241569B (en) 2021-12-21 2021-12-21 Face recognition attack sample generation method, model training method and related equipment

Country Status (1)

Country Link
CN (1) CN114241569B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612689B (en) * 2022-05-16 2022-09-09 中国科学技术大学 Countermeasure sample generation method, model training method, processing method and electronic equipment
CN115083001B (en) * 2022-07-22 2022-11-22 北京航空航天大学 Anti-patch generation method and device based on image sensitive position positioning
CN115171196B (en) * 2022-08-25 2023-03-28 北京瑞莱智慧科技有限公司 Face image processing method, related device and storage medium
CN116563556B (en) * 2023-07-05 2023-11-10 杭州海康威视数字技术股份有限公司 Model training method
CN117079336B (en) * 2023-10-16 2023-12-22 腾讯科技(深圳)有限公司 Training method, device, equipment and storage medium for sample classification model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578017A (en) * 2017-09-08 2018-01-12 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN110443203A (en) * 2019-08-07 2019-11-12 中新国际联合研究院 The face fraud detection system counter sample generating method of network is generated based on confrontation
CN111275115A (en) * 2020-01-20 2020-06-12 星汉智能科技股份有限公司 Method for generating counterattack sample based on generation counternetwork
CN111310802A (en) * 2020-01-20 2020-06-19 星汉智能科技股份有限公司 Anti-attack defense training method based on generation of anti-network
CN111881935A (en) * 2020-06-19 2020-11-03 北京邮电大学 Countermeasure sample generation method based on content-aware GAN
CN112052789A (en) * 2020-09-03 2020-12-08 腾讯科技(深圳)有限公司 Face recognition method and device, electronic equipment and storage medium
CN112949535A (en) * 2021-03-15 2021-06-11 南京航空航天大学 Face data identity de-identification method based on generative confrontation network
WO2021218899A1 (en) * 2020-04-30 2021-11-04 京东方科技集团股份有限公司 Method for training facial recognition model, and method and apparatus for facial recognition



Similar Documents

Publication Publication Date Title
CN114241569B (en) Face recognition attack sample generation method, model training method and related equipment
US11501192B2 (en) Systems and methods for Bayesian optimization using non-linear mapping of input
Wang et al. Deep visual domain adaptation: A survey
Warde-Farley et al. 11 adversarial perturbations of deep neural networks
Cisse et al. Houdini: Fooling deep structured visual and speech recognition models with adversarial examples
CN111754596B (en) Editing model generation method, device, equipment and medium for editing face image
CN106295694B (en) Face recognition method for iterative re-constrained group sparse representation classification
US20210117733A1 (en) Pattern recognition apparatus, pattern recognition method, and computer-readable recording medium
Zhang et al. Overview of currency recognition using deep learning
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
KR20160061856A (en) Method and apparatus for recognizing object, and method and apparatus for learning recognizer
JP2022141931A (en) Method and device for training living body detection model, method and apparatus for living body detection, electronic apparatus, storage medium, and computer program
CN110889865B (en) Video target tracking method based on local weighted sparse feature selection
JP2008506201A (en) Adaptive discriminant generation model and sequential Fisher discriminant analysis and application for visual tracking
JP6620882B2 (en) Pattern recognition apparatus, method and program using domain adaptation
JP2005202932A (en) Method of classifying data into a plurality of classes
CN112164002A (en) Training method and device for face correction model, electronic equipment and storage medium
CN113837205A (en) Method, apparatus, device and medium for image feature representation generation
CN111666588A (en) Emotion difference privacy protection method based on generation countermeasure network
CN114140885A (en) Emotion analysis model generation method and device, electronic equipment and storage medium
CN112446322A (en) Eyeball feature detection method, device, equipment and computer-readable storage medium
Dinh et al. PixelAsParam: A gradient view on diffusion sampling with guidance
CN117689884A (en) Method for generating medical image segmentation model and medical image segmentation method
CN111401440A (en) Target classification recognition method and device, computer equipment and storage medium
Chen et al. Kernel density network for quantifying regression uncertainty in face alignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant