CN112784790A - Generalization false face detection method based on meta-learning - Google Patents

Generalization false face detection method based on meta-learning

Info

Publication number
CN112784790A
CN112784790A CN202110128192.4A
Authority
CN
China
Prior art keywords
meta
training
training set
generalization
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110128192.4A
Other languages
Chinese (zh)
Other versions
CN112784790B (en)
Inventor
纪荣嵘
孙可
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN202110128192.4A priority Critical patent/CN112784790B/en
Publication of CN112784790A publication Critical patent/CN112784790A/en
Application granted granted Critical
Publication of CN112784790B publication Critical patent/CN112784790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A generalization false face detection method based on meta-learning, relating to false face detection. Existing binary-classification models for fake face detection cannot reliably detect fake faces produced by unknown attack algorithms. Considering that fake-face samples contribute differently to the generalization of the model and that fake-face generation algorithms are unstable, a generalization false face detection method based on meta-learning is provided. The method comprises the following steps: 1) performing domain division on the training sets of a plurality of attack algorithms, and randomly dividing them into a common training set and a meta-training set at each training stage; 2) performing feature extraction and loss-function calculation on the common training set with a convolutional neural network, and weighting each sample with a small weight-aware network; 3) calculating the loss function on the meta-training set, updating the parameters of the weight-aware network with the gradient of this loss, and correcting the gradient of the common training set so as to increase the generalization of the model.

Description

Generalization false face detection method based on meta-learning
Technical Field
The invention relates to fake face detection, in particular to a generalization fake face detection method based on meta-learning.
Background
With the rapid development of computer technology, face recognition has made great progress; in particular, deep-learning-based face recognition models now exceed human accuracy. Face recognition systems are deployed in every corner of daily life. The development of deep learning has also driven progress in other technologies, such as generative models, which can synthesize images, music, videos and even human faces, and which have been widely adopted. However, these techniques also bring new challenges: GAN or DeepFake techniques can generate fake faces that reduce the accuracy of face recognition and endanger personal privacy and security. Moreover, since 2014 many open-source face-forgery tools have appeared, greatly reducing the cost of forgery and leading to a growing number of forged-face videos. It is therefore important to develop a system that can distinguish real faces from fake ones to assist face recognition.
Current face-forgery techniques fall into four main categories: face synthesis, identity swap, attribute manipulation, and expression swap, of which identity swap (DeepFake) currently attracts the most attention. Companies such as Facebook and Google have released several identity-swap data sets, such as FaceForensics++, Celeb-DF and DFDC, to help develop face-forgery detection algorithms. For detection, early research judged the authenticity of face images from how they change after JPEG compression; McCloskey et al. (McCloskey S, Albright M. Detecting GAN-generated imagery using color cues [J]. arXiv preprint arXiv:1812.08247, 2018) extracted RGB color features and used an SVM classifier to judge authenticity. To further improve performance, Matern et al. (Matern F, Riess C, Stamminger M. Exploiting visual artifacts to expose deepfakes and face manipulations [C]// 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). IEEE, 2019: 83-92) studied the characteristics of forged faces, extracted them as handcrafted features, and obtained good results. However, such manually extracted low-level features are not very robust. To overcome these problems, recent research has focused on supervised deep-learning approaches. The simplest method performs binary classification with a convolutional neural network; it is very efficient and was widely used in the DFDC fake-face detection competition.
In addition, Zhou et al. (Zhou P, Han X, Morariu V I, Davis L S. Two-stream neural networks for tampered face detection [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2017: 19-27) used a two-stream neural network for face-forgery detection: one branch uses a convolutional neural network to extract forgery features of the face, the other branch extracts steganalysis features, and the two branches are finally combined to obtain a more accurate result. Nguyen et al. (Nguyen H, Fang F, Yamagishi J, et al. Multi-task learning for detecting and segmenting manipulated facial images and videos [J]. arXiv preprint arXiv:1906.06876, 2019) used a generative model to reconstruct the face image, extracting features unique to forged faces to further improve classification performance.
However, most of the above methods are tested on the same type of forged face they were trained on, and experiments show that when the forgery methods used in the training set (source domain) and the test set (target domain) differ greatly, model performance drops sharply. In actual deployment, because of the diversity of forgery attacks, it is impossible to collect data sets covering all face-forgery methods for training, so testing on an unseen domain is very common.
Although domain-adaptation methods can bridge the gap between different domains, they still cannot fully solve the problem because actual target-domain data is lacking. Many generalized fake-face detection methods have also been proposed. Cozzolino et al. (Cozzolino D, Thies J, Rössler A, et al. Forensictransfer: Weakly-supervised domain adaptation for forgery detection [J]. arXiv preprint arXiv:1812.02510, 2018) were the first to use an autoencoder to reconstruct the data, decouple the hidden-layer features, and use them to judge authenticity; later work further improved accuracy by adding multi-task learning on this basis. In addition, blending a false face with a real face leaves a blending boundary that is imperceptible to the human eye; Li et al. (Li L, Bao J, Zhang T, et al. Face X-ray for more general face forgery detection [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 5001-5010) detect forgeries from this boundary. However, most of these methods require additional data and cannot use pre-trained models, which greatly increases the difficulty of training and limits the accuracy of the models.
Disclosure of Invention
The invention aims to provide a generalization false face detection method based on meta-learning, addressing the inability of common fake-face detection methods to handle faces produced by unknown forgery methods, and taking into account that different samples contribute differently to model generalization, that different domains vary, and that real faces are relatively stable while false faces are diverse.
The invention comprises the following steps:
1) performing domain division on the training sets of a plurality of attack algorithms, and randomly dividing them into a common training set and a meta-training set at each training stage, wherein the meta-training set is used to help enhance the generalization of the model;
2) performing feature extraction and loss-function calculation on the common training set with a convolutional neural network, and weighting each sample with a small weight-aware network to distinguish the generalization ability of the samples;
3) calculating the loss function on the meta-training set, updating the parameters of the weight-aware network with the gradient of this loss, and correcting the gradient of the common training set to increase the generalization of the model.
The invention has the following outstanding advantages:
1) The method addresses the problem that some of the training samples do not help improve the generalization of the detection model: the weight-aware network assigns each sample a weight that reflects its contribution to generalization. Meanwhile, the intra-class compact loss function gathers the positive samples, helping the network mine higher-quality samples and thus improving the generalization of the detection network. The required equipment can be adjusted to the actual situation; typically the recent EfficientNet-b0 is used as the feature extractor, and a single NVIDIA 1080 Ti graphics card suffices for training and testing the detection network.
2) The method improves the accuracy of the original model by about 3% on average without slowing network inference or adding any parameters. It has been tested on several data sets, and the experimental results show that it outperforms the compared methods on all indexes.
3) Experiments prove that the method improves the precision and generalization of the model on any type of convolutional neural network, which makes it possible to choose a convolutional neural network of suitable size and complexity according to the computing power and circumstances at deployment, greatly increasing flexibility.
Drawings
FIG. 1 is a diagram of the main steps of the present invention.
Fig. 2 is a data partitioning method and task introduction.
Fig. 3 is a schematic diagram of the principle of the present invention.
Fig. 4 is a weight visualization.
Detailed Description
The following examples illustrate the present invention in detail.
The invention aims to solve the problem that common fake face detection methods perform poorly on faces produced by unknown forgery methods. It considers the different contributions of samples to model generalization and the instability of generated samples, weights the samples with a weight-aware network, and uses an intra-class compact loss function to help improve the generalization of the model. Meanwhile, a meta-learning framework learns the parameters of the weight-aware network and corrects the gradient of the network so that the network does not quickly overfit a particular domain. Specifically, the invention consists of two branches. The first is a binary-classification convolutional neural network f(θ) whose purpose is to extract features and judge the authenticity of each face. The other branch is a weight-aware network p(w) that takes the features extracted by f(θ) as input, as shown in the upper half of fig. 1. This module assigns a domain-adaptive weight to each sample and helps the classification network f(θ) mine more general features. To keep the overall number of training parameters and the inference time low, the weight-aware network is designed to be small, containing only about 1M parameters: the feature channels are first compressed with a depthwise separable convolution, the output of a fully connected layer is then used as the weight of the current input sample, and the score is normalized to [0, 1] with a Sigmoid function to stabilize training.
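As a minimal, hedged sketch of the weighting idea described above (this is not the patent's implementation: the actual weight-aware network uses a depthwise separable convolution followed by a fully connected layer, about 1M parameters in total), the following toy function maps a feature vector through a single linear layer and a Sigmoid so the resulting weight lands in [0, 1]; the function name `sample_weight` and the toy dimensions are illustrative assumptions:

```python
import math

def sample_weight(feature, w_vec, bias=0.0):
    """Toy stand-in for the weight-aware network p(w): one linear layer
    followed by a Sigmoid that normalizes the score to [0, 1]."""
    score = sum(f * v for f, v in zip(feature, w_vec)) + bias
    return 1.0 / (1.0 + math.exp(-score))

# A feature scoring high under w_vec receives a weight near 1,
# a low-scoring feature a weight near 0.
w_hi = sample_weight([2.0, 3.0], [1.0, 1.0])    # sigmoid(5)
w_lo = sample_weight([-2.0, -3.0], [1.0, 1.0])  # sigmoid(-5)
```

The Sigmoid at the end is the one detail taken directly from the text: it keeps every sample weight bounded, which stabilizes the weighted loss during training.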
Referring to fig. 1, the specific process of this embodiment is as follows:
1. Data partitioning
To simulate domain bias, at each training phase the source-domain data D_src is randomly partitioned into a training domain D_trn and a meta domain D_meta, with no overlap between the two domains. Meanwhile, to further learn the information in true/false image pairs, in each training iteration a positive sample and a negative sample are drawn at random from the training domain as the current training data, and the corresponding real/fake faces in the meta domain are then found to serve as the current meta-training set. The specific data division is shown in fig. 2.
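The domain split above can be sketched as follows. This is a hedged illustration: the helper name `split_domains`, the example domain names, and the single-meta-domain choice are assumptions, not the patent's exact procedure:

```python
import random

def split_domains(source_domains, n_meta=1, seed=None):
    """Randomly partition the source domains into a training-domain set and
    a meta-domain set with no overlap; one fresh split per training phase."""
    rng = random.Random(seed)
    domains = list(source_domains)
    rng.shuffle(domains)
    meta = domains[:n_meta]
    train = domains[n_meta:]
    return train, meta

# Hypothetical attack-algorithm domains (names as in FaceForensics++).
source = ["Deepfakes", "Face2Face", "FaceSwap", "NeuralTextures"]
train_doms, meta_doms = split_domains(source, n_meta=1, seed=0)
```

Re-drawing the split at every phase means each domain eventually plays the meta-domain role, which is what lets the meta loss simulate an unseen target domain.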
2. Meta-training
This step computes the loss of the neural network f(θ) on the common training set with the help of the weight-aware module p(w). The weight-aware module is a mini-network of only about 1M parameters; it takes the features f_i extracted by the neural network as input and outputs the domain-generalization weight p(f_i; w) of each sample. Suppose the K training samples drawn from the training domain D_trn are denoted {(x_i, y_i)}, i = 1, ..., K. The resulting loss function T(θ, w) is:

T(θ, w) = (1/K) Σ_{i=1}^{K} p(f_i; w) · L_ce(f(x_i; θ), y_i)

where L_ce denotes the cross-entropy loss.
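A toy version of the weighted loss T(θ, w) can be written directly from this definition. Everything here is an illustrative assumption standing in for f(θ) and p(f_i; w): a one-layer linear classifier plays the role of f(θ), and the per-sample weights are passed in precomputed:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ce_loss(p, y):
    """Binary cross-entropy for one sample, clamped for numerical safety."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))

def meta_train_loss(xs, ys, weights, theta):
    """T(theta, w): mean over K samples of the per-sample cross-entropy,
    each term scaled by that sample's generalization weight p(f_i; w)
    (passed in here as a precomputed list)."""
    total = 0.0
    for x, y, w in zip(xs, ys, weights):
        p = sigmoid(sum(t * xi for t, xi in zip(theta, x)))  # toy linear f(theta)
        total += w * ce_loss(p, y)
    return total / len(xs)

xs, ys = [[2.0], [-2.0]], [1, 0]
loss_weighted = meta_train_loss(xs, ys, [1.0, 1.0], theta=[0.5])
loss_zeroed = meta_train_loss(xs, ys, [0.0, 0.0], theta=[0.5])
```

Setting a sample's weight to zero removes its contribution entirely, which is how the weight-aware network can suppress samples that hurt generalization.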
3. Meta-test
After the meta-training step, a weighted loss is obtained. To update the parameters w of the weight-aware network, the network parameters are first updated virtually:

θ' = θ − α ∇_θ T(θ, w)

where α is the learning rate of the meta-training process. The meta loss of the model is then computed on the meta-training set D_meta obtained by the data-partitioning strategy:

M(θ') = (1/K) Σ_{j=1}^{K} L_ce(f(x_j; θ'), y_j)
4. Parameter update
After the training loss and the meta loss are obtained, the final network optimization objective is:

argmin_{θ, w} T(θ, w) + β·M(θ')

where β controls the importance of the meta loss. Minimizing this objective yields the update rules for the convolutional-network parameters θ and the weight-aware-network parameters w:

θ ← θ − γ ∇_θ [T(θ, w) + β·M(θ')]
w ← w − η ∇_w [T(θ, w) + β·M(θ')]

where the hyperparameters γ and η are the learning rates of the two parameter updates.
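The three steps above (virtual update, meta loss, joint update) can be sketched end-to-end on a one-dimensional toy problem. This is a hedged illustration only: gradients are taken by central finite differences rather than backpropagation, the weight-aware network is collapsed to a single scalar w, and all data and hyperparameter values are made up:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ce(p, y):
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))

# Toy 1-D data: (feature, label); separable with a positive theta.
train_set = [(2.0, 1), (-2.0, 0), (0.5, 1)]
meta_set = [(1.5, 1), (-1.0, 0)]

def T(theta, w):
    # Weighted meta-train loss: each sample weighted by sigmoid(w * |x|),
    # a scalar stand-in for the weight-aware network p(f_i; w).
    return sum(sigmoid(w * abs(x)) * ce(sigmoid(theta * x), y)
               for x, y in train_set) / len(train_set)

def M(theta_prime):
    # Meta-test loss: plain cross-entropy on the meta domain.
    return sum(ce(sigmoid(theta_prime * x), y) for x, y in meta_set) / len(meta_set)

def grad(f, z, h=1e-5):
    # Central finite-difference derivative (stands in for autograd).
    return (f(z + h) - f(z - h)) / (2.0 * h)

def meta_step(theta, w, alpha=0.1, beta=1.0, gamma=0.1, eta=0.1):
    # 1) Virtual update: theta' = theta - alpha * dT/dtheta.
    def theta_prime(th, ww):
        return th - alpha * grad(lambda t: T(t, ww), th)
    # 2) Joint objective: T(theta, w) + beta * M(theta').
    def J(th, ww):
        return T(th, ww) + beta * M(theta_prime(th, ww))
    # 3) Update both parameter sets with their own learning rates.
    new_theta = theta - gamma * grad(lambda t: J(t, w), theta)
    new_w = w - eta * grad(lambda ww: J(theta, ww), w)
    return new_theta, new_w

theta, w = 0.0, 0.0
initial_obj = T(theta, w) + M(theta)
for _ in range(50):
    theta, w = meta_step(theta, w)
final_obj = T(theta, w) + M(theta)
```

The point of the sketch is the shape of the computation: the meta loss is evaluated at the virtually updated θ', so its gradient flows back into both θ and w. That dependence through θ' is exactly the second-derivative correction described in the analysis step.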
5. Analysis
The schematic diagram of the above steps is shown in fig. 3. The whole training framework amounts to using the second derivative of the model on the meta-training set to update the weight-aware network parameters and to modify the gradient direction of the original network parameters. This forces the networks p(w) and f(θ) to perform well not only on the common training set but also on the meta-training set, which enhances the generalization of the model and avoids overfitting to any single domain.
6. Loss function
Owing to the diversity of face-forgery algorithms and the relative stability of real faces, the invention borrows the idea of anomaly detection and proposes an intra-class compact loss function, whose aim is to gather the positive samples and push away the negative samples. Define O = {o_1, o_2, ..., o_n} as the outputs of the network's fully connected layer and Y = {y_1, y_2, ..., y_n}, y_i ∈ {0, 1}, as the sample labels; the labels are used to separate the positive-sample output set O_real from the negative-sample output set O_fake. The distance between the positive samples and their cluster center is defined as:

L_positive = (1/|O_real|) Σ_{o_i ∈ O_real} ||o_i − C_real||_2

where:

C_real = (1/|O_real|) Σ_{o_i ∈ O_real} o_i

The distance between the negative samples and the positive cluster center is defined as:

L_negative = (1/|O_fake|) Σ_{o_j ∈ O_fake} ||o_j − C_real||_2

The overall intra-class compact loss function can then be defined as:

L_icc = L_positive − L_negative

where C_real is updated with each batch. The final loss function L is defined as:

L = L_ce + λ·L_icc
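A small self-contained version of this loss can be written directly from the definitions above. The 2-D outputs and helper names are illustrative, and the sign is chosen so that minimizing the loss pulls real-sample outputs toward their center C_real and pushes fake-sample outputs away from it:

```python
import math

def center(points):
    """C_real: mean of the positive-sample outputs (updated per batch)."""
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def mean_dist(points, c):
    """Mean Euclidean distance from each point to center c."""
    return sum(math.dist(p, c) for p in points) / len(points)

def intra_class_compact_loss(outputs, labels):
    """L_icc = L_positive - L_negative: mean distance of real outputs to
    C_real minus mean distance of fake outputs to the same center."""
    o_real = [o for o, y in zip(outputs, labels) if y == 1]
    o_fake = [o for o, y in zip(outputs, labels) if y == 0]
    c_real = center(o_real)
    l_positive = mean_dist(o_real, c_real)
    l_negative = mean_dist(o_fake, c_real)
    return l_positive - l_negative

# A compact real cluster with distant fakes gives a strongly negative loss.
outputs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (6.0, 5.0)]
labels = [1, 1, 0, 0]
loss = intra_class_compact_loss(outputs, labels)
```

Because only C_real is used as a reference, the loss treats fakes as anomalies relative to the real-face cluster, matching the anomaly-detection intuition in the text.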
the specific experimental results are as follows:
In order to verify the improvement that the method brings to the generalization of the detection model, the three well-known false-face data sets FaceForensics++, Celeb-DF and DFDC are combined with one another to form new data sets; the specific division is shown in Table 1.
TABLE 1
(The table appears only as an image in the source; its contents are not reproduced here.)
To verify the effectiveness of the invention, the following methods were compared on the above data sets: 1) Basemodel: a model pre-trained on ImageNet without any fine-tuning on fake-face data, the most basic baseline; 2) Alltrain-Basemodel: Basemodel fine-tuned on the training data set, the most direct comparison; 3) FocalLoss-Basemodel: a sample-weighting method that adds focal loss on top of Alltrain-Basemodel to further distinguish hard and easy samples; 4) ForensicTransfer and 5) Multi-task: two methods that currently focus on generalized fake-face detection; 6) MLDG: a domain-generalization method based on meta-learning, reproduced for this task.
Comparative experiments are shown in tables 2 and 3.
Table 2: comparative experiments on GID datasets
Figure BDA0002924711100000071
The results of the GCD experiment are as follows:
Table 3: Comparative experiments on the GCD data set
(The table appears only as an image in the source; its contents are not reproduced here.)
From the above it can be seen that the method of the invention achieves better results on multiple indexes without increasing the model parameters. The GCD experiment shows that the method not only improves results on known fake-face data sets but also gains about 3% accuracy on the cross-data-set test set, which fully demonstrates its effectiveness.
To explore the interpretability of the method, the weights of the weight-aware module are visualized in FIG. 4. The module tends to give high weights to high-quality fake or real faces and low weights to low-quality faces, indicating that high-quality faces contain more common forgery information and contribute more to the generalization of the model.

Claims (2)

1. A generalization false face detection method based on meta-learning, characterized by comprising the following steps:
1) performing domain division on the training sets of a plurality of attack algorithms, and randomly dividing them into a common training set and a meta-training set at each training stage, wherein the meta-training set is used to help enhance the generalization of the model;
2) performing feature extraction and loss-function calculation on the common training set with a convolutional neural network, and weighting each sample with a small weight-aware network to distinguish the generalization ability of the samples;
3) calculating the loss function on the meta-training set, updating the parameters of the weight-aware network with the gradient of this loss, and correcting the gradient of the common training set to increase the generalization of the model.
2. The method according to claim 1, characterized in that in step 1), when the training sets of the plurality of attack algorithms are divided into domains and each training stage randomly divides a common training set and a meta-training set, the source-domain data is randomly divided into a non-overlapping training domain and meta domain in order to simulate domain deviation; meanwhile, to further learn the information in true/false image pairs, in each training iteration a positive sample and a negative sample are drawn at random from the training domain as the current training data, and the corresponding real/fake faces in the meta domain are then found to serve as the current meta-training set.
CN202110128192.4A 2021-01-29 2021-01-29 Generalization false face detection method based on meta-learning Active CN112784790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110128192.4A CN112784790B (en) 2021-01-29 2021-01-29 Generalization false face detection method based on meta-learning


Publications (2)

Publication Number Publication Date
CN112784790A true CN112784790A (en) 2021-05-11
CN112784790B CN112784790B (en) 2022-05-10

Family

ID=75759873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110128192.4A Active CN112784790B (en) 2021-01-29 2021-01-29 Generalization false face detection method based on meta-learning

Country Status (1)

Country Link
CN (1) CN112784790B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343771A (en) * 2021-05-12 2021-09-03 武汉大学 Face anti-counterfeiting method based on adaptive meta-learning
CN113537307A (en) * 2021-06-29 2021-10-22 杭州电子科技大学 Self-supervision domain adaptation method based on meta-learning
CN113723295A (en) * 2021-08-31 2021-11-30 浙江大学 Face counterfeiting detection method based on image domain frequency domain double-flow network
CN113822160A (en) * 2021-08-20 2021-12-21 西安交通大学 Evaluation method, system and equipment of deep forgery detection model

Citations (7)

Publication number Priority date Publication date Assignee Title
KR20170006355A (en) * 2015-07-08 2017-01-18 주식회사 케이티 Method of motion vector and feature vector based fake face detection and apparatus for the same
KR101815697B1 (en) * 2016-10-13 2018-01-05 주식회사 에스원 Apparatus and method for discriminating fake face
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
CN110309798A (en) * 2019-07-05 2019-10-08 中新国际联合研究院 A kind of face cheat detecting method extensive based on domain adaptive learning and domain
CN111160102A (en) * 2019-11-29 2020-05-15 北京爱笔科技有限公司 Training method of face anti-counterfeiting recognition model, face anti-counterfeiting recognition method and device
CN111783505A (en) * 2019-05-10 2020-10-16 北京京东尚科信息技术有限公司 Method and device for identifying forged faces and computer-readable storage medium
CN112200075A (en) * 2020-10-09 2021-01-08 西安西图之光智能科技有限公司 Face anti-counterfeiting method based on anomaly detection


Non-Patent Citations (3)

Title
KE SUN ET AL.: "Domain General Face Forgery Detection by Learning to Weight", Thirty-Fifth AAAI Conference on Artificial Intelligence *
TONGFENG YANG ET AL.: "VTD-Net: Depth Face Forgery Oriented Video Tampering Detection based on Convolutional Neural Network", 2020 39th Chinese Control Conference (CCC) *
CAO YUHONG ET AL.: "A Survey of Intelligent Face Forgery and Detection", Journal of Engineering Studies *

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN113343771A (en) * 2021-05-12 2021-09-03 武汉大学 Face anti-counterfeiting method based on adaptive meta-learning
CN113537307A (en) * 2021-06-29 2021-10-22 杭州电子科技大学 Self-supervision domain adaptation method based on meta-learning
CN113537307B (en) * 2021-06-29 2024-04-05 杭州电子科技大学 Self-supervision domain adaptation method based on meta learning
CN113822160A (en) * 2021-08-20 2021-12-21 西安交通大学 Evaluation method, system and equipment of deep forgery detection model
CN113822160B (en) * 2021-08-20 2023-09-19 西安交通大学 Evaluation method, system and equipment of depth counterfeiting detection model
CN113723295A (en) * 2021-08-31 2021-11-30 浙江大学 Face counterfeiting detection method based on image domain frequency domain double-flow network
CN113723295B (en) * 2021-08-31 2023-11-07 浙江大学 Face counterfeiting detection method based on image domain frequency domain double-flow network

Also Published As

Publication number Publication date
CN112784790B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN112784790B (en) Generalization false face detection method based on meta-learning
Rana et al. Deepfakestack: A deep ensemble-based learning technique for deepfake detection
Jiang et al. A pedestrian detection method based on genetic algorithm for optimize XGBoost training parameters
Yang et al. Detecting fake images by identifying potential texture difference
CN104239858B (en) A kind of method and apparatus of face characteristic checking
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
CN113221655B (en) Face spoofing detection method based on feature space constraint
CN106295501A (en) The degree of depth based on lip movement study personal identification method
CN113901448A (en) Intrusion detection method based on convolutional neural network and lightweight gradient elevator
Du et al. Age factor removal network based on transfer learning and adversarial learning for cross-age face recognition
CN113822377B (en) Fake face detection method based on contrast self-learning
Ilyas et al. Deepfakes examiner: An end-to-end deep learning model for deepfakes videos detection
Narvaez et al. Painting authorship and forgery detection challenges with ai image generation algorithms: Rembrandt and 17th century dutch painters as a case study
Zhu et al. A novel simple visual tracking algorithm based on hashing and deep learning
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
CN113221683A (en) Expression recognition method based on CNN model in teaching scene
Singh et al. Demystifying deepfakes using deep learning
Hussein Robust iris recognition framework using computer vision algorithms
Zhang et al. Hierarchical features fusion for image aesthetics assessment
CN113205044B (en) Deep fake video detection method based on characterization contrast prediction learning
Hingrajiya et al. An Approach for Copy-Move and Image Splicing Forgery Detection using Automated Deep Learning
CN114492634A (en) Fine-grained equipment image classification and identification method and system
Rowan et al. The Effectiveness of Temporal Dependency in Deepfake Video Detection
Sonkar et al. Iris recognition using transfer learning of inception v3
Saealal et al. In-the-Wild Deepfake Detection Using Adaptable CNN Models with Visual Class Activation Mapping for Improved Accuracy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant