CN112861671B - Method for identifying deeply forged face image and video - Google Patents

Method for identifying deeply forged face image and video

Info

Publication number
CN112861671B
CN112861671B (application CN202110110096.7A)
Authority
CN
China
Prior art keywords
face
deep
forged
video
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110110096.7A
Other languages
Chinese (zh)
Other versions
CN112861671A (en)
Inventor
李斌 (Li Bin)
周世杰 (Zhou Shijie)
张家亮 (Zhang Jialiang)
贾宇 (Jia Yu)
邹严 (Zou Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202110110096.7A
Publication of CN112861671A
Application granted
Publication of CN112861671B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for identifying deeply forged face images and videos, comprising the following steps: S1, collecting mixed training samples; S2, constructing an identification model comprising two 2D deep convolutional neural networks and one 3D deep convolutional neural network; S3, training the identification model with the mixed training samples; and S4, identifying the face video to be identified with the trained identification model. The invention proposes three improvements: (1) mixed training samples improve generalization; (2) the two 2D deep convolutional neural networks are trained on face-centered crops with large and small margins, which improves prediction robustness; (3) the 3D deep convolutional neural network exploits inter-frame consistency information, which improves information utilization. The method can therefore address the poor discrimination of the prior art against novel forged videos.

Description

Method for identifying deeply forged face image and video
Technical Field
The invention relates in particular to a method for identifying deeply forged face images and videos.
Background
Detection methods for forged videos fall into two categories. The first is based on temporal features across frames: it judges using time-dependent cues in the video, such as blinking frequency and mouth movement, and typically uses recurrent classification methods. The second is based on visual artifacts within a frame: it judges using unnatural details such as flaws along image edges, the placement of facial features and facial shadows, and typically extracts specific features and then completes detection with a deep or shallow classifier.
In addition, researchers have proposed tracing deeply forged videos with traceable, tamper-proof blockchain techniques. In 2019, researchers in the Department of Electrical and Computer Engineering at Khalifa University in Abu Dhabi, United Arab Emirates, published a paper titled "Combating Deepfake Videos Using Blockchains and Smart Contracts", proposing a blockchain-based solution and general framework for tracing the source and history of digital content, which remains traceable even after being copied multiple times. The framework is generic and can be applied to any other form of digital content.
In terms of specific achievements:
In August 2017, the cyber-security group of Singapore's Institute for Infocomm Research published a paper titled "Automated face swapping and its detection", proposing an AI face-swap detection framework for the first time, with a detection accuracy of 92%. Since then, industry research on AI face-swapping and its detection has boomed, with enterprises, universities and individual developers investing in the development of face-swap detection tools.
In 2019, researchers at the University of California, Berkeley and the University of Southern California collected personal characteristics from existing genuine videos and built a highly personalized "soft biometric" identification system. Once the system has learned a person's micro-expressions and behavioral habits, its forgery-identification accuracy can reach 95%. In June 2019, Adobe likewise introduced a reverse-PS tool (PS referring to Photoshop, the most widely used image-editing software worldwide; here it means "editing pictures"). Driven by an AI algorithm, the tool automatically identifies the regions of a portrait photo modified with the image Liquify tool and restores them to their original appearance, with accuracy as high as 99%.
In September 2019, to help researchers develop automatic detection tools for deep forgeries, Google published a dataset for identifying deeply forged videos, containing 3,000 video clips of multiple real actors shot in 28 different scenes. Researchers worldwide can use this fully open-source dataset to train deepfake detection tools.
However, the above techniques identify only single images, do not consider contextual information in the video, and their neural networks cannot automatically exploit inter-frame information, so no inference can be drawn from inter-frame consistency. Because the methods and variants of real-world deeply forged videos cannot be exhausted, and forgery algorithms are continuously improved with new ones constantly proposed, the characteristics and forgery artifacts of real-world deeply forged videos differ markedly from the forged datasets currently produced by industry. Training an ordinary classification convolutional neural network on these forged datasets yields models with poor generalization and poor discrimination of novel forged videos.
Disclosure of Invention
The invention aims to provide a method for identifying deeply forged face images and videos so as to solve the above problems in the prior art.
The invention provides a method for identifying deeply forged face images and videos, comprising the following steps:
S1, collecting mixed training samples;
S2, constructing an identification model; the identification model comprises two 2D deep convolutional neural networks and one 3D deep convolutional neural network;
S3, training the identification model with the mixed training samples;
S4, identifying the face video to be identified with the trained identification model.
Further, the method for collecting the mixed training samples in step S1 comprises:
S11, collecting a large number of deeply forged videos and the original videos corresponding to them to form a training data set;
S12, detecting the first face position in each frame of each deeply forged video with a face detection method, randomly extracting a segment of length L from the consecutive frames that contain a forged face, and cropping out the face regions using the first face position information to form a deeply forged face segment;
S13, detecting the second face position in each frame of the original video corresponding to each deeply forged video with a face detection method, randomly extracting a segment of length L from the consecutive frames that contain a face, and cropping out the face regions using the second face position information to form an original-video face segment;
S14, taking a frame F from the deeply forged face segment and the corresponding frame R from the original-video face segment, and forming a mixed face image as their weighted sum;
S15, applying the method of step S14 to all deeply forged face segments and their corresponding original-video face segments to obtain the mixed training samples.
Further, the weights of the weighted sum of frame F and corresponding frame R in step S14 are random samples drawn from [0,1] according to a certain distribution.
Further, in step S2, the convolution kernels of each 2D deep convolutional neural network are 2D, the backbone is a common deep convolutional neural network, and the fully connected layer is a 2-class classification structure.
Further, the method for training the 2D deep convolutional neural networks in step S3 comprises:
(1) randomly extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face lies far from the edges of the crop, then repeatedly performing forward- and backward-propagation training of the first 2D deep convolutional neural network on the center-cropped mixed face images;
(2) randomly extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face lies close to the edges of the crop, then repeatedly performing forward- and backward-propagation training of the second 2D deep convolutional neural network on the center-cropped mixed face images.
Further, the 3D deep convolutional neural network in step S2 is based on a 2D deep convolutional neural network whose convolution kernels are replaced by 3D convolution kernels, giving it the capability to convolve across video frames.
Further, the method for training the 3D deep convolutional neural network in step S3 is to randomly extract several consecutive frames of mixed face images from the mixed training samples and then repeatedly perform forward- and backward-propagation training of the 3D deep convolutional neural network on these consecutive frames.
Further, step S4 comprises the following sub-steps:
S41, randomly extracting a video frame segment from the face video to be identified;
S42, identifying the face in each frame of the video frame segment with the two trained 2D deep convolutional neural networks;
S43, identifying each frame of the video frame segment with the trained 3D deep convolutional neural network;
S44, fusing the identification predictions of the two 2D deep convolutional neural networks and the 3D deep convolutional neural network by weighted integration to obtain the identification result.
Further, the weights used in the weighted integration in step S44 are the confidences of the identification predictions; the confidence is the distance between a prediction value and 0.5.
In summary, by adopting the above technical scheme, the invention has the following beneficial effects:
The invention proposes three improvements: (1) mixed training samples improve generalization; (2) two 2D deep convolutional neural networks are trained on face-centered crops with large and small margins, which improves prediction robustness; (3) the 3D deep convolutional neural network exploits inter-frame consistency information, which improves information utilization. The invention can therefore address the poor discrimination of the prior art against novel forged videos.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for identifying deeply forged face images and videos according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of collecting mixed training samples according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of identifying a face video to be identified using the trained identification model according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are some, but not all, embodiments of the invention; the components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations.
The following detailed description of the embodiments presented in the figures is therefore not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
Examples
Referring to FIG. 1, this embodiment provides a method for identifying deeply forged face images and videos, comprising the following steps:
S1, collecting mixed training samples.
Referring to FIG. 2, the method for collecting the mixed training samples in step S1 comprises:
S11, collecting a large number of deeply forged videos and the original videos corresponding to them to form a training data set;
S12, detecting the first face position in each frame of each deeply forged video with a face detection method, randomly extracting a segment of length L from the consecutive frames that contain a forged face, and cropping out the face regions using the first face position information to form a deeply forged face segment;
S13, detecting the second face position in each frame of the original video corresponding to each deeply forged video with a face detection method, randomly extracting a segment of length L from the consecutive frames that contain a face, and cropping out the face regions using the second face position information to form an original-video face segment;
S14, taking a frame F from the deeply forged face segment and the corresponding frame R from the original-video face segment, and forming a mixed face image as their weighted sum; in some embodiments, the weights of the weighted sum are random samples drawn from [0,1] according to some distribution, such as a normal distribution;
S15, applying the method of step S14 to all deeply forged face segments and their corresponding original-video face segments to obtain the mixed training samples.
Step S1 uses data augmentation to generate novel mixed training samples from the original videos and the deeply forged videos, which improves generalization.
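As a concrete illustration, the following is a minimal sketch of the blending step of S14, assuming NumPy arrays for the face crops; the clipped normal distribution and its mean and scale are assumptions, since the patent only requires a random weight in [0,1] drawn from some distribution:

```python
import numpy as np

def mix_face_frames(fake_frame: np.ndarray, real_frame: np.ndarray,
                    rng: np.random.Generator) -> tuple[np.ndarray, float]:
    """Blend a deeply forged face crop F with the aligned original crop R.

    The mixing weight alpha is sampled from [0, 1]; a normal distribution
    clipped to [0, 1] is used here, matching the example distribution
    mentioned in step S14.
    """
    alpha = float(np.clip(rng.normal(loc=0.5, scale=0.25), 0.0, 1.0))
    mixed = (alpha * fake_frame.astype(np.float32)
             + (1.0 - alpha) * real_frame.astype(np.float32))
    return mixed.astype(np.uint8), alpha

# Usage: both inputs are HxWx3 uint8 face crops taken at the same index of a
# deeply forged face segment and its corresponding original-video segment.
rng = np.random.default_rng(0)
fake = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
real = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
mixed, alpha = mix_face_frames(fake, real, rng)
```

Returning alpha alongside the mixed image leaves open how the training label is derived, which the patent does not specify.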
S2, constructing an identification model; the identification model comprises two 2D deep convolutional neural networks and one 3D deep convolutional neural network.
for a 2D deep convolutional neural network, a convolution kernel of each 2D deep convolutional neural network in this embodiment is 2D, a backbone network is a common deep convolutional neural network, and a full connection layer is a 2-class structure. The 2D depth convolution neural network is used for identifying whether a single image is subjected to depth forgery or not.
For the 3D network: the 3D deep convolutional neural network of this embodiment is based on a 2D deep convolutional neural network whose convolution kernels are replaced by 3D convolution kernels, giving it the capability to convolve across video frames. The 3D deep convolutional neural network is used to identify whether consecutive frame images have been deeply forged.
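A toy sketch of the kernel replacement, using torch.nn.Conv3d in place of Conv2d so the kernels also slide along the frame axis; the layer widths and depth are assumptions, not taken from the patent:

```python
import torch

class Small3DDiscriminator(torch.nn.Module):
    """Illustrative 3D CNN: 2D convolutions are replaced by 3D ones so the
    network convolves jointly over (frames, height, width)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv3d(3, 32, kernel_size=3, padding=1),   # 3x3x3 (T, H, W) kernel
            torch.nn.ReLU(inplace=True),
            torch.nn.MaxPool3d(kernel_size=(1, 2, 2)),          # pool only spatially
            torch.nn.Conv3d(32, 64, kernel_size=3, padding=1),
            torch.nn.ReLU(inplace=True),
            torch.nn.AdaptiveAvgPool3d(1),
        )
        self.fc = torch.nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.fc(x)
```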
S3, training the identification model with the mixed training samples.
for a 2D deep convolutional neural network, the method for training the 2D deep convolutional neural network in this embodiment is as follows:
(1) Randomly extracting a frame of mixed face image from the mixed training sample to perform center clipping so that the face is far away from the edge of the mixed face image, and then repeatedly using a first 2D depth convolution neural network to perform forward and backward propagation training on the mixed face image subjected to center clipping;
(2) Randomly extracting a frame of mixed face image from the mixed training sample to carry out center clipping so that the face is close to the edge of the mixed face image, and then repeatedly using a second 2D deep convolution neural network to carry out forward and backward propagation training on the mixed face image after center clipping;
in the process of training the 2D deep convolution neural network, the face center of the large edge and the face center of the small edge are adopted to cut out the images for training, and therefore the prediction robustness can be improved.
For the 3D network, the training method in this embodiment is to randomly extract several consecutive frames of mixed face images from the mixed training samples and then repeatedly perform forward- and backward-propagation training of the 3D deep convolutional neural network on these consecutive frames. When the 3D deep convolutional neural network identifies a frame of the video, it takes the preceding and following frames as references, so inter-frame consistency can be exploited and information utilization improved.
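The following sketch shows one way the clip sampling and a single forward/backward training step could look; the clip length, cross-entropy loss, and tensor layout are assumptions consistent with the 2-class setup above:

```python
import torch

def sample_consecutive(frames: torch.Tensor, t: int,
                       g: torch.Generator) -> torch.Tensor:
    """Pick T consecutive frames from an (N, C, H, W) sequence and stack
    them into the (C, T, H, W) layout expected by 3D convolutions."""
    start = int(torch.randint(0, frames.shape[0] - t + 1, (1,), generator=g))
    return frames[start:start + t].permute(1, 0, 2, 3)

def train_step_3d(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
                  clip_batch: torch.Tensor, labels: torch.Tensor) -> float:
    """One forward- and backward-propagation pass on a batch of clips of
    consecutive mixed face images (clip_batch: (B, C, T, H, W))."""
    model.train()
    optimizer.zero_grad()
    logits = model(clip_batch)                                # (B, 2)
    loss = torch.nn.functional.cross_entropy(logits, labels)  # labels: (B,)
    loss.backward()
    optimizer.step()
    return loss.item()
```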
S4, identifying the face video to be identified with the trained identification model.
As shown in FIG. 3, step S4 comprises the following sub-steps:
S41, randomly extracting a video frame segment of length L from the face video to be identified;
S42, identifying the face in each frame of the video frame segment with the two trained 2D deep convolutional neural networks;
S43, identifying each frame of the video frame segment with the trained 3D deep convolutional neural network;
S44, fusing the identification predictions of the two 2D deep convolutional neural networks and the 3D deep convolutional neural network by weighted integration to obtain the identification result. The weight used in the weighted integration is the confidence of each prediction. Each neural network outputs a prediction value in (0, 1) for a video segment: the closer to 1, the more likely the network considers the video to be forged; the closer to 0, the more likely it considers the video to be real. Confidence is high near 0 or 1 and low near 0.5, so in this embodiment the confidence is the distance between the prediction value and 0.5.
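A minimal sketch of this confidence-weighted fusion; the fallback to a plain mean when every prediction sits exactly at 0.5 is an assumption added to avoid division by zero:

```python
def fuse_predictions(preds: list[float]) -> float:
    """Fuse the predictions of the two 2D networks and the 3D network.

    Each prediction p lies in (0, 1); its weight is the confidence
    |p - 0.5|, as defined in this embodiment."""
    weights = [abs(p - 0.5) for p in preds]
    total = sum(weights)
    if total == 0.0:
        return sum(preds) / len(preds)  # assumed fallback, not from the patent
    return sum(w * p for w, p in zip(weights, preds)) / total

# Example: two 2D-network predictions and one 3D-network prediction.
score = fuse_predictions([0.91, 0.83, 0.55])  # high score -> likely forged
```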
The above description covers only preferred embodiments of the present invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (8)

1. A method for identifying deeply forged face images and videos, characterized by comprising the following steps:
S1, collecting mixed training samples;
S2, constructing an identification model; the identification model comprises two 2D deep convolutional neural networks and one 3D deep convolutional neural network;
S3, training the identification model with the mixed training samples;
S4, identifying the face video to be identified with the trained identification model;
wherein the method for collecting the mixed training samples in step S1 comprises:
S11, collecting a large number of deeply forged videos and the original videos corresponding to them to form a training data set;
S12, detecting the first face position in each frame of each deeply forged video with a face detection method, randomly extracting a segment of length L from the consecutive frames that contain a forged face, and cropping out the face regions using the first face position information to form a deeply forged face segment;
S13, detecting the second face position in each frame of the original video corresponding to each deeply forged video with a face detection method, randomly extracting a segment of length L from the consecutive frames that contain a face, and cropping out the face regions using the second face position information to form an original-video face segment;
S14, taking a frame F from the deeply forged face segment and the corresponding frame R from the original-video face segment, and forming a mixed face image as their weighted sum;
S15, applying the method of step S14 to all deeply forged face segments and their corresponding original-video face segments to obtain the mixed training samples.
2. The method for identifying deeply forged face images and videos according to claim 1, wherein the weights of the weighted sum of frame F and corresponding frame R in step S14 are random samples drawn from [0,1] according to a certain distribution.
3. The method for identifying deeply forged face images and videos according to claim 1, wherein in step S2 the convolution kernels of each 2D deep convolutional neural network are 2D, the backbone is a deep convolutional neural network, and the fully connected layer is a 2-class classification structure.
4. The method for identifying deeply forged face images and videos according to claim 3, wherein the method for training the 2D deep convolutional neural networks in step S3 comprises:
(1) randomly extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face lies far from the edges of the crop, then repeatedly performing forward- and backward-propagation training of the first 2D deep convolutional neural network on the center-cropped mixed face images;
(2) randomly extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face lies close to the edges of the crop, then repeatedly performing forward- and backward-propagation training of the second 2D deep convolutional neural network on the center-cropped mixed face images.
5. The method for identifying deeply forged face images and videos according to claim 3, wherein the 3D deep convolutional neural network in step S2 is based on a 2D deep convolutional neural network whose convolution kernels are replaced by 3D convolution kernels, giving it the capability to convolve across video frames.
6. The method for identifying deeply forged face images and videos according to claim 5, wherein the method for training the 3D deep convolutional neural network in step S3 is to randomly extract several consecutive frames of mixed face images from the mixed training samples and then repeatedly perform forward- and backward-propagation training of the 3D deep convolutional neural network on these consecutive frames.
7. The method for identifying deeply forged face images and videos according to claim 1, wherein step S4 comprises the following sub-steps:
S41, randomly extracting a video frame segment from the face video to be identified;
S42, identifying the face in each frame of the video frame segment with the two trained 2D deep convolutional neural networks;
S43, identifying each frame of the video frame segment with the trained 3D deep convolutional neural network;
S44, fusing the identification predictions of the two 2D deep convolutional neural networks and the 3D deep convolutional neural network by weighted integration to obtain the identification result.
8. The method for identifying deeply forged face images and videos according to claim 7, wherein the weights used in the weighted integration in step S44 are the confidences of the identification predictions; the confidence is the distance between a prediction value and 0.5.
CN202110110096.7A 2021-01-27 2021-01-27 Method for identifying deeply forged face image and video Active CN112861671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110110096.7A CN112861671B (en) 2021-01-27 2021-01-27 Method for identifying deeply forged face image and video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110110096.7A CN112861671B (en) 2021-01-27 2021-01-27 Method for identifying deeply forged face image and video

Publications (2)

Publication Number Publication Date
CN112861671A (en) 2021-05-28
CN112861671B (en) 2022-10-21

Family

ID=76009483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110110096.7A Active CN112861671B (en) 2021-01-27 2021-01-27 Method for identifying deeply forged face image and video

Country Status (1)

Country Link
CN (1) CN112861671B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435292B (en) * 2021-06-22 2023-09-19 北京交通大学 AI fake face detection method based on inherent feature mining
CN113627256B (en) * 2021-07-09 2023-08-18 武汉大学 False video inspection method and system based on blink synchronization and binocular movement detection
CN113723220B (en) * 2021-08-11 2023-08-25 电子科技大学 Deep counterfeiting traceability system based on big data federation learning architecture
CN114494935B (en) * 2021-12-15 2024-01-05 北京百度网讯科技有限公司 Video information processing method and device, electronic equipment and medium
CN114093013B (en) * 2022-01-19 2022-04-01 武汉大学 Reverse tracing method and system for deeply forged human faces

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
CN111368764A (en) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 False video detection method based on computer vision and deep learning algorithm
CN111967427A (en) * 2020-08-28 2020-11-20 广东工业大学 Fake face video identification method, system and readable storage medium
CN112149608A (en) * 2020-10-09 2020-12-29 腾讯科技(深圳)有限公司 Image recognition method, device and storage medium
CN112163488A (en) * 2020-09-21 2021-01-01 中国科学院信息工程研究所 Video false face detection method and electronic device
CN112258388A (en) * 2020-11-02 2021-01-22 公安部第三研究所 Public security view desensitization test data generation method, system and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016074247A1 (en) * 2014-11-15 2016-05-19 Beijing Kuangshi Technology Co., Ltd. Face detection using machine learning
CN108985135A (en) * 2017-06-02 2018-12-11 腾讯科技(深圳)有限公司 A kind of human-face detector training method, device and electronic equipment
US11288764B2 (en) * 2019-07-01 2022-03-29 Digimarc Corporation Watermarking arrangements permitting vector graphics editing
CN111611873B (en) * 2020-04-28 2024-07-16 平安科技(深圳)有限公司 Face replacement detection method and device, electronic equipment and computer storage medium
CN112052759B (en) * 2020-08-25 2022-09-09 腾讯科技(深圳)有限公司 Living body detection method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
CN111368764A (en) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 False video detection method based on computer vision and deep learning algorithm
CN111967427A (en) * 2020-08-28 2020-11-20 广东工业大学 Fake face video identification method, system and readable storage medium
CN112163488A (en) * 2020-09-21 2021-01-01 中国科学院信息工程研究所 Video false face detection method and electronic device
CN112149608A (en) * 2020-10-09 2020-12-29 腾讯科技(深圳)有限公司 Image recognition method, device and storage medium
CN112258388A (en) * 2020-11-02 2021-01-22 公安部第三研究所 Public security view desensitization test data generation method, system and storage medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Deep Learning for Deepfakes Creation and Detection; Thanh Thi Nguyen et al.; arXiv; 2019-09-25; pp. 1-16 *
Deepfake Detection using Spatiotemporal Convolutional Networks; Oscar de Lima et al.; arXiv; 2020-06-26; pp. 1-6 *
Research on Liveness Face Detection Based on 3D Convolutional Neural Networks; Li Shanlu; China Master's Theses Full-text Database, Information Science and Technology; 2018-01-15; No. 01; I138-1456 *
Research on Video and Image Content Recognition Technology Based on New Media; Zhang Jialiang et al.; Communications Technology; 2018-11-30; Vol. 51, No. 11; pp. 2740-2743 *
A Semi-supervised Learning Algorithm for Least-Squares Support Vector Machines; Zhang Jianpei et al.; Journal of Harbin Engineering University; 2008-10-31; Vol. 29, No. 10; pp. 1088-1092 *
A Survey of Audio-visual Deepfake Detection Techniques; Liang Ruigang et al.; Journal of Cyber Security; 2020-03-31; Vol. 5, No. 2; pp. 1-17 *

Also Published As

Publication number Publication date
CN112861671A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112861671B (en) Method for identifying deeply forged face image and video
CN111611873B (en) Face replacement detection method and device, electronic equipment and computer storage medium
CN111242837A (en) Face anonymous privacy protection method based on generation of countermeasure network
CN113537027B (en) Face depth counterfeiting detection method and system based on face division
CN111160313A (en) Face representation attack detection method based on LBP-VAE anomaly detection model
CN114694220A (en) Double-flow face counterfeiting detection method based on Swin transform
Yu et al. Detecting deepfake-forged contents with separable convolutional neural network and image segmentation
Huang et al. Deepfake mnist+: a deepfake facial animation dataset
CN113762138A (en) Method and device for identifying forged face picture, computer equipment and storage medium
CN113361474B (en) Double-current network image counterfeiting detection method and system based on image block feature extraction
Miao et al. Learning forgery region-aware and ID-independent features for face manipulation detection
CN113553954A (en) Method and apparatus for training behavior recognition model, device, medium, and program product
CN114842524B (en) Face false distinguishing method based on irregular significant pixel cluster
CN114724218A (en) Video detection method, device, equipment and medium
CN111882525A (en) Image reproduction detection method based on LBP watermark characteristics and fine-grained identification
CN113989713B (en) Depth forgery detection method based on video frame sequence prediction
Guo et al. Exposing deepfake face forgeries with guided residuals
CN117079354A (en) Deep forgery detection classification and positioning method based on noise inconsistency
CN112651319B (en) Video detection method and device, electronic equipment and storage medium
CN115936961B (en) Steganalysis method, equipment and medium based on few-sample comparison learning network
CN115578768A (en) Training method of image detection network, image detection method and system
CN113553895A (en) Multi-pose face recognition method based on face orthogonalization
CN113807232B (en) Fake face detection method, system and storage medium based on double-flow network
Li et al. A Deepfake Face Video Authentication Method Based on Spatio-temporal Fusion Features
Lin et al. AI‐generated video steganography based on semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant