CN113947794A - Fake face changing enhancement detection method based on head posture deviation correction - Google Patents

Fake face changing enhancement detection method based on head posture deviation correction

Info

Publication number
CN113947794A
CN113947794A (application CN202111233086.9A)
Authority
CN
China
Prior art keywords
key points
face
forged
face image
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111233086.9A
Other languages
Chinese (zh)
Other versions
CN113947794B (en)
Inventor
王总辉
虞楚尔
刘非凡
段宇萱
陈文智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202111233086.9A
Publication of CN113947794A
Application granted
Publication of CN113947794B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/02 Affine transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fake face-changing enhancement detection method based on head pose deviation correction, which comprises the following steps: acquiring a normal face image and a forged face image and extracting key points; for the forged face image, re-projecting the key points in combination with a three-dimensional head model to obtain re-projected key points, correcting the projected key points, and warping the forged face image with a moving least squares (MLS) based algorithm according to the extracted initial key points and the corrected key points to obtain an enhanced forged face image; taking the normal face image and the enhanced forged face image as samples, performing head pose estimation based on the key points of the samples to determine the rotation matrix and translation matrix of the head pose, and optimizing the parameters of a fake face-changing detection model with features constructed from the rotation matrix and the translation matrix; and performing enhanced detection of fake face-changing with the parameter-optimized fake face-changing detection model. The method enhances the detection capability of the fake face-changing detection model by enhancing the forged samples.

Description

Fake face changing enhancement detection method based on head posture deviation correction
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a fake face changing enhancement detection method based on head posture deviation correction.
Background
Deep-fake face videos fall mainly into four types: face swapping, face synthesis, facial attribute manipulation, and facial expression transfer. Face swapping generates a fake A that keeps A's facial expression but replaces the face with B's; face synthesis directly generates a forged face with no definite target through learning; facial attribute manipulation is similar to a beauty-camera app, adding camouflage-like decorations to a person's face; and facial expression transfer applies B's expression to A so that the facial expression in A becomes consistent with B's.
Face swapping is mainly realized with deep learning networks and generative adversarial networks (GANs). The core idea is to train an encoder and decoders for faces: the encoder is common to all faces, while a decoder is trained for each specific face. If a face of A is encoded by the common encoder and decoded by B's decoder, a generated face is obtained that keeps A's expression but carries B's face. The encoder is composed of four convolutional layers, two fully-connected layers and one upscaling layer, and the decoder is composed of three upscaling layers and one convolutional layer. The core of the upscaling layer is the PixelShuffle() function, which rearranges (and thereby slightly warps) the image, increasing the learning difficulty and letting the model achieve a better effect. Once face swapping is understood, it will be appreciated that the other categories of forgery can be generated in a similar manner.
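For illustration, the following is a minimal PyTorch sketch of this shared-encoder / per-identity-decoder structure. The layer counts follow the text (four convolutional layers, two fully-connected layers and one upscaling layer in the encoder; three upscaling layers and one convolutional layer in the decoder), while the channel sizes, the 64x64 input resolution and the activation functions are illustrative assumptions, not details fixed by this description.

```python
# Minimal sketch of the DeepFake-style autoencoder described above.
# Channel sizes and input resolution are illustrative assumptions.
import torch
import torch.nn as nn

def upscale(in_ch, out_ch):
    # Upscaling block: a conv producing 4x channels, then PixelShuffle
    # rearranges them into a feature map with 2x spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch * 4, 3, padding=1),
        nn.LeakyReLU(0.1),
        nn.PixelShuffle(2),
    )

class Encoder(nn.Module):  # shared by all faces
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(  # four convolutional layers
            nn.Conv2d(3, 128, 5, 2, 2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, 2, 2), nn.LeakyReLU(0.1),
            nn.Conv2d(256, 512, 5, 2, 2), nn.LeakyReLU(0.1),
            nn.Conv2d(512, 1024, 5, 2, 2), nn.LeakyReLU(0.1),
        )
        self.fc = nn.Sequential(  # two fully-connected layers
            nn.Linear(1024 * 4 * 4, 1024),
            nn.Linear(1024, 1024 * 4 * 4),
        )
        self.up = upscale(1024, 512)  # one upscaling layer

    def forward(self, x):  # x: (B, 3, 64, 64)
        h = self.conv(x).flatten(1)
        h = self.fc(h).view(-1, 1024, 4, 4)
        return self.up(h)  # (B, 512, 8, 8)

class Decoder(nn.Module):  # one trained per identity (A, B, ...)
    def __init__(self):
        super().__init__()
        self.up = nn.Sequential(  # three upscaling layers
            upscale(512, 256), upscale(256, 128), upscale(128, 64),
        )
        self.out = nn.Conv2d(64, 3, 5, padding=2)  # one convolutional layer

    def forward(self, h):
        return torch.sigmoid(self.out(self.up(h)))  # (B, 3, 64, 64)

# Face swap: encode a frame of A with the shared encoder, decode with B's decoder.
encoder, decoder_B = Encoder(), Decoder()
fake_B = decoder_B(encoder(torch.randn(1, 3, 64, 64)))
```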
Existing deep forgery technologies are mostly based on deep learning, so many flaws remain in the visual biometric features of the forgeries, and these flaws can be identified by a targeted classifier. However, biometric-based forgery detection methods are susceptible to picture quality, and forgers can even generate biometrics in a targeted way to circumvent the authentication of such classifiers.
For forgery detection of face images forged by deep learning, a deep algorithm is generally adopted. For example, in the detection method for face-generated forged images disclosed in patent document CN 109344709 A, sampling is performed at each pixel position of all training images in a training image set according to the pixel's multiple color channels, yielding a sampling point set for the training image set and for each training image in it; the sampling point set of the training image set is modeled with a distribution and its parameters are calculated; based on these parameters, the sampling point set of each training image is encoded to construct the image's detection features; the detection features of each training image and the corresponding training image label are used for model training to obtain a detection classifier, which is then used to detect face-generated forged images.
Further, patent document CN 112183501 A discloses a deep-forged image detection method: a real face image and a forged face image are input; a first convolutional neural network performs preliminary face feature extraction on the face image; traditional image features containing texture features are extracted from the face image and dynamically adjusted with a second convolutional neural network; the preliminary face features and the traditional image features are superposed in the channel dimension to obtain fusion features of the face image; a third convolutional neural network re-extracts features from the fusion features, enabling interaction between the preliminary face features and the processed traditional image features; and the probability confidence of the true/false classification of the face image is output according to the re-extracted features.
Although the above two existing documents can detect forged face images with deep learning methods, they consider only visual biometric features and are therefore not suitable for detecting all forged faces.
Disclosure of Invention
In view of the above, it is an object of the present invention to provide a fake face-changing enhancement detection method based on head pose deviation correction, which detects fake faces from the perspective of head pose deviation and enhances the detection capability of the fake face-changing detection model by enhancing forged samples.
In order to achieve the purpose, the invention provides the following technical scheme:
a fake face changing enhancement detection method based on head posture deviation correction comprises the following steps:
acquiring a normal face image and a forged face image and extracting key points;
aiming at the forged face image, combining the head three-dimensional model, re-projecting the key points to obtain re-projected key points, correcting the projected key points, and, according to the extracted initial key points and the corrected key points, warping the forged face image with a moving least squares (MLS) based algorithm to realize enhanced forging and obtain an enhanced forged face image;
taking the normal face image and the enhanced forged face image as samples, performing head pose estimation based on the key points of the samples to determine the rotation matrix and translation matrix of the head pose, and optimizing the parameters of a fake face-changing detection model with features constructed from the rotation matrix and the translation matrix;
and performing enhanced detection of fake face change by using the parameter-optimized fake face change detection model.
In one embodiment, all the extracted key points are global key points and are divided into contour key points and central face key points according to regions;
the re-projecting the key points comprises: and carrying out reprojection on the outline key points and the center face key points.
In one embodiment, the reprojection process includes:
mapping the two-dimensional key points on the face image to obtain the three-dimensional key points corresponding to the head three-dimensional model; performing head pose estimation based on the key points to determine the corresponding rotation matrix and translation matrix; and then, with reference to the rotation matrix and translation matrix, performing re-projection calculation on the three-dimensional key points with a re-projection function to obtain the re-projected two-dimensional key points.
In one embodiment, the reprojecting the contour keypoints comprises: referring to a rotation matrix and a translation matrix corresponding to the central face reference point, and performing reprojection calculation on the contour three-dimensional key points by adopting a reprojection function to obtain reprojected contour two-dimensional key points;
the central face key point reprojection method comprises the following steps: and referring to the rotation matrix and the translation matrix corresponding to the contour reference point, and performing reprojection calculation on the three-dimensional key points of the central face by adopting a reprojection function to obtain the reprojected two-dimensional key points of the central face.
In one embodiment, the correcting the post-projection keypoints includes: and calculating a conversion matrix by using all the key points before and after the re-projection, and carrying out perspective transformation on the projected key points by using the conversion matrix so as to correct the key points.
In one embodiment, the process of warping the fake face image includes:
based on the initial key points and the corrected key points, fitting with a moving least squares algorithm to solve a transformation function representing the transformation relation between the initial key points and the corrected key points, and transforming the pixel points on the forged face image with the transformation function so as to warp it.
In one embodiment, the head pose estimation based on the key points of the samples comprises:

based on computer vision theory, constructing and solving the following formula through the conversion relations among world coordinates, camera coordinates and picture coordinates, to obtain the rotation matrix and translation matrix of the head pose:

s \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R \mid \vec{t} \end{bmatrix} \begin{bmatrix} U_i \\ V_i \\ W_i \\ 1 \end{bmatrix}, \quad i = 1, \ldots, n

wherein (x_i, y_i) are the image coordinates of a sample key point, i is the key point index, n is the number of key points, (U_i, V_i, W_i) are the world coordinates of the key point, f_x and f_y are the focal lengths of the imaging camera in the x and y directions, (c_x, c_y) is the optical center of the imaging camera, s is the scaling parameter to be solved, R is the rotation matrix to be solved, and \vec{t} is the translation matrix to be solved.
In one embodiment, optimizing the fake face-changing detection model parameters with the rotation matrix and the translation matrix as features comprises:

the rotation matrices of the global key points and of the central face key points are denoted R_a and R_c respectively, and the translation matrices of the global key points and of the central face key points are denoted \vec{t}_a and \vec{t}_c respectively;
according to the formula

\vec{v} = R \, \vec{w}, \quad \vec{w} = (0, 0, 1)^T,

where \vec{w} points along the w axis of the world coordinate system, the head three-dimensional vectors corresponding to the global key points and to the central face key points are calculated and denoted \vec{v}_a and \vec{v}_c respectively.

The constructed features include:

\vec{t}_a - \vec{t}_c

R_a - R_c

\vec{v}_a - \vec{v}_c

(R_a - R_c, \vec{t}_a - \vec{t}_c)
the camera has three degrees of freedom in space and time,
Figure BDA0003316798840000058
is three fromThe camera pose (camera) in degrees, called the rodrigors rotation vector, R is
Figure BDA0003316798840000059
The result of flattening (flatten) of (f) is a scalar, i.e., the angle of rotation of the camera in the XY imaging plane.
And optimizing parameters of the fake face-changing detection model by using any constructed characteristic as input data of the fake face-changing detection model.
In one embodiment, the fake face-changing detection model adopts an SVM classifier, and the constructed feature (R_a - R_c, \vec{t}_a - \vec{t}_c) is used as the input data of the SVM classifier to optimize the SVM classifier parameters.
In one embodiment, the enhanced detection of fake face changes by using the parameter-optimized fake face change detection model includes:
after the key points of the fake face image to be detected are extracted, head pose estimation is performed based on the key points to obtain the rotation matrix and translation matrix of the head pose; features are then constructed from the rotation matrix and the translation matrix and input to the parameter-optimized fake face-changing detection model, and the detection result is output through calculation.
Compared with the prior art, the invention has the following beneficial effects:
based on the key points of the forged face image, the forgery is enhanced through re-projection calculation and warping calculation to obtain an enhanced forged face image; head pose estimation is then performed based on the key points of the normal face image and of the enhanced forged face image, and the fake face-changing detection model is trained on features constructed from the determined rotation matrix and translation matrix. In this way the model can learn deeply hidden feature information exposed by the forgery enhancement and, at the same time, distinguish fake face-changing based on head pose deviation information, which improves the robustness of the fake face-changing detection model and, when the model is used for detection, the accuracy of fake face-changing detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of a method for detecting false face replacement enhancement based on deviation correction of head pose according to an embodiment;
FIG. 2 is a schematic diagram of the face landmarks according to an embodiment;
FIG. 3 is a flowchart of the enhancement of a forged face image according to an embodiment;
FIG. 4 is a statistical graph of the cosine distances of original pictures and DeepFakes pictures according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In order to improve the detection accuracy of fake face-changing, the embodiment provides a fake face-changing enhancement detection method based on head pose deviation correction, in which a fake human face is detected from the perspective of head pose deviation and the detection capability of the fake face-changing detection model is enhanced by enhancing forged samples.
FIG. 1 is a flowchart of the fake face-changing enhancement detection method based on head pose deviation correction according to an embodiment. As shown in FIG. 1, the method comprises the following steps:
step 1, acquiring a normal face image and a forged face image and extracting key points.
In the embodiment, a normal face image is obtained, the corresponding forged face image is obtained after the normal face image is processed by Deepfake, and the normal face image and the forged face image form a sample pair. Then the key points (landmarks) of the normal face image and of the forged face image are extracted: 68 key points are extracted as the face landmarks with the library function provided by the dlib library, as shown in FIG. 2. The landmark group with serial numbers 1 to 68, corresponding to the global face, is used as the global landmarks and denoted group A; the landmark group with serial numbers 18 to 36, 49 and 55, corresponding to the central face, is used as the central face landmarks and denoted group C; and the landmark group with serial numbers 1 to 17, 49 and 55, corresponding to the contour, is used as the contour landmarks and denoted group O.
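A minimal extraction sketch with dlib follows; indices are 0-based in code, so the text's serial numbers 1 to 68 become 0 to 67, and the shape_predictor model file is the standard dlib landmark model, assumed to be available locally.

```python
# Sketch of step 1: extract the 68 dlib landmarks and split them into
# groups A (global), C (central face) and O (contour).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard dlib model file, assumed to be available locally.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    face = detector(gray, 1)[0]                      # first detected face
    shape = predictor(gray, face)
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

img = cv2.imread("face.jpg")                         # illustrative input
landmarks = extract_landmarks(img)
group_A = landmarks                                  # serial numbers 1-68
group_C = landmarks[list(range(17, 36)) + [48, 54]]  # 18-36, 49, 55
group_O = landmarks[list(range(0, 17)) + [48, 54]]   # 1-17, 49, 55
```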
Step 2, enhancing the forged face image to obtain an enhanced forged face image.
As shown in FIG. 3, for the forged face image, the key points are re-projected in combination with the head three-dimensional model to obtain re-projected key points, the projected key points are corrected, and, according to the extracted initial key points and the corrected key points, the forged face image is warped with a moving least squares (MLS) based algorithm to realize enhanced forging, obtaining the enhanced forged face image.
In the embodiment, the purpose of the enhancement of the forged face image is to make the landmarks of the forged face-changing image as close as possible to those of the normal face image. The re-projection process comprises the following steps:
mapping the two-dimensional key points on the face image to obtain the three-dimensional key points corresponding to the head three-dimensional model; performing head pose estimation based on the key points to determine the corresponding rotation matrix and translation matrix; and then, with reference to the rotation matrix and translation matrix, performing re-projection calculation on the three-dimensional key points with a re-projection function to obtain the re-projected two-dimensional key points.
In the embodiment, in the head pose estimation process, based on computer vision theory, the following formula is constructed and solved through the conversion relations among world coordinates, camera coordinates and picture coordinates, obtaining the rotation matrix and translation matrix of the head pose:

s \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R \mid \vec{t} \end{bmatrix} \begin{bmatrix} U_i \\ V_i \\ W_i \\ 1 \end{bmatrix}, \quad i = 1, \ldots, n

wherein (x_i, y_i) are the image coordinates of a sample key point, i is the key point index, n is the number of key points, (U_i, V_i, W_i) are the world coordinates of the key point, f_x and f_y are the focal lengths of the imaging camera in the x and y directions, (c_x, c_y) is the optical center of the imaging camera, s is the scaling parameter to be solved, R is the rotation matrix to be solved, and \vec{t} is the translation matrix to be solved.
With the image coordinates, world coordinates and camera coordinates of the key points known, the corresponding rotation matrix R, translation matrix \vec{t} and scaling parameter s can be solved directly with the solvePnP function in OpenCV. The Levenberg-Marquardt algorithm can also be applied, bringing the central face landmark coordinates and the global face landmark coordinates into the above formula and solving a corresponding set of s, R and \vec{t} for each group. The problem of reducing the head pose deviation is thus converted into reducing the deviation between the R and \vec{t} solved for the two partial point groups.
After the rotation matrix R and translation matrix \vec{t} are obtained, a re-projection operation is carried out with the OpenCV function projectPoints, thereby changing the landmark values and narrowing the gap, so that the re-projected two-dimensional key points are obtained.
The embodiment provides two re-projection modes: one based on the contour key points and one based on the central face key points. The two modes are similar in procedure and differ only in the selected reference data.
For the re-projection mode based on the contour key points, the two-dimensional contour key points are mapped to obtain the contour three-dimensional key points corresponding to the head three-dimensional model; head pose estimation is performed based on the global key points to determine the rotation matrix and translation matrix corresponding to the global key points; then, with reference to the rotation matrix and translation matrix corresponding to the central face reference points, re-projection calculation is performed on the contour three-dimensional key points with the re-projection function to obtain the re-projected contour two-dimensional key points.
The contour-based re-projection is simply that, for the three-dimensional contour key point group O', a new two-dimensional coordinate set is calculated with the re-projection function projectPoints (which computes the coordinates of three-dimensional points projected onto the two-dimensional image plane from given intrinsic and extrinsic parameters), using the rotation matrix R_c and translation matrix \vec{t}_c corresponding to the central face key points, and taken as the re-projected two-dimensional contour key points O_new.
The advantages of contour-based re-projection include: first, the main part of deep-forgery face changing is the central face area, and changes to the central face area most affect a person's visual experience of the face-changing effect, so changing the central face coordinates carries a high risk of face deformation; second, the projection operation based on the contour points changes only a few coordinate points, which makes the subsequent warping operation convenient.
For the re-projection mode based on the central face key points, the two-dimensional central face key points are mapped to obtain the central face three-dimensional key points C' corresponding to the head three-dimensional model; head pose estimation is performed based on the global key points to determine the rotation matrix and translation matrix corresponding to the global key points; then, with reference to the rotation matrix and translation matrix corresponding to the contour reference points, re-projection calculation is performed on the central face three-dimensional key points with the re-projection function to obtain the re-projected central face two-dimensional key points.
The central-face-based re-projection is simply that, for the three-dimensional central face key point group C', a new two-dimensional coordinate set is calculated with the re-projection function projectPoints, using the rotation matrix R_o and translation matrix \vec{t}_o corresponding to the contour key points, and taken as the re-projected two-dimensional central face key points C_new.
Compared with changing the contour, the central-face-based re-projection changes more point coordinates, and changing the central area is more susceptible to the effects of face deformation.
The head three-dimensional model adopted in the embodiment is a general three-dimensional face model calculated from an average face; it can be used approximately in head pose estimation, but the result is not accurate enough. If R, \vec{t} and s are calculated from the 68 face key points with the solvePnP function, these parameters are combined with the three-dimensional points of the general 3D head model, and projectPoints is applied again for re-projection, a certain deviation from the original landmarks can be found. Such deviation severely affects the visual effect of the face region. To reduce the deviation, the projected key points are corrected, with the following specific process: a conversion matrix is calculated from the 68 key points before and after re-projection, and perspective transformation is applied to the projected key points with the conversion matrix so as to correct the key points.
After the correction, a face-changed image can be obtained from the corrected key points. To obtain the re-projected face-changed image, the embodiment draws on an image warping method based on moving least squares (MLS). The idea of MLS-based image transformation is to operate on a small number of points in order to control the deformation of the entire mesh, thereby deforming the entire image. Specifically, the process of warping the forged face image comprises: based on the initial key points and the corrected key points, a moving least squares fit solves for a transformation function representing the transformation relation between the initial key points and the corrected key points, and the transformation function is used to transform the pixel points of the forged face image so as to warp it, yielding the enhanced forged face image.
The principle of MLS-based image transformation is to find, from the relationship between a set of control points p on the image and the corresponding displaced point set q, a transformation function f_v for each input point v on the image, so that f_v(v) gives the position coordinates to which v is moved. The transformation function f_v is taken to be an affine transformation, written f_v(x) = Mx + T, where M is a 2 x 2 transformation matrix and T is a 2 x 1 translation vector.
Applying f_v to the control points p defines the following energy function, where w_i denotes the weight of control point p_i (w_i = 1 / |p_i - v|^{2\alpha}):

E = \sum_i w_i \, | f_v(p_i) - q_i |^2

Substituting f_v(x) = Mx + T gives

E = \sum_i w_i \, | M p_i + T - q_i |^2

so solving for f_v essentially reduces to solving for M and T. Setting the partial derivative of E with respect to T to zero yields:

T = q_* - M p_* , \quad p_* = \frac{\sum_i w_i p_i}{\sum_i w_i} , \quad q_* = \frac{\sum_i w_i q_i}{\sum_i w_i}

Re-substituting T into the energy function yields:

E = \sum_i w_i \, | M \hat{p}_i - \hat{q}_i |^2 , \quad \hat{p}_i = p_i - p_* , \; \hat{q}_i = q_i - q_*

Similarly, taking the partial derivative of the energy function with respect to M and setting it to zero gives the expression for M:

M = \Big( \sum_j w_j \hat{q}_j \hat{p}_j^T \Big) \Big( \sum_i w_i \hat{p}_i \hat{p}_i^T \Big)^{-1}

Substituting M and T back into f_v(x) = Mx + T gives the expression for f_v:

f_v(x) = M (x - p_*) + q_*
and (3) selecting the coordinates of the landworks points of the human face before projection as p, selecting the coordinates of the landworks point group of the human face after re-projection as q, and applying an MLS (Multi-level multisystem) based image warping algorithm to p and q to warp each point of the image by referring to q.
Step 3, performing head pose estimation on the normal face image and the enhanced forged face image, constructing features from the determined rotation matrix and translation matrix, and optimizing the parameters of the fake face-changing detection model.
In the embodiment, the normal face image and the enhanced forged face image are used as samples, head pose estimation is performed based on the key points of the samples to determine the rotation matrix and translation matrix of the head pose, and features constructed from the rotation matrix and translation matrix are used to optimize the parameters of the fake face-changing detection model.
In step 3, head pose estimation is performed based on the extracted 68 key points, and the process of solving the rotation matrix and translation matrix of the head pose is the same as in step 2, so it is not repeated here. The rotation matrices of the global key points and of the central face key points are denoted R_a and R_c respectively, and the corresponding translation matrices are denoted \vec{t}_a and \vec{t}_c respectively. According to the formula

\vec{v} = R \, \vec{w}, \quad \vec{w} = (0, 0, 1)^T,

the head three-dimensional vectors corresponding to the global key points and to the central face key points are calculated and denoted \vec{v}_a and \vec{v}_c respectively; their direction is the w-axis direction of the world coordinate axes.
Taking the cosine distance between \vec{v}_a and \vec{v}_c as the deviation distance, the cosine distances of the group of original pictures and of the DeepFakes pictures are counted, and the statistics are shown in FIG. 4. It can be seen that the cosine distances of the two head pose vectors estimated from real images are concentrated in a small range with a maximum of about 0.02, while most values for DeepFakes forged images lie between 0.02 and 0.08. The difference between the cosine-distance distributions of the two head pose vectors indicates that detecting forged pictures based on this feature is effective.
The features constructed from the rotation matrix and translation matrix include:

\vec{t}_a - \vec{t}_c

R_a - R_c

\vec{v}_a - \vec{v}_c

(R_a - R_c, \vec{t}_a - \vec{t}_c)

Parameters of the fake face-changing detection model are optimized with any one of the constructed features as the input data of the model.
The embodiment uses an SVM classifier as the fake face-changing detection model. The output value of the SVM classifier is the predicted probability of face forgery: the closer the value is to 1, the more likely the picture is forged. To achieve better results, different constructed features were tried as classifier training features. The training effect is shown in Table 1:
TABLE 1 (AUROC of the SVM classifier trained with each of the constructed features)
With AUROC as the performance measure, the experiments show that the combined feature (R_a - R_c, \vec{t}_a - \vec{t}_c) gives the best effect. Analyzing the data: with this feature, the AUROC obtained on detected video is as high as 0.974, and even starting from individual picture frames the AUROC reaches 0.890, which fully demonstrates the detection accuracy of the fake face-changing detection model.
Step 4, performing enhanced detection of fake face-changing with the parameter-optimized fake face-changing detection model.
In the embodiment, the enhanced detection of fake face-changing with the parameter-optimized fake face-changing detection model comprises the following steps: after the key points of the fake face image to be detected are extracted, head pose estimation is performed based on the key points to obtain the rotation matrix and translation matrix of the head pose; features are then constructed from the rotation matrix and the translation matrix and input to the parameter-optimized fake face-changing detection model, and the detection result is output through calculation.
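An end-to-end inference sketch, chaining the assumed helpers from the earlier sketches:

```python
# Detection sketch for a suspect image (suspect_img assumed loaded).
lm = extract_landmarks(suspect_img)                    # step: key points
R_a, t_a, R_c, t_c = pose_pair(suspect_img)            # step: head pose
feat = build_feature(R_a, t_a, R_c, t_c)[None, :]      # step: feature
prob_fake = clf.predict_proba(feat)[0, 1]              # closer to 1 = fake
```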
The above-described embodiments illustrate the technical solutions and advantages of the present invention in detail. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit the invention; any modifications, additions or equivalent substitutions made within the scope of the principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A fake face changing enhancement detection method based on head posture deviation correction comprises the following steps:
acquiring a normal face image and a forged face image and extracting key points;
aiming at the forged face image, combining the head three-dimensional model, re-projecting the key points to obtain re-projected key points, correcting the projected key points, and, according to the extracted initial key points and the corrected key points, warping the forged face image with a moving least squares (MLS) based algorithm to realize enhanced forging and obtain an enhanced forged face image;
taking the normal face image and the enhanced forged face image as samples, performing head pose estimation based on the key points of the samples to determine the rotation matrix and translation matrix of the head pose, and optimizing the parameters of the fake face-changing detection model with features constructed from the rotation matrix and the translation matrix;
and performing enhanced detection of fake face change by using the parameter-optimized fake face change detection model.
2. The method for detecting the enhancement of the false face replacement based on the head pose deviation correction according to claim 1, wherein all the extracted key points are global key points and are divided into contour key points and central face key points according to regions;
the re-projecting the key points comprises: and carrying out reprojection on the outline key points and the center face key points.
3. The method for detecting false face replacement enhancement based on head pose deviation correction according to claim 1 or 2, wherein the re-projection process comprises:
mapping two-dimensional key points on the face image to obtain three-dimensional key points corresponding to the head three-dimensional model; performing head posture evaluation based on the key points to determine a rotation matrix and a translation matrix corresponding to the key points; and then, referring to the rotation matrix and the translation matrix, and carrying out re-projection calculation on the three-dimensional key points by adopting a re-projection function to obtain the re-projected two-dimensional key points.
4. The method for detecting the enhancement of the fake face-changing based on the correction of the head pose deviation according to the claim 3, wherein the re-projecting the key points of the contour comprises: referring to a rotation matrix and a translation matrix corresponding to the central face reference point, and performing reprojection calculation on the contour three-dimensional key points by adopting a reprojection function to obtain reprojected contour two-dimensional key points;
the central face key point reprojection method comprises the following steps: and referring to the rotation matrix and the translation matrix corresponding to the contour reference point, and performing reprojection calculation on the three-dimensional key points of the central face by adopting a reprojection function to obtain the reprojected two-dimensional key points of the central face.
5. The method for detecting the false face-changing enhancement based on the head pose deviation correction according to claim 1, wherein the correcting the projected key points comprises:
and calculating a conversion matrix by using all the key points before and after the re-projection, and carrying out perspective transformation on the projected key points by using the conversion matrix so as to correct the key points.
6. The method for detecting the enhancement of the fake face-changing based on the correction of the head pose deviation according to the claim 1, wherein the process of distorting the fake face image comprises the following steps:
based on the initial key points and the corrected key points, fitting with a moving least squares algorithm to solve a transformation function representing the transformation relation between the initial key points and the corrected key points, and transforming the pixel points on the forged face image with the transformation function so as to warp it.
7. The method for detecting false face replacement enhancement based on head pose deviation correction according to claim 1, wherein the evaluating head pose based on key points of samples comprises:
based on a computer vision theory, the following formula is constructed and solved through the transformation relation of world coordinates, camera coordinates and picture coordinates, and a rotation matrix and a translation matrix of the head posture are obtained;
s \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R \mid \vec{t} \end{bmatrix} \begin{bmatrix} U_i \\ V_i \\ W_i \\ 1 \end{bmatrix}, \quad i = 1, \ldots, n

wherein (x_i, y_i) are the image coordinates of a sample key point, i is the key point index, n is the number of key points, (U_i, V_i, W_i) are the world coordinates of the key point, f_x and f_y are the focal lengths of the imaging camera in the x and y directions, (c_x, c_y) is the optical center of the imaging camera, s is the scaling parameter to be solved, R is the rotation matrix to be solved, and \vec{t} is the translation matrix to be solved.
8. The method for detecting forgery face replacement enhancement based on head pose deviation correction according to claim 2, wherein said optimizing forgery face replacement detection model parameters by using rotation matrix and translation matrix as features comprises:
the rotation matrices of the global key points and of the central face key points are denoted R_a and R_c respectively, and the translation matrices of the global key points and of the central face key points are denoted \vec{t}_a and \vec{t}_c respectively;
according to the formula

\vec{v} = R \, \vec{w}, \quad \vec{w} = (0, 0, 1)^T,

the head three-dimensional vectors corresponding to the global key points and to the central face key points are calculated and denoted \vec{v}_a and \vec{v}_c respectively;
the constructed features include:

\vec{t}_a - \vec{t}_c

R_a - R_c

\vec{v}_a - \vec{v}_c

(R_a - R_c, \vec{t}_a - \vec{t}_c)
and optimizing parameters of the fake face-changing detection model by using any constructed characteristic as input data of the fake face-changing detection model.
9. The method for detecting fake face-changing enhancement based on head pose deviation correction according to claim 8, wherein the fake face-changing detection model adopts an SVM classifier, and the constructed feature (R_a - R_c, \vec{t}_a - \vec{t}_c) is used as the input data of the SVM classifier to optimize the SVM classifier parameters.
10. The method for detecting forgery face replacement enhancement based on head pose deviation correction according to claim 1, wherein the enhanced detection of forgery face replacement by using the parameter-optimized forgery face replacement detection model comprises:
after the key points of the fake face image to be detected are extracted, head pose estimation is performed based on the key points to obtain the rotation matrix and translation matrix of the head pose; features are then constructed from the rotation matrix and the translation matrix and input to the parameter-optimized fake face-changing detection model, and the detection result is output through calculation.
CN202111233086.9A 2021-10-22 2021-10-22 Fake face change enhancement detection method based on head posture deviation correction Active CN113947794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111233086.9A CN113947794B (en) 2021-10-22 2021-10-22 Fake face change enhancement detection method based on head posture deviation correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111233086.9A CN113947794B (en) 2021-10-22 2021-10-22 Fake face change enhancement detection method based on head posture deviation correction

Publications (2)

Publication Number Publication Date
CN113947794A true CN113947794A (en) 2022-01-18
CN113947794B CN113947794B (en) 2024-07-05

Family

ID=79332207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111233086.9A Active CN113947794B (en) 2021-10-22 2021-10-22 Fake face change enhancement detection method based on head posture deviation correction

Country Status (1)

Country Link
CN (1) CN113947794B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115331263A (en) * 2022-09-19 2022-11-11 北京航空航天大学 Robust attitude estimation method and application thereof in orientation judgment and related method
CN116311481A (en) * 2023-05-19 2023-06-23 广州视景医疗软件有限公司 Construction method, device and storage medium of enhanced vision estimation model
CN116645299A (en) * 2023-07-26 2023-08-25 中国人民解放军国防科技大学 Method and device for enhancing depth fake video data and computer equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170006355A (en) * 2015-07-08 2017-01-18 주식회사 케이티 Method of motion vector and feature vector based fake face detection and apparatus for the same
KR101815697B1 (en) * 2016-10-13 2018-01-05 주식회사 에스원 Apparatus and method for discriminating fake face
CN111027465A (en) * 2019-12-09 2020-04-17 韶鼎人工智能科技有限公司 Video face replacement method based on illumination migration
CN113240575A (en) * 2021-05-12 2021-08-10 中国科学技术大学 Face counterfeit video effect enhancement method
CN113344777A (en) * 2021-08-02 2021-09-03 中国科学院自动化研究所 Face changing and replaying method and device based on three-dimensional face decomposition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN PENG; LIANG TAO; LIU JIN; DAI JIAO; HAN JIZHONG: "Forged face video detection method fusing global temporal and local spatial features", Journal of Cyber Security (信息安全学报), no. 02, 15 March 2020 (2020-03-15) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115331263A (en) * 2022-09-19 2022-11-11 北京航空航天大学 Robust attitude estimation method and application thereof in orientation judgment and related method
CN115331263B (en) * 2022-09-19 2023-11-07 北京航空航天大学 Robust attitude estimation method, application of robust attitude estimation method in direction judgment and related method
CN116311481A (en) * 2023-05-19 2023-06-23 广州视景医疗软件有限公司 Construction method, device and storage medium of enhanced vision estimation model
CN116311481B (en) * 2023-05-19 2023-08-25 广州视景医疗软件有限公司 Construction method, device and storage medium of enhanced vision estimation model
CN116645299A (en) * 2023-07-26 2023-08-25 中国人民解放军国防科技大学 Method and device for enhancing depth fake video data and computer equipment
CN116645299B (en) * 2023-07-26 2023-10-10 中国人民解放军国防科技大学 Method and device for enhancing depth fake video data and computer equipment

Also Published As

Publication number Publication date
CN113947794B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN110543846B (en) Multi-pose face image obverse method based on generation countermeasure network
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
WO2022111236A1 (en) Facial expression recognition method and system combined with attention mechanism
CN113947794A (en) Fake face changing enhancement detection method based on head posture deviation correction
CN112418074A (en) Coupled posture face recognition method based on self-attention
CN113139479B (en) Micro-expression recognition method and system based on optical flow and RGB modal contrast learning
CN112418041B (en) Multi-pose face recognition method based on face orthogonalization
CN112507617B (en) Training method of SRFlow super-resolution model and face recognition method
CN113283444B (en) Heterogeneous image migration method based on generation countermeasure network
KR20090065965A (en) 3d image model generation method and apparatus, image recognition method and apparatus using the same and recording medium storing program for performing the method thereof
CN114783024A (en) Face recognition system of gauze mask is worn in public place based on YOLOv5
CN110443883A (en) A kind of individual color image plane three-dimensional method for reconstructing based on dropblock
Baek et al. Generative adversarial ensemble learning for face forensics
CN110853119A (en) Robust reference picture-based makeup migration method
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN113538569A (en) Weak texture object pose estimation method and system
CN111126307A (en) Small sample face recognition method of joint sparse representation neural network
CN115439743A (en) Method for accurately extracting visual SLAM static characteristics in parking scene
Liu et al. Multi-Scale Underwater Image Enhancement in RGB and HSV Color Spaces
CN113269167B (en) Face counterfeiting detection method based on image blocking and disordering
CN113688698B (en) Face correction recognition method and system based on artificial intelligence
Teng et al. Unimodal face classification with multimodal training
CN113553895A (en) Multi-pose face recognition method based on face orthogonalization
CN110503061B (en) Multi-feature-fused multi-factor video occlusion area detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant