CN114529685A - Three-dimensional style face generation method, device, equipment and storage medium - Google Patents

Three-dimensional style face generation method, device, equipment and storage medium

Info

Publication number
CN114529685A
CN114529685A (application CN202210156774.8A)
Authority
CN
China
Prior art keywords
face
dimensional
pca
style
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210156774.8A
Other languages
Chinese (zh)
Inventor
芦爱余
卫华威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Huya Huxin Technology Co ltd
Original Assignee
Foshan Huya Huxin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Huya Huxin Technology Co ltd filed Critical Foshan Huya Huxin Technology Co ltd
Priority to CN202210156774.8A priority Critical patent/CN114529685A/en
Publication of CN114529685A publication Critical patent/CN114529685A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation


Abstract

The invention discloses a method, a device, equipment and a storage medium for generating a three-dimensional style face. The method comprises the following steps: acquiring a two-dimensional face image annotated with face key points, and constructing, according to the face key points, a three-dimensional real face that fully matches the real face in the two-dimensional face image; establishing a mapping relation between the PCA model of a standard three-dimensional style face and the PCA model of the three-dimensional real face; and adjusting the standard three-dimensional style face according to the mapping relation to obtain the three-dimensional style face corresponding to the two-dimensional face image. The technical solution of the embodiments of the invention can quickly generate, from a single picture, a three-dimensional style face that preserves the user's facial feature information.

Description

Three-dimensional style face generation method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of three-dimensional face reconstruction, in particular to a method, a device, equipment and a storage medium for generating a three-dimensional style face.
Background
Currently, live broadcast applications enable an anchor and viewing users to interact in a three-dimensional virtual space by constructing a virtual interaction space and a three-dimensional virtual image corresponding to each user.
Three-dimensional virtual images can be divided into three-dimensional real faces and three-dimensional style faces: the goal of a three-dimensional real face is an image close to the user's original appearance, while the goal of a three-dimensional style face is a cartoon rendition of the user's original appearance. In the prior art, the technology for generating a three-dimensional real face from a single picture is relatively mature, but generating a three-dimensional style face still largely depends on manual "face pinching" by the user. Because the three-dimensional style face has too many pinchable dimensions for a user to control easily, the resulting three-dimensional style face is difficult to make resemble the user.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for generating a three-dimensional style face, which aim to solve the problem of low similarity between the three-dimensional style face pinched by a user and the face of the user.
According to an aspect of the present invention, a method for generating a three-dimensional style face is provided, including:
acquiring a two-dimensional face image marked with face key points, and constructing a three-dimensional real face which is completely matched with a real face in the two-dimensional face image according to the face key points;
establishing a mapping relation between a Principal Component Analysis (PCA) model of a standard three-dimensional style human face and a PCA model of a three-dimensional real human face;
and adjusting the standard three-dimensional style face according to the mapping relation to obtain a three-dimensional style face corresponding to the two-dimensional face image.
Optionally, the establishing of the mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face includes:
and establishing a PCA base mapping relation and an amplitude mapping relation between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
Optionally, the establishing of a PCA base mapping relationship and an amplitude mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face includes:
in a PCA model of a standard three-dimensional style face, acquiring a first PCA base corresponding to the shape dimension of a target face and an amplitude value of the first PCA base;
inquiring a second PCA base corresponding to the shape dimension of the target face and an amplitude value of the second PCA base in a PCA model of the three-dimensional real face;
and establishing a mapping relation between the amplitude values of the first PCA base and the amplitude values of the second PCA base and the second PCA base.
Optionally, determining the amplitude value of the first PCA base of the standard three-dimensional style face includes:
acquiring a coordinate minimum value and a coordinate maximum value of a first PCA base under the condition that the standard three-dimensional style face is not deformed;
and taking the mean value of the coordinate minimum value and the coordinate maximum value as the amplitude value of the first PCA base.
Optionally, adjusting the standard three-dimensional style face according to the mapping relationship to obtain a three-dimensional style face corresponding to the two-dimensional face image, including:
updating the face shape coefficient of each PCA base with the mapping relation established in the standard three-dimensional style face into the face shape coefficient of the corresponding PCA base in the three-dimensional real face;
updating the amplitude value of each PCA base with the mapping relation established in the standard three-dimensional style face into the amplitude value of the corresponding PCA base in the three-dimensional real face;
and taking the updated standard three-dimensional style face as a three-dimensional style face corresponding to the two-dimensional face image.
Optionally, before constructing the mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face, the method further includes:
acquiring a standard three-dimensional style face and a three-dimensional style face sample set;
according to the formula S(α) = μ_s + U_s · diag(σ_s) · α, performing principal component analysis on the three-dimensional style face sample set to construct PCA bases of different dimensions and form a PCA model;
wherein S(α) is the face shape generated from the PCA bases, μ_s is the mean of all three-dimensional style face samples, σ_s is the variance of each three-dimensional style face sample with respect to the mean, U_s is the feature vectors obtained by principal component analysis of the difference between each three-dimensional style face sample and the mean, diag(σ_s) is the weight corresponding to the feature vectors U_s, and α is the feature values corresponding to the feature vectors U_s.
Optionally, the method includes obtaining a two-dimensional face image labeled with face key points, and constructing a three-dimensional real face completely matched with a real face in the two-dimensional face image according to the face key points, including:
acquiring a two-dimensional Face image marked with key points of a Face, and constructing a three-dimensional Face Model based on a Basel Face Model (BFM) topology;
establishing a mapping relation between a face key point in a three-dimensional face model and a face key point in a two-dimensional face image;
adjusting model parameters of the three-dimensional face model until the Euclidean distance between the projection point of the face key point in the three-dimensional face model and the face key point in the two-dimensional face image is minimum;
and taking the adjusted three-dimensional face model as a three-dimensional real face which is completely matched with a real face in the two-dimensional face image.
According to another aspect of the present invention, there is provided an apparatus for generating a three-dimensional style face, including:
the three-dimensional real face construction module is used for acquiring a two-dimensional face image marked with face key points and constructing a three-dimensional real face which is completely matched with a real face in the two-dimensional face image according to the face key points;
the mapping relation establishing module is used for establishing a mapping relation between a PCA model of the standard three-dimensional style face and a PCA model of the three-dimensional real face;
and the three-dimensional style face generation module is used for adjusting the standard three-dimensional style face according to the mapping relation to obtain a three-dimensional style face corresponding to the two-dimensional face image.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the method of generating a three-dimensional style face of any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the method for generating a three-dimensional style face according to any one of the embodiments of the present invention when executed.
According to the technical solution of the embodiments of the present invention, a two-dimensional face image annotated with face key points is acquired, and a three-dimensional real face fully matching the real face in the two-dimensional face image is constructed according to the face key points; a mapping relation is established between the PCA model of a standard three-dimensional style face and the PCA model of the three-dimensional real face; and the standard three-dimensional style face is adjusted according to the mapping relation to obtain the three-dimensional style face corresponding to the two-dimensional face image. This solves the problem of low similarity between a user-pinched three-dimensional style face and the user's face, and achieves the beneficial effect of quickly generating, from a single picture, a three-dimensional style face that preserves the user's facial feature information.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a method for generating a three-dimensional style face according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional style human face generation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a PCA model suitable for use in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart of a method for generating a three-dimensional style face according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for generating a three-dimensional style face according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device implementing the method for generating a three-dimensional style face according to the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "object," "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a method for generating a three-dimensional style face according to an embodiment of the present invention, where the embodiment is applicable to a case where a three-dimensional style face similar to a face of a user in a single picture is quickly generated based on the single picture, and the method may be executed by a three-dimensional style face generation apparatus, where the three-dimensional style face generation apparatus may be implemented in a form of hardware and/or software, and the three-dimensional style face generation apparatus may be configured in an electronic device. As shown in fig. 1, the method includes:
s110, a two-dimensional face image marked with face key points is obtained, and a three-dimensional real face completely matched with a real face in the two-dimensional face image is constructed according to the face key points.
The two-dimensional face image is a picture containing a face of a user, and the face key points are important feature points of each part of the face, usually contour points and corner points. As shown in fig. 2, the feature points marked on the parts of the eyebrows, eyes, nose, mouth, face contour, etc. in the first column of images on the left side are the key points of the face. In this embodiment, after a picture including a front face input by a user is acquired, if a face key point is not marked in the picture, a face key point detection technology is first adopted to locate the face key point in the picture.
In this embodiment, after the face key points in the two-dimensional face image are obtained, a mapping relation between the three-dimensional face key points and the two-dimensional face key points may be established based on the BFM topology by a parameter-fitting method; then, by optimizing the parameters of the three-dimensional face, the projection points of the three-dimensional face key points are fully aligned with the two-dimensional face key points, yielding a three-dimensional real face fully matching the user's real face in the two-dimensional face image, as shown in the second column of images from the left in fig. 2. By generating a three-dimensional real face fully aligned with the user's real face, the face shape information of the user's real face is preserved in the three-dimensional real face.
The BFM is a face database stored on the basis of PCA, and the BFM topology is a three-dimensional face model topology established on the BFM face database. The main purpose of PCA is to explain most of the variation in the original data with fewer variables: it selects fewer variables than the original feature dimensions and constructs new variables, i.e., principal components, that explain the final output.
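A tiny numeric sketch of this idea, using synthetic data that is not from the patent: 2-D points that mostly vary along one direction are summarized by a single principal component.

```python
import numpy as np

# Illustrative only: PCA keeps the few directions that explain most of
# the variance. These 2-D points vary almost entirely along one diagonal,
# so one principal component carries nearly all the information.
rng = np.random.default_rng(2)
t = rng.normal(size=200)
data = np.stack([t, t * 0.98 + rng.normal(scale=0.05, size=200)], axis=1)

centered = data - data.mean(axis=0)
cov = centered.T @ centered / (len(data) - 1)
eigvals, _ = np.linalg.eigh(cov)               # ascending order
explained = eigvals[::-1] / eigvals.sum()      # variance ratio, descending

print(explained[0] > 0.95)  # the first principal component dominates: True
```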
And S120, establishing a mapping relation between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
The standard three-dimensional style face can be understood as a general three-dimensional style face, and the three-dimensional style face matched with a specific real face of a user can be generated only after face feature adjustment is performed on the standard three-dimensional style face. The style face can be a virtual face which cartoonizes the real face, and can also be a virtual face of other styles.
In this embodiment, the three-dimensional real face constructed based on S110 is greatly different from the three-dimensional style face, the facial features of the three-dimensional style face are usually cartoon, and the aspect ratio of the whole face is also greatly different from that of the real face. In order to convert the three-dimensional real face into the three-dimensional style face and to keep the face shape information of the three-dimensional real face in the three-dimensional style face, a standard three-dimensional style face may be prepared first, a PCA model of the three-dimensional style face as shown in fig. 3 may be constructed, and then the face shape information of the three-dimensional real face may be transferred to the standard three-dimensional style face based on a mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
The PCA model used for establishing the mapping relation mainly refers to the face shape model; the expression model is not considered, because expressions are produced by expression driving and the facial expression does not need to be reconstructed here.
Optionally, the establishing of the mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face may include: and establishing a PCA base mapping relation and an amplitude mapping relation between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
In this embodiment, the mapping relationship may be understood as an amplitude mapping relationship corresponding to a coordinate value on each coordinate axis of two high-dimensional orthogonal spaces, that is, the mapping relationship includes a mapping relationship between PCA bases and a mapping relationship between amplitudes of the PCA bases. This is because, after finding the PCA bases of corresponding dimensions between the standard three-dimensional style face and the three-dimensional real face, the amplitudes of different PCA bases under the same dimension change coefficient are different, and therefore, it is also necessary to determine the amplitude mapping relationship of the change of the bases, so that the directions and amplitudes of the PCA bases can be considered when the subsequent three-dimensional real face is converted into the three-dimensional style face.
Optionally, the establishing of the PCA base mapping relationship and the amplitude mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face may include: in a PCA model of a standard three-dimensional style face, acquiring a first PCA base corresponding to the shape dimension of a target face and an amplitude value of the first PCA base; inquiring a second PCA base corresponding to the shape dimension of the target face and an amplitude value of the second PCA base in a PCA model of the three-dimensional real face; and establishing a mapping relation between the amplitude values of the first PCA base and the amplitude values of the second PCA base and the second PCA base.
In this embodiment, because the PCA bases of the three-dimensional real face have many dimensions while the PCA bases of the three-dimensional style face have few, not every PCA base dimension can have a mapping relation. To transfer the face shape information of the real face to the three-dimensional style face, it suffices to establish mapping relations for the PCA bases of part of the face shape dimensions, for example the face fat-thin dimension, the chin length dimension, the eye distance dimension, and so on. Based on the orthogonality of the PCA bases, the amplitude value of each PCA base can be varied in turn to observe how each face dimension changes in the standard three-dimensional style face and in the three-dimensional real face; the face shape dimensions to be mapped are thereby found, and mapping relations are then established between the PCA bases, and the amplitudes of the PCA bases, of the two three-dimensional faces in those dimensions.
Optionally, determining the amplitude value of the first PCA base of the standard three-dimensional style face may include: under the condition that the standard three-dimensional style face is not deformed, acquiring a coordinate minimum value and a coordinate maximum value of a first PCA base; and taking the mean value of the coordinate minimum value and the coordinate maximum value as the amplitude value of the first PCA base.
In this embodiment, when determining the amplitude value of the first PCA base, the coordinate values of the PCA base may be adjusted, taking the undeformed standard three-dimensional style face as the reference, to obtain the maximum and minimum coordinate values of the first PCA base; the mean of the maximum and minimum coordinate values is then taken as the amplitude value of the first PCA base. Correspondingly, the amplitude value of the second PCA base of the three-dimensional real face is calculated in the same way. Taking the mean of the maximum and minimum coordinate values of a PCA base as its amplitude value is only one option; other amplitude calculation methods may be set as required.
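As a minimal sketch of the definition above, the amplitude of a PCA base can be computed as the mean of its smallest and largest coordinate values; the helper name and array layout here are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def pca_base_amplitude(base: np.ndarray) -> float:
    """Amplitude of one PCA base, taken as the mean of its minimum and
    maximum coordinate values, as described above. `base` is the
    displacement field of one PCA base, shape (num_points, 3)."""
    return float((base.min() + base.max()) / 2.0)

# Toy base: coordinates range over [-0.2, 0.4], so the amplitude is
# (-0.2 + 0.4) / 2 = 0.1.
base = np.array([[-0.2, 0.0, 0.1],
                 [0.4, 0.05, -0.1]])
print(round(pca_base_amplitude(base), 6))  # 0.1
```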
And S130, adjusting the standard three-dimensional style face according to the mapping relation to obtain a three-dimensional style face corresponding to the two-dimensional face image.
In this embodiment, according to the mapping relationship, the shape dimensions of part of the faces in the standard style face are adjusted according to the face information in the three-dimensional real face, so that the finally obtained three-dimensional style face can retain more face information of the real face of the user and retain less texture information, as shown in the first column of images on the right side in fig. 2. In fig. 2, the eyebrows in the first column of images on the right side should be similar to the eyebrows of the real face of the user, which are not shown in the figure.
Optionally, adjusting the standard three-dimensional style face according to the mapping relationship to obtain a three-dimensional style face corresponding to the two-dimensional face image may include: updating the face shape coefficient of each PCA base with the mapping relation established in the standard three-dimensional style face into the face shape coefficient of the corresponding PCA base in the three-dimensional real face; updating the amplitude value of each PCA base with the mapping relation established in the standard three-dimensional style face into the amplitude value of the corresponding PCA base in the three-dimensional real face; and taking the updated standard three-dimensional style face as a three-dimensional style face corresponding to the two-dimensional face image.
Illustratively, taking the face fat-thin dimension as an example, a first PCA base and shape coefficients corresponding to the face fat-thin dimension in the three-dimensional real face are obtained, a mapping relation is queried to determine an amplitude value of the first PCA base in the three-dimensional real face and a second PCA base corresponding to the face fat-thin dimension in the standard three-dimensional style face, the shape coefficients of the second PCA base are replaced by the shape coefficients of the first PCA base, and the amplitude value of the second PCA base is replaced by the amplitude value of the first PCA base.
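The transfer in this example can be sketched as follows; the dimension names, dictionary layout and numeric values are hypothetical, used only to illustrate copying a shape coefficient and amplitude value from a mapped real-face PCA base onto the corresponding style-face base.

```python
# Hypothetical per-dimension parameters of the two PCA models. Only the
# dimensions with an established mapping relation appear here.
real_params = {
    "face_fatness": {"coeff": 0.8, "amplitude": 1.5},
    "chin_length": {"coeff": -0.3, "amplitude": 0.9},
}
style_params = {
    "face_fatness": {"coeff": 0.0, "amplitude": 1.0},
    "chin_length": {"coeff": 0.0, "amplitude": 1.0},
}

# For every mapped shape dimension, replace the style face's shape
# coefficient and amplitude with those of the real face's PCA base.
for dim, params in real_params.items():
    if dim in style_params:
        style_params[dim]["coeff"] = params["coeff"]
        style_params[dim]["amplitude"] = params["amplitude"]

print(style_params["face_fatness"])  # {'coeff': 0.8, 'amplitude': 1.5}
```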
According to the technical solution of this embodiment of the present invention, a two-dimensional face image annotated with face key points is acquired, and a three-dimensional real face fully matching the real face in the two-dimensional face image is constructed according to the face key points; a mapping relation is established between the PCA model of a standard three-dimensional style face and the PCA model of the three-dimensional real face; and the standard three-dimensional style face is adjusted according to the mapping relation to obtain the three-dimensional style face corresponding to the two-dimensional face image. This solves the problem of low similarity between a user-pinched three-dimensional style face and the user's face, and achieves the beneficial effect of quickly generating, from a single picture, a three-dimensional style face that preserves the user's facial feature information.
Example two
Fig. 4 is a flowchart of a method for generating a three-dimensional style face according to a second embodiment of the present invention, and this embodiment further provides a specific step of acquiring a two-dimensional face image labeled with face key points, and constructing a three-dimensional real face completely matched with a real face in the two-dimensional face image according to the face key points, and a specific step of constructing a PCA model of a standard three-dimensional style face, based on the above-mentioned second embodiment. As shown in fig. 4, the method includes:
s210, obtaining a two-dimensional face image marked with face key points, and constructing a three-dimensional face model based on BFM topology.
In this embodiment, after the face key points in the two-dimensional face image are obtained, a three-dimensional face model is constructed based on the BFM topology using the three-dimensional face reconstruction formula:

S(s_i, e_i) = S̄ + Σ_{i=1}^{m} α_i · s_i + Σ_{i=1}^{n} β_i · e_i

V = s · P · R · S(s_i, e_i) + t_2d
The three-dimensional face reconstruction formula includes the following parameters: s is a scaling parameter used to scale the three-dimensional face model so that it aligns with the size of the user's face in the two-dimensional face image; it is an unknown quantity. P is an orthographic projection matrix, a 3 × 3 identity matrix. R is a rotation matrix, a 3 × 3 orthonormal matrix, and an unknown quantity. S̄ is the mean face of the PCA three-dimensional face model and is a known quantity.
PCA three-dimensional modelling constructs PCA bases for a shape dimension and an expression dimension, where the shape dimension m takes the value 199 and the expression dimension n takes the value 29. α_i is the shape-dimension PCA base; this base has dimensions 199 × 65536 × 3, where 65536 is the number of points in the three-dimensional face model point cloud, 3 refers to the three coordinate axes XYZ, and i is the dimension index. s_i are the shape coefficients, comprising 199 unknowns. β_i is the expression-dimension PCA base, with dimensions 29 × 65536 × 3. e_i are the expression coefficients, comprising 29 unknowns. t_2d is an unknown quantity used to align the three-dimensional face model with the two-dimensional face image.
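A toy-sized sketch of assembling the model from these parameters; all array sizes and values below are illustrative placeholders (the real model uses m = 199, n = 29 and 65536 points, and the rotation and coefficients are unknowns to be fitted).

```python
import numpy as np

rng = np.random.default_rng(0)
num_points, m, n = 5, 3, 2                      # toy sizes; real: 65536, 199, 29

S_mean = rng.normal(size=(num_points, 3))       # mean face S̄, known
alpha = rng.normal(size=(m, num_points, 3))     # shape-dimension PCA bases
beta = rng.normal(size=(n, num_points, 3))      # expression-dimension PCA bases
s_coef = rng.normal(size=m)                     # shape coefficients s_i (unknowns)
e_coef = rng.normal(size=n)                     # expression coefficients e_i

# S = S̄ + sum_i alpha_i * s_i + sum_i beta_i * e_i
S = (S_mean
     + np.tensordot(s_coef, alpha, axes=1)
     + np.tensordot(e_coef, beta, axes=1))

scale = 1.2                                     # s, unknown in practice
R = np.eye(3)                                   # rotation, unknown in practice
P = np.eye(3)                                   # orthographic projection matrix
t_2d = np.array([0.1, -0.2, 0.0])               # alignment translation, unknown

# V = s * P * R * S + t_2d, applied row-wise to the point cloud
V = scale * (S @ (P @ R).T) + t_2d
print(V.shape)  # (5, 3)
```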
S220, establishing a mapping relation between the face key points in the three-dimensional face model and the face key points in the two-dimensional face image.
In this embodiment, after the three-dimensional face model is established, the face key points at corresponding positions may be selected from the three-dimensional face model according to each face key point in the two-dimensional face image, and a corresponding key point mapping relationship is established.
And S230, adjusting model parameters of the three-dimensional face model until the Euclidean distance between the projection point of the face key point in the three-dimensional face model and the face key point in the two-dimensional face image is minimum.
In this embodiment, the projection points of the face key points in the three-dimensional face model are determined, and the parameters s, R, s_i, e_i and t_{2d} of the three-dimensional face model are then optimized by minimizing the Euclidean distance between these projection points and the face key points in the two-dimensional face image, realizing the final face reconstruction.
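One piece of this minimization admits a closed-form sketch: with the rotation and PCA coefficients held fixed, the least-squares-optimal scale s and translation t_2d between model key points and image key points can be solved directly. The 68-point landmark count and the function name are assumptions for illustration; a complete fit would also optimize R, s_i and e_i, e.g. by iterative nonlinear least squares:

```python
import numpy as np

def fit_scale_translation(model_pts, image_pts):
    """Closed-form least squares for the scale s and translation t that
    minimise the summed squared Euclidean distance between s*model_pts + t
    and image_pts (rotation and PCA coefficients held fixed)."""
    mx = model_pts.mean(axis=0)
    my = image_pts.mean(axis=0)
    xc = model_pts - mx
    yc = image_pts - my
    s = (xc * yc).sum() / (xc * xc).sum()  # optimal isotropic scale
    t = my - s * mx                        # optimal translation given s
    return s, t

# Synthetic check: recover a known scale and offset from 68 key points
# (68 is a common face-landmark count, assumed here for illustration).
rng = np.random.default_rng(1)
model = rng.normal(size=(68, 2))
image = 2.0 * model + np.array([5.0, -3.0])
s, t = fit_scale_translation(model, image)
```

Since the synthetic image points are an exact scaled-and-shifted copy of the model points, the solver should recover s = 2 and t = (5, -3) up to floating-point roundoff.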
And S240, taking the adjusted three-dimensional face model as a three-dimensional real face which is completely matched with a real face in the two-dimensional face image.
And S250, constructing a PCA model of the standard three-dimensional style human face.
In this embodiment, in order to convert the three-dimensional real face into a three-dimensional style face that retains the user's face information, a standard three-dimensional style face may be downloaded from the Internet, and a PCA model of the standard three-dimensional style face, as shown in fig. 3, is constructed from a plurality of artist-made three-dimensional style faces with different face shapes. This facilitates subsequently establishing the mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face, so that the user's face information can be transferred to the three-dimensional style face.
Optionally, a standard three-dimensional style face and a three-dimensional style face sample set are obtained, and principal component analysis is performed on the three-dimensional style face sample set according to the formula S(α) = μ_s + U_s · diag(σ_s) · α, to construct PCA bases of different dimensions, which form the PCA model. Here S(α) is the face reconstructed from the PCA bases, μ_s is the mean of all three-dimensional style face samples, σ_s is the standard deviation of the three-dimensional style face samples, U_s is the matrix of eigenvectors obtained by principal component analysis of the differences between each three-dimensional style face sample and the mean, diag(σ_s) is the weight corresponding to the eigenvectors U_s, and α is the coefficient vector corresponding to the eigenvectors U_s.
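A minimal numpy sketch of building such a PCA model via SVD, so that a face is reconstructed as S(α) = μ_s + U_s · diag(σ_s) · α. The sample sizes and the σ_s normalization (singular values divided by √n) are illustrative assumptions, not taken from the source:

```python
import numpy as np

def build_pca_model(samples):
    """samples: (n_samples, n_features) flattened style-face meshes.
    Returns (mu, U, sigma) such that a face is mu + U @ np.diag(sigma) @ alpha."""
    mu = samples.mean(axis=0)
    centered = samples - mu
    # SVD of the centered data: rows of Vt are the principal directions,
    # singular values give the per-component scales.
    _, svals, Vt = np.linalg.svd(centered, full_matrices=False)
    U = Vt.T                                  # eigenvector matrix U_s
    sigma = svals / np.sqrt(len(samples))     # per-component weight sigma_s
    return mu, U, sigma

rng = np.random.default_rng(2)
faces = rng.normal(size=(20, 30))  # 20 toy samples, 30 features each
mu, U, sigma = build_pca_model(faces)

# alpha = 0 reconstructs the mean style face.
new_face = mu + U @ np.diag(sigma) @ np.zeros(len(sigma))
```

Setting α to zero reproduces the mean face, and the columns of U are orthonormal, which are the two basic sanity checks for a PCA model of this form.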
S260, establishing a mapping relation between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
And S270, adjusting the standard three-dimensional style face according to the mapping relation to obtain a three-dimensional style face corresponding to the two-dimensional face image.
It should be noted that, in this embodiment, a three-dimensional style face carrying the user's facial feature information can be constructed from a single picture, retaining more of the user's face information; textures can be taken from publicly provided art resources, and features such as eyebrows, lipstick and eyes can be adjusted.
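The coefficient-and-amplitude transfer described in S260 and S270 can be sketched as follows. The dictionary-based mapping layout and the array shapes are hypothetical, chosen only to illustrate overwriting the mapped style-face PCA entries with the real face's values:

```python
import numpy as np

def transfer_coefficients(style_coefs, style_amps, real_coefs, real_amps, mapping):
    """For every style-face PCA basis that has a mapped real-face basis,
    overwrite its shape coefficient and amplitude with the real face's values.
    mapping: dict {style_basis_index: real_basis_index} (hypothetical layout).
    Unmapped style-face entries are left unchanged."""
    out_coefs = style_coefs.copy()
    out_amps = style_amps.copy()
    for style_i, real_i in mapping.items():
        out_coefs[style_i] = real_coefs[real_i]
        out_amps[style_i] = real_amps[real_i]
    return out_coefs, out_amps

# Toy data: 5 PCA bases on each side, 2 of them mapped.
style_c = np.zeros(5)
style_a = np.ones(5)
real_c = np.arange(5, dtype=float)
real_a = np.full(5, 2.0)
new_c, new_a = transfer_coefficients(style_c, style_a, real_c, real_a, {0: 3, 2: 1})
```

Only the mapped bases (indices 0 and 2 of the style face here) take on the real face's coefficients and amplitudes; the rest of the standard style face is preserved, which is how the user's face information is carried over without discarding the style.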
According to the technical scheme of this embodiment of the invention, a two-dimensional face image labeled with face key points is acquired, and a three-dimensional real face completely matching the real face in the two-dimensional face image is constructed from the face key points; a mapping relationship is established between the PCA model of a standard three-dimensional style face and the PCA model of the three-dimensional real face; and the standard three-dimensional style face is adjusted according to the mapping relationship to obtain a three-dimensional style face corresponding to the two-dimensional face image. This solves the problem that a three-dimensional style face manually "pinched" (customized) by the user has low similarity to the user's face, and achieves the advantage of quickly generating, from a single picture, a three-dimensional style face that retains the user's facial feature information.
Example three
Fig. 5 is a schematic structural diagram of an apparatus for generating a three-dimensional style face according to a third embodiment of the present invention. This embodiment is applicable to quickly generating, from a single picture, a three-dimensional style face similar to the user's face in that picture. The apparatus for generating a three-dimensional style face may be implemented in hardware and/or software and may be configured in an electronic device. As shown in fig. 5, the apparatus includes:
a three-dimensional real face construction module 510, configured to execute acquiring a two-dimensional face image labeled with face key points, and construct a three-dimensional real face that is completely matched with a real face in the two-dimensional face image according to the face key points;
a mapping relationship establishing module 520, configured to perform establishing a mapping relationship between a PCA model of a standard three-dimensional style face and a PCA model of a three-dimensional real face;
and the three-dimensional style face generation module 530 is configured to perform adjustment on the standard three-dimensional style face according to the mapping relationship to obtain a three-dimensional style face corresponding to the two-dimensional face image.
According to the technical scheme of this embodiment of the invention, a two-dimensional face image labeled with face key points is acquired, and a three-dimensional real face completely matching the real face in the two-dimensional face image is constructed from the face key points; a mapping relationship is established between the PCA model of a standard three-dimensional style face and the PCA model of the three-dimensional real face; and the standard three-dimensional style face is adjusted according to the mapping relationship to obtain a three-dimensional style face corresponding to the two-dimensional face image. This solves the problem that a three-dimensional style face manually "pinched" (customized) by the user has low similarity to the user's face, and achieves the advantage of quickly generating, from a single picture, a three-dimensional style face that retains the user's facial feature information.
Optionally, the mapping relationship establishing module 520 is configured to execute: and establishing a PCA base mapping relation and an amplitude mapping relation between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
Optionally, the mapping relationship establishing module 520 includes:
the first acquisition module is used for acquiring a first PCA base corresponding to the shape dimension of the target human face and an amplitude value of the first PCA base in a PCA model of a standard three-dimensional style human face;
the second acquisition module is used for inquiring a second PCA base corresponding to the shape dimension of the target face and the amplitude value of the second PCA base in the PCA model of the three-dimensional real face;
and the establishing module is used for establishing a mapping relation between the amplitude values of the first PCA base and the amplitude values of the second PCA base and the second PCA base.
Optionally, the first obtaining module is configured to perform:
under the condition that the standard three-dimensional style face is not deformed, acquiring a coordinate minimum value and a coordinate maximum value of a first PCA base;
and taking the mean value of the coordinate minimum value and the coordinate maximum value as the amplitude value of the first PCA base.
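The amplitude computation performed by this module can be sketched as below, assuming (as an illustrative data layout, not stated in the source) that a PCA basis is stored as a per-vertex displacement array:

```python
import numpy as np

def pca_basis_amplitude(basis):
    """Amplitude of one PCA basis of the undeformed standard style face,
    taken as the mean of its minimum and maximum coordinate values.
    basis: (n_verts, 3) displacement field of a single PCA component."""
    return (basis.min() + basis.max()) / 2.0

# Toy 2-vertex basis: coordinate minimum is -0.4, maximum is 0.6,
# so the amplitude is (-0.4 + 0.6) / 2 = 0.1.
basis = np.array([[-0.4, 0.0, 0.1],
                  [0.2, 0.6, -0.2]])
amp = pca_basis_amplitude(basis)
```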
Optionally, the three-dimensional style face generating module 530 is configured to perform:
updating the face shape coefficient of each PCA base with the mapping relation established in the standard three-dimensional style face into the face shape coefficient of the corresponding PCA base in the three-dimensional real face;
updating the amplitude value of each PCA base with the mapping relation established in the standard three-dimensional style face into the amplitude value of the corresponding PCA base in the three-dimensional real face;
and taking the updated standard three-dimensional style face as a three-dimensional style face corresponding to the two-dimensional face image.
Optionally, the apparatus further includes a PCA model establishing module, configured to perform:
before constructing a mapping relation between a PCA model of a standard three-dimensional style face and a PCA model of a three-dimensional real face, acquiring a standard three-dimensional style face and a three-dimensional style face sample set;
performing principal component analysis on the three-dimensional style face sample set according to the formula S(α) = μ_s + U_s · diag(σ_s) · α, to construct PCA bases of different dimensions, which form the PCA model;
wherein S(α) is the face reconstructed from the PCA bases, μ_s is the mean of all three-dimensional style face samples, σ_s is the standard deviation of the three-dimensional style face samples, U_s is the matrix of eigenvectors obtained by principal component analysis of the differences between each three-dimensional style face sample and the mean, diag(σ_s) is the weight corresponding to the eigenvectors U_s, and α is the coefficient vector corresponding to the eigenvectors U_s.
Optionally, the three-dimensional real face constructing module 510 is configured to perform:
acquiring a two-dimensional face image marked with face key points, and constructing a three-dimensional face model based on the Basel Face Model (BFM) topology;
establishing a mapping relation between a face key point in a three-dimensional face model and a face key point in a two-dimensional face image;
adjusting model parameters of the three-dimensional face model until the Euclidean distance between the projection point of the face key point in the three-dimensional face model and the face key point in the two-dimensional face image is minimum;
and taking the adjusted three-dimensional face model as a three-dimensional real face which is completely matched with a real face in the two-dimensional face image.
The device for generating the three-dimensional style face provided by the embodiment of the invention can execute the method for generating the three-dimensional style face provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
FIG. 6 illustrates a schematic structural diagram of an electronic device 10 that may be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM)12, a Random Access Memory (RAM)13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM)12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as the generation of three-dimensional style faces.
In some embodiments, the method of generating a three-dimensional style face may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the method for generating a three-dimensional style face described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the method of generating a three-dimensional style face by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for generating a three-dimensional style face is characterized by comprising the following steps:
acquiring a two-dimensional face image marked with face key points, and constructing a three-dimensional real face completely matched with a real face in the two-dimensional face image according to the face key points;
establishing a mapping relation between a Principal Component Analysis (PCA) model of a standard three-dimensional style face and a PCA model of the three-dimensional real face;
and adjusting the standard three-dimensional style face according to the mapping relation to obtain a three-dimensional style face corresponding to the two-dimensional face image.
2. The method of claim 1, wherein the establishing a mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face comprises:
and establishing a PCA base mapping relation and an amplitude mapping relation between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face.
3. The method of claim 2, wherein the establishing of the PCA base mapping relationship and the magnitude mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face comprises:
acquiring a first PCA base corresponding to the shape dimension of a target face and an amplitude value of the first PCA base in the PCA model of the standard three-dimensional style face;
inquiring a second PCA base corresponding to the shape dimension of the target face and an amplitude value of the second PCA base in the PCA model of the three-dimensional real face;
establishing a mapping relationship between the amplitude values of the first PCA base and the amplitude values of the second PCA base and the second PCA base.
4. The method of claim 3, wherein determining the amplitude value of the first PCA basis for a standard three-dimensional style face comprises:
under the condition that the standard three-dimensional style face is not deformed, acquiring the coordinate minimum value and the coordinate maximum value of the first PCA base;
and taking the mean value of the coordinate minimum value and the coordinate maximum value as the amplitude value of the first PCA base.
5. The method according to claim 3, wherein the adjusting the standard three-dimensional style face according to the mapping relationship to obtain a three-dimensional style face corresponding to the two-dimensional face image comprises:
updating the face shape coefficient of each PCA base with the mapping relation established in the standard three-dimensional style face into a face shape coefficient of a corresponding PCA base in the three-dimensional real face;
updating the amplitude value of each PCA base with the mapping relation established in the standard three-dimensional style face into the amplitude value of the corresponding PCA base in the three-dimensional real face;
and taking the updated standard three-dimensional style face as a three-dimensional style face corresponding to the two-dimensional face image.
6. The method according to any one of claims 1-5, further comprising, before the constructing a mapping relationship between the PCA model of the standard three-dimensional style face and the PCA model of the three-dimensional real face:
acquiring a standard three-dimensional style face and a three-dimensional style face sample set;
performing principal component analysis on the three-dimensional style face sample set according to the formula S(α) = μ_s + U_s · diag(σ_s) · α, to construct PCA bases of different dimensions, which form the PCA model;
wherein S(α) is the face reconstructed from the PCA bases, μ_s is the mean of all three-dimensional style face samples, σ_s is the standard deviation of the three-dimensional style face samples, U_s is the matrix of eigenvectors obtained by principal component analysis of the differences between each three-dimensional style face sample and the mean, diag(σ_s) is the weight corresponding to the eigenvectors U_s, and α is the coefficient vector corresponding to the eigenvectors U_s.
7. The method according to claim 1, wherein the obtaining a two-dimensional face image labeled with face key points and constructing a three-dimensional real face completely matching a real face in the two-dimensional face image according to the face key points comprises:
acquiring a two-dimensional face image labeled with face key points, and constructing a three-dimensional face model based on a Basel face model BFM topology;
establishing a mapping relation between a face key point in the three-dimensional face model and a face key point in the two-dimensional face image;
adjusting model parameters of the three-dimensional face model until the Euclidean distance between a projection point of a face key point in the three-dimensional face model and the face key point in the two-dimensional face image is minimum;
and taking the adjusted three-dimensional face model as a three-dimensional real face which is completely matched with a real face in the two-dimensional face image.
8. An apparatus for generating a three-dimensional style face, comprising:
the three-dimensional real face construction module is used for acquiring a two-dimensional face image marked with face key points and constructing a three-dimensional real face which is completely matched with a real face in the two-dimensional face image according to the face key points;
the mapping relation establishing module is used for establishing a mapping relation between a PCA model of a standard three-dimensional style face and a PCA model of the three-dimensional real face;
and the three-dimensional style face generation module is used for adjusting the standard three-dimensional style face according to the mapping relation to obtain a three-dimensional style face corresponding to the two-dimensional face image.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the method of generating a three-dimensional stylized face of any one of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions for causing a processor to implement the method for generating a three-dimensional style face according to any one of claims 1 to 7 when executed.