CN112419454A - Face reconstruction method and device, computer equipment and storage medium - Google Patents


Info

Publication number: CN112419454A
Application number: CN202011337901.1A
Authority: CN (China)
Prior art keywords: target, face, data, virtual, real
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN112419454B (en)
Inventors: 徐胜伟, 王权, 钱晨
Current Assignee: Beijing Sensetime Technology Development Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Beijing Sensetime Technology Development Co Ltd

Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202011337901.1A
Publication of CN112419454A
Priority to PCT/CN2021/102431 (WO2022110791A1)
Priority to JP2022520004A (JP2023507863A)
Priority to KR1020227010819A (KR20220075339A)
Priority to TW110127356A (TWI773458B)
Application granted; publication of CN112419454B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65: Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F 13/655: Generating or modifying game content automatically by importing photos, e.g. of the player
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/69: Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F 2300/695: Imported photos, e.g. of the player


Abstract

The present disclosure provides a face reconstruction method, apparatus, computer device and storage medium, wherein the method comprises: generating a first real face model based on the target image; fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models; generating target bone data and a target skin deformation coefficient based on target coefficients respectively corresponding to the plurality of second real face models and virtual face models with preset styles respectively corresponding to the plurality of second real face models; and generating a target virtual human face model corresponding to the first real human face model based on the target skeleton data and the target skin deformation coefficient.

Description

Face reconstruction method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a face reconstruction method and apparatus, a computer device, and a storage medium.
Background
Face reconstruction can establish a virtual face three-dimensional model according to a real face or according to user preference, and has wide application in fields such as games, animation, and virtual social networking. For example, in a game, through a face reconstruction system provided by the game program, a player can generate a virtual face three-dimensional model from a real face included in an image provided by the player, and participate in the game with a greater sense of immersion using the created virtual face three-dimensional model.
At present, when face reconstruction is performed based on a face included in a face image, face contour features are generally extracted from the face image, and the extracted face contour features are then matched and fused with a pre-generated virtual three-dimensional model to generate a virtual face three-dimensional model. However, the accuracy of matching face contour features is low, so the similarity between the generated virtual face three-dimensional model and the real face image is low.
Disclosure of Invention
The embodiment of the disclosure at least provides a face reconstruction method, a face reconstruction device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a face reconstruction method, including: generating a first real face model based on the target image; fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models; generating target bone data and a target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models; and generating a target virtual face model corresponding to the first real face model based on the target skeleton data and the target skin deformation coefficient.
In this embodiment, the target coefficients serve as a medium to establish association relationships between the plurality of second real face models and the first real face model; these relationships can characterize the association between the virtual face models established based on the second real face models and the target virtual face model established based on the first real face model. In addition, the target skin deformation coefficient can characterize how the face skin in the target image deforms: even with identical bones, differences in facial plumpness can be represented by the skin. The target virtual face model determined based on the target coefficients and the target skin deformation coefficient therefore has both the preset style and the characteristics of the original face corresponding to the first real face model, reflects the plumpness characteristics of the original face, and has higher similarity with the original face corresponding to the first real face model.
In an alternative embodiment, the virtual face model comprises: the skin deformation coefficient of the skin data of the virtual human face model relative to the pre-generated standard skin data of the standard virtual human face model; generating a target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models, including: and generating the target skin deformation coefficient of the target skin data of the target virtual face model relative to the standard skin data based on the target coefficients corresponding to the second real face models respectively and the skin deformation coefficients included in the virtual face models respectively.
In this embodiment, the standard skin data of the standard virtual face model is used as a reference. After the skin deformation coefficient of each virtual face model's skin data relative to the standard skin data is determined, the target skin deformation coefficient of the target skin data of the target virtual face relative to the standard skin data can be accurately determined based on the target coefficients characterizing the association relationship between the virtual face models and the target virtual face model. The skin data of the target virtual face can then be determined more accurately from the target skin deformation coefficient, so the generated target virtual face model has higher similarity with the original face corresponding to the first real face model.
In an optional implementation manner, the generating, based on target coefficients corresponding to the plurality of second real face models respectively and skin deformation coefficients included in the plurality of virtual face models respectively, the target skin deformation coefficient of target skin data of the target virtual face model relative to the standard skin data includes: normalizing the target coefficients respectively corresponding to the plurality of second real face models; and obtaining the target skin deformation coefficient based on the target coefficient after normalization processing and the skin deformation coefficients respectively included by the virtual face model.
In this embodiment, by performing normalization processing on the target coefficients corresponding to the plurality of second real face models, when the target skin deformation coefficients are obtained based on the target coefficients after normalization processing and the skin deformation coefficients included in the virtual face models, the data expression is simpler, the processing procedure is simplified, and the processing speed of face reconstruction using the fitting result is increased.
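As an illustrative sketch of this normalization-and-blending step (not code from the patent; the array shapes and the sum-to-one normalization are assumptions, since the disclosure does not specify a normalization scheme):

```python
import numpy as np

def blend_skin_deformation(alphas: np.ndarray, skin_coeffs: np.ndarray) -> np.ndarray:
    """Blend per-model skin deformation coefficients into the target coefficient.

    alphas:      (N,) target coefficients fitted for the N second real face models.
    skin_coeffs: (N, R, W) skin deformation coefficients of the N virtual face
                 models relative to the standard virtual face model.
    """
    weights = alphas / alphas.sum()                       # normalize the target coefficients
    return np.einsum("n,nrw->rw", weights, skin_coeffs)   # target skin deformation coefficient
```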
In an alternative embodiment, the generating a target virtual face model corresponding to the first real face model based on the target bone data and the target skinning deformation coefficient includes: performing position transformation processing on the standard skinning data, based on the target skeleton data and the association relationship between the standard skeleton data and the standard skinning data in the standard virtual face model, to generate intermediate skinning data; performing deformation processing on the intermediate skinning data based on the target skin deformation coefficient to obtain target skinning data; and constructing the target virtual face model based on the target skeleton data and the target skinning data.
In this embodiment, after the intermediate skin data is generated, it is deformed using the target skin deformation coefficient. The resulting target skin data can represent both the appearance characteristics of the first real face model and the plumpness of the first real face, so the generated target virtual faces differ not only in appearance but also in plumpness, and each has higher similarity with the original face corresponding to its first real face model.
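The following sketch illustrates one plausible form of this pipeline, assuming the bone-skin association is a linear-blend-skinning weight matrix and the deformation is a per-vertex displacement basis; all names and shapes are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def build_target_face(std_skin, skin_weights, bone_transforms, deform_basis, deform_coeff):
    """std_skin: (W, 3) standard skin vertices; skin_weights: (W, B) bone-vertex
    association; bone_transforms: (B, 4, 4) derived from the target bone data;
    deform_basis: (R, W, 3) displacement basis (assumed); deform_coeff: (R,)
    target skin deformation coefficient."""
    # 1) Position-transform the standard skin data with the target bone data
    #    (linear blend skinning, one plausible form of the bone-skin association)
    #    to obtain the intermediate skin data.
    homog = np.concatenate([std_skin, np.ones((len(std_skin), 1))], axis=1)   # (W, 4)
    per_bone = np.einsum("bij,wj->wbi", bone_transforms, homog)[..., :3]      # (W, B, 3)
    intermediate = np.einsum("wb,wbi->wi", skin_weights, per_bone)
    # 2) Deform the intermediate skin data with the target skin deformation coefficient.
    return intermediate + np.einsum("r,rwi->wi", deform_coeff, deform_basis)  # target skin data
```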
In an alternative embodiment, the target bone data includes at least one of: target bone position data, target bone scaling data, and target bone rotation data; the bone data corresponding to each of the plurality of virtual face models includes at least one of: bone rotation data, bone position data, and bone scaling data corresponding to each of a plurality of face bones of the virtual face.
In this embodiment, the data corresponding to each of the plurality of face bones can be represented more accurately by using such bone data, and the target virtual face model can be determined more accurately by using the target bone data.
In an optional embodiment, the target bone data includes target bone position data, and the generating of the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset style corresponding to the second real face models respectively includes: and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
In an optional embodiment, the target bone data includes target bone scaling data, and the generating the target bone data based on target coefficients corresponding to the second real face models respectively and virtual face models having a preset style corresponding to the second real face models respectively includes: and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
In an optional embodiment, the target bone data includes target bone rotation data, and the generating of the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset style corresponding to the second real face models respectively includes: converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone rotation data.
In an alternative embodiment, the generating a first real face model based on the target image includes: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
In the embodiment, the face features of the original face in the target image can be represented more accurately and comprehensively by using the first real face model obtained by performing three-dimensional face reconstruction on the original face.
In an alternative embodiment, a plurality of said second real face models are pre-generated according to the following: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
In this embodiment, a plurality of reference images are used, so that a wider range of human face appearance features can be covered as much as possible, and therefore, a second real human face model obtained by performing three-dimensional human face reconstruction based on each reference image in the plurality of reference images can also cover the wider range of human face appearance features as much as possible.
In an optional embodiment, the method further comprises: acquiring virtual face models with preset styles corresponding to the plurality of second real face models respectively by adopting the following modes: generating an intermediate virtual face model with a preset style corresponding to each second real face model in the plurality of second real face models; generating skin deformation coefficients of the virtual face model corresponding to each second real face model relative to the standard virtual face model based on a plurality of groups of preset skin deformation coefficients relative to the standard virtual face model; and adjusting the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficient, and generating the virtual face model of each second real face model based on the adjusted intermediate skin data and the intermediate skeleton data of the intermediate virtual face model.
In this embodiment, the intermediate skin data of the intermediate virtual face model corresponding to the second real face model is adjusted through the skin deformation coefficient, so that the generated virtual face model has not only the preset style and the appearance characteristics of the second real face model, but also the fat-thin degree of the reference face corresponding to the second real face model, and the virtual face model and the corresponding reference face have higher similarity.
In an optional implementation manner, the fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models respectively includes: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
In this embodiment, the fitting situation when the plurality of second real face models are used to fit the first real face model can be accurately characterized by using the target coefficients.
In a second aspect, an embodiment of the present disclosure further provides a face reconstruction apparatus, including:
a first generation module for generating a first real face model based on the target image;
the processing module is used for fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
the second generation module is used for generating target bone data and target skin deformation coefficients based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models;
a third generation module for generating a target virtual face model corresponding to the first real face model based on the target skeleton data and the target skin deformation coefficient.
In an alternative embodiment, the virtual face model includes: the skin deformation coefficient of the skin data of the virtual human face model relative to the pre-generated standard skin data of the standard virtual human face model;
the second generation module is configured to, when generating a target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models: generate the target skin deformation coefficient of the target skin data of the target virtual face model relative to the standard skin data, based on the target coefficients corresponding to the second real face models respectively and the skin deformation coefficients included in the virtual face models respectively.
In an optional embodiment, the second generating module, when generating the target skin deformation coefficients of the target skin data of the target virtual face model relative to the standard skin data based on the target coefficients corresponding to the plurality of second real face models respectively and the skin deformation coefficients included in the plurality of virtual face models respectively, is configured to: normalizing the target coefficients respectively corresponding to the plurality of second real face models; and obtaining the target skin deformation coefficient based on the target coefficient after normalization processing and the skin deformation coefficients respectively included by the virtual face model.
In an alternative embodiment, the third generation module, when generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skinning deformation coefficient, is configured to: perform position transformation processing on the standard skinning data, based on the target skeleton data and the association relationship between the standard skeleton data and the standard skinning data in the standard virtual face model, to generate intermediate skinning data; perform deformation processing on the intermediate skinning data based on the target skin deformation coefficient to obtain target skinning data; and construct the target virtual face model based on the target skeleton data and the target skinning data.
In an alternative embodiment, the target bone data includes at least one of: target bone position data, target bone scaling data, and target bone rotation data;
the bone data corresponding to each of the plurality of virtual face models includes at least one of: bone rotation data, bone position data, and bone scaling data corresponding to each of a plurality of face bones of the virtual face.
In an optional embodiment, the target bone data includes the target bone position data, and the second generating module, when generating the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset styles corresponding to the second real face models respectively, is configured to: and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
In an optional embodiment, the target bone data includes the target bone scaling data, and the second generating module, when generating the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset styles corresponding to the second real face models respectively, is configured to: and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
In an optional embodiment, the target bone data includes the target bone rotation data, and the second generating module, when generating the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset styles corresponding to the second real face models respectively, is configured to: converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone rotation data.
In an alternative embodiment, the first generating module, when generating the first real face model based on the target image, is configured to: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
In an alternative embodiment, the processing module pre-generates the plurality of second real face models according to the following: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the plurality of reference images, carrying out three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
In an optional implementation manner, the system further includes an obtaining module, configured to obtain virtual face models with preset styles corresponding to the plurality of second real face models respectively by using the following manners: generating an intermediate virtual face model with a preset style corresponding to each second real face model in the plurality of second real face models; generating skin deformation coefficients of the virtual face model corresponding to each second real face model relative to the standard virtual face model based on a plurality of groups of preset skin deformation coefficients relative to the standard virtual face model; and adjusting the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficient, and generating the virtual face model of each second real face model based on the adjusted intermediate skin data and the intermediate skeleton data of the intermediate virtual face model.
In an optional implementation manner, the processing module is configured to, when fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models, use: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
In a third aspect, the present disclosure also provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor; the processor is configured to execute the machine-readable instructions stored in the memory, and the instructions, when executed by the processor, cause the processor to perform the steps in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps in the first aspect or any one of the possible implementations of the first aspect.
For the description of the effects of the above-mentioned face reconstruction apparatus, computer device, and computer-readable storage medium, reference is made to the description of the above-mentioned face reconstruction method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a face reconstruction method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating a particular method of generating a virtual face model corresponding to each second real face model provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a specific method for obtaining a target skin deformation coefficient according to an embodiment of the disclosure;
FIG. 4 is a flowchart illustrating a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data and a target skinning deformation coefficient according to an embodiment of the present disclosure;
fig. 5 shows examples of faces and face models involved in a face reconstruction method provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a face reconstruction apparatus provided in an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that face reconstruction methods can establish a virtual face three-dimensional model according to a real face or according to user preference. When face reconstruction is based on a face in a portrait image, feature extraction is generally performed on the face contained in the portrait image to obtain face contour features; these features are then matched with features in a pre-generated virtual face three-dimensional model, and fused with that model based on the matching result to obtain a virtual face three-dimensional model corresponding to the face in the portrait image. However, the accuracy of matching the face contour features against the pre-generated model is low, so the matching error between the virtual three-dimensional model and the face contour features is large, which easily leads to low similarity between the fused virtual face three-dimensional model and the face image.
Based on this research, the present disclosure provides a face reconstruction method: a first real face model is fitted with a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models, and a target virtual face model corresponding to the target image is generated using the target coefficients, the virtual face models with preset styles respectively corresponding to the plurality of second real face models, and the target skin deformation coefficient. The target coefficients thus serve as a medium to establish an association relationship between the plurality of second real face models and the first real face model, and this relationship transfers to the association between the virtual face models established based on the second real face models and the target virtual face model established based on the first real face model. Meanwhile, the target skin deformation coefficient can characterize how the face skin in the target image deforms, i.e., it can adaptively adjust for existing plumpness differences. The target virtual face model determined based on the target coefficients and the virtual face models therefore has both the preset style and the characteristics of the original face corresponding to the first real face model, reflects the plumpness characteristics of the original face, and has higher similarity with that original face.
The above drawbacks were identified by the inventors through practice and careful study; therefore, the process of discovering the above problems, and the solutions proposed below for them, should be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a face reconstruction method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the face reconstruction method provided in the embodiments of the present disclosure is generally a computer device with certain computing power, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the face reconstruction method may be implemented by a processor calling computer readable instructions stored in a memory.
The following explains the face reconstruction method provided by the embodiment of the present disclosure.
Referring to fig. 1, which is a flowchart of a face reconstruction method provided in the embodiment of the present disclosure, the method includes steps S101 to S104, where:
s101: generating a first real face model based on the target image;
s102: fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
s103: generating target bone data and a target skin deformation coefficient based on target coefficients respectively corresponding to the plurality of second real face models and virtual face models with preset styles respectively corresponding to the plurality of second real face models;
s104: and generating a target virtual human face model corresponding to the first real human face model based on the target skeleton data and the target skin deformation coefficient.
The present disclosure provides a face reconstruction method that uses the target coefficients as a medium to establish an association relationship between the plurality of second real face models and the first real face model; this relationship can characterize the association between the virtual face models established based on the second real face models and the target virtual face model established based on the first real face model. Meanwhile, the target skin deformation coefficient characterizes how the face skin in the target image deforms, for example plumpness differences that exist even when bones are identical. The target virtual face model determined based on the target coefficients and the virtual face models thus has the preset style and the characteristics of the original face corresponding to the first real face model, reflects the plumpness characteristics of the original face, and has higher similarity with the original face corresponding to the first real face model.
The following describes the details of S101 to S104.
For the above S101, the target image is, for example, an acquired image including a human face, or an image including a human face captured when an object is photographed by an image acquisition device such as a camera. In this case, any face included in the image may be determined as the original face, i.e., the object of face reconstruction.
Specifically, when the face reconstruction method provided by the embodiment of the present disclosure is applied to different scenes, the target image acquisition methods are also different.
For example, when the face reconstruction method is applied to a game, an image including a game player's face may be acquired by an image acquisition device installed in the game device, or selected from an album in the game device; the acquired image is then used as the target image.
For another example, when the face reconstruction method is applied to a terminal device such as a mobile phone, a camera of the terminal device may collect an image including the user's face, an image including the user's face may be selected from an album of the terminal device, or such an image may be received from another application installed on the terminal device.
For another example, when the face reconstruction method is applied to a live broadcast scene, a video frame image including a face may be determined from a plurality of frame video frame images included in a video stream acquired by a live broadcast device; and the video frame image containing the human face is taken as a target image. Here, the target image may have, for example, a plurality of frames; the multiple frames of target images may be obtained by sampling a video stream, for example.
In generating the first real face model based on the target image, for example, the following manner may be adopted: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain a first real face model.
Here, when performing three-dimensional face reconstruction on the original face included in the target image, for example, a three-dimensional morphable face model (3D Morphable Model, 3DMM) may be used to obtain the first real face model corresponding to the original face. The first real face model includes, for example, position information of each of a plurality of key points of the original face in the target image in a preset camera coordinate system.
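For reference, a 3DMM expresses a face as a linear statistical model; the sketch below shows the standard 3DMM formulation (the regression of the coefficients from the image is omitted, and all array names are illustrative assumptions, not the patent's):

```python
import numpy as np

def reconstruct_3dmm(s_mean, b_id, b_exp, c_id, c_exp):
    """Standard 3DMM: S = S_mean + B_id @ c_id + B_exp @ c_exp.

    s_mean: (3K,) mean face key points; b_id: (3K, n_id) identity basis;
    b_exp: (3K, n_exp) expression basis; c_id / c_exp: fitted coefficients.
    Returns (K, 3) key-point positions, e.g. in a preset camera coordinate system.
    """
    return (s_mean + b_id @ c_id + b_exp @ c_exp).reshape(-1, 3)
```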
For the above S102, the second real face model is generated based on the reference image including the reference face. Wherein, the reference human faces in different reference images can be different; for example, a plurality of persons different in at least one of sex, age, skin color, fat and thin degree, and the like may be determined, and for each of the plurality of persons, a face image of each person is acquired, and the acquired face image is used as a reference image. Therefore, the second real face model obtained based on the reference image can cover a wider face appearance characteristic as much as possible.
The reference faces include, for example, faces corresponding to N different objects (N is an integer greater than 1). Illustratively, N photographs respectively corresponding to the N different objects may be obtained by photographing each of them, with each photograph corresponding to one reference face. The N photographs may then be used as N reference images; alternatively, N reference images may be determined from a plurality of pre-shot images including different human faces.
Illustratively, the method of generating a plurality of second real face models comprises: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the multiple reference images, performing three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
The method for reconstructing the three-dimensional face of the reference face is similar to the method for reconstructing the three-dimensional face of the original face, and is not described herein again. The obtained second real face model comprises position information of each key point in a plurality of key points of the reference face in the reference image in a preset camera coordinate system. At this time, the coordinate system of the second real face model and the coordinate system of the first real face model may be the same coordinate system.
When fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models, for example, the following method may be adopted: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients respectively corresponding to the plurality of second real face models.
Illustratively, the model data corresponding to the first real face model may be represented as D_a, and the model data corresponding to the N second real face models as D_b_i (i ∈ [1, N]), where D_b_i represents the i-th of the N second real face models.
Least squares processing is performed on D_a against D_b_1 through D_b_N to obtain N fitting values, expressed as α_i (i ∈ [1, N]), where α_i represents the fitting value corresponding to the i-th second real face model. Using the N fitting values, a target coefficient Alpha may be determined, which may be represented, for example, by a coefficient matrix, i.e., Alpha = [α_1, α_2, …, α_N].
Here, in fitting the first real face model with the second real face models, the data obtained by weighting and summing the second real face models with the target coefficients should be as close as possible to the data of the first real face model.
The target coefficients may be regarded as the expression coefficients of each second real face model when the plurality of second real face models are used to express the first real face model. That is, using the fitting values respectively corresponding to the plurality of second real face models as expression coefficients, the second real face models can be transformed and fitted to the first real face model.
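A minimal sketch of this fitting step, assuming each face model is flattened into a vector of key-point coordinates (the unconstrained least-squares form is an assumption; the patent does not specify a solver):

```python
import numpy as np

def fit_target_coefficients(d_a: np.ndarray, d_b: np.ndarray) -> np.ndarray:
    """Solve min_alpha || d_b @ alpha - d_a ||^2.

    d_a: (K,) model data D_a of the first real face model.
    d_b: (K, N) model data D_b_1 ... D_b_N of the N second real face models,
         one model per column.
    Returns the target coefficient Alpha = [alpha_1, ..., alpha_N].
    """
    alpha, *_ = np.linalg.lstsq(d_b, d_a, rcond=None)
    return alpha

# Usage: d_b @ fit_target_coefficients(d_a, d_b) approximates d_a.
```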
For the above S103, the preset style may be, for example, a cartoon style, a classical style, an abstract style, or the like, and may be set according to actual needs. For example, when the preset style is a cartoon style, the virtual face model with the preset style may be a virtual face model with a particular cartoon style.
Here, the virtual face model includes: skeleton data, skin data, and the skin deformation coefficients of the virtual face model's skin data relative to the pre-generated standard skin data of the standard virtual face model.
Referring to fig. 2, an embodiment of the present disclosure provides a specific method for generating the virtual face model corresponding to each second real face model, including:
s201: and generating an intermediate virtual face model with a preset style corresponding to each second real face model in the plurality of second real face models.
Here, the method of generating the intermediate virtual face model having the preset style corresponding to each of the plurality of second real face models includes, for example, at least one of the following (a1) and (a 2):
(a1) Taking as an example obtaining the intermediate virtual face model corresponding to one second real face model: a virtual face image having the reference face features and the preset style can be made based on the reference image, and three-dimensional modeling is performed on the virtual face in the virtual face image to obtain the skeleton data and skin data of that virtual face.
Wherein the bone data includes: bone rotation data, bone scaling data, and bone position data, in a preset coordinate system, of a plurality of bones preset for the virtual face. Here, the plurality of bones may be divided into multiple levels, for example the root bone, the facial-feature bones, and detail bones of the facial features; the facial-feature bones may include eyebrow bones, nasal bones, zygomatic bones, mandible bones, mouth bones, etc., and the detail bones can be further subdivided. The division can be set according to the requirements of virtual images of different styles, which is not limited here.
The skin data includes: position information, in a preset model coordinate system, of a plurality of position points on the surface of the virtual face, and association relation information between each position point and at least one of the plurality of bones. Here, the model coordinate system is a three-dimensional coordinate system established for the virtual face model.
The virtual model obtained by three-dimensional modeling of the virtual face in the virtual face image is then taken as the intermediate virtual face model corresponding to the second real face model.
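A minimal sketch of how the bone data and skin data described in (a1) might be organized; the field names are illustrative assumptions, not the patent's:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Bone:
    name: str                 # e.g. "root", "nasal", "mandible"
    parent: Optional[str]     # bone hierarchy; the root bone has no parent
    position: Vec3            # bone position data in the preset coordinate system
    rotation: Vec3            # bone rotation data (e.g. axis-angle)
    scaling: Vec3             # bone scaling data

@dataclass
class SkinVertex:
    position: Vec3            # position point in the model coordinate system
    bone_weights: Dict[str, float] = field(default_factory=dict)  # association with one or more bones

@dataclass
class VirtualFaceModel:
    bones: List[Bone]
    skin: List[SkinVertex]
```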
(a2) A standard virtual face model with the preset style is generated in advance. The standard virtual face model includes standard bone data, standard skinning data, and the association relationship between the standard bone data and the standard skinning data. Based on the face features of the reference face corresponding to each of the plurality of reference images, the standard bone data in the standard virtual face model is adjusted so that the adjusted standard virtual face model has the preset style while also including the features of the reference face in the reference image. Then, based on the association relationship between the standard bone data and the standard skinning data, the standard skinning data is adjusted and the feature information of the reference face is added to it; the intermediate virtual face model corresponding to the second real face model is generated based on the modified standard bone data and the modified standard skinning data.
Here, for the specific data representation of the intermediate virtual face model, refer to (a1) above; details are not repeated here.
S202: and generating skin deformation coefficients of the virtual human face model corresponding to each second real human face model relative to the standard virtual human face model based on a plurality of groups of preset skin deformation coefficients relative to the standard virtual human face model.
Here, each of the plurality of sets of skin deformation coefficients for the standard virtual face model is an adjustment coefficient that, while the skeleton of the standard virtual face model is left unchanged, adjusts only at least some of the position points in the standard skin data that correspond to specific positions of the standard virtual face model, such as the cheekbones.
Each set of skin deformation coefficients represents the result of adjusting the positions, in the model coordinate system, of at least some position points in the standard skin data, so that the parts of the standard virtual face model corresponding to the adjusted position points appear fatter or thinner.
When the skin deformation coefficients corresponding to a reference face are combined from the multiple sets of preset skin deformation coefficients, the multiple sets can, for example, be fitted so that the fitted result is similar to the face shape of the reference face.
S203: and adjusting the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficient, and generating the virtual face model of each second real face model based on the adjusted intermediate skin data and the intermediate skeleton data of the intermediate virtual face model.
For example, in one possible implementation, R sets of preset skin deformation coefficients Blendshape may be obtained; here, each set of preset skin deformation coefficients includes deformation coefficient values corresponding to a plurality of position points in the skin data. For example, if there are W position points in the skin data and each position point corresponds to one deformation coefficient value, the dimension of each of the R sets of preset skin deformation coefficients is W.
Blendshape_i (i ∈ [1, R]) denotes the i-th set of preset skin deformation coefficients. The plumpness of the standard virtual face model is modified using the R sets of preset skin deformation coefficients, so as to obtain R standard virtual face models with adjusted plumpness features.
When a virtual face model is generated, the R sets of preset skin deformation coefficients can be combined to obtain the skin deformation coefficients of that virtual face model. Here, for example, corresponding weights may be assigned to the different preset skin deformation coefficients, and the R sets may be weighted-summed with these weights to obtain the skin deformation coefficient of a given virtual face model.
Illustratively, when N second real face models are generated in advance and R sets of preset skin deformation coefficients are obtained, the skin deformation coefficient Blendshape_i of the i-th virtual face model has dimension R × W. The skin deformation coefficients respectively corresponding to the N second real face models can thus form a matrix of dimension N × R × W, which contains the skin deformation coefficients of the virtual face model corresponding to each of the N second real face models.
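A sketch of this combination, with the per-set weights as assumed inputs; keeping the per-set contributions separate matches the stated R × W dimension:

```python
import numpy as np

R, W = 5, 1000                                     # illustrative sizes only
preset_blendshapes = np.random.rand(R, W)          # R sets of preset skin deformation coefficients
set_weights = np.array([0.1, 0.4, 0.0, 0.3, 0.2])  # weights for one virtual face model (assumed)

# Weighting each preset set keeps an (R, W) coefficient per virtual face model;
# stacking the results for N models gives the N x R x W matrix described above.
skin_coeff = set_weights[:, None] * preset_blendshapes   # (R, W)
```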
In addition, when the skin deformation coefficient is used for adjusting skin data in the intermediate virtual human face model, fine adjustment can be performed on skeleton data of the intermediate virtual human face model, and the face detail characteristics of the generated virtual human face model are optimized, so that the generated virtual human face model has higher similarity with a reference human face.
After the virtual face models corresponding to the N second real face models are obtained, the target virtual face model can be fitted by using the N virtual face models and the target coefficient, and target skeleton data and target skin deformation data are generated.
Specifically, the target virtual face model includes: target bone data, and target skinning data; wherein the target skinning data is determined based on the target bone data and target skinning deformation data of the target virtual face model.
When the target bone data is obtained based on the target coefficients corresponding to the plurality of second real face models and the bone data corresponding to the plurality of virtual face models, for example, the method includes: and based on the target coefficients corresponding to the plurality of second real face models, carrying out interpolation processing on the bone data respectively corresponding to the plurality of virtual face models to obtain target bone data.
The bone data corresponding to each of the virtual face models includes at least one of: bone rotation data, bone position data, and bone scaling data corresponding to each of a plurality of face bones of the virtual face. The obtained target bone data includes at least one of: target bone position data, target bone scaling data, and target bone rotation data.
For example, when the target bone data is obtained by interpolating the bone data corresponding to each of the plurality of virtual face models based on the target coefficients corresponding to the plurality of second real face models, at least one of the following (b1) to (b3) may be used:
(b1) and performing interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain target bone position data.
(b2) And based on the target coefficients respectively corresponding to the second real face models, carrying out interpolation processing on the skeleton scaling data respectively corresponding to the virtual face models to obtain target skeleton scaling data.
(b3) Converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and based on the target coefficients respectively corresponding to the second real face models, performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models to obtain target bone rotation data.
In implementation, when the bone position data and the bone scaling data are acquired according to methods (b1) and (b2), the method further includes determining, based on the plurality of second real face models, the bone levels and the local coordinate system corresponding to each bone level. When performing bone-level layering on the face model, the bone levels may, for example, be determined directly according to a biological skeleton layering method, or according to the requirements of face reconstruction; the specific layering method may be determined according to the actual situation and is not described here again.
After the bone levels are determined, a bone coordinate system corresponding to each bone level can be established based on each bone level. Illustratively, the bone at each level may be denoted Bone_i.
At this time, the bone position data may include the three-dimensional coordinate values of each level of bone Bone_i in the virtual face model under its corresponding bone coordinate system; the bone scaling data may include, for each level of bone Bone_i in the virtual face model, a percentage under the corresponding bone coordinate system that characterizes the degree of bone scaling, for example 80%, 90%, or 100%.
In one possible implementation, the bone position data corresponding to the i-th virtual face model is denoted Pos_i, and the corresponding bone scaling data Scaling_i. The bone position data Pos_i contains the position data of multiple levels of bones, and the bone scaling data Scaling_i contains the scaling data of multiple levels of bones.
The target coefficient corresponding to the i-th second real face model is denoted a_i. Based on the target coefficients corresponding to the M second real face models, interpolation processing is performed on the bone position data Pos_i respectively corresponding to the M virtual face models, to obtain the target bone position data.
For example, the target coefficients may be used as the weights of the corresponding virtual face models, and the bone position data Pos_i corresponding to the virtual face models may be weighted and summed to realize the interpolation processing. At this time, the target bone position data Pos_new satisfies the following formula (1):

Pos_new = Σ_{i=1}^{M} (a_i × Pos_i)    (1)
Similarly, with the bone scaling data respectively corresponding to the M virtual face models denoted Scaling_i, the target coefficients respectively corresponding to the M second real face models may be used as the weights of the corresponding virtual face models, and the bone scaling data respectively corresponding to the M virtual face models may be weighted and summed to realize the interpolation processing; in this case, the target bone scaling data Scaling_new satisfies the following formula (2):

Scaling_new = Σ_{i=1}^{M} (a_i × Scaling_i)    (2)
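As an illustration of formulas (1) and (2), the weighted summation can be written as a single tensor contraction. The following sketch is a minimal NumPy version under assumed shapes (the array layout is not fixed by the disclosure):

```python
import numpy as np

def interpolate_linear(alpha: np.ndarray, data: np.ndarray) -> np.ndarray:
    """Weighted summation over M models, as in formulas (1) and (2).

    alpha: target coefficients a_i, shape (M,).
    data:  per-model bone data, e.g. shape (M, K, 3) for K bones.
    """
    # Sum_i a_i * data_i, contracting over the model axis.
    return np.tensordot(alpha, data, axes=1)

# Pos_new     = interpolate_linear(alpha, positions)   # formula (1)
# Scaling_new = interpolate_linear(alpha, scalings)    # formula (2)
```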
For method (b3), the bone rotation data may include, for each bone in the virtual face model, vector values in the corresponding bone coordinate system that characterize the rotation transformation, including the rotation axis and the rotation angle. In one possible implementation, the bone rotation data corresponding to the i-th virtual face model is denoted Trans_i. Because rotation angles in the bone rotation data suffer from the gimbal-lock problem, the bone rotation data is converted into quaternion data, and the quaternion data is regularized (normalized) to obtain regularized quaternion data, denoted Trans'_i, so as to prevent overfitting when weighted summation is performed directly on the quaternion data.
When interpolation processing is performed on the regularized quaternion data Trans'_i respectively corresponding to the M virtual face models based on the target coefficients respectively corresponding to the M second real face models, the regularized quaternion data corresponding to the M virtual face models may be weighted and summed with the target coefficients corresponding to the M second real face models as weights; in this case, the target bone rotation data Trans_new satisfies the following formula (3):

Trans_new = Σ_{i=1}^{M} (a_i × Trans'_i)    (3)
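A sketch of formula (3) in NumPy follows. Two points are assumptions of this sketch rather than statements of the disclosure: all quaternions are taken to lie in the same hemisphere (q and -q encode the same rotation), and the blended result is re-normalized at the end, a standard extra step not spelled out in the text above:

```python
import numpy as np

def blend_rotations(alpha: np.ndarray, quats: np.ndarray) -> np.ndarray:
    """quats: one quaternion per model per bone, shape (M, K, 4)."""
    # Regularize: normalize each quaternion to unit length (Trans'_i).
    q = quats / np.linalg.norm(quats, axis=-1, keepdims=True)
    # Weighted summation with the target coefficients, as in formula (3).
    blended = np.tensordot(alpha, q, axes=1)
    # Re-normalize so each blended result is again a valid unit quaternion.
    return blended / np.linalg.norm(blended, axis=-1, keepdims=True)
```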
In addition, other interpolation methods may also be adopted to obtain the target bone position data Pos_new, the target bone scaling data Scaling_new, and the target bone rotation data Trans_new; the specific details may be determined according to actual needs, and the disclosure is not limited thereto.
After the target bone position data Pos_new, the target bone scaling data Scaling_new, and the target bone rotation data Trans_new are obtained according to (b1), (b2), and (b3) above, the target bone data, denoted Bone_new, may be determined. Illustratively, the target bone data may be represented in vector form as (Pos_new, Scaling_new, Trans_new).
When generating the target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models, for example, the following method may be adopted: generating the target skin deformation coefficient of the target virtual face model relative to the standard virtual face model based on the target coefficients respectively corresponding to the plurality of second real face models and the skin deformation coefficients respectively included in the plurality of virtual face models. Here, each virtual face model includes the skin deformation coefficient of its skin data relative to the standard skin data of the pre-generated standard virtual face model.
Referring to fig. 3, an embodiment of the present disclosure further provides a specific method for obtaining a target skin deformation coefficient, including:
S301: performing normalization processing on the target coefficients respectively corresponding to the plurality of second real face models.
When normalizing the target coefficients respectively corresponding to the plurality of second real face models, a normalization function (Softmax) may be used, for example, to obtain probability values representing the ratio of each target coefficient among the plurality of target coefficients; the normalized target coefficient is denoted Alpha_Norm.
Exemplarily, when there are N second real face models, the dimension of the normalized target coefficient Alpha_Norm is N.
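A minimal sketch of the Softmax normalization just described, assuming the target coefficients arrive as a 1-D NumPy array:

```python
import numpy as np

def normalize_coefficients(alpha: np.ndarray) -> np.ndarray:
    """Softmax normalization yielding Alpha_Norm: an N-dimensional vector
    of ratios that sums to 1."""
    shifted = alpha - alpha.max()   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()
```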
S302: and based on the normalized target coefficient, performing interpolation processing on skin deformation coefficients respectively included by the plurality of virtual face models to obtain a target skin deformation coefficient.
The target coefficients respectively corresponding to the second real face models are used to fit the skin deformation coefficients included in the virtual face models; the obtained fitting result can represent the influence of each second real face model on the virtual face models, and the target skin deformation coefficient of the target virtual face model relative to the standard virtual face model is thereby generated. The target skin deformation coefficient may, for example, adjust how fat or thin the face is, so that the obtained target virtual face model conforms to the fatness or thinness of the face in the target image.
For example, based on the normalized target coefficient, weighted summation may be performed on the skin deformation coefficients corresponding to the multiple virtual face models, so as to implement a process of performing interpolation processing on the skin deformation coefficients corresponding to the multiple virtual face models, and obtain the target skin deformation coefficient.
The target coefficient Alpha_Norm obtained by the normalization processing can be represented as a first vector of dimension N, and the skin deformation coefficients respectively corresponding to the R virtual face models can form a second vector of dimension N×R; at this time, the weighted summation of the skin deformation coefficients respectively corresponding to the plurality of virtual face models may be implemented, for example, by directly multiplying the first vector and the second vector to obtain the target skin deformation coefficient.
Illustratively, the target skin deformation coefficient, denoted Blendshape', may be obtained using the following formula; Blendshape' satisfies formula (4):

Blendshape' = Blendshape × Alpha_Norm    (4)
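A minimal sketch of formula (4), assuming the skin deformation coefficients are stacked so that one axis runs over the N models; the exact layout is not fixed by the text above:

```python
import numpy as np

def target_blendshape(blendshapes: np.ndarray, alpha_norm: np.ndarray) -> np.ndarray:
    """Formula (4): Blendshape' = Blendshape × Alpha_Norm.

    blendshapes: assumed shape (D, N) — D entries per coefficient set,
                 one column per second real face model.
    alpha_norm:  normalized target coefficients, shape (N,).
    """
    # The matrix-vector product blends the N models' coefficients into one set.
    return blendshapes @ alpha_norm
```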
for the above S104, referring to fig. 4, an embodiment of the present disclosure further provides a specific method for generating a target virtual face model corresponding to a first real face model based on target bone data and a target skin deformation coefficient, including:
S401: performing position transformation processing on the standard skin data based on the target bone data and the association relationship between the standard bone data and the standard skin data in the standard virtual face model, to generate intermediate skin data.
The association relationship between the standard bone data and the standard skin data in the standard virtual face model is, for example, the association relationship between the standard bone data corresponding to each level of bone and the standard skin data. Based on this association relationship, the skin can be bound to the bones of the virtual face model.
Using the target bone data and the association relationship between the standard bone data and the standard skin data in the standard virtual face model, position transformation processing may be performed on the skin data at the positions corresponding to the multiple levels of bones, so that the bone positions reflected in the generated skin data match the positions in the corresponding target bone data; the skin data after the position transformation processing may then be taken, for example, as the generated intermediate skin data.
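The disclosure does not pin down a particular transformation; one common realization of such a bone-driven position transformation is linear blend skinning, sketched below under assumed shapes (the vertices, binding weights, and 4×4 bone matrices are all illustrative):

```python
import numpy as np

def reposition_skin(vertices: np.ndarray, weights: np.ndarray,
                    standard_mats: np.ndarray, target_mats: np.ndarray) -> np.ndarray:
    """vertices: (V, 3) standard skin data; weights: (V, K) binding of each
    vertex to K bones; standard_mats / target_mats: (K, 4, 4) bone transforms
    built from the standard and target bone data."""
    # Homogeneous coordinates for the skin vertices.
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (V, 4)
    # Transform that carries each bone from the standard pose to the target pose.
    deltas = target_mats @ np.linalg.inv(standard_mats)                     # (K, 4, 4)
    per_bone = np.einsum('kij,vj->vki', deltas, homo)                       # (V, K, 4)
    # Blend the per-bone results with the binding weights (linear blend skinning).
    out = np.einsum('vk,vki->vi', weights, per_bone)
    return out[:, :3]
```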
S402: and carrying out deformation processing on the intermediate skin data based on the target skin deformation coefficient to obtain the target skin data.
S403: and forming a target virtual human face model based on the target skeleton data and the target skin data.
Here, the target skeleton data determines each level of bone used to construct the target virtual face model, and the target skin data determines the skin bound to those bones, so that the target virtual face model is formed.
The method for determining the target virtual face model includes at least one of the following: directly establishing the target virtual face model based on the target skeleton data and the target skin data; or replacing the skeleton data of each level in the first real face model with the corresponding target skeleton data of each bone level and establishing the target virtual face model with the target skin data. The specific method for establishing the target virtual face model may be determined according to the actual situation, and is not described here again.
The embodiment of the present disclosure also provides a description of the specific process of acquiring the target virtual face model Mod_Aim corresponding to the original face A in the target image Pic_A.
The steps of determining the target virtual face model Mod_Aim include the following (c1) to (c6):
(c1) Material preparation; the material preparation includes: preparing the material of the standard virtual face model and preparing the material of the virtual pictures.
When preparing the material of the standard virtual face model, taking a cartoon style as the preset style as an example, a standard virtual face model Mod_Base with the cartoon style is first set.
Then, 9 groups of preset skin deformation coefficients are generated; the 9 groups of skin deformation coefficients are used to apply changes at different positions and/or of different degrees to the standard skin data of the standard virtual face model, so that the fatness or thinness of the standard virtual face can be adjusted and most face-shape features can be covered.
When preparing the material of the virtual pictures, 24 virtual pictures Pic_1 to Pic_24 are collected; the virtual faces B_1 to B_24 in the 24 collected virtual pictures are balanced in the number of males and females and cover as wide a distribution of facial features as possible.
(c2) Face model reconstruction; the face model reconstruction includes: generating the first real face model Mod_fst from the original face A in the target image Pic_A, and generating the second real face models Mod_snd-1 to Mod_snd-24 from the virtual faces B_1 to B_24 in the virtual pictures.
After the original face A is determined, to generate the first real face model Mod_fst, the face in the target image is first rectified and cropped, and a pre-trained RGB reconstruction neural network is then used to generate the first real face model Mod_fst corresponding to the original face A. Similarly, the pre-trained RGB reconstruction neural network can be used to determine the second real face models Mod_snd-1 to Mod_snd-24 respectively corresponding to the virtual faces B_1 to B_24.
After the second real face models Mod_snd-1 to Mod_snd-24 are determined, the method further includes: determining, with the preset style and by manual adjustment, the virtual face models Mod_fic-1 to Mod_fic-24 with the preset style respectively corresponding to the second real face models Mod_snd-1 to Mod_snd-24.
In addition, skin deformation coefficients of 24 virtual face models are generated based on 9 groups of preset skin deformation coefficients.
(c3) Fitting; the fitting includes: fitting the first real face model with the plurality of second real face models to obtain the target coefficients alpha = [alpha_snd-1, alpha_snd-2, …, alpha_snd-24] respectively corresponding to the plurality of second real face models.
When the plurality of second real face models are used to fit the first real face model, a least squares method is selected for the fitting, yielding the 24-dimensional coefficient alpha.
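A hedged sketch of such a least-squares fit; the flattened-vertex representation and the variable names here are assumptions of the sketch, not the disclosure's exact formulation:

```python
import numpy as np

def fit_alpha(B: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Solve min_alpha || B.T @ alpha - f ||^2.

    B: vertices of the M second real face models, flattened, shape (M, 3V).
    f: vertices of the first real face model, flattened, shape (3V,).
    Returns one coefficient per model (24-dimensional in this example).
    """
    alpha, *_ = np.linalg.lstsq(B.T, f, rcond=None)
    return alpha
```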
(c4) Determining a target skin deformation coefficient; when the target skin deformation coefficient is determined, the following (c4-1), (c4-2) and (c4-3) are also included.
(c4-1) Reading the skin deformation coefficients blendshape_fic-1 to blendshape_fic-24 respectively corresponding to the virtual face models Mod_fic-1 to Mod_fic-24 with the preset style;
(c4-2) carrying out normalization processing on the target coefficients alpha respectively corresponding to the plurality of second real face models;
(c4-3) Performing interpolation processing on the skin deformation coefficients blendshape_fic-1 to blendshape_fic-24 respectively included in the plurality of virtual face models, using the normalized target coefficients alpha respectively corresponding to the plurality of second real face models, to generate the target skin deformation coefficient blendshape_Aim.
(c5) Determining target bone data; the target bone data is determined through the following (c5-1) and (c5-2).
(c5-1) Reading the bone data; the bone data includes: the bone position data Pos_i, bone scaling data Scaling_i, and bone rotation data Trans_i of each level of bone Bone_i respectively corresponding to the virtual face models Mod_fic-1 to Mod_fic-24 with the preset style.
(c5-2) Performing interpolation processing on the bone data respectively corresponding to the virtual face models Mod_fic-1 to Mod_fic-24 with the preset style, using the target coefficients alpha, to generate the target bone data Bone_new, including the target bone position data Pos_new, the target bone scaling data Scaling_new, and the target bone rotation data Trans_new.
(c6) Generating the target virtual face model.
Based on the target skeleton data and the target skin deformation coefficient, the target skeleton data is substituted into the standard virtual face model Mod_Base, the skin is fitted to the skeleton using the target skin deformation coefficient blendshape_Aim, and the target virtual face model corresponding to the first real face model is generated.
Referring to fig. 5, an example of the specific data used in the processes included in the above specific example is provided for an embodiment of the present disclosure. In fig. 5, a represents the target image and 51 represents the original face A; b shows a schematic diagram of the standard virtual face model with the cartoon style; c is a schematic diagram of the relative positional relationship of the position points in the target skin data obtained after the position points in the standard skin data are adjusted with the target skin deformation coefficient; and d shows the generated target virtual face model corresponding to the original face A.
Here, it should be noted that (c1) - (c6) above are only one specific example of a method for completing face reconstruction, and do not limit the face reconstruction method provided in the embodiments of the present disclosure.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a face reconstruction device corresponding to the face reconstruction method, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to the face reconstruction method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 6, a schematic diagram of a face reconstruction apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: a first generation module 61, a processing module 62, a second generation module 63, and a third generation module 64; wherein:
a first generating module 61 for generating a first real face model based on the target image;
a processing module 62, configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models respectively;
a second generating module 63, configured to generate target bone data and a target skinning deformation coefficient based on target coefficients corresponding to the plurality of second real face models respectively and virtual face models having a preset style corresponding to the plurality of second real face models respectively;
a third generating module 64, configured to generate a target virtual face model corresponding to the first real face model based on the target skeleton data and the target skinning deformation coefficient.
In an alternative embodiment, the virtual face model includes: the skin deformation coefficient of the skin data of the virtual human face model relative to the pre-generated standard skin data of the standard virtual human face model;
the second generating module 63 is configured to, when generating a target skin deformation coefficient based on the target coefficients corresponding to the plurality of second real face models respectively and the virtual face models having the preset styles and corresponding to the plurality of second real face models respectively,: and generating the target skin deformation coefficient of the target skin data of the target virtual face model relative to the standard skin data based on the target coefficients corresponding to the second real face models respectively and the skin deformation coefficients included in the virtual face models respectively.
In an optional embodiment, the second generating module 63, when generating the target skin deformation coefficients of the target skin data of the target virtual face model relative to the standard skin data based on the target coefficients corresponding to the plurality of second real face models respectively and the skin deformation coefficients included in the plurality of virtual face models respectively, is configured to: normalizing the target coefficients respectively corresponding to the plurality of second real face models; and obtaining the target skin deformation coefficient based on the target coefficient after normalization processing and the skin deformation coefficients respectively included by the virtual face model.
In an alternative embodiment, the third generating module 64, when generating the target virtual face model corresponding to the first real face model based on the target bone data and the target skinning deformation coefficient, is configured to: based on the target skeleton data and the incidence relation between the standard skeleton data and the standard skinning data in the standard virtual human face model, carrying out position transformation processing on the standard skinning data to generate intermediate skinning data; performing deformation processing on the intermediate skin data based on the target skin deformation coefficient to obtain target skin data; and constructing the target virtual human face model based on the target skeleton data and the target skinning data.
In an alternative embodiment, the target bone data includes at least one of: the target bone position data, the target bone scaling data, and the target bone rotation data;
the bone data corresponding to the plurality of virtual face models respectively comprises at least one of the following: the virtual human face comprises bone rotation data, bone position data and bone scaling data corresponding to each human face bone in a plurality of human face bones of the virtual human face.
In an alternative embodiment, the target bone data includes the target bone position data, and the second generating module 63, when generating the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset style corresponding to the second real face models respectively, is configured to: and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
In an alternative embodiment, the target bone data includes the target bone scaling data, and the second generating module 63, when generating the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset style corresponding to the second real face models respectively, is configured to: and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
In an alternative embodiment, the target bone data includes the target bone rotation data, and the second generating module 63, when generating the target bone data based on the target coefficients corresponding to the second real face models respectively and the virtual face models with the preset style corresponding to the second real face models respectively, is configured to: converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone rotation data.
In an alternative embodiment, the first generating module 61, when generating the first real face model based on the target image, is configured to: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
In an alternative embodiment, the processing module 62 pre-generates the plurality of second real face models according to the following: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the plurality of reference images, carrying out three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
In an optional embodiment, the method further includes an obtaining module 65, configured to obtain virtual face models with preset styles corresponding to the plurality of second real face models respectively by: generating an intermediate virtual face model with a preset style corresponding to each second real face model in the plurality of second real face models; generating skin deformation coefficients of the virtual face model corresponding to each second real face model relative to the standard virtual face model based on a plurality of groups of preset skin deformation coefficients relative to the standard virtual face model; and adjusting the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficient, and generating the virtual face model of each second real face model based on the adjusted intermediate skin data and the intermediate skeleton data of the intermediate virtual face model.
In an optional implementation manner, when the processing module 62 performs fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models, the processing module is configured to: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 7, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and includes:
a processor 71 and a memory 72; the memory 72 stores machine-readable instructions executable by the processor 71, the processor 71 being configured to execute the machine-readable instructions stored in the memory 72, the processor 71 performing the following steps when the machine-readable instructions are executed by the processor 71:
generating a first real face model based on the target image; fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models; generating target bone data and a target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models; and generating a target virtual face model corresponding to the first real face model based on the target skeleton data and the target skin deformation coefficient.
The memory 72 includes a memory 721 and an external memory 722; the memory 721 is also referred to as an internal memory, and temporarily stores operation data in the processor 71 and data exchanged with an external memory 722 such as a hard disk, and the processor 71 exchanges data with the external memory 722 through the memory 721.
The specific execution process of the instruction may refer to the steps of the face reconstruction method described in the embodiments of the present disclosure, and details are not repeated here.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the face reconstruction method in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the face reconstruction method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1. A face reconstruction method, comprising:
generating a first real face model based on the target image;
fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
generating target bone data and a target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models;
and generating a target virtual face model corresponding to the first real face model based on the target skeleton data and the target skin deformation coefficient.
2. The face reconstruction method of claim 1, wherein the virtual face model comprises: the skin deformation coefficient of the skin data of the virtual human face model relative to the pre-generated standard skin data of the standard virtual human face model;
generating a target skin deformation coefficient based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models, including:
and generating the target skin deformation coefficient of the target skin data of the target virtual face model relative to the standard skin data based on the target coefficients corresponding to the second real face models respectively and the skin deformation coefficients included in the virtual face models respectively.
3. The method according to claim 2, wherein the generating the target skinning deformation coefficient of the target skinning data of the target virtual face model relative to the standard skinning data based on the target coefficient corresponding to each of the plurality of second real face models and the skinning deformation coefficient included in each of the plurality of virtual face models comprises:
normalizing the target coefficients respectively corresponding to the plurality of second real face models;
and obtaining the target skin deformation coefficient based on the target coefficient after normalization processing and the skin deformation coefficients respectively included by the virtual face model.
4. The method according to any one of claims 1 to 3, wherein the generating a target virtual face model corresponding to the first real face model based on the target bone data and the target skinning deformation coefficient comprises:
based on the target skeleton data and the incidence relation between the standard skeleton data and the standard skinning data in the standard virtual human face model, carrying out position transformation processing on the standard skinning data to generate intermediate skinning data;
performing deformation processing on the intermediate skin data based on the target skin deformation coefficient to obtain target skin data;
and constructing the target virtual human face model based on the target skeleton data and the target skinning data.
5. The face reconstruction method according to any one of claims 1 to 4, wherein the target bone data comprises at least one of: the target bone position data, the target bone scaling data, and the target bone rotation data;
the bone data corresponding to the plurality of virtual face models respectively comprises at least one of the following: the virtual human face comprises bone rotation data, bone position data and bone scaling data corresponding to each human face bone in a plurality of human face bones of the virtual human face.
6. The face reconstruction method of claim 5, wherein the target bone data comprises target bone location data; generating target bone data based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models, including:
and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
7. The face reconstruction method according to claim 5 or 6, wherein the target bone data comprises target bone scaling data; generating target bone data based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models, including:
and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
8. The face reconstruction method according to any of claims 5-7, wherein the target bone data comprises target bone rotation data; generating target bone data based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models, including:
converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data;
and performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone rotation data.
9. The method according to any of claims 1-8, wherein the generating a first real face model based on the target image comprises:
acquiring a target image comprising an original face;
and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
10. The face reconstruction method according to any of claims 1 to 9, characterized in that the plurality of second real face models are pre-generated according to the following way:
acquiring a plurality of reference images comprising reference faces;
and aiming at each reference image in the plurality of reference images, carrying out three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
11. The face reconstruction method according to any one of claims 1 to 10, further comprising: acquiring virtual face models with preset styles corresponding to the plurality of second real face models respectively by adopting the following modes:
generating an intermediate virtual face model with a preset style corresponding to each second real face model in the plurality of second real face models;
generating skin deformation coefficients of the virtual face model corresponding to each second real face model relative to the standard virtual face model based on a plurality of groups of preset skin deformation coefficients relative to the standard virtual face model;
and adjusting the intermediate skin data in the intermediate virtual face model by using the skin deformation coefficient, and generating the virtual face model of each second real face model based on the adjusted intermediate skin data and the intermediate skeleton data of the intermediate virtual face model.
12. The method according to any one of claims 1 to 11, wherein the fitting process of the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models respectively comprises:
and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
13. A face reconstruction apparatus, comprising:
a first generation module for generating a first real face model based on the target image;
the processing module is used for fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
the second generation module is used for generating target bone data and target skin deformation coefficients based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models;
and the third generation module is used for generating a target virtual human face model corresponding to the first real human face model based on the target bone data and the target skin deformation coefficient.
14. A computer device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory, the processor performing the steps of the face reconstruction method according to any one of claims 1 to 12 when the machine-readable instructions are executed by the processor.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the face reconstruction method according to any one of claims 1 to 12.
CN202011337901.1A 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium Active CN112419454B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202011337901.1A CN112419454B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium
PCT/CN2021/102431 WO2022110791A1 (en) 2020-11-25 2021-06-25 Method and apparatus for face reconstruction, and computer device, and storage medium
JP2022520004A JP2023507863A (en) 2020-11-25 2021-06-25 Face reconstruction method, apparatus, computer device, and storage medium
KR1020227010819A KR20220075339A (en) 2020-11-25 2021-06-25 Face reconstruction method, apparatus, computer device and storage medium
TW110127356A TWI773458B (en) 2020-11-25 2021-07-26 Method, device, computer equipment and storage medium for reconstruction of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011337901.1A CN112419454B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112419454A true CN112419454A (en) 2021-02-26
CN112419454B CN112419454B (en) 2023-11-28

Family

ID=74842193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011337901.1A Active CN112419454B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium

Country Status (5)

Country Link
JP (1) JP2023507863A (en)
KR (1) KR20220075339A (en)
CN (1) CN112419454B (en)
TW (1) TWI773458B (en)
WO (1) WO2022110791A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610992A (en) * 2021-08-04 2021-11-05 北京百度网讯科技有限公司 Bone driving coefficient determining method and device, electronic equipment and readable storage medium
CN113808249A (en) * 2021-08-04 2021-12-17 北京百度网讯科技有限公司 Image processing method, device, equipment and computer storage medium
CN113805532A (en) * 2021-08-26 2021-12-17 福建天泉教育科技有限公司 Method and terminal for making physical robot action
CN114529640A (en) * 2022-02-17 2022-05-24 北京字跳网络技术有限公司 Moving picture generation method and device, computer equipment and storage medium
WO2022110791A1 (en) * 2020-11-25 2022-06-02 北京市商汤科技开发有限公司 Method and apparatus for face reconstruction, and computer device, and storage medium
CN114693876A (en) * 2022-04-06 2022-07-01 北京字跳网络技术有限公司 Digital human generation method, device, storage medium and electronic equipment
WO2022237249A1 (en) * 2021-05-10 2022-11-17 上海商汤智能科技有限公司 Three-dimensional reconstruction method, apparatus and system, medium, and computer device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101757642B1 (en) * 2016-07-20 2017-07-13 (주)레벨소프트 Apparatus and method for 3d face modeling
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
CN110675475A (en) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
CN111784821A (en) * 2020-06-30 2020-10-16 北京市商汤科技开发有限公司 Three-dimensional model generation method and device, computer equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5178662B2 (en) * 2009-07-31 2013-04-10 富士フイルム株式会社 Image processing apparatus and method, data processing apparatus and method, and program
US9314692B2 (en) * 2012-09-21 2016-04-19 Luxand, Inc. Method of creating avatar from user submitted image
KR101696007B1 (en) * 2013-01-18 2017-01-13 한국전자통신연구원 Method and device for creating 3d montage
JP6207210B2 (en) * 2013-04-17 2017-10-04 キヤノン株式会社 Information processing apparatus and method
CN110111417B (en) * 2019-05-15 2021-04-27 浙江商汤科技开发有限公司 Method, device and equipment for generating three-dimensional local human body model
CN110599573B (en) * 2019-09-03 2023-04-11 电子科技大学 Method for realizing real-time human face interactive animation based on monocular camera
CN111724457A (en) * 2020-03-11 2020-09-29 长沙千博信息技术有限公司 Realistic virtual human multi-modal interaction implementation method based on UE4
CN111714885A (en) * 2020-06-22 2020-09-29 网易(杭州)网络有限公司 Game role model generation method, game role model generation device, game role adjustment device and game role adjustment medium
CN112419454B (en) * 2020-11-25 2023-11-28 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium
CN112419485B (en) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101757642B1 (en) * 2016-07-20 2017-07-13 (주)레벨소프트 Apparatus and method for 3d face modeling
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
CN110675475A (en) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
CN111784821A (en) * 2020-06-30 2020-10-16 北京市商汤科技开发有限公司 Three-dimensional model generation method and device, computer equipment and storage medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
PENGRUI WANG 等: "LOW-FREQUENCY GUIDED SELF-SUPERVISED LEARNING FOR HIGH-FIDELITY 3D", 《IEEE》, 9 July 2020 (2020-07-09)
廖海斌等: "面向形变模型的三维人脸建模研究及其改进", 《武汉大学学报(信息科学版)》, no. 02, 5 February 2011 (2011-02-05)
署光等: "基于稀疏形变模型的三维卡通人脸生成", 《电子学报》, no. 08, 31 August 2010 (2010-08-31), pages 1798 - 1802

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022110791A1 (en) * 2020-11-25 2022-06-02 北京市商汤科技开发有限公司 Method and apparatus for face reconstruction, and computer device, and storage medium
WO2022237249A1 (en) * 2021-05-10 2022-11-17 上海商汤智能科技有限公司 Three-dimensional reconstruction method, apparatus and system, medium, and computer device
CN113610992A (en) * 2021-08-04 2021-11-05 北京百度网讯科技有限公司 Bone driving coefficient determining method and device, electronic equipment and readable storage medium
CN113808249A (en) * 2021-08-04 2021-12-17 北京百度网讯科技有限公司 Image processing method, device, equipment and computer storage medium
CN113805532A (en) * 2021-08-26 2021-12-17 福建天泉教育科技有限公司 Method and terminal for making physical robot action
CN114529640A (en) * 2022-02-17 2022-05-24 北京字跳网络技术有限公司 Moving picture generation method and device, computer equipment and storage medium
CN114529640B (en) * 2022-02-17 2024-01-26 北京字跳网络技术有限公司 Moving picture generation method, moving picture generation device, computer equipment and storage medium
CN114693876A (en) * 2022-04-06 2022-07-01 北京字跳网络技术有限公司 Digital human generation method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN112419454B (en) 2023-11-28
TW202221651A (en) 2022-06-01
TWI773458B (en) 2022-08-01
KR20220075339A (en) 2022-06-08
JP2023507863A (en) 2023-02-28
WO2022110791A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
CN112419454A (en) Face reconstruction method and device, computer equipment and storage medium
CN112419485B (en) Face reconstruction method, device, computer equipment and storage medium
WO2020192568A1 (en) Facial image generation method and apparatus, device and storage medium
CN110717977B (en) Method, device, computer equipment and storage medium for processing game character face
CN111354079B (en) Three-dimensional face reconstruction network training and virtual face image generation method and device
CN111784821B (en) Three-dimensional model generation method and device, computer equipment and storage medium
CN111632374B (en) Method and device for processing face of virtual character in game and readable storage medium
CN110399849A (en) Image processing method and device, processor, electronic equipment and storage medium
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
US20230073340A1 (en) Method for constructing three-dimensional human body model, and electronic device
JP2013524357A (en) Method for real-time cropping of real entities recorded in a video sequence
WO2013078404A1 (en) Perceptual rating of digital image retouching
CN111950430B (en) Multi-scale dressing style difference measurement and migration method and system based on color textures
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
CN112419144A (en) Face image processing method and device, electronic equipment and storage medium
CN114333034A (en) Face pose estimation method and device, electronic equipment and readable storage medium
CN112396693A (en) Face information processing method and device, electronic equipment and storage medium
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
CN114529640B (en) Moving picture generation method, moving picture generation device, computer equipment and storage medium
CN113095206A (en) Virtual anchor generation method and device and terminal equipment
CN112396692A (en) Face reconstruction method and device, computer equipment and storage medium
CN108717730B (en) 3D character reconstruction method and terminal
CN116311474A (en) Face image face filling method, system and storage medium
CN114612614A (en) Human body model reconstruction method and device, computer equipment and storage medium
CN114677476A (en) Face processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40040544

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant