CN112419485A - Face reconstruction method and device, computer equipment and storage medium

Info

Publication number
CN112419485A
Authority
CN
China
Prior art keywords
target
data
face
bone
virtual
Prior art date
Legal status
Granted
Application number
CN202011342169.7A
Other languages
Chinese (zh)
Other versions
CN112419485B (en)
Inventor
徐胜伟
王权
钱晨
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202011342169.7A (granted as CN112419485B)
Publication of CN112419485A
Priority to KR1020237021453A (published as KR20230110607A)
Priority to PCT/CN2021/102404 (published as WO2022110790A1)
Priority to JP2022519295A (published as JP2023507862A)
Priority to TW110127359A (published as TWI778723B)
Application granted
Publication of CN112419485B
Status: Active

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T 13/00 Animation
                    • G06T 13/20 3D [Three Dimensional] animation
                        • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/30 Subject of image; Context of image processing
                        • G06T 2207/30196 Human being; Person
                            • G06T 2207/30201 Face
    • A HUMAN NECESSITIES
        • A63 SPORTS; GAMES; AMUSEMENTS
            • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
                • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
                    • A63F 13/50 Controlling the output signals based on the game progress
                    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
                        • A63F 13/65 Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
                            • A63F 13/655 Generating or modifying game content by importing photos, e.g. of the player
                • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
                    • A63F 2300/60 Methods for processing data by generating or executing the game program
                        • A63F 2300/69 Involving elements of the real world in the game world, e.g. measurement in live races, real video
                            • A63F 2300/695 Imported photos, e.g. of the player

Abstract

The present disclosure provides a face reconstruction method and apparatus, a computer device, and a storage medium, wherein the method comprises: generating a first real face model based on a target image; fitting the first real face model with a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models; and generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and on virtual face models with a preset style respectively corresponding to the plurality of second real face models.

Description

Face reconstruction method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a face reconstruction method and apparatus, a computer device, and a storage medium.
Background
Face reconstruction builds a virtual three-dimensional face model from a real face or from the user's own preferences, and is widely used in games, animation, virtual social applications, and the like. For example, in a game, a player can use the face reconstruction system provided by the game program to generate a virtual three-dimensional face model from the real face in an image the player supplies, and then participate in the game with a stronger sense of immersion using the created model.
At present, when face reconstruction is performed on a face contained in a face image, facial contour features are generally extracted from the image and then matched and fused with a pre-generated virtual three-dimensional model to produce the virtual three-dimensional face model. However, the matching accuracy of the facial contour features is low, so the similarity between the generated virtual three-dimensional face model and the real face image is also low.
Disclosure of Invention
The embodiment of the disclosure at least provides a face reconstruction method, a face reconstruction device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a face reconstruction method, including: generating a first real face model based on a target image; fitting the first real face model using a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models; and generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and on the virtual face models with a preset style respectively corresponding to the plurality of second real face models.
In this way, the target coefficients serve as a medium that establishes an association between the plurality of second real face models and the first real face model. This association also characterizes the relationship between the virtual face models built from the second real face models and the target virtual face model built from the first real face model, so the target virtual face model determined from the target coefficients carries the preset style while retaining the features of the original face corresponding to the first real face model, and the generated model has a high similarity to that original face.
In an optional implementation manner, the generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models includes: determining target bone data based on target coefficients respectively corresponding to the plurality of second real face models and bone data respectively corresponding to the plurality of virtual face models; generating the target virtual face model based on the target skeletal data.
In an alternative embodiment, the generating the target virtual face model based on the target skeletal data includes: based on the target skeleton data and the incidence relation between the standard skeleton data and the standard skinning data in the standard virtual human face model, carrying out position transformation processing on the standard skinning data to generate target skinning data; and constructing the target virtual human face model based on the target skeleton data and the target skinning data.
In this embodiment, deriving the target skinning data from the target bone data and from the association between the standard bone data and the standard skinning data of the standard virtual face model makes the resulting target virtual face model fit the target bone data and target skinning data well, so the model formed from the target bone data is less prone to abnormal bulges or depressions caused by changes in the bone data.
In an optional embodiment, the bone data respectively corresponding to the virtual face models comprises at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone in a plurality of face bones of the virtual face; the target bone data includes at least one of: target bone position data, target bone scaling data, and target bone rotation data.
In this embodiment, representing each of the face bones by its own rotation, position, and scaling data characterizes the skeleton more accurately, and the target bone data derived from it in turn allows the target virtual face model to be determined more accurately.
In an optional embodiment, the target bone data includes the target bone position data, and the determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models includes: and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain target bone position data.
In an optional embodiment, the target bone data includes the target bone scaling data, and the determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models includes: and based on the target coefficients respectively corresponding to the plurality of second real face models, carrying out interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models to obtain target skeleton scaling data.
In an alternative embodiment, the target bone data includes the target bone rotation data, and the determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models includes: converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and based on the target coefficients respectively corresponding to the second real face models, performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models to obtain target bone rotation data.
In this embodiment, the target bone data allows the virtual face model to be adjusted more precisely, so the bone details of the resulting target virtual face model are finer and closer to those of the original face, giving the target virtual face model a higher similarity to the original face.
In an alternative embodiment, the generating a first real face model based on the target image includes: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
In the embodiment, the face features of the original face in the target image can be represented more accurately and comprehensively by using the first real face model obtained by performing three-dimensional face reconstruction on the original face.
In an alternative embodiment, a plurality of second real face models are generated according to the following manner: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
In this embodiment, using multiple reference images covers as wide a range of facial appearance features as possible, so the second real face models obtained by three-dimensional face reconstruction from each of those reference images likewise cover as wide a range of facial appearance features as possible.
In an optional implementation manner, the fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models respectively includes: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
In this embodiment, the fitting situation when the plurality of second real face models are used to fit the first real face model can be accurately characterized by using the target coefficients.
In a second aspect, an embodiment of the present disclosure further provides a face reconstruction apparatus, including:
a first generation module for generating a first real face model based on the target image;
the processing module is used for fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
and the second generation module is used for generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models.
In an optional implementation manner, when generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset styles respectively corresponding to the plurality of second real face models, the second generating module is configured to: determining target bone data based on target coefficients respectively corresponding to the plurality of second real face models and bone data respectively corresponding to the plurality of virtual face models; generating the target virtual face model based on the target skeletal data.
In an alternative embodiment, the second generation module, when generating the target virtual face model based on the target skeletal data, is configured to: based on the target skeleton data and the incidence relation between the standard skeleton data and the standard skinning data in the standard virtual human face model, carrying out position transformation processing on the standard skinning data to generate target skinning data; and constructing the target virtual human face model based on the target skeleton data and the target skinning data.
In an alternative embodiment, the bone data of the virtual face model includes at least one of the following data: bone rotation data, bone position data and bone scaling data corresponding to each human face bone in a plurality of human face bones of the virtual human face; the target bone data includes at least one of: the target bone position data, target bone scaling data, and the target bone rotation data.
In an optional embodiment, the target bone data includes the target bone position data, and the second generation module, when determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models, is configured to: and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
In an optional embodiment, the target bone data includes the target bone scaling data, and the second generation module, when determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models, is configured to: and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
In an optional embodiment, the target bone data includes the target bone rotation data, and the second generation module, when determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models, is configured to: converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and based on the target coefficients respectively corresponding to the second real face models, performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models to obtain the target skeleton rotation data.
In an alternative embodiment, the first generating module, when generating the first real face model based on the target image, is configured to: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
In an alternative embodiment, the processing module generates a plurality of second real face models according to the following: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the plurality of reference images, carrying out three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
In an optional implementation manner, when fitting the first real face model using the plurality of pre-generated second real face models to obtain the target coefficients respectively corresponding to the second real face models, the processing module is configured to: perform least square processing on the plurality of second real face models and the first real face model to obtain the target coefficients respectively corresponding to the plurality of second real face models.
In a third aspect, an embodiment of the present disclosure further provides a computer device, comprising a processor and a memory, the memory storing machine-readable instructions executable by the processor; the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs the steps of the first aspect or of any one of its possible implementations.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, which, when executed, performs the steps of the first aspect or of any one of its possible implementations.
For the description of the effects of the above-mentioned face reconstruction apparatus, computer device, and computer-readable storage medium, reference is made to the description of the above-mentioned face reconstruction method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; for those skilled in the art, other related drawings can be derived from them without creative effort.
Fig. 1 shows a flowchart of a face reconstruction method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method for generating a target virtual face model corresponding to a target image according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data according to an embodiment of the present disclosure;
fig. 4 shows examples of a face image and of face models involved in the face reconstruction method provided by an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a face reconstruction apparatus provided in an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that face reconstruction methods build a virtual three-dimensional face model from a real face or from user preferences. When face reconstruction is based on a face in a portrait image, features are usually extracted from the face contained in the image to obtain facial contour features; these contour features are then matched against features of a pre-generated virtual three-dimensional face model, and the two are fused according to the matching result to obtain a virtual three-dimensional face model corresponding to the face in the image. The accuracy of matching facial contour features against the features of the pre-generated model is low, so the matching error is large, which easily leads to low facial similarity between the fused virtual three-dimensional face model and the original face image.
Based on the above research, the present disclosure provides a face reconstruction method. Fitting a plurality of pre-generated second real face models to a first real face model yields target coefficients corresponding to the respective second real face models; the target coefficients, together with the preset-style virtual face models corresponding to the second real face models, are then used to generate the target virtual face model corresponding to the target image. In this way the target coefficients serve as a medium that establishes an association between the second real face models and the first real face model, and this association also characterizes the relationship between the virtual face models built from the second real face models and the target virtual face model built from the first real face model. The target virtual face model determined from the target coefficients and the virtual face models therefore carries the preset style while retaining the features of the original face corresponding to the first real face model, and has a high similarity to that original face.
The discovery of the above drawbacks is the result of the inventors' practice and careful study; the discovery process of these problems and the solutions proposed below should therefore both be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the face reconstruction method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the face reconstruction method may be implemented by a processor calling computer-readable instructions stored in a memory.
The following explains the face reconstruction method provided by the embodiment of the present disclosure.
Referring to fig. 1, which is a flowchart of a face reconstruction method provided in the embodiment of the present disclosure, the method includes steps S101 to S103, where:
s101: generating a first real face model based on the target image;
s102: fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
s103: and generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models.
In this method, fitting the plurality of pre-generated second real face models to the first real face model yields the target coefficients corresponding to the respective second real face models, which act as a medium establishing the association between the second real face models and the first real face model; the target coefficients and the preset-style virtual face models corresponding to the second real face models are then used to generate the target virtual face model corresponding to the target image. As a result, the target virtual face model determined from the target coefficients and the virtual face models has the preset style as well as the features of the original face corresponding to the first real face model, and the generated model has a high similarity to that original face.
The following describes details of S101 to S103.
For the above S101, the target image is, for example, an existing image that includes a face, or an image including a face captured when a subject is photographed by a camera; any face included in the image can serve as the original face.
Specifically, when the face reconstruction method provided by the embodiment of the present disclosure is applied to different scenes, the target image acquisition methods are also different.
For example, when the face reconstruction method is applied to a game, an image including the game player's face may be captured by an image acquisition device installed in the game device, or selected from an album on the game device, and the acquired image is used as the target image.
For another example, when the face reconstruction method is applied to a terminal device such as a mobile phone, the terminal device's camera may capture an image including the user's face, an image including the user's face may be selected from the terminal device's album, or such an image may be received from another application installed on the terminal device, and the image is used as the target image.
For another example, when the face reconstruction method is applied to a live broadcast scene, a video frame image including a face may be determined from a plurality of frame video frame images included in a video stream acquired by a live broadcast device; and the video frame image containing the human face is taken as a target image. Here, the target image may have, for example, a plurality of frames; the multiple frames of target images may be obtained by sampling a video stream, for example.
In generating the first real face model based on the target image, for example, the following manner may be adopted: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain a first real face model.
Here, when performing three-dimensional face reconstruction on the original face included in the target image, a 3D Morphable Model (3DMM) may be used, for example, to obtain the first real face model corresponding to the original face. The first real face model includes, for example, position information, in a preset camera coordinate system, of each of a plurality of key points of the original face in the target image.
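As an illustration of this step, the following minimal Python sketch shows how a 3DMM-style model typically expresses face geometry as a mean shape plus linear combinations of learned bases; the function name, basis matrices, and coefficient layout are assumptions for illustration only and are not specified by the disclosure.

    import numpy as np

    def reconstruct_3dmm(mean_shape, shape_basis, expr_basis,
                         shape_coeffs, expr_coeffs):
        """Illustrative 3DMM evaluation: returns an (n_vertices, 3) mesh.

        mean_shape:  (3V,) mean face geometry
        shape_basis: (3V, n_shape) identity (shape) basis
        expr_basis:  (3V, n_expr) expression basis
        """
        # The reconstructed model is the mean shape deformed by the fitted
        # identity and expression coefficients (an assumed 3DMM formulation).
        verts = (mean_shape
                 + shape_basis @ shape_coeffs
                 + expr_basis @ expr_coeffs)
        return verts.reshape(-1, 3)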
For the above S102, the second real face models are generated based on reference images that include reference faces. The reference faces in different reference images can differ; for example, a number of persons differing in at least one of gender, age, skin color, build, and so on may be selected, a face image acquired for each person, and the acquired face images used as the reference images. The second real face models obtained from such reference images can therefore cover as wide a range of facial appearance features as possible.
The reference faces include, for example, the faces of N different individuals (N being an integer greater than 1). To acquire the reference images, the N individuals may be photographed separately, yielding N photographs, each corresponding to one reference face, which are used as the reference images; alternatively, N reference images may be selected from pre-captured images of different faces.
Illustratively, the method of generating a plurality of second real face models comprises: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the multiple reference images, performing three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
The method for reconstructing the three-dimensional face of the reference face is similar to the method for reconstructing the three-dimensional face of the original face, and is not described herein again. The obtained second real face model comprises position information of each key point in a plurality of key points of the reference face in the reference image in a preset camera coordinate system. Here, the coordinate system of the second real face model and the coordinate system of the first real face model may be the same coordinate system.
When fitting the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models, for example, the following method may be adopted: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients respectively corresponding to the plurality of second real face models.
For example, when N second real face models are generated in advance, the model data corresponding to the first real face model can be denoted D_a, and the model data respectively corresponding to the N second real face models can be denoted D_bi (i ∈ [1, N]), where D_bi represents the i-th of the N second real face models.
Least-square processing is applied, fitting D_a with D_b1 through D_bN, to obtain N fitting values α_i (i ∈ [1, N]), where α_i represents the fitting value corresponding to the i-th second real face model. From the N fitting values the target coefficient Alpha can be determined, which can be represented, for example, as a coefficient vector, i.e. Alpha = [α_1, α_2, …, α_N].
Here, in fitting the first real face model with the second real face models, the data obtained by weighting and summing the second real face models with the target coefficients is made as close as possible to the data of the first real face model.
The target coefficients can be regarded as the expression coefficients of the second real face models when the plurality of second real face models are used to express the first real face model. That is, with these fitting values, the second real face models can be transformed and fitted to the first real face model.
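A minimal numpy sketch of this least-square fit, assuming each face model is available as a (V, 3) vertex array; the function name and data layout are illustrative assumptions.

    import numpy as np

    def fit_target_coefficients(first_model, second_models):
        """Fit D_a by D_b1..D_bN in the least-squares sense.

        first_model:   (V, 3) vertex array of the first real face model D_a
        second_models: list of N (V, 3) vertex arrays D_bi
        Returns Alpha = [alpha_1, ..., alpha_N].
        """
        # Each flattened second real face model is one column of the design
        # matrix, so B @ alpha approximates the first real face model.
        B = np.stack([m.reshape(-1) for m in second_models], axis=1)  # (3V, N)
        d = first_model.reshape(-1)                                   # (3V,)
        alpha, *_ = np.linalg.lstsq(B, d, rcond=None)
        return alpha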
For the above S103, the preset style may be, for example, a cartoon style, an ancient style, an abstract style, or the like, and can be set according to actual needs. For example, when the preset style is a cartoon style, the virtual face model with the preset style may be a virtual face model in a particular cartoon style.
The method for acquiring the virtual face models with the preset styles respectively corresponding to the plurality of second real face models comprises at least one of the following (a1) and (a 2):
(a1) Taking the acquisition of the virtual face model corresponding to one second real face model as an example: a virtual face image that has the characteristics of the reference face and the preset style can be designed and produced based on the reference image, and the virtual face in that image is three-dimensionally modeled to obtain the bone data and skinning data of the virtual face.
The bone data comprises: bone rotation data, bone scaling data, and bone position data, in a preset coordinate system, of a plurality of bones preset for the virtual face. Here, the bones may be divided into levels, for example a root bone, facial-feature bones, and facial-feature detail bones; the facial-feature bones may include eyebrow bones, nasal bones, zygomatic bones, mandible bones, mouth bones, and so on, and the detail bones can be subdivided further. The division can be set according to the requirements of avatars of different styles, which is not limited here.
The skinning data comprises: position information, in the preset coordinate system, of a plurality of position points on the surface of the virtual face, and association information between each position point and at least one of the plurality of bones.
The virtual model obtained by three-dimensionally modeling the virtual face in the virtual face image is then used as the virtual face model corresponding to the second real face model.
(a2) A standard virtual face model with the preset style is generated in advance; it includes standard bone data, standard skinning data, and the association relationship between the two. Based on the reference face, the standard bone data is adjusted so that the modified model has the preset style while also carrying the features of the reference face in the reference image; the standard skinning data is then adjusted according to the association relationship between the standard bone data and the standard skinning data, feature information of the reference face is added to the skinning data, and the virtual face model corresponding to the second real face model is generated from the modified bone data and skinning data.
Here, for the specific data representation of the virtual face model, refer to (a1) above; details are not repeated here.
Referring to fig. 2, an embodiment of the present disclosure further provides a method for generating a target virtual face model corresponding to a target image based on target coefficients corresponding to a plurality of second real face models respectively and bone data of virtual face models having preset styles corresponding to the plurality of second real face models respectively, where the method includes:
s201: and determining target bone data based on the target coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models.
The bone data corresponding to the virtual face models respectively comprises at least one of the following data: the virtual human face comprises bone rotation data, bone position data and bone scaling data corresponding to each human face bone in a plurality of human face bones of the virtual human face.
In a possible implementation manner, based on the target coefficients corresponding to the plurality of second real faces, interpolation processing may be performed on the bone data corresponding to the plurality of virtual face models, respectively, to obtain target bone data. The obtained target bone data includes at least one of: target bone position data, target bone scaling data, and target bone rotation data.
Wherein the target bone position data includes, for example, the three-dimensional coordinates of the bone's center point in the model coordinate system; the target bone scaling data includes, for example, the scaling ratio of the target bone relative to the corresponding bone in the standard virtual face model; and the target bone rotation data includes, for example, the rotation axis and rotation angle of the bone in the model coordinate system.
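For illustration, the per-bone record described above might be laid out as follows; this grouping is an assumption and is not mandated by the disclosure.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class FaceBone:
        # Illustrative per-bone record (assumed layout).
        position: np.ndarray  # (3,) center point in the bone/model coordinate system
        scaling: np.ndarray   # (3,) scale ratios relative to the standard model's bone
        rotation: np.ndarray  # (4,) rotation stored as a quaternion after conversion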
For example, when determining the target bone data based on the target coefficients corresponding to the plurality of second real face models and the bone data corresponding to the plurality of virtual face models, at least one of the following b1 to b3 may be used:
(b1) and performing interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain target bone position data.
(b2) And based on the target coefficients respectively corresponding to the second real face models, carrying out interpolation processing on the skeleton scaling data respectively corresponding to the virtual face models to obtain target skeleton scaling data.
(b3) Converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and based on the target coefficients respectively corresponding to the second real face models, performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models to obtain target bone rotation data.
In implementation, for methods (b1) and (b2) above, acquiring the bone position data and bone scaling data also involves determining, based on the plurality of second real face models, the bone levels and the local coordinate system corresponding to each level of bone. When layering the bones of a face model, the levels may be determined directly from a biological skeletal hierarchy, or according to the requirements of face reconstruction; the specific layering can be decided by the actual situation and is not described further here.
After the bone levels are determined, a bone coordinate system can be established for each bone level. Illustratively, the bone at each level may be denoted Bone_i.
In this case, the bone position data may include, for each bone Bone_i in the virtual face model, three-dimensional coordinate values in its corresponding bone coordinate system; the bone scaling data may include, for each bone Bone_i, a percentage characterizing its degree of scaling in that coordinate system, for example 80%, 90%, or 100%.
In one possible implementation, the bone position data corresponding to the i-th virtual face model is denoted Pos_i and the corresponding bone scaling data is denoted Scaling_i. Here Pos_i contains the position data of the bones at multiple levels, and Scaling_i contains the scaling data of the bones at multiple levels.
The corresponding target coefficient is α_i. Based on the target coefficients corresponding to the M second real face models, interpolation is performed on the bone position data Pos_i corresponding to the M virtual face models to obtain the target bone position data.
For example, the target coefficients may be used as the weights of the corresponding virtual face models, and the interpolation realized by a weighted summation of the bone position data Pos_i of the virtual face models. The target bone position data Pos_new then satisfies the following formula (1):

Pos_new = Σ_{i=1}^{M} α_i · Pos_i    (1)
Similarly, when interpolating the bone scaling data Scaling_i corresponding to the M virtual face models based on the target coefficients corresponding to the M second real face models, the target coefficients may be used as the weights of the corresponding virtual face models, and the bone scaling data weighted and summed. The target bone scaling data Scaling_new then satisfies the following formula (2):

Scaling_new = Σ_{i=1}^{M} α_i · Scaling_i    (2)
for the method (b3), the bone rotation data may include vector values representing the degree of transformation of the rotation coordinates of the bones, including the rotation axis and the rotation angle, in the corresponding bone coordinate system for each bone in the virtual face model. In one possible implementation, the bone rotation data corresponding to the ith virtual face model is represented as Transi. Because the rotation angle of the bone rotation data clock has the problem of universal joint deadlock, the bone rotation data is converted into quaternion data, and the quaternion data is normalized to obtain normalized quaternion data which is expressed as Trans'iTo prevent generation of overfitting when directly performing weighted summation processing on quaternion dataThe phenomenon of synthesis.
When interpolating the regularized quaternion data Trans'_i corresponding to the M virtual face models based on the target coefficients corresponding to the M second real face models, the regularized quaternion data can be weighted and summed with the target coefficients as weights. The target bone rotation data Trans_new then satisfies the following formula (3):

Trans_new = Σ_{i=1}^{M} α_i · Trans'_i    (3)
based on the target bone position data Pos obtained in the above (b1), (b2), and (b3)newScaling data of target bonenewAnd target bone rotation data TransnewThe target Bone data, denoted Bone, can be determinednew. Illustratively, the target bone data may be represented in vector form as (Pos)new,Scalingnew,Transnew)。
Continuing from S201, the method for generating the target virtual face model corresponding to the target image further includes:
s202: and generating a target virtual human face model based on the target skeleton data.
Referring to fig. 3, a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data according to an embodiment of the present disclosure includes:
s301: based on the target skeleton data and the incidence relation between the standard skeleton data and the standard skinning data in the standard virtual human face model, performing position transformation processing on the standard skinning data to generate target skinning data;
s302: and forming a target virtual human face model based on the target skeleton data and the target skin data.
The association relationship between the standard bone data and the standard skinning data in the standard virtual face model is, for example, the association between the standard bone data and the standard skinning data corresponding to each level of bone. Based on this association, the skin can be bound to the skeleton of the virtual face model.
Using the target bone data together with the association between the standard bone data and the standard skinning data, position transformation can be applied to the skinning data at the positions corresponding to the bones at each level, so that the bone positions reflected in the generated target skinning data are consistent with the corresponding target bone data.
Here, the association between the bone data and the standard skinning data in the standard virtual face model includes: the relationship between the coordinate value, in the model coordinate system, of each position point of the standard skinning data and at least one of the bone position data, bone scaling data, and bone rotation data of the associated bone.
When performing the position transformation on the skinning data at the positions corresponding to the bones using the target bone data and this association, once the target bone data is determined, that is, once at least one of the target bone position data, target bone scaling data, and target bone rotation data of the target bones is determined, the new coordinate values of each position point of the standard skinning data in the model coordinate system after the bones are transformed from the standard bone data to the target bone data can be determined from the association; the target skinning data of the target virtual face is then obtained from these new coordinate values.
Using the target bone data, the bones at each level for constructing the target virtual face model can be determined; using the target skinning data, the skin bound to those bones can be determined, thereby forming the target virtual face model.
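As a concrete sketch of the position transformation, linear blend skinning is one common way to realize the association between bones and skin points; LBS and the matrix construction below are assumptions, since the disclosure only requires that skin positions follow the bone transformation.

    import numpy as np

    def transform_skin(skin_verts, weights, bind_mats, target_mats):
        """Move standard skinning points to follow the target bones (LBS-style).

        skin_verts:  (V, 3) standard skinning positions in model coordinates
        weights:     (V, B) association weights of each point to each bone
        bind_mats:   (B, 4, 4) bone transforms of the standard virtual face model
        target_mats: (B, 4, 4) bone transforms built from the target bone data
        """
        v_h = np.concatenate([skin_verts, np.ones((len(skin_verts), 1))], axis=1)
        # Per-bone change of transform from the standard pose to the target pose.
        delta = target_mats @ np.linalg.inv(bind_mats)        # (B, 4, 4)
        per_bone = np.einsum('bij,vj->vbi', delta, v_h)       # (V, B, 4)
        blended = np.einsum('vb,vbi->vi', weights, per_bone)  # (V, 4)
        return blended[:, :3]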
The target virtual face model may be determined either by building it directly from the target bone data and target skinning data, or by replacing the corresponding per-level bone data in the first real face model with the target bone data and then building the model with the target skinning data. The specific method can be chosen according to the actual situation and is not described further here.
An embodiment of the present disclosure further provides a description of the specific process by which, for a target image Pic_A, the target virtual face model Mod_Aim corresponding to its original face A is obtained.
Determining the target virtual face model Mod_Aim includes the following steps (c1) to (c5):
(c1) preparing a material; wherein, preparing the material includes: preparing materials of a standard virtual face model and preparing materials of a virtual picture.
When preparing the materials for the standard virtual face model, taking the cartoon style as the preset style as an example, a standard virtual face model Mod_Base with the cartoon style is first set up.
When preparing the picture materials, 24 pictures Pic_1 to Pic_24 are collected; the faces B_1 to B_24 in the 24 collected pictures are balanced between males and females and cover as wide a distribution of facial features as possible.
(c2) Face model reconstruction, which includes: generating the first real face model Mod_fst from the original face A in the target image Pic_A, and generating the second real face models Mod_snd-1 to Mod_snd-24 from the faces B_1 to B_24 in the pictures.
When generating the first real face model Mod_fst after determining the original face A, the face in the target image is first aligned and cropped, and a pre-trained RGB reconstruction neural network is then used to generate the first real face model Mod_fst corresponding to the original face A. Similarly, the second real face models Mod_snd-1 to Mod_snd-24 corresponding to the faces B_1 to B_24 can be determined using the pre-trained RGB reconstruction neural network.
After the second real face models Mod_snd-1 to Mod_snd-24 are determined, the method further includes: determining, by applying the preset style with manual adjustment, the preset-style virtual face models Mod_fic-1 to Mod_fic-24 respectively corresponding to the second real face models.
(c3) Fitting, which includes: fitting the first real face model with the plurality of second real face models to obtain the target coefficients alpha = [alpha_snd-1, alpha_snd-2, …, alpha_snd-24] respectively corresponding to the second real face models.
When a plurality of second real face models are used for fitting the first real face model, a least square method is selected for fitting to obtain the 24-dimensional coefficient alpha.
(c4) Determining target bone data; the target bone data is determined by the following steps (c4-1) and (c 4-2).
(c4-1) Reading the bone data; the bone data comprises, for each level of bone Bone_i, the bone position data Pos_i, bone scaling data Scaling_i, and bone rotation data Trans_i respectively corresponding to the preset-style virtual face models Mod_fic-1 to Mod_fic-24.
(c4-2) Using the target coefficient alpha, interpolation is performed on the bone data respectively corresponding to the preset-style virtual face models Mod_fic-1 to Mod_fic-24, generating the target bone data Bone_new, which includes the target bone position data Pos_new, target bone scaling data Scaling_new, and target bone rotation data Trans_new.
(c5) And generating a target virtual human face model.
The bones at each level for constructing the target virtual face model are determined from the target bone data: the target bone data replaces the bone data in the standard virtual face model Mod_Base, the skin bound to the bones is determined from the target skinning data, and the target virtual face model corresponding to the first real face model is then generated using the predetermined association between the standard bone data and the standard skinning data in the standard virtual face model.
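Tying (c2) to (c5) together, the overall flow might be orchestrated as in the sketch below, reusing the illustrative helpers sketched earlier in this description (fit_target_coefficients, blend_bone_data, transform_skin); every name and argument here is an assumption rather than part of the disclosure.

    import numpy as np

    def reconstruct_target_face(first_model, second_models, fic_bone_data,
                                skin_verts, weights, bind_mats, make_bone_mats):
        """High-level sketch of steps (c3)-(c5) under the stated assumptions.

        first_model:    (V, 3) first real face model Mod_fst
        second_models:  list of 24 (V, 3) second real face models Mod_snd-i
        fic_bone_data:  (positions, scalings, quats) arrays for Mod_fic-1..24
        make_bone_mats: callable building (B, 4, 4) transforms from bone data
        """
        # (c3) fit the target coefficients by least squares.
        alpha = fit_target_coefficients(first_model, second_models)
        # (c4) interpolate bone position / scaling / rotation data.
        pos_new, scale_new, trans_new = blend_bone_data(alpha, *fic_bone_data)
        # (c5) replace the standard bones with the target bones and re-skin.
        target_mats = make_bone_mats(pos_new, scale_new, trans_new)
        return transform_skin(skin_verts, weights, bind_mats, target_mats)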
Referring to fig. 4, an embodiment of the present disclosure provides examples of the specific data used in the processes of the above example, where (a) in fig. 4 shows a target image, with 41 indicating the original face A; (b) in fig. 4 is a schematic diagram of the cartoon-style standard virtual face model; and (c) in fig. 4 is a schematic diagram of the generated target virtual face model corresponding to the first real face model.
It should be noted that steps (c1) to (c5) above are only one specific example of performing face reconstruction and do not limit the face reconstruction method provided in the embodiments of the present disclosure.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a face reconstruction apparatus corresponding to the face reconstruction method. Since the principle by which the apparatus solves the problem is similar to that of the face reconstruction method in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, a schematic diagram of a face reconstruction apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes: a first generation module 51, a processing module 52, and a second generation module 53; wherein:
a first generating module 51 for generating a first real face model based on the target image;
the processing module 52 is configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models respectively;
a second generating module 53, configured to generate a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models.
In an optional embodiment, the second generating module 53, when generating a target virtual face model corresponding to the target image based on the target coefficients corresponding to the plurality of second real face models respectively and the virtual face models with the preset styles corresponding to the plurality of second real face models respectively, is configured to: determining target bone data based on target coefficients respectively corresponding to the plurality of second real face models and bone data respectively corresponding to the plurality of virtual face models; generating the target virtual face model based on the target skeletal data.
In an alternative embodiment, the second generating module 53, when generating the target virtual face model based on the target skeleton data, is configured to: perform position transformation processing on the standard skinning data based on the target skeleton data and the association relation between the standard skeleton data and the standard skinning data in the standard virtual face model, to generate target skinning data; and construct the target virtual face model based on the target skeleton data and the target skinning data.
In an alternative embodiment, the bone data of the virtual face model includes at least one of the following data: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among a plurality of face bones of the virtual face; the target bone data includes at least one of: target bone position data, target bone scaling data, and target bone rotation data.
In an alternative embodiment, the target bone data includes the target bone position data, and the second generating module 53, when determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models, is configured to: and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
In an alternative embodiment, the target bone data includes the target bone scaling data, and the second generating module 53, when determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models, is configured to: and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
In an alternative embodiment, the target bone data includes the target bone rotation data, and the second generating module 53, when determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models, is configured to: converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data; and based on the target coefficients respectively corresponding to the second real face models, performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models to obtain the target skeleton rotation data.
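A minimal sketch of this rotation step is given below, assuming the stored bone rotation data are Euler angles that are converted to unit quaternions, regularized (normalized), and then blended with the target coefficients; the Euler convention and the linear blend are assumptions, as the patent specifies only conversion, regularization, and interpolation.

```python
import numpy as np

def euler_to_quaternion(euler: np.ndarray) -> np.ndarray:
    """Convert roll-pitch-yaw Euler angles (radians) to (w, x, y, z)."""
    hx, hy, hz = euler / 2.0
    cx, sx = np.cos(hx), np.sin(hx)
    cy, sy = np.cos(hy), np.sin(hy)
    cz, sz = np.cos(hz), np.sin(hz)
    return np.array([cx * cy * cz + sx * sy * sz,
                     sx * cy * cz - cx * sy * sz,
                     cx * sy * cz + sx * cy * sz,
                     cx * cy * sz - sx * sy * cz])

def blend_bone_rotation(alpha: np.ndarray, eulers: np.ndarray) -> np.ndarray:
    """eulers: (24, 3), one rotation per virtual face model for one bone.
    Regularize each quaternion to unit length, take the coefficient-
    weighted sum, and renormalize to obtain the target bone rotation."""
    quats = np.stack([euler_to_quaternion(e) for e in eulers])  # (24, 4)
    quats /= np.linalg.norm(quats, axis=1, keepdims=True)       # regularization
    q = alpha @ quats                                           # interpolation
    return q / np.linalg.norm(q)                                # target rotation
```

A production implementation would also align quaternion signs before blending, since q and -q encode the same rotation.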
In an alternative embodiment, the first generating module 51, when generating the first real face model based on the target image, is configured to: acquiring a target image comprising an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
In an alternative embodiment, the processing module 52 generates a plurality of second real face models according to the following: acquiring a plurality of reference images comprising reference faces; and aiming at each reference image in the plurality of reference images, carrying out three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
In an optional implementation manner, when the processing module 52 performs fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models, the processing module is configured to: and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 6, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and the computer device includes:
a processor 61 and a memory 62, the memory 62 storing machine-readable instructions executable by the processor 61; the processor 61 is configured to execute the machine-readable instructions stored in the memory 62, and when the machine-readable instructions are executed, the processor 61 performs the following steps:
generating a first real face model based on the target image; fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models; and generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models.
The memory 62 includes a memory 621 and an external memory 622; the memory 621 is also referred to as an internal memory, and temporarily stores operation data in the processor 61 and data exchanged with the external memory 622 such as a hard disk, and the processor 61 exchanges data with the external memory 622 via the memory 621.
The specific execution process of the instruction may refer to the steps of the face reconstruction method described in the embodiments of the present disclosure, and details are not repeated here.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the face reconstruction method in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code; the instructions included in the program code may be used to execute the steps of the face reconstruction method in the foregoing method embodiments. For details, reference may be made to the foregoing method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not described again here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only one logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A face reconstruction method, comprising:
generating a first real face model based on the target image;
fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
and generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models.
2. The method according to claim 1, wherein the generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models comprises:
determining target bone data based on target coefficients respectively corresponding to the plurality of second real face models and bone data respectively corresponding to the plurality of virtual face models;
generating the target virtual face model based on the target skeletal data.
3. The method of claim 2, wherein generating the target virtual face model based on the target skeletal data comprises:
based on the target skeleton data and the association relation between the standard skeleton data and the standard skinning data in the standard virtual human face model, carrying out position transformation processing on the standard skinning data to generate target skinning data;
and constructing the target virtual human face model based on the target skeleton data and the target skinning data.
4. The method according to claim 2 or 3, wherein the bone data of the virtual face model comprises at least one of the following data: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among a plurality of face bones of the virtual face;
the target bone data comprises at least one of: target bone position data, target bone scaling data, and target bone rotation data.
5. The method of claim 4, wherein the target bone data comprises the target bone position data, and wherein the determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models comprises:
and performing interpolation processing on the bone position data respectively corresponding to the virtual face models based on the target coefficients respectively corresponding to the second real face models to obtain the target bone position data.
6. The method of claim 4 or 5, wherein the target bone data comprises the target bone scaling data, and the determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models comprises:
and performing interpolation processing on the skeleton scaling data respectively corresponding to the plurality of virtual face models based on the target coefficients respectively corresponding to the plurality of second real face models to obtain the target skeleton scaling data.
7. The method of any of claims 4-6, wherein the target bone data comprises the target bone rotation data, and the determining the target bone data based on the target coefficients corresponding to the second real face models and the bone data corresponding to the virtual face models comprises:
converting the bone rotation data respectively corresponding to the virtual face models into quaternion data, and performing regularization processing on the quaternion data to obtain regularized quaternion data;
and based on the target coefficients respectively corresponding to the second real face models, performing interpolation processing on the regularized quaternion data respectively corresponding to the virtual face models to obtain the target skeleton rotation data.
8. The method according to any of claims 1-7, wherein the generating a first real face model based on the target image comprises:
acquiring a target image comprising an original face;
and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
9. The method of any of claims 1 to 8, wherein the plurality of second real face models are generated according to the following:
acquiring a plurality of reference images comprising reference faces;
and aiming at each reference image in the plurality of reference images, carrying out three-dimensional face reconstruction on the reference face included in each reference image to obtain a second real face model corresponding to each reference image.
10. The method according to any one of claims 1 to 9, wherein the fitting process of the first real face model by using a plurality of pre-generated second real face models to obtain target coefficients corresponding to the plurality of second real face models respectively comprises:
and performing least square processing on the plurality of second real face models and the first real face model to obtain target coefficients corresponding to the plurality of second real face models respectively.
11. A face reconstruction apparatus, comprising:
a first generation module for generating a first real face model based on the target image;
the processing module is used for fitting the first real face model by utilizing a plurality of pre-generated second real face models to obtain target coefficients respectively corresponding to the second real face models;
and the second generation module is used for generating a target virtual face model corresponding to the target image based on the target coefficients respectively corresponding to the plurality of second real face models and the virtual face models with preset styles respectively corresponding to the plurality of second real face models.
12. A computer device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory, the processor performing the steps of the face reconstruction method according to any one of claims 1 to 10 when the machine-readable instructions are executed by the processor.
13. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the face reconstruction method according to any one of claims 1 to 10.
CN202011342169.7A 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium Active CN112419485B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202011342169.7A CN112419485B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium
KR1020237021453A KR20230110607A (en) 2020-11-25 2021-06-25 Face reconstruction methods, devices, computer equipment and storage media
PCT/CN2021/102404 WO2022110790A1 (en) 2020-11-25 2021-06-25 Face reconstruction method and apparatus, computer device, and storage medium
JP2022519295A JP2023507862A (en) 2020-11-25 2021-06-25 Face reconstruction method, apparatus, computer device, and storage medium
TW110127359A TWI778723B (en) 2020-11-25 2021-07-26 Method, device, computer equipment and storage medium for reconstruction of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011342169.7A CN112419485B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112419485A true CN112419485A (en) 2021-02-26
CN112419485B CN112419485B (en) 2023-11-24

Family

ID=74843538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011342169.7A Active CN112419485B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium

Country Status (5)

Country Link
JP (1) JP2023507862A (en)
KR (1) KR20230110607A (en)
CN (1) CN112419485B (en)
TW (1) TWI778723B (en)
WO (1) WO2022110790A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101696007B1 (en) * 2013-01-18 2017-01-13 한국전자통신연구원 Method and device for creating 3d montage
JP6207210B2 (en) * 2013-04-17 2017-10-04 キヤノン株式会社 Information processing apparatus and method
CN104851123B (en) * 2014-02-13 2018-02-06 北京师范大学 A kind of three-dimensional face change modeling method
US11127163B2 (en) * 2015-06-24 2021-09-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Skinned multi-infant linear body model
CN109978989B (en) * 2019-02-26 2023-08-01 腾讯科技(深圳)有限公司 Three-dimensional face model generation method, three-dimensional face model generation device, computer equipment and storage medium
CN111710035B (en) * 2020-07-16 2023-11-07 腾讯科技(深圳)有限公司 Face reconstruction method, device, computer equipment and storage medium
CN112419485B (en) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016029768A1 (en) * 2014-08-29 2016-03-03 厦门幻世网络科技有限公司 3d human face reconstruction method and apparatus
US20200302668A1 (en) * 2018-02-09 2020-09-24 Tencent Technology (Shenzhen) Company Limited Expression animation data processing method, computer device, and storage medium
CN109395390A (en) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 Processing method, device, processor and the terminal of game role facial model
CN110111417A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Generation method, device and the equipment of three-dimensional partial body's model
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
CN110400369A (en) * 2019-06-21 2019-11-01 苏州狗尾草智能科技有限公司 A kind of method of human face rebuilding, system platform and storage medium
CN110717977A (en) * 2019-10-23 2020-01-21 网易(杭州)网络有限公司 Method and device for processing face of game character, computer equipment and storage medium
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shu Guang et al.: "3D cartoon face generation based on a sparse deformation model", Acta Electronica Sinica *
Shu Guang et al.: "3D cartoon face generation based on a sparse deformation model", Acta Electronica Sinica, No. 08, 31 August 2010 (2010-08-31), pages 1798-1802 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022110790A1 (en) * 2020-11-25 2022-06-02 北京市商汤科技开发有限公司 Face reconstruction method and apparatus, computer device, and storage medium
WO2022110791A1 (en) * 2020-11-25 2022-06-02 北京市商汤科技开发有限公司 Method and apparatus for face reconstruction, and computer device, and storage medium
CN114078184A (en) * 2021-11-11 2022-02-22 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium
CN114529640A (en) * 2022-02-17 2022-05-24 北京字跳网络技术有限公司 Moving picture generation method and device, computer equipment and storage medium
CN114529640B (en) * 2022-02-17 2024-01-26 北京字跳网络技术有限公司 Moving picture generation method, moving picture generation device, computer equipment and storage medium
CN115187822A (en) * 2022-07-28 2022-10-14 广州方硅信息技术有限公司 Face image data set analysis method, live broadcast face image processing method and device

Also Published As

Publication number Publication date
TW202221652A (en) 2022-06-01
TWI778723B (en) 2022-09-21
WO2022110790A1 (en) 2022-06-02
JP2023507862A (en) 2023-02-28
KR20230110607A (en) 2023-07-24
CN112419485B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN112419454A (en) Face reconstruction method and device, computer equipment and storage medium
CN112419485A (en) Face reconstruction method and device, computer equipment and storage medium
WO2020192568A1 (en) Facial image generation method and apparatus, device and storage medium
CN110717977B (en) Method, device, computer equipment and storage medium for processing game character face
CN111632374B (en) Method and device for processing face of virtual character in game and readable storage medium
CN110399849A (en) Image processing method and device, processor, electronic equipment and storage medium
WO2021253788A1 (en) Three-dimensional human body model construction method and apparatus
CN111694430A (en) AR scene picture presentation method and device, electronic equipment and storage medium
JP2013524357A (en) Method for real-time cropping of real entities recorded in a video sequence
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN111784821A (en) Three-dimensional model generation method and device, computer equipment and storage medium
WO2013078404A1 (en) Perceptual rating of digital image retouching
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
CN111652987A (en) Method and device for generating AR group photo image
CN112419144A (en) Face image processing method and device, electronic equipment and storage medium
CN114333034A (en) Face pose estimation method and device, electronic equipment and readable storage medium
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
CN114529640B (en) Moving picture generation method, moving picture generation device, computer equipment and storage medium
WO2022110855A1 (en) Face reconstruction method and apparatus, computer device, and storage medium
CN114373044A (en) Method, device, computing equipment and storage medium for generating three-dimensional face model
CN114612614A (en) Human body model reconstruction method and device, computer equipment and storage medium
CN115393487A (en) Virtual character model processing method and device, electronic equipment and storage medium
CN113240811B (en) Three-dimensional face model creating method, system, equipment and storage medium
CN115550563A (en) Video processing method, video processing device, computer equipment and storage medium
CN114677476A (en) Face processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40040545; country of ref document: HK)

GR01 Patent grant