WO2022110790A1 - Method and apparatus for reconstructing a face, computer device, and storage medium (重建人脸的方法、装置、计算机设备及存储介质) - Google Patents

Method and apparatus for reconstructing a face, computer device, and storage medium

Info

Publication number
WO2022110790A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
data
target
real
virtual
Prior art date
Application number
PCT/CN2021/102404
Other languages
English (en)
French (fr)
Inventor
徐胜伟
王权
钱晨
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to JP2022519295A priority Critical patent/JP2023507862A/ja
Priority to KR1020237021453A priority patent/KR20230110607A/ko
Publication of WO2022110790A1 publication Critical patent/WO2022110790A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65 - Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F 13/655 - Generating or modifying game content automatically by importing photos, e.g. of the player
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 - Methods for processing data by generating or executing the game program
    • A63F 2300/69 - Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F 2300/695 - Imported photos, e.g. of the player
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a method, an apparatus, a computer device and a storage medium for reconstructing a human face.
  • a three-dimensional model of a virtual face can be established according to a real face or one's own preferences, so as to realize the reconstruction of the face, which has a wide range of applications in the fields of games, animation, and virtual social interaction.
  • the player can use the face reconstruction system provided by the game program to generate a virtual face 3D model from the real face included in an image provided by the player, and participate in the game with a greater sense of immersion by using the generated virtual face 3D model.
  • face contour features are usually extracted based on face images, and then the extracted face contour features are matched and fused with the pre-generated virtual three-dimensional model.
  • the generated virtual face 3D model therefore has a low matching degree with the real face image, that is, the similarity between the two is low.
  • the embodiments of the present disclosure provide at least a method, an apparatus, a computer device, and a storage medium for reconstructing a human face.
  • an embodiment of the present disclosure provides a method for reconstructing a face, including: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively; and generating a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
  • the fitting coefficients are used as a medium to establish associations between the plurality of second real face models and the first real face model; combined with the associations between the second real face models and their virtual face models, this enables the target virtual face model determined based on the fitting coefficients to have both the preset style and the features of the original face corresponding to the first real face model, so that the generated target virtual face model has a high degree of similarity to that original face.
  • generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with preset styles corresponding to the plurality of second real face models respectively includes: determining target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively; and generating the target virtual face model based on the target skeleton data.
  • generating the target virtual face model based on the target skeleton data includes: performing position transformation processing on the standard skin data based on the target skeleton data and the association relationship between the standard bone data and the standard skin data in the standard virtual face model, to generate target skin data; and generating the target virtual face model based on the target bone data and the target skin data.
  • by using the association relationship between the standard bone data and the standard skin data in the standard virtual face model, the target skin data in the resulting target virtual face model can fit the target bone data better, so that the target virtual face model formed from the target bone data exhibits fewer abnormal bulges or depressions caused by the changes in the bone data.
  • the bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model;
  • the target bone data includes at least one of the following data: target bone position data, target bone scaling data, and target bone rotation data.
  • by using such bone data, the state of each of the multiple face bones can be represented more accurately, and the target virtual face model can in turn be determined more accurately from the target bone data.
  • determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively includes: performing interpolation processing on the bone position data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone position data.
  • determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively includes: performing interpolation processing on the bone scaling data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone scaling data.
  • determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively includes: converting the bone rotation data corresponding to the plurality of virtual face models into quaternions, and performing regularization processing on the quaternions corresponding to the plurality of virtual face models respectively to obtain regularized quaternions; and performing interpolation processing on the regularized quaternions corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone rotation data.
  • in this way, the target skeleton data can be used to adjust the virtual face model more accurately, so that the skeleton details in the obtained target virtual face model are finer and closer to the skeleton details of the original face, and the target virtual face model has a higher similarity with the original face.
  • generating the first real face model based on the target image includes: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
  • the face features of the original face in the target image can be represented more accurately and comprehensively by the first real face model obtained by three-dimensional reconstruction of the original face.
  • the second real face model is generated in the following manner: acquiring multiple reference images including reference faces; and, for each reference image in the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
  • in this way, the second real face models obtained by performing 3D face reconstruction on each of the multiple reference images can likewise cover as wide a range of facial features as possible.
  • performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients corresponding to the plurality of second real face models respectively includes: performing least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
  • by using the fitting coefficients, the fitting of the first real face model by the plurality of second real face models can be represented accurately.
  • an embodiment of the present disclosure further provides an apparatus for reconstructing a human face, including:
  • a first generation module configured to generate a first real face model based on a target image;
  • a processing module configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively;
  • a second generation module configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
  • when generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with preset styles corresponding to the plurality of second real face models respectively, the second generation module is configured to: determine target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively; and generate the target virtual face model based on the target skeleton data.
  • when generating the target virtual face model based on the target skeleton data, the second generation module is configured to: perform position transformation processing on the standard skin data based on the target skeleton data and the association relationship between the standard bone data and the standard skin data in the standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target bone data and the target skin data.
  • the bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model;
  • the target bone data includes at least one of the following data: target bone position data, target bone scaling data, and target bone rotation data.
  • when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively, the second generation module is configured to: perform interpolation processing on the skeleton position data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target skeleton position data.
  • when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively, the second generation module is configured to: perform interpolation processing on the skeleton scaling data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target skeleton scaling data.
  • when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively, the second generation module is configured to: convert the skeleton rotation data corresponding to the plurality of virtual face models into quaternions, and perform regularization processing on the quaternions corresponding to the plurality of virtual face models respectively to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone rotation data.
  • when generating the first real face model based on the target image, the first generation module is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
  • the processing module generates each second real face model in the plurality of second real face models in the following manner: acquiring multiple reference images including reference faces; and, for each reference image in the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
  • when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients corresponding to the plurality of second real face models respectively, the processing module is configured to: perform least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
  • an optional implementation manner of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the steps in the method of the above first aspect, or of any possible implementation of the first aspect, are executed.
  • an optional implementation manner of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, the steps in the method of the first aspect, or of any possible implementation of the first aspect, are executed.
  • FIG. 1 shows a flowchart of a method for reconstructing a human face provided by an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a method for generating a target virtual face model corresponding to a target image provided by an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data provided by an embodiment of the present disclosure
  • FIG. 4 shows an example of multiple faces and face models involved in the method for reconstructing a face provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of an apparatus for reconstructing a human face provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
  • the method of face reconstruction can establish a three-dimensional model of a virtual face according to a real face or one's own preferences.
  • in face reconstruction based on the real face in a portrait image, feature extraction is usually performed first on the real face in the portrait image to obtain face contour features; the face contour features are then matched against the features in a pre-generated virtual three-dimensional model, and based on the matching results the face contour features are fused with the virtual three-dimensional model to obtain a virtual three-dimensional face model corresponding to the real face in the portrait image. However, the accuracy of matching the face contour features with the features in the pre-generated virtual three-dimensional model is low, the matching error between the virtual three-dimensional model and the face contour features is relatively large, and the virtual three-dimensional face model generated from the matching results therefore tends to have low similarity to the real face.
  • the embodiments of the present disclosure provide a method for reconstructing a human face, which can generate a target virtual face model with a preset style, where the target virtual face model has a relatively high similarity to the real face.
  • the execution subject of the method for reconstructing a face provided by the embodiment of the present disclosure is generally a computer device with a certain computing capability. The computer device includes, for example, a terminal device, a server, or other processing device; the terminal device can be user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • in some possible implementations, the method for reconstructing a human face may be implemented by a processor calling computer-readable instructions stored in a memory.
  • FIG. 1 is a flowchart of a method for reconstructing a face provided by an embodiment of the present disclosure. As shown in FIG. 1 , the method includes steps S101 to S103, wherein:
  • S101: Generate a first real face model based on a target image;
  • S102: Perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively;
  • S103: Generate a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
  • in this process, fitting the first real face model with the multiple pre-generated second real face models yields the fitting coefficients corresponding to the second real face models; the fitting coefficients are used as a medium to establish the relationship between the multiple second real face models and the first real face model, and the target virtual face model corresponding to the target image is then generated by using the fitting coefficients and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
  • This makes the target virtual face model determined based on the fitting coefficients and the virtual face models not only have the preset style but also have the characteristics of the original face corresponding to the first real face model; that is, there is a high degree of similarity between the generated target virtual face model and the original face corresponding to the first real face model.
  • the target image is, for example, an acquired image including a human face; the face in the target image can be used as the original face.
  • the method for reconstructing a face provided by the embodiment of the present disclosure is applied to different scenarios, the method for acquiring the target image is also different.
  • for example, in a game scenario, an image containing the face of the game player can be acquired through an image acquisition device installed in the game device, or an image containing the face of the game player can be selected from an album of the game device, and the acquired image containing the face of the game player is used as the target image.
  • an image including the user's face can be collected by the camera of the terminal device, selected from an album of the terminal device, or received from other applications installed in the terminal device.
  • in a live-streaming scenario, a video frame image containing a human face can be obtained from the multiple video frame images included in the video stream of the live broadcast device, and the obtained video frame image is used as the target image.
  • the target image may have multiple frames; for example, the multiple-frame target image may be obtained by sampling a video stream.
  • when generating the first real face model, the following method may be adopted: acquiring a target image including the original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
  • here, for example, a three-dimensional morphable face model (3D Morphable Model, 3DMM) can be used to obtain the first real face model corresponding to the original face.
  • the first real face model includes, for example, the position information of each key point in the preset camera coordinate system among the multiple key points of the original face in the target image.
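As a rough illustration of how a 3DMM-style reconstruction can yield such key-point positions, the sketch below assumes a linear shape model (a mean shape plus a weighted combination of identity basis directions); the function and array names are illustrative and not taken from the disclosure.

```python
import numpy as np

def reconstruct_3dmm_shape(mean_shape, shape_basis, coeffs):
    """Linear 3DMM-style shape reconstruction: the face shape is the
    mean shape plus a weighted combination of basis directions.

    mean_shape:  (3K,) stacked x/y/z coordinates of K key points
    shape_basis: (3K, M) identity basis (e.g. PCA directions)
    coeffs:      (M,) identity coefficients fitted to the target image
    """
    return mean_shape + shape_basis @ coeffs

# Toy example: K = 2 key points, M = 1 basis direction.
mean = np.zeros(6)
basis = np.ones((6, 1))
shape = reconstruct_3dmm_shape(mean, basis, np.array([0.5]))
print(shape)  # every stacked coordinate shifted by 0.5 along the basis direction
```

In practice the coefficients would be estimated by fitting the projected model key points to detected 2D landmarks; only the final linear evaluation is shown here.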
  • the second real face model is generated based on the reference image including the reference face.
  • the reference faces in different reference images may be different; exemplarily, multiple people differing in at least one of gender, age, skin color, body build, etc. may be selected, a face image may be acquired for each of the multiple people, and the acquired face images may be used as reference images.
  • in this way, the plurality of second real face models obtained based on the plurality of reference images can cover as wide a range of face shape features as possible.
  • the reference faces include, for example, N faces corresponding to N different individual objects (N being an integer greater than 1).
  • the N photos corresponding to the N different individual objects can be obtained by separately photographing the N different individual objects, with each photo corresponding to one reference face, and the N photos are used as the reference images; alternatively, N reference images are determined from a plurality of previously captured images including different faces.
  • the method for generating the second real face models includes: acquiring multiple reference images including reference faces; and, for each reference image in the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
  • the method for performing 3D face reconstruction on the reference face is similar to the above-mentioned method for performing 3D face reconstruction on the original face, and will not be repeated here.
  • the obtained second real face model includes position information of each key point in the preset camera coordinate system among the multiple key points of the reference face in the reference image.
  • the coordinate system of the second real face model and the coordinate system of the first real face model may be the same coordinate system.
  • when performing fitting processing on the first real face model, the following method may be used: performing least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
  • the model data corresponding to the first real face model can be represented as D_a;
  • the model data corresponding to the second real face models can be represented as D_bi (i ∈ [1, N]), where D_bi denotes the model data of the i-th second real face model among the N second real face models.
  • N fitting values can be obtained, expressed as α_i (i ∈ [1, N]), where α_i represents the fitting value corresponding to the i-th second real face model.
  • the fitting coefficient Alpha can then be determined; for example, it can be represented by a coefficient matrix, that is, Alpha = [α_1, α_2, …, α_N].
  • the fitting coefficients can also be regarded as expression coefficients of the second real face models when the first real face model is expressed by the plurality of second real face models; that is, by using the fitting values as the expression coefficients of the plurality of second real face models respectively, the second real face models can be transformed and fitted to the first real face model.
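The least-squares determination of Alpha can be sketched with numpy as below; the columns of B stand in for the model data D_bi of the N second real face models and a for D_a. The data is synthetic and only illustrates that a least-squares solver recovers the coefficient matrix.

```python
import numpy as np

# Synthetic stand-ins: 10 key points (30 stacked coordinates), N = 4
# second real face models; each column of B is one model's data D_bi.
rng = np.random.default_rng(0)
B = rng.normal(size=(30, 4))
true_alpha = np.array([0.4, 0.3, 0.2, 0.1])
a = B @ true_alpha  # a first real face model exactly expressible by the basis

# Least-squares fit: Alpha = argmin ||B @ alpha - a||^2
alpha, *_ = np.linalg.lstsq(B, a, rcond=None)
print(alpha)  # ~[0.4, 0.3, 0.2, 0.1]
```

Because the synthetic D_a lies exactly in the span of the D_bi columns, the recovered coefficients match the ones used to build it; with real face data there would be a nonzero residual.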
  • the preset style may be, for example, a cartoon style, an ancient style, or an abstract style, and may be specifically set according to actual needs.
  • the virtual face model with the preset style may be, for example, a virtual face model with a certain cartoon style.
  • the method for acquiring multiple virtual face models with preset styles corresponding to multiple second real face models respectively includes, for example, at least one of the following (a1) and (a2).
  • in one manner, a virtual face image having the reference face features and the preset style can be designed and produced based on the reference image, and the virtual face in the virtual face image is then modeled in three dimensions to obtain the skeleton data and skin data of the corresponding virtual face model.
  • the skeleton data includes skeleton rotation data, skeleton scaling data, and skeleton position data of a plurality of bones preset for the virtual face in a preset coordinate system.
  • the multiple bones can be divided into multiple levels; for example, they include the root bone, facial bones, and facial-feature detail bones, where the facial bones can include eyebrow bones, nasal bones, cheekbones, mandible, mouth bones, etc., and the facial-feature detail bones further subdivide the bones of the different facial features in detail. Specific settings can be made according to the requirements of different styles of virtual images, which are not limited here.
  • the skin data includes position information of multiple position points on the surface of the virtual face in a preset coordinate system and information on the association relationship between each position point and at least one of the multiple bones.
  • the virtual model obtained by performing three-dimensional modeling on the virtual face in the virtual face image is used as the virtual face model corresponding to the second real face model.
  • in another manner, the standard virtual face model includes standard bone data, standard skin data, and the association relationship between the standard bone data and the standard skin data. Based on the reference face, the standard bone data in the standard virtual face model is modified by design, so that the modified standard virtual face model has the preset style and also includes the features of the reference face in the reference image; then, based on the association relationship between the standard bone data and the standard skin data, the standard skin data is adjusted and the feature information of the reference face is added to it. The virtual face model corresponding to the second real face model is generated based on the modified standard bone data and the modified standard skin data.
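One common way to realize a position transformation that ties skin points to bone data is linear blend skinning; the minimal sketch below is an assumption about how such an association relationship between bones and skin position points could be applied, not the disclosure's exact method, and all names are illustrative.

```python
import numpy as np

def skin_vertices(rest_vertices, weights, bone_transforms):
    """Minimal linear-blend-skinning sketch: each skin position point
    follows the bones it is associated with, weighted by its weights.

    rest_vertices:   (V, 3) skin positions in the standard (rest) pose
    weights:         (V, B) association weights; each row sums to 1
    bone_transforms: (B, 4, 4) per-bone transforms away from the rest pose
    """
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]

# A vertex bound entirely to one bone moves with that bone's translation.
T = np.eye(4)
T[0, 3] = 1.0  # translate the single bone by +1 along x
print(skin_vertices(np.array([[0.0, 0.0, 0.0]]), np.array([[1.0]]), T[None]))
```

With per-vertex weights spread over several bones, moving one bone deforms the nearby skin smoothly, which matches the idea of adjusting skin data in step with modified bone data.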
  • an embodiment of the present disclosure further provides a method for generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data of the virtual face models with preset styles corresponding to the plurality of second real face models respectively, including:
  • S201 Determine target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively.
  • the bone data includes at least one of the following data: bone rotation data, bone position data, and bone scaling data corresponding to each face bone in the multiple face bones of the virtual face model.
  • interpolation processing may be performed on the skeleton data corresponding to the plurality of virtual face models to obtain the target skeleton data.
  • the obtained target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
  • the target bone position data includes, for example, the three-dimensional coordinate value of the center point of the bone in the model coordinate system;
  • the target bone scaling data includes, for example, the scaling ratio of the target bone relative to the bone in the standard virtual face model;
  • the target bone rotation data includes, for example, the rotation angle of the bone's axis in the model coordinate system.
  • the target skeleton data is determined based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models, for example, by at least one of the following (b1) to (b3).
  • the method further includes determining each level of bones and the corresponding local coordinate system.
  • the bone levels can be determined directly according to the biological bone layering method, or according to the requirements of face reconstruction; the specific layering method can be determined according to the actual situation and is not repeated here.
  • after each bone level is determined, a bone coordinate system corresponding to each bone level can be established.
  • each level of bone can be represented as Bone i .
  • the bone position data may include the three-dimensional coordinate values of each level of bone Bone i of the virtual face model in the corresponding bone coordinate system; the bone scaling data may include a percentage, such as 80%, 90% or 100%, characterizing the scaling degree of each level of bone Bone i in the corresponding bone coordinate system.
  • the bone position data corresponding to the ith virtual face model is represented as Pos i
  • the bone scaling data corresponding to the ith virtual face model is represented as Scaling i
  • the bone position data Pos i includes the bone position data corresponding to the plurality of hierarchical bones respectively
  • the bone scaling data Scaling i includes the bone scaling data corresponding to the plurality of hierarchical bones respectively.
  • the fitting coefficient corresponding to the i-th virtual face model is a i ;
  • based on the fitting coefficients corresponding to the M second real face models, interpolation processing is performed on the bone position data Pos i corresponding to the M virtual face models to obtain the target bone position data.
  • the fitting coefficient may be used as the weight corresponding to each virtual face model, and the weighted summation processing is performed on the bone position data Pos i corresponding to the virtual face model to implement the interpolation processing process.
  • the target bone position data Pos new satisfies the following formula (1): Pos new = Σ i=1…M a i · Pos i   (1)
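The weighted summation described here can be sketched in a few lines of NumPy. The array shapes below (M models, K bones, 3 coordinates) and all values are assumptions made only for illustration:

```python
import numpy as np

def interpolate_bone_positions(coeffs, positions):
    """Weighted sum of per-model bone position data.

    coeffs:    (M,) fitting coefficients, one per second real face model
    positions: (M, K, 3) bone centre coordinates for M virtual face models,
               K bones each, in the bone/model coordinate system
    """
    coeffs = np.asarray(coeffs, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # Pos_new = sum_i a_i * Pos_i, broadcast over every bone and axis
    return np.einsum("i,ikj->kj", coeffs, positions)

# Two models, one bone each, blended 50/50
pos = interpolate_bone_positions([0.5, 0.5],
                                 [[[0.0, 0.0, 0.0]], [[2.0, 4.0, 6.0]]])
```

The same routine applies unchanged to the bone scaling data, since both are blended by the same weighted summation.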
  • similarly, interpolation processing is performed on the bone scaling data corresponding to the M virtual face models to obtain the target bone scaling data, wherein the bone scaling data corresponding to the i-th virtual face model is represented as Scaling i ; the fitting coefficients corresponding to the M second real face models can be used as the weights of the corresponding virtual face models, and weighted summation performed on the bone scaling data corresponding to the M virtual face models to implement the interpolation; in this case, the target bone scaling data Scaling new satisfies the following formula (2): Scaling new = Σ i=1…M a i · Scaling i   (2)
  • the bone rotation data may include vector values used to represent the degree of rotation coordinate transformation of each bone in the virtual face model in the corresponding bone coordinate system, for example, including a rotation axis and a rotation angle.
  • the bone rotation data corresponding to the i-th virtual face model is represented as Trans i . Since the rotation angles contained in the bone rotation data suffer from the gimbal lock problem, the bone rotation data is converted into quaternions, and the quaternions are regularized to obtain regularized quaternion data, represented as Trans' i , in order to prevent overfitting when the quaternions are directly weighted and summed.
  • the fitting coefficients corresponding to the M second real face models may also be used as weights, and weighted summation performed on the regularized quaternions corresponding to the M virtual face models; in this case, the target bone rotation data Trans new satisfies the following formula (3): Trans new = Σ i=1…M a i · Trans' i   (3)
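The quaternion step can be sketched as follows. This is a simplified illustration assuming axis-angle input; re-normalizing the blended result at the end is my addition (a common practice for weighted quaternion averaging, not something the text spells out):

```python
import numpy as np

def axis_angle_to_quat(axis, angle):
    """Convert a rotation axis and angle (radians) to a unit quaternion (w, x, y, z)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = angle / 2.0
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

def blend_rotations(coeffs, quats):
    """Weighted sum of regularized (unit-length) quaternions, re-normalized."""
    quats = np.asarray(quats, dtype=float)
    # regularize each quaternion to unit length before blending
    quats = quats / np.linalg.norm(quats, axis=1, keepdims=True)
    blended = np.einsum("i,ij->j", np.asarray(coeffs, dtype=float), quats)
    return blended / np.linalg.norm(blended)

# Blend the identity rotation with a 90-degree rotation about the z axis
q = blend_rotations([0.5, 0.5],
                    [axis_angle_to_quat([0, 0, 1], 0.0),
                     axis_angle_to_quat([0, 0, 1], np.pi / 2)])
```

Blending in quaternion form avoids the gimbal-lock ambiguity of blending Euler rotation angles directly.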
  • the target bone data can be determined, which is represented as Bone new .
  • the target bone data can be represented as (Pos new , Scaling new , Trans new ) in the form of a vector.
  • the method for generating the target virtual face model corresponding to the target image further includes:
  • a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data includes:
  • S301 Based on the target skeleton data and the relationship between the standard skeleton data and the standard skin data in the standard virtual face model, perform position transformation processing on the standard skin data to generate target skin data;
  • S302 Generate a target virtual face model based on the target bone data and the target skin data.
  • the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model is, for example, the association relationship between the standard skeleton data corresponding to each level of bones and the standard skin data. Based on this association, the skin can be bound to the bones in the virtual face model.
  • the skin data at the positions corresponding to the multiple levels of bones can be subjected to position transformation, so that the positions of the corresponding level bones in the generated target skin data are consistent with the positions in the corresponding target bone data.
  • the association between the bone data and the standard skin data in the standard virtual face model includes: the relationship between the coordinate values of each position point in the standard skin data in the model coordinate system and at least one of the bone position data, the bone scaling data, and the bone rotation data of the bones.
  • using the target skeleton data and the association between the standard skeleton data and the standard skin data in the standard virtual face model, the new coordinate values of each position point in the standard skin data in the model coordinate system, after the bones are transformed from the standard bone data to the target bone data, can be determined; the target skin data of the target virtual face model is then obtained based on these new coordinate values.
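One common way to realize such a bone-driven skin update is linear blend skinning; the patent does not name a specific skinning algorithm, so the sketch below is an assumption. Each skin vertex is moved by the weighted transforms of the bones it is associated with:

```python
import numpy as np

def skin_vertices(vertices, weights, bone_transforms):
    """Linear blend skinning sketch.

    vertices:        (V, 3) skin position points in the standard model
    weights:         (V, B) association weights between vertices and bones
                     (each row sums to 1)
    bone_transforms: (B, 4, 4) per-bone transforms from standard bone data
                     to target bone data, in homogeneous coordinates
    """
    vertices = np.asarray(vertices, dtype=float)
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (V, 4)
    # Transform every vertex by every bone, then blend with the weights
    per_bone = np.einsum("bij,vj->vbi", np.asarray(bone_transforms, float), homo)
    blended = np.einsum("vb,vbi->vi", np.asarray(weights, float), per_bone)
    return blended[:, :3]

# One vertex fully bound to a single bone translated by (1, 0, 0)
T = np.eye(4)
T[0, 3] = 1.0
out = skin_vertices([[0.0, 0.0, 0.0]], [[1.0]], [T])
```

Because each vertex follows its associated bones smoothly, abrupt bulges or depressions from a bone change are less likely to appear on the resulting surface.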
  • using the target bone data, the bones of each level used to construct the target virtual face model can be determined; and using the target skin data, the skin bound to the bones of the model can be determined, thereby forming the target virtual face model.
  • the target virtual face model may be determined by directly building it from the target bone data and the target skin data; or by replacing the bone data of each level in the standard virtual face model with the corresponding target bone data, and then using the target skin data to build the target virtual face model.
  • the specific method for establishing the target virtual face model can be determined according to the actual situation, and will not be repeated here.
  • the embodiment of the present disclosure also provides an explanation of the specific process of obtaining the target virtual face model Mod Aim corresponding to the original face A in the target image Pic A by using the method for reconstructing a face provided by the embodiment of the present disclosure.
  • Determining the target virtual face model Mod Aim includes the following steps (c1) to (c5):
  • (c1) material preparation, including: preparing the material of the standard virtual face model; and preparing the material of the virtual pictures.
  • (c2) face model reconstruction including: using the original face A in the target image Pic A to generate a first real face model Mod fst ; and using the virtual faces B 1 to B 24 in the virtual picture to generate a second real face model Face model Mod snd-1 ⁇ Mod snd-24 .
  • the face in the target image is straightened and cropped, and then the pre-trained RGB reconstruction neural network is used to generate the first real face model Mod fst corresponding to the original face A. Similarly, by using the pre-trained RGB reconstruction neural network, the second real face models Mod snd-1 to Mod snd-24 corresponding to the virtual faces B 1 to B 24 respectively can be determined.
  • (c3) the method further includes: determining, through the preset style and manual adjustment, the virtual face models Mod fic-1 to Mod fic-24 with the preset style respectively corresponding to the second real face models Mod snd-1 to Mod snd-24 .
  • (c4) fitting: the method of least squares is selected for fitting, and a 24-dimensional coefficient alpha is obtained.
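This fitting step can be sketched with NumPy's least-squares solver. Flattening each face model into a vector is an assumption about the data layout, and the model data below is random placeholder data, not real reconstruction output:

```python
import numpy as np

def fit_coefficients(second_models, first_model):
    """Solve min_alpha || B @ alpha - d ||^2 for the fitting coefficients.

    second_models: (N, D) -- each row is one flattened second real face model
    first_model:   (D,)   -- the flattened first real face model
    """
    B = np.asarray(second_models, dtype=float).T  # (D, N) design matrix
    d = np.asarray(first_model, dtype=float)
    alpha, *_ = np.linalg.lstsq(B, d, rcond=None)
    return alpha  # (N,) one coefficient per second real face model

rng = np.random.default_rng(0)
models = rng.normal(size=(24, 300))            # 24 reference face models
target = 0.25 * models[0] + 0.75 * models[1]   # target built from two of them
alpha = fit_coefficients(models, target)
```

When the target model really lies in the span of the reference models, as constructed here, the solver recovers the mixing weights exactly.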
  • the skeleton data includes: the bone position data Pos i , the bone scaling data Scaling i , and the bone rotation data Trans i respectively corresponding to the virtual face models Mod fic-1 to Mod fic-24 with the preset style under each level of bone Bone i .
  • (c5) using the target bone data, the bones of each level used to construct the target virtual face model are determined, and the bone data in the standard virtual face model Mod Base is replaced with the target bone data; using the target skin data, the skin bound to the bones of the model is determined; the predetermined association between the standard bone data and the standard skin data in the standard virtual face model is then used to generate the target virtual face model corresponding to the first real face model.
  • FIG. 4 shows an example of the specific data used in the multiple processes included in the above specific example provided by an embodiment of the present disclosure, where (a) in FIG. 4 represents the target image, 41 represents the original face A, (b) in FIG. 4 represents a schematic diagram of a cartoon-style standard virtual face model, and (c) in FIG. 4 represents a schematic diagram of the generated target virtual face model corresponding to the first real face model.
  • steps (c1) to (c5) are only a specific example of a method for reconstructing a human face, and do not limit the method for reconstructing a human face provided by the embodiments of the present disclosure.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • an apparatus for reconstructing a face corresponding to the method for reconstructing a face is also provided in the embodiments of the present disclosure. Since the principle by which the apparatus solves the problem is similar to that of the above method for reconstructing a face, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
  • an embodiment of the present disclosure provides an apparatus for reconstructing a human face.
  • the apparatus includes a first generating module 51 , a processing module 52 and a second generating module 53 .
  • the first generating module 51 is configured to generate a first real face model based on the target image.
  • the processing module 52 is configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively.
  • the second generating module 53 is configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and a plurality of virtual face models with preset styles respectively corresponding to the plurality of second real face models.
  • when generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models, the second generating module 53 is configured to: determine target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models; and generate the target virtual face model based on the target skeleton data.
  • when generating the target virtual face model based on the target skeleton data, the second generating module 53 is configured to: perform position transformation processing on the standard skin data based on the target skeleton data and the association between the standard bone data and the standard skin data in the standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target bone data and the target skin data.
  • the skeleton data of the virtual face model includes at least one of the following data: skeleton rotation data corresponding to each face skeleton in the multiple face skeletons of the virtual face model, skeleton Position data and bone scaling data;
  • the target bone data includes at least one of the following data: target bone position data, target bone scaling data, and target bone rotation data.
  • in a case where the target skeleton data includes the target bone position data, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models, the second generating module 53 is configured to: perform interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models, to obtain the target bone position data.
  • in a case where the target skeleton data includes the target bone scaling data, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models, the second generating module 53 is configured to: perform interpolation processing on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models, to obtain the target bone scaling data.
  • in a case where the target skeleton data includes the target bone rotation data, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models, the second generating module 53 is configured to: convert the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and regularize the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models, to obtain the target bone rotation data.
  • when generating the first real face model based on the target image, the first generating module 51 is configured to: acquire a target image including the original face; and perform three-dimensional face reconstruction on the original face to obtain the first real face model.
  • the processing module 52 generates each of the plurality of second real face models in the following manner: acquiring multiple reference images including reference faces; and for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to that reference image.
  • when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, the processing module 52 is configured to: perform least squares processing on the plurality of second real face models and the first real face model, to obtain the corresponding fitting coefficients.
  • an embodiment of the present disclosure further provides a computer device including a processor 61 and a memory 62 .
  • the memory 62 stores machine-readable instructions executable by the processor 61, and the processor 61 is used to execute the machine-readable instructions stored in the memory 62.
  • the processor 61 executes the following steps: generating a first real face model based on the target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models; and generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with a preset style respectively corresponding to the plurality of second real face models.
  • the above-mentioned memory 62 includes a memory 621 and an external memory 622; the memory 621, also called an internal memory, is used to temporarily store operation data in the processor 61 and data exchanged with the external memory 622 such as a hard disk; the processor 61 exchanges data with the external memory 622 through the memory 621.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the method for reconstructing a face described in the foregoing method embodiments is executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program product, where the computer program product carries program codes, and the instructions included in the program codes can be used to execute the methods for reconstructing a face described in the foregoing method embodiments.
  • for details, please refer to the foregoing method embodiments, which are not repeated here.
  • the above-mentioned computer program product can be specifically implemented by means of hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Electric Double-Layer Capacitors Or The Like (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Generation (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method, an apparatus, a computer device and a storage medium for reconstructing a face, wherein the method includes: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models; and generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models.

Description

Method, apparatus, computer device and storage medium for reconstructing a face
Cross-reference to related applications
This patent application claims priority to Chinese patent application No. 202011342169.7, filed on November 25, 2020 and entitled "Face reconstruction method, apparatus, computer device and storage medium", which is incorporated herein by reference.
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a method, an apparatus, a computer device and a storage medium for reconstructing a face.
Background
Usually, a three-dimensional virtual face model can be built from a real face or according to personal preference to reconstruct a face, which is widely used in games, animation, virtual social networking and other fields. For example, in a game, a player can use the face reconstruction system provided by the game program to generate a three-dimensional virtual face model from the real face included in an image provided by the player, and participate in the game with a stronger sense of immersion through the generated three-dimensional virtual face model.
At present, when a face is reconstructed from a real face in a portrait image, facial contour features are usually extracted from the face image and then matched and fused with a pre-generated virtual three-dimensional model, to generate a three-dimensional virtual face model corresponding to the real face. However, because the matching degree between the facial contour features and the pre-generated virtual three-dimensional model is low, the similarity between the generated three-dimensional virtual face model and the real face is low.
Summary
Embodiments of the present disclosure provide at least a method, an apparatus, a computer device and a storage medium for reconstructing a face.
In a first aspect, an embodiment of the present disclosure provides a method for reconstructing a face, including: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models; and generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models.
In this embodiment, the fitting coefficients serve as a medium that establishes an association between the plurality of second real face models and the first real face model. This association characterizes the relationship between the virtual face models built from the second real face models and the target virtual face model built from the first real face model, so that the target virtual face model determined based on the fitting coefficients has both the preset style and the features of the original face corresponding to the first real face model, and the generated target virtual face model has a higher similarity to that original face.
In an optional implementation, generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models includes: determining target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and bone data respectively corresponding to the plurality of virtual face models; and generating the target virtual face model based on the target bone data.
In an optional implementation, generating the target virtual face model based on the target bone data includes: performing position transformation processing on standard skin data based on the target bone data and an association between standard bone data and the standard skin data in a standard virtual face model, to generate target skin data; and generating the target virtual face model based on the target bone data and the target skin data.
In this implementation, through the target skin data and the association between the standard bone data and the standard skin data in the standard virtual face model, the obtained target virtual face model can better fit the target bone data to the target skin data, so that abnormal bulges or depressions caused by changes in the bone data rarely appear in the target virtual face model constructed from the target bone data.
In an optional implementation, the bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data and bone scaling data corresponding to each of a plurality of facial bones of the virtual face model; and the target bone data includes at least one of the following: target bone position data, target bone scaling data and target bone rotation data.
In this implementation, the bone data can more accurately characterize the data corresponding to each of the facial bones, and the target bone data allows the target virtual face model to be determined more accurately.
In an optional implementation, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models includes: performing interpolation processing on bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain target bone position data.
In an optional implementation, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models includes: performing interpolation processing on bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain target bone scaling data.
In an optional implementation, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models includes: converting bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and regularizing the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and performing interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain target bone rotation data.
In this implementation, the target bone data allows the virtual face model to be adjusted more precisely, making the bone details of the obtained target virtual face model finer and more similar to the bone details of the original face, so that the target virtual face model has a higher similarity to the original face.
In an optional implementation, generating the first real face model based on the target image includes: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image, to obtain the first real face model.
In this implementation, the first real face model obtained by performing three-dimensional face reconstruction on the original face can characterize the facial features of the original face in the target image more accurately and comprehensively.
In an optional implementation, for each of the plurality of second real face models, the second real face model is generated as follows: acquiring a plurality of reference images including reference faces; and for each of the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in the reference image, to obtain the second real face model corresponding to the reference image.
In this implementation, the plurality of reference images can cover a relatively wide range of facial shape features, so the second real face models obtained by three-dimensional face reconstruction based on each of the plurality of reference images can likewise cover a relatively wide range of facial shape features.
In an optional implementation, performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models includes: performing least squares processing on the plurality of second real face models and the first real face model, to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
In this implementation, the fitting coefficients can accurately characterize how the plurality of second real face models fit the first real face model.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for reconstructing a face, including:
a first generating module configured to generate a first real face model based on a target image;
a processing module configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models; and
a second generating module configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models.
In an optional implementation, when generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models, the second generating module is configured to: determine target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and bone data respectively corresponding to the plurality of virtual face models; and generate the target virtual face model based on the target bone data.
In an optional implementation, when generating the target virtual face model based on the target bone data, the second generating module is configured to: perform position transformation processing on standard skin data based on the target bone data and an association between standard bone data and the standard skin data in a standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target bone data and the target skin data.
In an optional implementation, the bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data and bone scaling data corresponding to each of a plurality of facial bones of the virtual face model; and the target bone data includes at least one of the following: target bone position data, target bone scaling data and target bone rotation data.
In an optional implementation, when determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models, the second generating module is configured to: perform interpolation processing on bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.
In an optional implementation, when determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models, the second generating module is configured to: perform interpolation processing on bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.
In an optional implementation, when determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the bone data respectively corresponding to the plurality of virtual face models, the second generating module is configured to: convert bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and regularize the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.
In an optional implementation, when generating the first real face model based on the target image, the first generating module is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image, to obtain the first real face model.
In an optional implementation, for each of the plurality of second real face models, the processing module generates the second real face model as follows: acquiring a plurality of reference images including reference faces; and for each of the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in the reference image, to obtain the second real face model corresponding to the reference image.
In an optional implementation, when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, the processing module is configured to: perform least squares processing on the plurality of second real face models and the first real face model, to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
In a third aspect, an optional implementation of the present disclosure further provides a computer device, including a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
In a fourth aspect, an optional implementation of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
For descriptions of the effects of the above apparatus for reconstructing a face, computer device, and computer-readable storage medium, refer to the description of the above method for reconstructing a face, which is not repeated here.
To make the above purposes, features and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a method for reconstructing a face provided by an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a method for generating a target virtual face model corresponding to a target image provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a specific method for generating a target virtual face model corresponding to a first real face model based on target bone data provided by an embodiment of the present disclosure;
FIG. 4 shows examples of several faces and face models involved in the method for reconstructing a face provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of an apparatus for reconstructing a face provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed description
To make the purposes, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and shown here can be arranged and designed in various configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
Research has found that a three-dimensional virtual face model can be built from a real face or according to personal preference by face reconstruction. When a face is reconstructed from the real face in a portrait image, feature extraction is usually performed first on the real face in the portrait image to obtain facial contour features; the facial contour features are then matched with features in a pre-generated virtual three-dimensional model, and, based on the matching result, fused with the virtual three-dimensional model to obtain a three-dimensional virtual face model corresponding to the real face in the portrait image. Because the accuracy of matching the facial contour features with the features of the pre-generated virtual three-dimensional model is low, the matching error between the virtual three-dimensional model and the facial contour features is large, which easily leads to a low similarity between the three-dimensional virtual face model obtained by fusing the facial contour features with the virtual three-dimensional face model according to the matching result and the face in the portrait image.
In view of the defects of the above solutions, embodiments of the present disclosure provide a method for reconstructing a face that can generate a target virtual face model with a preset style, the target virtual face model having a high similarity to the real face.
To facilitate understanding of this embodiment, the method for reconstructing a face disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example including a terminal device, a server or another processing device; the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the method for reconstructing a face can be implemented by a processor calling computer-readable instructions stored in a memory.
The method for reconstructing a face provided by the embodiments of the present disclosure is described below.
FIG. 1 is a flowchart of a method for reconstructing a face provided by an embodiment of the present disclosure. As shown in FIG. 1, the method includes steps S101 to S103:
S101: generating a first real face model based on a target image;
S102: performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients respectively corresponding to the plurality of second real face models;
S103: generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models.
In the embodiments of the present disclosure, the fitting coefficients respectively corresponding to the second real face models are obtained in the process of fitting the first real face model with the plurality of pre-generated real face models, and serve as a medium to establish an association between the plurality of second real face models and the first real face model; then, using the fitting coefficients and the virtual face models with a preset style respectively corresponding to the plurality of second real face models, the target virtual face model corresponding to the target image is generated. In this way, the target virtual face model determined based on the fitting coefficients and the virtual face models has both the preset style and the features of the original face corresponding to the first real face model, that is, the generated target virtual face model has a high similarity to the original face corresponding to the first real face model.
Steps S101 to S103 are described in detail below.
Regarding step S101, the target image is, for example, an acquired image including a face, e.g., an image including a face captured when photographing an object with a photographing device such as a camera, and any face included in the image can serve as the original face.
When the method for reconstructing a face provided by the embodiments of the present disclosure is applied to different scenarios, the way the target image is acquired also differs.
For example, when the method for reconstructing a face is applied to a game, an image containing the game player's face can be acquired by an image acquisition device installed in the game device, or an image containing the game player's face can be selected from the album of the game device, and the acquired image containing the game player's face is taken as the target image.
As another example, when the method for reconstructing a face is applied to a terminal device such as a mobile phone, an image including the user's face can be captured by the camera of the terminal device, or an image containing the user's face can be selected from the album of the terminal device, or an image containing the user's face can be received from another application installed in the terminal device.
As another example, when the method for reconstructing a face is applied to a live-streaming scenario, a video frame image containing a face can be acquired from the multiple video frames included in the video stream of the live-streaming device, and the video frame image containing a face is taken as the target image. Here, there may be, for example, multiple frames of target images; the multiple frames of target images may, for example, be obtained by sampling the video stream.
When generating the first real face model based on the target image, for example, the following manner may be adopted: acquiring a target image containing the original face; and performing three-dimensional face reconstruction on the original face included in the target image, to obtain the first real face model.
Here, when performing three-dimensional face reconstruction on the original face included in the target image, for example, a 3D morphable model (3 Dimensions Morphable Models, 3DMM) may be used to obtain the first real face model corresponding to the original face. The first real face model includes, for example, position information of each of multiple key points of the original face in the target image in a preset camera coordinate system.
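The core of a 3DMM-style reconstruction is a linear shape model: a face shape is the mean shape plus a weighted combination of shape basis components. The toy illustration below uses random placeholder data in place of a real 3DMM basis:

```python
import numpy as np

def reconstruct_shape(mean_shape, shape_basis, coefficients):
    """3DMM-style linear shape model: S = mean + basis @ coefficients.

    mean_shape:   (3K,) mean face with K key points, flattened (x, y, z)
    shape_basis:  (3K, P) principal shape components
    coefficients: (P,)  per-image shape coefficients (e.g. regressed by a network)
    """
    return np.asarray(mean_shape) + np.asarray(shape_basis) @ np.asarray(coefficients)

rng = np.random.default_rng(1)
mean = rng.normal(size=3 * 68)         # 68 key points, flattened
basis = rng.normal(size=(3 * 68, 10))  # 10 shape components
shape = reconstruct_shape(mean, basis, np.zeros(10))  # zero coefficients -> mean face
```

With all coefficients set to zero the model simply returns the mean face; a reconstruction network estimates the coefficients that best explain the key points of the face in the image.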
Regarding step S102, the second real face models are generated based on reference images including reference faces. The reference faces in different reference images may differ; for example, multiple people differing in at least one of gender, age, skin color and body shape may be selected, a face image acquired for each of them, and the acquired face images taken as reference images. In this way, the plurality of second real face models obtained based on the multiple reference images can cover a relatively wide range of facial shape features.
The reference faces include, for example, N faces corresponding to N different individuals (N being an integer greater than 1). When acquiring multiple reference images including reference faces, for example, the N different individuals may be photographed separately to obtain N photos respectively corresponding to the N different individuals, each photo corresponding to one reference face, and the N photos are taken as reference images; alternatively, N reference images may be determined from multiple pre-captured images including different faces.
Exemplarily, for each of the plurality of second real face models, the method for generating the second real face model includes: acquiring multiple reference images including reference faces; and for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image, to obtain the second real face model corresponding to that reference image.
The method of performing three-dimensional face reconstruction on a reference face is similar to the method of performing three-dimensional face reconstruction on the original face described above and is not repeated here. The obtained second real face model includes position information of each of multiple key points of the reference face in the reference image in the preset camera coordinate system. Here, the coordinate system of the second real face model and the coordinate system of the first real face model may be the same coordinate system.
When performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, for example, the following manner may be adopted: performing least squares processing on the plurality of second real face models and the first real face model, to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
Exemplarily, when N second real face models are pre-generated, the model data corresponding to the first real face model can be denoted D a , and the model data corresponding to the second real face models D bi (i∈[1,N]), where D bi denotes the i-th of the N second real face models.
Performing least squares processing on each of D b1 to D bN with D a yields N fitting values, denoted α i (i∈[1,N]), where α i characterizes the fitting value corresponding to the i-th second real face model. The N fitting values determine the fitting coefficient Alpha, which can, for example, be expressed as a coefficient matrix, i.e.,
Alpha = [α 1 , α 2 , …, α N ].
Here, in the process of fitting the first real face model with the plurality of second real face models, the data obtained by weighted summation of the plurality of second real face models with the fitting coefficients should be as close as possible to the data of the first real face model.
The fitting coefficients can also be regarded as the expression coefficients of each second real face model when the first real face model is expressed by the plurality of second real face models. That is, using the fitting values respectively corresponding to the second real face models in the expression coefficients, the second real face models can be transformed and fitted toward the first real face model.
针对上述步骤S103,预设风格例如可以为卡通风格、古代风格或抽象风格等,可以根据实际的需要进行具体地设定。示例性地,针对预设风格为卡通风格的情况,具有预设风格的虚拟人脸模型例如可以为具有某种卡通风格的虚拟人脸模型。
其中,获取分别与多个第二真实人脸模型对应的具有预设风格的多个虚拟人脸模型的方法例如包括下述(a1)和(a2)中至少一种。
(a1)以获取一个第二真实人脸模型对应的虚拟人脸模型为例,可以基于参考图像设计制作具有参考人脸特征的、且具有预设风格的虚拟人脸图像,并对虚拟人脸图像中的虚拟人脸进行三维建模,得到虚拟人脸图像中虚拟人脸模型的骨骼数据以及蒙皮数据。
其中,骨骼数据包括为虚拟人脸预设的多个骨骼在预设坐标系中的骨骼旋转数据、骨骼缩放数据以及骨骼位置数据。此处,多个骨骼例如可以进行多层级的划分;例如包括根(root)骨骼、五官骨骼和五官细节骨骼;其中五官骨骼可以包括眉骨骼、鼻骨骼、颧骨骨骼、下颌骨骼和嘴骨骼等;五官细节骨骼例如又可以将不同的五官骨骼进行进一步的详细划分。可以根据不同风格的虚拟图像需求进行具体地设定,在此不做限定。
蒙皮数据包括虚拟人脸的表面中多个位置点在预设坐标系中的位置信息以及每个位置点与多个骨骼中至少一个骨骼的关联关系信息。
将对虚拟人脸图像中的虚拟人脸进行三维建模得到的虚拟模型作为第二真实人脸模型对应的虚拟人脸模型。
(a2)预先生成具有预设风格的标准虚拟人脸模型。该标准虚拟人脸模型同样包括标准骨骼数据、标准蒙皮数据以及标准骨骼数据与标准蒙皮数据之间的关联关系。基于参考人脸,对标准虚拟人脸模型中的标准骨骼数据做设计修改,以使设计修改后的标准虚拟人脸模型在具有预设风格的同时,还包括了参考图像中参考人脸的特征;然后,基于标准骨骼数据与标准蒙皮数据之间的关联关系,对标准蒙皮数据进行调整,同时还可以为标准蒙皮数据添加参考人脸所具有的特征信息,基于修改后的标准骨骼数据和修改后的标准蒙皮数据,生成第二真实人脸模型对应的虚拟人脸模型。
此处,虚拟人脸模型的具体数据表示可以参见上述(a1)中所描述的,在此不再赘述。
参见图2所示,本公开实施例还提供了一种基于多个第二真实人脸模型分别对应的拟合系数、以及与多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型的骨骼数据,生成与目标图像对应的目标虚拟人脸模型的方法,包括:
S201:基于多个第二真实人脸模型分别对应的拟合系数、及多个虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据。
其中,骨骼数据包括以下数据中的至少一种数据:虚拟人脸模型的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据和骨骼缩放数据。
在一种可能的实施方式中，可以基于多个第二真实人脸模型分别对应的拟合系数，对多个虚拟人脸模型分别对应的骨骼数据进行插值处理，得到目标骨骼数据。得到的目标骨骼数据包括以下至少一种：目标骨骼位置数据、目标骨骼缩放数据、以及目标骨骼旋转数据。
其中,目标骨骼位置数据例如包括骨骼的中心点在模型坐标系中的三维坐标值;目标骨骼缩放数据例如包括目标骨骼相对于标准虚拟人脸模型中骨骼的缩放比例;目标骨骼旋转数据例如包括骨骼的轴线在模型坐标系中的旋转角度。
示例性地,基于多个第二真实人脸模型分别对应的拟合系数、及多个虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据,例如可以采用下述(b1)至(b3)中至少一项来实现。
(b1)基于多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理,得到目标骨骼位置数据。
(b2)基于多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理,得到目标骨骼缩放数据。
(b3)将所述多个虚拟人脸模型分别对应的骨骼旋转数据转换为四元数(Quaternion)数据,并对所述多个虚拟人脸模型分别对应的四元数进行正则化处理,得到正则化四元数;基于多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的正则化四元数进行插值处理,得到目标骨骼旋转数据。
在具体实施中,针对上述方法(b1)以及(b2),在获取骨骼位置数据以及骨骼缩放数据的情况下,还包括基于多个第二真实人脸模型确定各层级骨骼以及各层级骨骼对应的局部坐标系。其中,在对人脸模型进行骨骼层级分层的情况下,例如可以直接按照生物学骨骼分层方法确定骨骼层级,也可以根据人脸重建的要求确定骨骼层级,具体的分层方法可以根据实际情况确定,在此不再赘述。
在确定各个骨骼层级后，可基于各个骨骼层级建立每个骨骼层级对应的骨骼坐标系。示例性地，可以将各层级骨骼表示为Bone_i。
此时，骨骼位置数据可以包括虚拟人脸模型中的各层级骨骼Bone_i在对应的骨骼坐标系下的三维坐标值；骨骼缩放数据可以包括虚拟人脸模型中的各层级骨骼Bone_i在对应的骨骼坐标系下用于表征骨骼缩放程度的百分比，例如为80%、90%或100%。
在一种可能的实施方式中，将第i个虚拟人脸模型对应的骨骼位置数据表示为Pos_i，将第i个虚拟人脸模型对应的骨骼缩放数据表示为Scaling_i。此时，骨骼位置数据Pos_i包含多个层级骨骼分别对应的骨骼位置数据，且骨骼缩放数据Scaling_i包含多个层级骨骼分别对应的骨骼缩放数据。
将第i个第二真实人脸模型所对应的拟合系数记为α_i。基于M个第二真实人脸模型对应的拟合系数，对M个虚拟人脸模型对应的骨骼位置数据Pos_i进行插值处理，得到目标骨骼位置数据。
示例性地，可以将拟合系数作为各个虚拟人脸模型对应的权重，对虚拟人脸模型对应的骨骼位置数据Pos_i进行加权求和处理，实现插值处理的过程。此时，目标骨骼位置数据Pos_new满足下述公式(1)：
Pos_new = ∑_{i=1}^{M} α_i·Pos_i        (1)
类似地，将第i个虚拟人脸模型对应的骨骼缩放数据表示为Scaling_i。基于M个第二真实人脸模型对应的拟合系数，对M个虚拟人脸模型对应的骨骼缩放数据进行插值处理，得到目标骨骼缩放数据：可以将M个第二真实人脸模型分别对应的拟合系数作为对应虚拟人脸模型的权重，对M个虚拟人脸模型分别对应的骨骼缩放数据进行加权求和处理，以实现插值处理。在该种情况下，目标骨骼缩放数据Scaling_new满足下述公式(2)：
Scaling_new = ∑_{i=1}^{M} α_i·Scaling_i        (2)
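公式(1)与公式(2)所示的“以拟合系数为权重的加权求和插值”可用如下Python草图示意，其中拟合系数与骨骼数据均为示意假设的数值：

```python
# 示意：以拟合系数 alpha 为权重，对 M 个虚拟人脸模型中某骨骼的
# 骨骼位置数据 Pos_i 与骨骼缩放数据 Scaling_i 加权求和，
# 得到目标骨骼位置数据与目标骨骼缩放数据。

def weighted_sum(alpha, data_list):
    # data_list[i] 为第 i 个虚拟人脸模型中该骨骼的数据向量
    dim = len(data_list[0])
    return [sum(a * d[k] for a, d in zip(alpha, data_list)) for k in range(dim)]

alpha = [0.5, 0.3, 0.2]                          # M=3 个模型的拟合系数（示意）
pos = [[0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 2.0, 1.0]]
scaling = [[1.0, 1.0, 1.0], [0.8, 0.8, 0.8], [1.2, 1.2, 1.2]]

pos_new = weighted_sum(alpha, pos)               # 对应公式(1)
scaling_new = weighted_sum(alpha, scaling)       # 对应公式(2)
```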
针对上述方法(b3)，骨骼旋转数据可以包括虚拟人脸模型中的各个骨骼在对应的骨骼坐标系下用于表征骨骼旋转坐标变换程度的向量值，例如包含旋转轴和旋转角。在一种可能的实施方式中，将第i个虚拟人脸模型对应的骨骼旋转数据表示为Trans_i。由于骨骼旋转数据所包含的旋转角存在万向节死锁的问题，故将骨骼旋转数据转换为四元数，并且对四元数进行正则化，得到正则化四元数数据，表示为Trans'_i，以避免直接对四元数进行加权求和处理时产生异常的旋转结果。
在基于M个第二真实人脸模型对应的拟合系数，对M个虚拟人脸模型对应的正则化四元数Trans'_i进行插值处理时，也可以将M个第二真实人脸模型对应的拟合系数作为权重，对M个虚拟人脸模型对应的正则化四元数进行加权求和；在该种情况下，目标骨骼旋转数据Trans_new满足下述公式(3)：
Trans_new = ∑_{i=1}^{M} α_i·Trans'_i        (3)
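方法(b3)中“四元数正则化后再按拟合系数加权”的过程可用如下Python草图示意。其中四元数取(w, x, y, z)形式，对加权结果再次归一化以得到单位四元数属于本示例的补充假设：

```python
import math

# 示意：将各虚拟人脸模型的骨骼旋转四元数先正则化（归一化为单位
# 四元数，即 Trans'_i），再以拟合系数加权求和，并将结果归一化，
# 得到目标骨骼旋转数据 Trans_new。

def normalize(q):
    n = math.sqrt(sum(x * x for x in q))
    return [x / n for x in q]

def blend_quaternions(alpha, quats):
    quats = [normalize(q) for q in quats]      # 正则化四元数 Trans'_i
    blended = [sum(a * q[k] for a, q in zip(alpha, quats)) for k in range(4)]
    return normalize(blended)                  # 归一化后作为 Trans_new

alpha = [0.5, 0.5]
q_identity = [1.0, 0.0, 0.0, 0.0]                                 # 无旋转
q_z90 = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]  # 绕z轴转90°
trans_new = blend_quaternions(alpha, [q_identity, q_z90])
```

对于两个相近的单位四元数，这种“归一化加权平均”是球面插值的常见近似。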
基于上述(b1)、(b2)以及(b3)中得到的目标骨骼位置数据Pos_new、目标骨骼缩放数据Scaling_new以及目标骨骼旋转数据Trans_new，即可确定目标骨骼数据，其表示为Bone_new。示例性地，可以将该目标骨骼数据以向量形式表示为(Pos_new, Scaling_new, Trans_new)。
承接上述S201,生成目标图像对应的目标虚拟人脸模型的方法还包括:
S202:基于目标骨骼数据,生成目标虚拟人脸模型。
参见图3所示,为本公开实施例提供的一种基于目标骨骼数据生成与第一真实人脸模型对应的目标虚拟人脸模型的具体方法,包括:
S301:基于目标骨骼数据、以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系,对标准蒙皮数据进行位置变换处理,生成目标蒙皮数据;
S302:基于目标骨骼数据以及目标蒙皮数据,生成目标虚拟人脸模型。
其中,标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系例如为各层级骨骼对应的标准骨骼数据与标准蒙皮数据之间的关联关系。基于此关联关系,可以将蒙皮绑定在虚拟人脸模型中的骨骼上。
利用目标骨骼数据以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系,可以对多个层级骨骼对应位置的蒙皮数据进行位置变换处理,以使生成的目标蒙皮数据中对应层级骨骼的位置可以与对应的目标骨骼数据中位置相符。
此处，标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系包括：标准蒙皮数据中的各个位置点在模型坐标系中的坐标值，与骨骼的骨骼位置数据、骨骼缩放数据以及骨骼旋转数据中至少一项之间的关联关系。
在利用目标骨骼数据以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系,对多个层级骨骼对应位置的蒙皮数据进行位置变换处理时,在目标骨骼数据确定的情况下,也即在目标骨骼的目标骨骼位置数据、目标骨骼缩放数据以及目标骨骼旋转数据中至少一项确定的情况下,可以利用上述关联关系,确定在骨骼从标准骨骼数据变换至目标骨骼数据后,标准蒙皮数据中的各个位置点在模型坐标系下的新的坐标值,从而基于标准蒙皮数据中的各个位置点在模型坐标系下的新的坐标值,得到目标虚拟人脸模型的目标蒙皮数据。
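上述“骨骼由标准数据变换至目标数据后，依据关联关系重新计算蒙皮位置点”的思路，可用一个极简的线性混合蒙皮（Linear Blend Skinning）草图示意。此处仅考虑骨骼平移与各向同性缩放，所有变量名与数据均为示意假设，并非本公开实施例的实际实现：

```python
# 示意：每个蒙皮位置点按其与各骨骼的关联权重，跟随对应骨骼
# 从标准位置变换到目标位置，加权求和得到新的蒙皮位置点。

def transform_point(point, bone_old, bone_new, scale):
    # 位置点相对旧骨骼中心的偏移，经缩放后挂到新骨骼中心上
    return [bone_new[k] + scale * (point[k] - bone_old[k]) for k in range(3)]

def skin_points(points, weights, bones_old, bones_new, scales):
    # weights[i][j]: 第 i 个蒙皮位置点与第 j 块骨骼的关联权重
    new_points = []
    for p, w in zip(points, weights):
        acc = [0.0, 0.0, 0.0]
        for j, wj in enumerate(w):
            t = transform_point(p, bones_old[j], bones_new[j], scales[j])
            acc = [acc[k] + wj * t[k] for k in range(3)]
        new_points.append(acc)
    return new_points

points = [[1.0, 0.0, 0.0]]
weights = [[1.0, 0.0]]                             # 该点只绑定第 0 块骨骼
bones_old = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
bones_new = [[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]]     # 第 0 块骨骼平移了 0.5
scales = [1.0, 1.0]
moved = skin_points(points, weights, bones_old, bones_new, scales)
```

实际的蒙皮计算通常还包含骨骼旋转与层级变换，此处为便于说明而省略。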
利用目标骨骼数据,可以确定用于构建目标虚拟人脸模型的各层级骨骼;且利用目标蒙皮数据,可以确定将模型绑定至骨骼上的蒙皮,从而构成目标虚拟人脸模型。
其中,确定目标虚拟人脸模型的方式可以为基于目标骨骼数据以及目标蒙皮数据直接建立目标虚拟人脸模型;或者,也可以利用各层级骨骼对应的目标骨骼数据替换第一真实人脸模型中对应的各层级骨骼数据,再利用目标蒙皮数据建立目标虚拟人脸模型。具体建立目标虚拟人脸模型的方法可以按照实际情况确定,在此不再赘述。
本公开实施例还提供了利用本公开实施例提供的重建人脸的方法，获取目标图像Pic_A中的原始人脸A对应的目标虚拟人脸模型Mod_Aim的具体过程的说明。
确定目标虚拟人脸模型Mod_Aim包括下述步骤(c1)至(c5)：
(c1)准备素材;其中,包括:准备标准虚拟人脸模型的素材;以及准备虚拟图片的素材。
在准备标准虚拟人脸模型的素材时，以选取卡通风格作为预设风格为例，首先设置一个卡通风格的标准虚拟人脸模型Mod_Base。
在准备虚拟图片的素材时，收集24张虚拟图片Pic_1~Pic_24；收集的24张虚拟图片中的虚拟人脸B_1~B_24对应的男生、女生的数量均衡，并且尽可能包含较广泛的五官特征分布。
(c2)人脸模型重建；其中，包括：利用目标图像Pic_A中的原始人脸A生成第一真实人脸模型Mod_fst；以及利用虚拟图片中的虚拟人脸B_1~B_24生成第二真实人脸模型Mod_snd-1~Mod_snd-24。
在基于原始人脸A生成第一真实人脸模型Mod_fst时，首先对目标图像中的人脸进行转正剪裁，然后利用预先训练好的RGB重建神经网络，生成原始人脸A对应的第一真实人脸模型Mod_fst。同样地，利用预先训练好的RGB重建神经网络，可以确定虚拟人脸B_1~B_24分别对应的第二真实人脸模型Mod_snd-1~Mod_snd-24。
在确定第二真实人脸模型Mod_snd-1~Mod_snd-24后，还包括：按照预设风格，通过人工调整的方式，确定第二真实人脸模型Mod_snd-1~Mod_snd-24分别对应的具有预设风格的虚拟人脸模型Mod_fic-1~Mod_fic-24。
(c3)拟合处理；其中，包括：利用多个第二真实人脸模型对第一真实人脸模型进行拟合处理，得到多个第二真实人脸模型分别对应的拟合系数alpha=[α_snd-1, α_snd-2, …, α_snd-24]。
在利用多个第二真实人脸模型对第一真实人脸模型进行拟合时，选取最小二乘法进行拟合，得到24维系数alpha。
(c4)确定目标骨骼数据;其中,在确定目标骨骼数据时,还包括下述步骤(c4-1)以及(c4-2)。
(c4-1)读取骨骼数据；其中，骨骼数据包括：在各层级骨骼Bone_i下，具有预设风格的虚拟人脸模型Mod_fic-1~Mod_fic-24分别对应的骨骼位置数据Pos_i、骨骼缩放数据Scaling_i以及骨骼旋转数据Trans_i。
(c4-2)利用拟合系数alpha对预设风格的虚拟人脸模型Mod_fic-1~Mod_fic-24对应的骨骼数据进行插值处理，生成目标骨骼数据Bone_new，该目标骨骼数据包括目标骨骼位置数据Pos_new、目标骨骼缩放数据Scaling_new以及目标骨骼旋转数据Trans_new。
(c5)生成目标虚拟人脸模型。
利用目标骨骼数据确定用于构建目标虚拟人脸模型的各层级骨骼，将目标骨骼数据替换至标准虚拟人脸模型Mod_Base中；再利用预先确定的标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系得到目标蒙皮数据，确定将模型绑定至骨骼上的蒙皮，从而生成与第一真实人脸模型对应的目标虚拟人脸模型。
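上述(c1)至(c5)的整体流程可以用如下端到端的Python草图示意。其中人脸重建、蒙皮生成等环节省略，fit_alpha在各参考数据相互正交的前提下用逐模型投影近似最小二乘（正交时两者等价），所有函数与数据均为示意假设，并非本公开实施例的实际实现：

```python
# 示意：先用最小二乘得到拟合系数 alpha，再以其为权重对各虚拟
# 人脸模型的骨骼数据插值，得到目标骨骼数据。

def fit_alpha(d_a, d_bs):
    # 逐模型投影：alpha_i = <d_a, d_bi> / <d_bi, d_bi>（正交时即最小二乘解）
    return [sum(a * b for a, b in zip(d_a, d_b)) /
            sum(b * b for b in d_b) for d_b in d_bs]

def blend(alpha, data):
    # 以拟合系数为权重，对各模型的骨骼数据向量加权求和
    dim = len(data[0])
    return [sum(a * d[k] for a, d in zip(alpha, data)) for k in range(dim)]

d_a = [1.0, 2.0]                                 # 第一真实人脸模型数据（示意）
d_bs = [[1.0, 0.0], [0.0, 1.0]]                  # 两个正交的第二真实人脸模型数据
alpha = fit_alpha(d_a, d_bs)                     # 拟合系数
bone_pos = [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]    # 各虚拟人脸模型的骨骼位置
pos_new = blend(alpha, bone_pos)                 # 目标骨骼位置数据
```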
参见图4所示,为本公开实施例提供的在上述具体示例包含的多个过程中使用的具体数据的示例。其中,图4中a表示目标图像,41表示原始人脸A;图4中b表示卡通风格的标准虚拟人脸模型的示意图;图4中c表示生成的与第一真实人脸模型对应的目标虚拟人脸模型的示意图。
此处,值得注意的是,上述步骤(c1)至(c5)仅是重建人脸的方法的一个具体示例,不对本公开实施例提供的重建人脸的方法造成限定。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与重建人脸的方法对应的重建人脸的装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述重建人脸的方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
参照图5所示,本公开实施例提供一种重建人脸的装置,所述装置包括第一生成模块51、处理模块52以及第二生成模块53。
第一生成模块51,用于基于目标图像生成第一真实人脸模型。
处理模块52,用于利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数。
第二生成模块53,用于基于所述多个第二真实人脸模型分别对应的拟合系数、以及与所述多个第二真实人脸模型分别对应的具有预设风格的多个虚拟人脸模型,生成与所述目标图像对应的目标虚拟人脸模型。
一种可选的实施方式中,所述第二生成模块53在基于所述多个第二真实人脸模型分别对应的拟合系数、以及与所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成与所述目标图像对应的目标虚拟人脸模型时,用于:基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据;基于所述目标骨骼数据,生成所述目标虚拟人脸模型。
一种可选的实施方式中，所述第二生成模块53在基于所述目标骨骼数据，生成所述目标虚拟人脸模型时，用于：基于所述目标骨骼数据、以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系，对标准蒙皮数据进行位置变换处理，生成目标蒙皮数据；基于所述目标骨骼数据、以及所述目标蒙皮数据，生成所述目标虚拟人脸模型。
一种可选的实施方式中,所述虚拟人脸模型的骨骼数据包括以下至少一种数据:所述虚拟人脸模型的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、骨骼位置数据和骨骼缩放数据;所述目标骨骼数据包括以下至少一种数据:目标骨骼位置数据、目标骨骼缩放数据以及目标骨骼旋转数据。
一种可选的实施方式中,所述目标骨骼数据包括所述目标骨骼位置数据,所述第二生成模块53在基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据时,用于:基于所述多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理,得到所述目标骨骼位置数据。
一种可选的实施方式中,所述目标骨骼数据包括所述目标骨骼缩放数据,所述第二生成模块53在基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据时,用于:基于所述多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理,得到所述目标骨骼缩放数据。
一种可选的实施方式中,所述目标骨骼数据包括所述目标骨骼旋转数据,所述第二生成模块53在基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据时,用于:将所述多个虚拟人脸模型分别对应的骨骼旋转数据转换为四元数,并对所述多个虚拟人脸模型分别对应的四元数进行正则化处理,得到正则化四元数;基于所述多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的正则化四元数进行插值处理,得到所述目标骨骼旋转数据。
一种可选的实施方式中,所述第一生成模块51在基于目标图像生成第一真实人脸模型时,用于:获取包括原始人脸的目标图像;对所述目标图像中包括的所述原始人脸进行三维人脸重建,得到所述第一真实人脸模型。
一种可选的实施方式中,所述处理模块52针对所述多个第二真实人脸模型中的每个第二真实人脸模型,根据以下方式生成所述第二真实人脸模型:获取包括参考人脸的多张参考图像;针对所述多张参考图像中的每张参考图像,对所述参考图像中包括的参考人脸进行三维人脸重建,得到所述参考图像对应的所述第二真实人脸模型。
一种可选的实施方式中,所述处理模块52利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数时,用于:对多个所述第二真实人脸模型以及所述第一真实人脸模型进行最小二乘处理,得到所述多个第二真实人脸模型分别对应的拟合系数。
关于装置中的各模块的处理流程以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。
如图6所示,本公开实施例还提供了一种计算机设备,包括处理器61和存储器62。
存储器62存储有处理器61可执行的机器可读指令,处理器61用于执行存储器62中存储的机器可读指令,该机器可读指令被处理器61执行时,处理器61执行下述步骤:基于目标图像生成第一真实人脸模型;利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数;基于所述多个第二真实人脸模型分别对应的拟合系数、以及与所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成与所述目标图像对应的目标虚拟人脸模型。
上述存储器62包括内存621和外部存储器622;这里的内存621也称内存储器,用于暂时存放处理器61中的运算数据,以及与硬盘等外部存储器622交换的数据,处理器61通过内存621与外部存储器622进行数据交换。
上述指令的具体执行过程可以参考本公开实施例中所述的重建人脸的方法,此处不再赘述。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的重建人脸的方法。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例还提供一种计算机程序产品,该计算机程序产品承载有程序代码,所述程序代码包括的指令可用于执行上述方法实施例中所述的重建人脸的方法,具体可参见上述方法实施例,在此不再赘述。
其中，上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中，所述计算机程序产品具体体现为计算机存储介质，在另一个可选实施例中，计算机程序产品具体体现为软件产品，例如软件开发包（Software Development Kit，SDK）等等。
所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，上述描述的系统和装置的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。在本公开所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，又例如，多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是：以上所述实施例，仅为本公开的具体实施方式，用以说明本公开的技术方案，而非对其限制，本公开的保护范围并不局限于此，尽管参照前述实施例对本公开进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本公开揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围，都应涵盖在本公开的保护范围之内。因此，本公开的保护范围应以权利要求的保护范围为准。

Claims (13)

  1. 一种重建人脸的方法,包括:
    基于目标图像生成第一真实人脸模型;
    利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数;
    基于所述多个第二真实人脸模型分别对应的拟合系数、以及与所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成与所述目标图像对应的目标虚拟人脸模型。
  2. 根据权利要求1所述的重建人脸的方法,其特征在于,所述基于所述多个第二真实人脸模型分别对应的拟合系数、以及与所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成与所述目标图像对应的目标虚拟人脸模型,包括:
    基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据;
    基于所述目标骨骼数据,生成所述目标虚拟人脸模型。
  3. 根据权利要求2所述的重建人脸的方法,其特征在于,所述基于所述目标骨骼数据,生成所述目标虚拟人脸模型,包括:
    基于所述目标骨骼数据以及标准虚拟人脸模型中标准骨骼数据与标准蒙皮数据之间的关联关系,对标准蒙皮数据进行位置变换处理,生成目标蒙皮数据;
    基于所述目标骨骼数据以及所述目标蒙皮数据,生成所述目标虚拟人脸模型。
  4. 根据权利要求2或3所述的重建人脸的方法,其特征在于,
    所述虚拟人脸模型对应的骨骼数据包括以下至少一种数据:
    所述虚拟人脸模型的多块人脸骨骼中每块人脸骨骼对应的骨骼旋转数据、
    骨骼位置数据、和
    骨骼缩放数据;
    所述目标骨骼数据包括以下至少一种数据:
    目标骨骼位置数据、
    目标骨骼缩放数据、以及
    目标骨骼旋转数据。
  5. 根据权利要求4所述的重建人脸的方法,其特征在于,所述基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据,包括:
    基于所述多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼位置数据进行插值处理,得到所述目标骨骼位置数据。
  6. 根据权利要求4或5所述的重建人脸的方法,其特征在于,所述基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据,包括:
    基于所述多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的骨骼缩放数据进行插值处理,得到所述目标骨骼缩放数据。
  7. 根据权利要求4至6任一项所述的重建人脸的方法,其特征在于,所述基于所述多个第二真实人脸模型分别对应的拟合系数、及多个所述虚拟人脸模型分别对应的骨骼数据,确定目标骨骼数据,包括:
    将所述多个虚拟人脸模型分别对应的骨骼旋转数据转换为四元数,并
    对所述多个虚拟人脸模型分别对应的四元数进行正则化处理,得到正则化四元数;
    基于所述多个第二真实人脸模型分别对应的拟合系数,对多个虚拟人脸模型分别对应的正则化四元数进行插值处理,得到所述目标骨骼旋转数据。
  8. 根据权利要求1至7任一所述的重建人脸的方法,其特征在于,所述基于目标图像生成第一真实人脸模型,包括:
    获取包括原始人脸的目标图像;
    对所述目标图像中包括的所述原始人脸进行三维人脸重建,得到所述第一真实人脸模型。
  9. 根据权利要求1至8任一所述的重建人脸的方法,其特征在于,针对所述多个第二真实人脸模型中的每个第二真实人脸模型,根据以下方式生成所述第二真实人脸模型:
    获取包括参考人脸的多张参考图像;
    针对所述多张参考图像中的每张参考图像,对所述参考图像中包括的参考人脸进行三维人脸重建,得到所述参考图像对应的所述第二真实人脸模型。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数,包括:
    对多个所述第二真实人脸模型以及所述第一真实人脸模型进行最小二乘处理,得到所述多个第二真实人脸模型分别对应的拟合系数。
  11. 一种重建人脸的装置,包括:
    第一生成模块,用于基于目标图像生成第一真实人脸模型;
    处理模块,用于利用预先生成的多个第二真实人脸模型对所述第一真实人脸模型进行拟合处理,得到多个第二真实人脸模型分别对应的拟合系数;
    第二生成模块,用于基于所述多个第二真实人脸模型分别对应的拟合系数、以及与所述多个第二真实人脸模型分别对应的具有预设风格的虚拟人脸模型,生成与所述目标图像对应的目标虚拟人脸模型。
  12. 一种计算机设备,包括处理器和存储器,所述存储器存储有所述处理器可执行的机器可读指令,所述处理器用于执行所述存储器中存储的机器可读指令,所述机器可读指令被所述处理器执行时,所述处理器执行如权利要求1至10任一项所述的重建人脸的方法。
  13. 一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被计算机设备运行时,所述计算机设备执行如权利要求1至10任一项所述的重建人脸的方法。
PCT/CN2021/102404 2020-11-25 2021-06-25 重建人脸的方法、装置、计算机设备及存储介质 WO2022110790A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022519295A JP2023507862A (ja) 2020-11-25 2021-06-25 顔再構築方法、装置、コンピュータデバイス、及び記憶媒体
KR1020237021453A KR20230110607A (ko) 2020-11-25 2021-06-25 얼굴 재구성 방법, 장치, 컴퓨터 기기 및 저장 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011342169.7 2020-11-25
CN202011342169.7A CN112419485B (zh) 2020-11-25 2020-11-25 一种人脸重建方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022110790A1 true WO2022110790A1 (zh) 2022-06-02

Family

ID=74843538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102404 WO2022110790A1 (zh) 2020-11-25 2021-06-25 重建人脸的方法、装置、计算机设备及存储介质

Country Status (5)

Country Link
JP (1) JP2023507862A (zh)
KR (1) KR20230110607A (zh)
CN (1) CN112419485B (zh)
TW (1) TWI778723B (zh)
WO (1) WO2022110790A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419485B (zh) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质
CN112419454B (zh) * 2020-11-25 2023-11-28 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质
CN114078184B (zh) * 2021-11-11 2022-10-21 北京百度网讯科技有限公司 数据处理方法、装置、电子设备和介质
CN114529640B (zh) * 2022-02-17 2024-01-26 北京字跳网络技术有限公司 一种运动画面生成方法、装置、计算机设备和存储介质
CN115187822B (zh) * 2022-07-28 2023-06-30 广州方硅信息技术有限公司 人脸图像数据集分析方法、直播人脸图像处理方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140204089A1 (en) * 2013-01-18 2014-07-24 Electronics And Telecommunications Research Institute Method and apparatus for creating three-dimensional montage
CN109395390A (zh) * 2018-10-26 2019-03-01 网易(杭州)网络有限公司 游戏角色脸部模型的处理方法、装置、处理器及终端
CN110111417A (zh) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 三维局部人体模型的生成方法、装置及设备
CN111695471A (zh) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 虚拟形象生成方法、装置、设备以及存储介质
CN112419485A (zh) * 2020-11-25 2021-02-26 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6207210B2 (ja) * 2013-04-17 2017-10-04 キヤノン株式会社 情報処理装置およびその方法
CN104851123B (zh) * 2014-02-13 2018-02-06 北京师范大学 一种三维人脸变化模拟方法
CN104157010B (zh) * 2014-08-29 2017-04-12 厦门幻世网络科技有限公司 一种3d人脸重建的方法及其装置
US11127163B2 (en) * 2015-06-24 2021-09-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Skinned multi-infant linear body model
CN110135226B (zh) * 2018-02-09 2023-04-07 腾讯科技(深圳)有限公司 表情动画数据处理方法、装置、计算机设备和存储介质
CN109978989B (zh) * 2019-02-26 2023-08-01 腾讯科技(深圳)有限公司 三维人脸模型生成方法、装置、计算机设备及存储介质
CN110111247B (zh) * 2019-05-15 2022-06-24 浙江商汤科技开发有限公司 人脸变形处理方法、装置及设备
CN110400369A (zh) * 2019-06-21 2019-11-01 苏州狗尾草智能科技有限公司 一种人脸重建的方法、***平台及存储介质
CN110717977B (zh) * 2019-10-23 2023-09-26 网易(杭州)网络有限公司 游戏角色脸部处理的方法、装置、计算机设备及存储介质
CN111710035B (zh) * 2020-07-16 2023-11-07 腾讯科技(深圳)有限公司 人脸重建方法、装置、计算机设备及存储介质


Also Published As

Publication number Publication date
JP2023507862A (ja) 2023-02-28
CN112419485B (zh) 2023-11-24
TWI778723B (zh) 2022-09-21
CN112419485A (zh) 2021-02-26
KR20230110607A (ko) 2023-07-24
TW202221652A (zh) 2022-06-01

Similar Documents

Publication Publication Date Title
WO2022110791A1 (zh) 重建人脸的方法、装置、计算机设备及存储介质
WO2022110790A1 (zh) 重建人脸的方法、装置、计算机设备及存储介质
US10540817B2 (en) System and method for creating a full head 3D morphable model
WO2021253788A1 (zh) 一种人体三维模型构建方法及装置
CN110399849A (zh) 图像处理方法及装置、处理器、电子设备及存储介质
WO2022001236A1 (zh) 三维模型生成方法、装置、计算机设备及存储介质
JP2013524357A (ja) ビデオ・シーケンスに記録された現実エンティティのリアルタイムのクロッピングの方法
WO2013078404A1 (en) Perceptual rating of digital image retouching
TWI780919B (zh) 人臉影像的處理方法、裝置、電子設備及儲存媒體
WO2023077742A1 (zh) 视频处理方法及装置、神经网络的训练方法及装置
CN114333034A (zh) 人脸姿态估计方法、装置、电子设备及可读存储介质
WO2022110855A1 (zh) 人脸重建方法、装置、计算机设备及存储介质
CN114429518B (zh) 人脸模型重建方法、装置、设备和存储介质
CN108717730B (zh) 一种3d人物重建的方法及终端
JP7523530B2 (ja) 顔再構築方法、装置、コンピュータデバイス、及び記憶媒体
CN115393487A (zh) 一种虚拟角色模型处理方法、装置、电子设备及存储介质
CN114612614A (zh) 人体模型的重建方法、装置、计算机设备及存储介质
CN112308957B (zh) 一种基于深度学习的最佳胖瘦人脸肖像图像自动生成方法
CN114677476A (zh) 一种脸部处理方法、装置、计算机设备及存储介质
JP7525814B2 (ja) 顔再構成方法、装置、コンピュータ装置及び記憶媒体
WO2023005359A1 (zh) 图像处理方法和装置
US20240169634A1 (en) Stylized animatable representation
US11983819B2 (en) Methods and systems for deforming a 3D body model based on a 2D image of an adorned subject
Joglekar et al. Blending Motion Capture and 3D Human Reconstruction Techniques for Enhanced Character Animation
CN117994213A (zh) 一种基于三维人脸重建的斑秃评估方法、装置及存储介质

Legal Events

ENP (Entry into the national phase): Ref document number 2022519295; Country of ref document: JP; Kind code of ref document: A
121 (Ep: the epo has been informed by wipo that ep was designated in this application): Ref document number 21896298; Country of ref document: EP; Kind code of ref document: A1
ENP (Entry into the national phase): Ref document number 20237021453; Country of ref document: KR; Kind code of ref document: A
NENP (Non-entry into the national phase): Ref country code: DE
122 (Ep: pct application non-entry in european phase): Ref document number 21896298; Country of ref document: EP; Kind code of ref document: A1