WO2022110790A1 - Method, apparatus, computer device and storage medium for reconstructing a face - Google Patents
Method, apparatus, computer device and storage medium for reconstructing a face
- Publication number: WO2022110790A1 (PCT/CN2021/102404)
- Authority: WIPO (PCT)
- Prior art keywords: face, data, target, real, virtual
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content automatically by importing photos, e.g. of the player
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/69—Involving elements of the real world in the game world, e.g. measurement in live races, real video
- A63F2300/695—Imported photos, e.g. of the player
Definitions
- the present disclosure relates to the technical field of image processing, and in particular, to a method, an apparatus, a computer device and a storage medium for reconstructing a human face.
- a three-dimensional model of a virtual face can be established according to a real face or one's own preferences to realize face reconstruction, which has a wide range of applications in games, animation, and virtual social interaction.
- the player can use the face reconstruction system provided by the game program to generate a virtual face 3D model from the real face in an image the player provides, and then participate in the game with that model for a stronger sense of immersion.
- face contour features are usually extracted from face images and then matched and fused with a pre-generated virtual three-dimensional model.
- as a result, the generated virtual face 3D model matches the real face image poorly, and the similarity between them is low.
- the embodiments of the present disclosure provide at least a method, an apparatus, a computer device, and a storage medium for reconstructing a human face.
- an embodiment of the present disclosure provides a method for reconstructing a face, including: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively; and generating a target virtual face model corresponding to the target image based on the fitting coefficients and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
- the fitting coefficients serve as a medium that establishes an association between the plurality of second real face models and the first real face model. Since each second real face model corresponds to a virtual face model with a preset style, the target virtual face model determined based on the fitting coefficients has both the preset style and the features of the original face corresponding to the first real face model, so the generated target virtual face model has a high degree of similarity to that original face.
- generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and the virtual face models with preset styles corresponding to the plurality of second real face models respectively includes: determining target skeleton data based on the fitting coefficients and the skeleton data corresponding to the plurality of virtual face models respectively; and generating the target virtual face model based on the target skeleton data.
- generating the target virtual face model based on the target skeleton data includes: performing position transformation processing on the standard skin data based on the target skeleton data and the association between the standard bone data and the standard skin data in a standard virtual face model, to generate target skin data; and generating the target virtual face model based on the target bone data and the target skin data.
- in this way, the association between the standard bone data and the standard skin data in the standard virtual face model allows the target skin data to fit the target bone data more closely, so that the target virtual face model formed from the target bone data exhibits fewer abnormal bulges or depressions caused by changes in the bone data.
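As an illustration only, the position transformation of skin data driven by bone data can be sketched as a translation-only form of linear blend skinning, where each skin vertex moves by the weight-blended displacement of the bones it is associated with. The disclosure does not specify this exact transform; all names and numbers below are hypothetical.

```python
import numpy as np

def transform_skin(standard_verts, weights, std_bone_pos, tgt_bone_pos):
    """Move each skin vertex by the blended displacement of its bound bones
    (translation-only linear blend skinning, a simplification of the
    bone/skin association described in the disclosure)."""
    # Per-bone displacement from the standard pose to the target pose.
    delta = tgt_bone_pos - std_bone_pos          # (num_bones, 3)
    # Each vertex moves by the weight-blended displacement of its bones.
    return standard_verts + weights @ delta      # (num_verts, 3)

# Two bones and three vertices; hypothetical numbers for illustration.
std_bones = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
tgt_bones = np.array([[0.0, 0.1, 0.0], [1.0, 0.0, 0.2]])
verts = np.zeros((3, 3))
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # per-vertex skin weights
target_verts = transform_skin(verts, w, std_bones, tgt_bones)
```

The middle vertex, bound equally to both bones, moves by the average of the two bone displacements, which is what keeps the deformed surface smooth between bones.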
- the bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model;
- the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
- in this way, the bone data can represent each of the multiple face bones more accurately, and the target bone data can in turn be used to determine the target virtual face model more accurately.
- determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively includes: performing interpolation processing on the bone position data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone position data.
- determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively includes: performing interpolation processing on the bone scaling data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone scaling data.
- determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively includes: converting the bone rotation data corresponding to the plurality of virtual face models into quaternions, and performing regularization processing on the quaternions corresponding to the plurality of virtual face models respectively to obtain regularized quaternions; and performing interpolation processing on the regularized quaternions based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone rotation data.
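The three interpolation operations above (bone position, bone scaling, and regularized-quaternion rotation) might be sketched as follows for a single bone across N virtual face models. This is an illustrative simplification with made-up shapes and values, not the disclosure's actual implementation.

```python
import numpy as np

def blend_bone_data(alpha, positions, scales, quats):
    """Blend per-model bone data with fitting coefficients alpha.
    positions/scales: (N, 3) arrays; quats: (N, 4) rotations as quaternions.
    Shapes assume a single bone across N virtual face models (illustrative)."""
    alpha = np.asarray(alpha, dtype=float)
    target_pos = alpha @ positions      # coefficient-weighted interpolation
    target_scale = alpha @ scales
    # Regularize (normalize) each quaternion before interpolating.
    q = quats / np.linalg.norm(quats, axis=1, keepdims=True)
    # Keep all quaternions in the same hemisphere so blending is well behaved.
    q[np.sum(q * q[0], axis=1) < 0] *= -1
    target_q = alpha @ q
    target_q /= np.linalg.norm(target_q)  # renormalize the blended rotation
    return target_pos, target_scale, target_q

# Two source models blended with equal fitting coefficients (toy numbers).
alpha = [0.5, 0.5]
pos = np.array([[0.0, 1.0, 0.0], [0.0, 3.0, 0.0]])
scl = np.array([[1.0, 1.0, 1.0], [1.2, 1.0, 1.0]])
rot = np.array([[1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]])  # identity rotations
p, s, q = blend_bone_data(alpha, pos, scl, rot)
```

The direct weighted sum of normalized quaternions is a common approximation; a production rig might instead use spherical interpolation.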
- in this way, the target skeleton data can adjust the virtual face model more accurately, making the skeleton details of the resulting target virtual face model finer and closer to those of the original face, so that the target virtual face model has a higher similarity to the original face.
- generating the first real face model based on the target image includes: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
- in this way, the first real face model obtained by three-dimensional reconstruction of the original face can represent the facial features of the original face more accurately and comprehensively.
- the second real face model is generated in the following manner: acquiring multiple reference images; and, for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to that reference image.
- in this way, the second real face models obtained by performing 3D face reconstruction on each of the multiple reference images can cover as wide a range of facial features as possible.
- performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients includes: performing least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
- in this way, the fitting coefficients accurately represent how the first real face model is fitted by the plurality of second real face models.
- an embodiment of the present disclosure further provides an apparatus for reconstructing a human face, including:
- a first generation module configured to generate a first real face model based on a target image;
- a processing module configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively;
- a second generation module configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
- the second generation module, when generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and the virtual face models with preset styles, is configured to: determine target skeleton data based on the fitting coefficients and the skeleton data corresponding to the plurality of virtual face models respectively; and generate the target virtual face model based on the target skeleton data.
- the second generation module, when generating the target virtual face model based on the target skeleton data, is configured to: perform position transformation processing on the standard skin data based on the target skeleton data and the association between the standard bone data and the standard skin data in the standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target bone data and the target skin data.
- the bone data corresponding to the virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model;
- the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
- the second generation module, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively, is configured to: perform interpolation processing on the bone position data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone position data.
- the second generation module, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively, is configured to: perform interpolation processing on the bone scaling data corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone scaling data.
- the second generation module, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively, is configured to: convert the bone rotation data corresponding to the plurality of virtual face models into quaternions, and perform regularization processing on the quaternions corresponding to the plurality of virtual face models respectively to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone rotation data.
- the first generation module, when generating the first real face model based on the target image, is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
- the processing module generates each of the plurality of second real face models in the following manner: acquiring multiple reference images including reference faces; and, for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to that reference image.
- the processing module, when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients, is configured to: perform least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
- an optional implementation of the present disclosure further provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the steps of the method in the above first aspect, or any possible implementation of the first aspect, are performed.
- an optional implementation of the present disclosure further provides a computer-readable storage medium storing a computer program which, when run, performs the steps of the method in the first aspect, or any possible implementation of the first aspect.
- FIG. 1 shows a flowchart of a method for reconstructing a human face provided by an embodiment of the present disclosure
- FIG. 2 shows a flowchart of a method for generating a target virtual face model corresponding to a target image provided by an embodiment of the present disclosure
- FIG. 3 shows a flowchart of a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data provided by an embodiment of the present disclosure
- FIG. 4 shows an example of multiple faces and face models involved in the method for reconstructing a face provided by an embodiment of the present disclosure
- FIG. 5 shows a schematic diagram of an apparatus for reconstructing a human face provided by an embodiment of the present disclosure
- FIG. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
- the method of face reconstruction can establish a three-dimensional model of a virtual face according to a real face or one's own preferences.
- when reconstructing a face based on the real face in a portrait image, feature extraction is usually performed on the real face first to obtain face contour features; the face contour features are then matched against the features in a pre-generated virtual three-dimensional model, and based on the matching results the face contour features are fused with the virtual three-dimensional model to obtain a virtual three-dimensional face model corresponding to the real face in the portrait image.
- however, the accuracy of matching the face contour features against the features in the pre-generated virtual three-dimensional model is low, the matching error is relatively large, and the virtual three-dimensional model generated from the matching results therefore tends to have low similarity to the real face.
- the embodiments of the present disclosure provide a method for reconstructing a face that can generate a target virtual face model with a preset style, where the target virtual face model has a relatively high similarity to the real face.
- the execution subject of the method for reconstructing a face provided by the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example, a terminal device, a server, or other processing device; the terminal device may be user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
- the method for reconstructing a human face may be implemented by the processor calling computer-readable instructions stored in the memory.
- FIG. 1 is a flowchart of a method for reconstructing a face provided by an embodiment of the present disclosure. As shown in FIG. 1 , the method includes steps S101 to S103, wherein:
- S101: Generate a first real face model based on the target image;
- S102: Perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively;
- S103: Generate a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and the virtual face models with preset styles corresponding to the plurality of second real face models respectively.
- in the above process, the first real face model is fitted with a plurality of pre-generated second real face models to obtain the fitting coefficients corresponding to the second real face models respectively; the fitting coefficients serve as a medium that establishes the relationship between the plurality of second real face models and the first real face model; the fitting coefficients and the virtual face models with preset styles corresponding to the plurality of second real face models are then used to generate the target virtual face model corresponding to the target image.
- This method ensures that the target virtual face model determined based on the fitting coefficients and the virtual face models not only has the preset style but also has the features of the original face corresponding to the first real face model; that is, the generated target virtual face model has a high degree of similarity to the original face corresponding to the first real face model.
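Steps S101 to S103 can be sketched numerically as follows, treating each face model as a flattened array of keypoint coordinates and using ordinary least squares for the fitting step. The function name and the toy 2-D data are illustrative assumptions, not the disclosure's implementation.

```python
import numpy as np

def reconstruct_face(target_model, reference_models, virtual_models):
    """Sketch of S101–S103: fit the first real face model as a linear
    combination of the second real face models (least squares), then apply
    the same coefficients to the styled virtual face models."""
    B = np.stack(reference_models, axis=1)                    # (dims, N)
    alpha, *_ = np.linalg.lstsq(B, target_model, rcond=None)  # S102: fit
    V = np.stack(virtual_models, axis=1)
    return V @ alpha                                          # S103: blend

# Toy 2-D "models": the target lies exactly between the two references.
ref = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
vir = [np.array([10.0, 0.0]), np.array([0.0, 10.0])]
target = np.array([0.5, 0.5])
result = reconstruct_face(target, ref, vir)
```

Because the target is an equal mix of the two reference models, the output is the equal mix of the two styled virtual models.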
- the target image is, for example, an acquired image including a human face; the face it contains can be used as the original face.
- when the method for reconstructing a face provided by the embodiments of the present disclosure is applied in different scenarios, the way the target image is acquired also differs.
- in a game scenario, an image containing the game player's face can be captured by an image acquisition device installed in the game device, or selected from an album of the game device, and the acquired image containing the player's face is used as the target image.
- alternatively, an image including the user's face can be collected by the camera of the terminal device, selected from an album of the terminal device, or received from another application installed in the terminal device.
- in a live-streaming scenario, a video frame containing a human face can be obtained from the multiple video frames included in the video stream of the live broadcast device, and that video frame is used as the target image.
- the target image may include multiple frames; for example, multiple target images may be obtained by sampling a video stream.
- when generating the first real face model, the following method may be adopted: acquiring a target image including the original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
- here, a three-dimensional morphable face model (3D Morphable Model, 3DMM) can be used to obtain the first real face model corresponding to the original face.
- the first real face model includes, for example, the position information, in a preset camera coordinate system, of each of the multiple key points of the original face in the target image.
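A 3DMM represents a face shape as a mean shape plus a weighted combination of shape basis vectors. The minimal sketch below shows only this linear shape term; real 3DMM fitting also estimates pose, expression, and texture, and the dimensions here are hypothetical.

```python
import numpy as np

def morphable_face(mean_shape, shape_basis, coeffs):
    """Minimal 3DMM-style linear shape model: mean shape plus a weighted
    sum of shape basis vectors (shape term only, for illustration)."""
    return mean_shape + shape_basis @ coeffs

# Hypothetical model with 2 keypoints (6 flattened coordinates) and 2 basis
# vectors; real models use thousands of vertices and tens of components.
mean = np.zeros(6)
basis = np.array([[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 3)  # (6, 2)
face = morphable_face(mean, basis, np.array([0.5, 2.0]))
```

Fitting a 3DMM to a target image amounts to estimating `coeffs` (and camera parameters) so the projected model keypoints match the detected face keypoints.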
- the second real face model is generated based on the reference image including the reference face.
- the reference faces in different reference images may differ; for example, multiple people differing in at least one of gender, age, skin color, body shape, etc. may be selected, a face image may be acquired for each of them, and the acquired face images are used as reference images.
- in this way, the plurality of second real face models obtained from the plurality of reference images can cover as wide a range of face shape features as possible.
- the reference faces include, for example, N faces corresponding to N different individual objects (N is an integer greater than 1).
- here, N photos corresponding to the N different individual objects can be obtained by photographing the N individuals separately, each photo corresponding to one reference face, and the N photos are used as reference images; alternatively, N reference images are determined from a plurality of previously captured images that include different faces.
- the method for generating the second real face models includes: acquiring multiple reference images including reference faces; and, for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to that reference image.
- the method for performing 3D face reconstruction on the reference face is similar to the above-mentioned method for performing 3D face reconstruction on the original face, and will not be repeated here.
- the obtained second real face model includes the position information, in the preset camera coordinate system, of each of the multiple key points of the reference face in the reference image.
- the coordinate system of the second real face model and the coordinate system of the first real face model may be the same coordinate system.
- when obtaining the fitting coefficients, the following method may be used: performing least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
- here, the model data corresponding to the first real face model can be represented as D_a, and the model data corresponding to the i-th of the N second real face models can be represented as D_bi (i ∈ [1, N]).
- the least squares processing yields N fitting values, expressed as α_i (i ∈ [1, N]), where α_i is the fitting value corresponding to the i-th second real face model.
- the fitting coefficients Alpha can then be determined and represented as a coefficient vector: Alpha = [α_1, α_2, …, α_N].
- the fitting coefficients can also be regarded as the expression coefficients of the second real face models when the first real face model is expressed by the plurality of second real face models; that is, using the fitting values as expression coefficients, the second real face models can be transformed and fitted to the first real face model.
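Under this interpretation, Alpha = [α_1, …, α_N] can be obtained by ordinary least squares over the flattened model data. The sketch below assumes the models are plain coordinate arrays, which is one possible representation rather than the disclosure's mandated one.

```python
import numpy as np

def fit_coefficients(D_a, D_b_list):
    """Least-squares fit of the first real face model D_a as a linear
    combination of N second real face models D_b_list.
    Returns Alpha = [alpha_1, ..., alpha_N]."""
    B = np.stack([np.ravel(D_b) for D_b in D_b_list], axis=1)  # (dims, N)
    alpha, *_ = np.linalg.lstsq(B, np.ravel(D_a), rcond=None)
    return alpha

# Toy example: D_a is exactly 0.3 of one reference model plus 0.7 of another.
D_b1 = np.array([1.0, 0.0, 2.0])
D_b2 = np.array([0.0, 1.0, 2.0])
D_a = 0.3 * D_b1 + 0.7 * D_b2
alpha = fit_coefficients(D_a, [D_b1, D_b2])
```

When D_a lies exactly in the span of the reference models, the recovered coefficients reproduce the mixing weights; otherwise they minimize the residual ‖D_a − Σ α_i D_bi‖².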
- the preset style may be, for example, a cartoon style, an ancient style, or an abstract style, and may be specifically set according to actual needs.
- the virtual face model with the preset style may be, for example, a virtual face model with a certain cartoon style.
- the method for acquiring multiple virtual face models with preset styles corresponding to multiple second real face models respectively includes, for example, at least one of the following (a1) and (a2).
- a virtual face image that has the features of the reference face and the preset style can be designed and produced based on the reference image; three-dimensional modeling is then performed on the virtual face in the virtual face image to obtain the skeleton data and skin data of the corresponding virtual face model.
- the skeleton data includes the bone rotation data, bone scaling data, and bone position data, in a preset coordinate system, of a plurality of bones preset for the virtual face.
- the multiple bones can be divided into multiple levels, for example, a root bone, facial bones, and facial-detail bones; the facial bones can include eyebrow bones, nasal bones, cheekbones, the mandible, mouth bones, etc., and the facial-detail bones further subdivide the bones of the individual facial features. These can be set according to the requirements of different styles of virtual image, which is not limited here.
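The root / facial / facial-detail bone levels described above can be represented as a simple tree. The bone names below are illustrative placeholders, not the disclosure's actual rig.

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    """One node in a hierarchical face rig (names are illustrative)."""
    name: str
    children: list = field(default_factory=list)

def count_bones(bone: Bone) -> int:
    """Total number of bones in the subtree rooted at `bone`."""
    return 1 + sum(count_bones(c) for c in bone.children)

# Root bone -> facial bones -> facial-detail bones, mirroring the levels above.
root = Bone("root", [
    Bone("eyebrow_bone"),
    Bone("nasal_bone"),
    Bone("zygoma"),
    Bone("mandible", [Bone("jaw_detail_left"), Bone("jaw_detail_right")]),
    Bone("mouth_bone"),
])
total = count_bones(root)
```

A deeper rig would simply attach more detail bones under each facial bone; the tree shape itself is a style choice, as the disclosure notes.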
- the skin data includes position information of multiple position points on the surface of the virtual face in the preset coordinate system, and information on the association relationship between each position point and at least one of the multiple bones.
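As a non-limiting sketch of how the bone data and skin data described above could be organized (every class, field, and bone name here is an assumption for exposition, not taken from the disclosure):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Bone:
    name: str
    position: Vec3                # bone position data in the preset coordinate system
    scaling: Vec3                 # bone scaling data
    rotation: Tuple[Vec3, float]  # bone rotation data, e.g. (axis, angle)

@dataclass
class SkinPoint:
    position: Vec3                # position point on the virtual face surface
    bone_weights: Dict[str, float] = field(default_factory=dict)  # association with one or more bones

# A cheek surface point associated mostly with the cheekbone and slightly with the mandible:
p = SkinPoint((0.12, 0.05, 0.30), {"cheekbone": 0.8, "mandible": 0.2})
print(sum(p.bone_weights.values()))  # 1.0
```

In practice the association weights of a point over its bones would typically sum to one, as the toy point above does.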
- the virtual model obtained by performing three-dimensional modeling on the virtual face in the virtual face image is used as the virtual face model corresponding to the second real face model.
- (a2) the standard virtual face model includes standard bone data, standard skin data, and an association relationship between the standard bone data and the standard skin data. Based on the reference face, the standard bone data in the standard virtual face model is designed and modified so that the modified standard virtual face model has the preset style and also includes the features of the reference face in the reference image; then, based on the association relationship between the standard bone data and the standard skin data, the standard skin data is adjusted and the feature information of the reference face is added to it; finally, the virtual face model corresponding to the second real face model is generated based on the modified standard bone data and the modified standard skin data.
- an embodiment of the present disclosure further provides a method for generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data of the virtual face models with the preset style corresponding to the plurality of second real face models respectively, the method including:
- S201 Determine target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models respectively.
- the bone data includes at least one of the following data: bone rotation data, bone position data, and bone scaling data corresponding to each face bone in the multiple face bones of the virtual face model.
- interpolation processing may be performed on the skeleton data corresponding to the plurality of virtual face models to obtain the target skeleton data.
- the obtained target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
- the target bone position data includes, for example, the three-dimensional coordinate value of the center point of the bone in the model coordinate system;
- the target bone scaling data includes, for example, the scaling ratio of the target bone relative to the bone in the standard virtual face model;
- the target bone rotation data includes, for example, the rotation angle of the bone's axis in the model coordinate system.
- the target skeleton data is determined based on the fitting coefficients corresponding to the plurality of second real face models and the skeleton data corresponding to the plurality of virtual face models; this can be achieved, for example, by at least one of the following (b1) to (b3).
- the method further includes determining each level of bones and the corresponding local coordinate system.
- the bone hierarchy can be determined directly according to the biological layering of bones, or according to the requirements of face reconstruction; the specific layering method can be determined according to the actual situation and will not be repeated here.
- after each bone level is determined, a bone coordinate system corresponding to each bone level can be established based on that level.
- each level of bone can be represented as Bone i .
- the bone position data may include the three-dimensional coordinate values, in the corresponding bone coordinate system, of each level of bone Bone i in the virtual face model; the bone scaling data may include, for each level of bone Bone i in the virtual face model, a percentage representing the scale of the bone in the bone coordinate system, such as 80%, 90%, or 100%.
- the bone position data corresponding to the ith virtual face model is represented as Pos i
- the bone scaling data corresponding to the ith virtual face model is represented as Scaling i
- the bone position data Pos i includes the bone position data corresponding to the plurality of hierarchical bones respectively
- the bone scaling data Scaling i includes the bone scaling data corresponding to the plurality of hierarchical bones respectively.
- the corresponding fitting coefficient at this time is a i .
- interpolation processing is performed on the bone position data Pos i corresponding to the M virtual face models to obtain the target bone position data.
- the fitting coefficient may be used as the weight corresponding to each virtual face model, and the weighted summation processing is performed on the bone position data Pos i corresponding to the virtual face model to implement the interpolation processing process.
- the target bone position data Pos new satisfies the following formula (1):
- Pos new = a 1 · Pos 1 + a 2 · Pos 2 + ... + a M · Pos M (1)
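The weighted summation described above, with the fitting coefficients as per-model weights, can be sketched as follows (the function name and toy data are assumptions for illustration):

```python
import numpy as np

def blend_bone_positions(alpha, positions):
    # alpha: (M,) fitting coefficients used as per-model weights
    # positions: (M, B, 3) bone position data Pos_i for each of M virtual face models
    # Returns (B, 3): the coefficient-weighted sum of the per-model bone positions
    return np.tensordot(alpha, positions, axes=1)

alpha = np.array([0.25, 0.75])
pos = np.array([[[0.0, 0.0, 0.0]],     # one bone, model 1
                [[4.0, 0.0, 8.0]]])    # one bone, model 2
print(blend_bone_positions(alpha, pos))  # [[3. 0. 6.]]
```

The same weighted summation applies bone by bone for any number B of bones, since the contraction runs only over the model index.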
- similarly, interpolation processing is performed on the bone scaling data corresponding to the M virtual face models to obtain the target bone scaling data, where the bone scaling data corresponding to the i-th virtual face model is represented as Scaling i ;
- the fitting coefficients corresponding to the M second real face models can be used as the weights of the corresponding virtual face models, and weighted summation processing can be performed on the bone scaling data corresponding to the M virtual face models; in this case, the target bone scaling data Scaling new satisfies the following formula (2):
- Scaling new = a 1 · Scaling 1 + a 2 · Scaling 2 + ... + a M · Scaling M (2)
- the bone rotation data may include vector values representing the rotational coordinate transformation of each bone in the virtual face model in the corresponding bone coordinate system, for example a rotation axis and a rotation angle.
- the bone rotation data corresponding to the i-th virtual face model is represented as Trans i . Since the rotation angles contained in the bone rotation data suffer from gimbal lock, the bone rotation data is converted into quaternions, and each quaternion is regularized to obtain regularized quaternion data, represented as Trans' i ; this prevents the over-fitting that could occur if the quaternions were directly weighted and summed.
- the fitting coefficients corresponding to the M second real face models may likewise be used as weights, and weighted summation performed on the regularized quaternions corresponding to the M virtual face models; in this case, the target bone rotation data Trans new satisfies the following formula (3):
- Trans new = a 1 · Trans' 1 + a 2 · Trans' 2 + ... + a M · Trans' M (3)
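The rotation handling above can be sketched as follows: axis-angle rotations are converted to quaternions, regularized (normalized), and blended with the fitting coefficients as weights. The final renormalization is an added assumption so the result is again a unit quaternion; the function names and toy data are likewise illustrative:

```python
import numpy as np

def axis_angle_to_quat(axis, angle):
    # Convert a (rotation axis, rotation angle) pair to a unit quaternion (w, x, y, z)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def blend_rotations(alpha, quats):
    # alpha: (M,) fitting coefficients; quats: (M, 4) regularized (unit) quaternions
    # Weighted sum of the quaternions, then renormalize to return to the unit sphere
    q = np.tensordot(alpha, quats, axes=1)
    return q / np.linalg.norm(q)

q1 = axis_angle_to_quat([0.0, 0.0, 1.0], 0.0)          # identity rotation
q2 = axis_angle_to_quat([0.0, 0.0, 1.0], np.pi / 2.0)  # 90 degrees about z
q_new = blend_rotations(np.array([0.5, 0.5]), np.stack([q1, q2]))
print(round(float(np.linalg.norm(q_new)), 6))  # 1.0
```

With equal weights, the blend of an identity rotation and a 90-degree rotation about z lands on a 45-degree rotation about z, which is the expected interpolation behavior.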
- the target bone data can be determined, which is represented as Bone new .
- the target bone data can be represented as (Pos new , Scaling new , Trans new ) in the form of a vector.
- a specific method for generating the target virtual face model corresponding to the first real face model based on the target skeleton data includes:
- S301 Based on the target skeleton data and the relationship between the standard skeleton data and the standard skin data in the standard virtual face model, perform position transformation processing on the standard skin data to generate target skin data;
- S302 Generate a target virtual face model based on the target bone data and the target skin data.
- the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model is, for example, the association relationship between the standard skeleton data corresponding to each level of bones and the standard skin data. Based on this association, the skin can be bound to the bones in the virtual face model.
- the skin data at the positions corresponding to the bones of each level can be subjected to position transformation, so that in the generated target skin data the position of each level of bone is consistent with its position in the corresponding target bone data.
- the association relationship between the bone data and the standard skin data in the standard virtual face model includes: the relationship between the coordinate value of each position point in the standard skin data in the model coordinate system and at least one of the bone position data, the bone scaling data, and the bone rotation data of the bones.
- the target skeleton data and the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model can be used to determine the new coordinate values, in the model coordinate system, of each position point in the standard skin data after the bones are transformed from the standard bone data to the target bone data; the target skin data of the target virtual face model is then obtained based on these new coordinate values.
- using the target bone data, the bones of each level used to construct the target virtual face model can be determined; using the target skin data, the skin bound to those bones can be determined, thereby forming the target virtual face model.
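A translation-only sketch of binding the skin to the transformed bones (a simplification of steps S301 and S302; the names, the linear-blend formulation, and the toy data are assumptions, not the disclosure's exact procedure):

```python
import numpy as np

def transform_skin(points, weights, bone_offsets):
    # points: (P, 3) standard skin position points
    # weights: (P, B) association weights between each point and the B bones
    # bone_offsets: (B, 3) displacement of each target bone from its standard position
    # Each point moves by the weighted sum of its bones' displacements
    return points + weights @ bone_offsets

pts = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])               # point bound equally to two bones
offsets = np.array([[2.0, 0.0, 0.0],     # bone 1 moved along x
                    [0.0, 2.0, 0.0]])    # bone 2 moved along y
print(transform_skin(pts, w, offsets))   # [[1. 1. 0.]]
```

A full skinning implementation would blend rotation and scaling as well as translation, but the per-point weighting over associated bones is the same idea.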
- the target virtual face model may be established directly from the target skeleton data and the target skin data; alternatively, the skeleton data corresponding to each level may first be replaced with the target skeleton data, and the target skin data then used to build the target virtual face model.
- the specific method for establishing the target virtual face model can be determined according to the actual situation, and will not be repeated here.
- the embodiment of the present disclosure also provides an explanation of the specific process of obtaining the target virtual face model Mod Aim corresponding to the original face A in the target image Pic A by using the method for reconstructing a face provided by the embodiment of the present disclosure.
- Determining the target virtual face model Mod Aim includes the following steps (c1) to (c5):
- (c1) Preparing materials, including: preparing the materials for the standard virtual face model; and preparing the materials for the virtual pictures.
- (c2) Face model reconstruction, including: using the original face A in the target image Pic A to generate the first real face model Mod fst ; and using the virtual faces B 1 to B 24 in the virtual pictures to generate the second real face models Mod snd-1 to Mod snd-24 .
- the face in the target image is aligned and cropped, and a pre-trained RGB reconstruction neural network is then used to generate the first real face model Mod fst corresponding to the original face A; similarly, using the pre-trained RGB reconstruction neural network, the second real face models Mod snd-1 to Mod snd-24 corresponding to the virtual faces B 1 to B 24 respectively can be determined.
- the method further includes: determining, by applying the preset style and manual adjustment, the virtual face models Mod fic-1 to Mod fic-24 with the preset style corresponding to the second real face models Mod snd-1 to Mod snd-24 respectively.
- the method of least squares is selected for fitting, and a 24-dimensional coefficient alpha is obtained.
- the skeleton data includes: the bone position data Pos i , bone scaling data Scaling i , and bone rotation data Trans i under each level of bone Bone i , corresponding respectively to the virtual face models Mod fic-1 to Mod fic-24 with the preset style.
- the target bone data is used to determine the bones of each level for constructing the target virtual face model, replacing the corresponding bone data in the standard virtual face model Mod Base ; the target skin data is used to determine the skin bound to those bones; the predetermined association relationship between the standard skeleton data and the standard skin data in the standard virtual face model is then used to generate the target virtual face model corresponding to the first real face model.
- FIG. 4 shows an example of the specific data used in the multiple processes included in the above specific example provided by an embodiment of the present disclosure, where FIG. 4a represents the target image, 41 represents the original face A, FIG. 4b is a schematic diagram of a cartoon-style standard virtual face model, and FIG. 4c is a schematic diagram of the generated target virtual face model corresponding to the first real face model.
- steps (c1) to (c5) are only a specific example of a method for reconstructing a human face, and do not limit the method for reconstructing a human face provided by the embodiments of the present disclosure.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- an apparatus for reconstructing a face corresponding to the above method for reconstructing a face is also provided in the embodiments of the present disclosure; because the principle by which the apparatus solves the problem is similar to that of the above method, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
- an embodiment of the present disclosure provides an apparatus for reconstructing a human face.
- the apparatus includes a first generating module 51 , a processing module 52 and a second generating module 53 .
- the first generating module 51 is configured to generate a first real face model based on the target image.
- the processing module 52 is configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively.
- the second generating module 53 is configured to be based on the fitting coefficients corresponding to the plurality of second real face models and a plurality of virtual people with preset styles corresponding to the plurality of second real face models respectively face model, generating a target virtual face model corresponding to the target image.
- the second generating module 53, when generating the target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with the preset style corresponding to the plurality of second real face models respectively, is configured to: determine target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively; and generate the target virtual face model based on the target skeleton data.
- the second generating module 53, when generating the target virtual face model based on the target skeleton data, is configured to: perform position transformation processing on the standard skin data based on the target skeleton data and the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target skeleton data and the target skin data.
- the skeleton data of the virtual face model includes at least one of the following: skeleton rotation data, skeleton position data, and skeleton scaling data corresponding to each face skeleton among the multiple face skeletons of the virtual face model;
- the target bone data includes at least one of the following data: target bone position data, target bone scaling data, and target bone rotation data.
- in a case where the target skeleton data includes the target bone position data, the second generating module 53, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively, is configured to: perform interpolation processing on the bone position data corresponding to the plurality of virtual face models respectively, based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone position data.
- in a case where the target skeleton data includes the target bone scaling data, the second generating module 53, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively, is configured to: perform interpolation processing on the bone scaling data corresponding to the plurality of virtual face models respectively, based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone scaling data.
- in a case where the target skeleton data includes the target bone rotation data, the second generating module 53, when determining the target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively, is configured to: convert the bone rotation data corresponding to the plurality of virtual face models respectively into quaternions, and perform regularization processing on the quaternions corresponding to the plurality of virtual face models respectively to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions corresponding to the plurality of virtual face models respectively, based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone rotation data.
- the first generating module 51, when generating the first real face model based on the target image, is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
- the processing module 52 generates each of the plurality of second real face models in the following manner: acquiring multiple reference images including reference faces; and, for each of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to that reference image.
- the processing module 52, when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients corresponding to the plurality of second real face models respectively, is configured to: perform least-squares processing on the plurality of second real face models and the first real face model to obtain the corresponding fitting coefficients.
- an embodiment of the present disclosure further provides a computer device including a processor 61 and a memory 62 .
- the memory 62 stores machine-readable instructions executable by the processor 61, and the processor 61 is configured to execute the machine-readable instructions stored in the memory 62; when the machine-readable instructions are executed, the processor 61 performs the following steps: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients corresponding to the plurality of second real face models respectively; and generating a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with the preset style corresponding to the plurality of second real face models respectively.
- the above-mentioned memory 62 includes an internal memory 621 and an external memory 622; the internal memory 621 is used to temporarily store operation data of the processor 61 and data exchanged with the external memory 622, such as a hard disk; the processor 61 exchanges data with the external memory 622 through the internal memory 621.
- Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the method for reconstructing a face described in the foregoing method embodiments is executed.
- the storage medium may be a volatile or non-volatile computer-readable storage medium.
- Embodiments of the present disclosure further provide a computer program product, where the computer program product carries program codes, and the instructions included in the program codes can be used to execute the methods for reconstructing a face described in the foregoing method embodiments.
- for details, please refer to the foregoing method embodiments, which are not repeated here.
- the above-mentioned computer program product can be specifically implemented by means of hardware, software or a combination thereof.
- in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
- the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage medium includes media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Abstract
Description
Claims (13)
- 1. A method for reconstructing a face, comprising: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients corresponding to the plurality of second real face models respectively; and generating a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and virtual face models with a preset style corresponding to the plurality of second real face models respectively.
- 2. The method for reconstructing a face according to claim 1, wherein the generating a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and the virtual face models with the preset style corresponding to the plurality of second real face models respectively comprises: determining target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and skeleton data corresponding to the plurality of virtual face models respectively; and generating the target virtual face model based on the target skeleton data.
- 3. The method for reconstructing a face according to claim 2, wherein the generating the target virtual face model based on the target skeleton data comprises: performing position transformation processing on standard skin data based on the target skeleton data and an association relationship between standard skeleton data and the standard skin data in a standard virtual face model, to generate target skin data; and generating the target virtual face model based on the target skeleton data and the target skin data.
- 4. The method for reconstructing a face according to claim 2 or 3, wherein the skeleton data corresponding to the virtual face model comprises at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among multiple face bones of the virtual face model; and the target skeleton data comprises at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.
- 5. The method for reconstructing a face according to claim 4, wherein the determining target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively comprises: performing interpolation processing on the bone position data corresponding to the plurality of virtual face models respectively based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone position data.
- 6. The method for reconstructing a face according to claim 4 or 5, wherein the determining target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively comprises: performing interpolation processing on the bone scaling data corresponding to the plurality of virtual face models respectively based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone scaling data.
- 7. The method for reconstructing a face according to any one of claims 4 to 6, wherein the determining target skeleton data based on the fitting coefficients corresponding to the plurality of second real face models respectively and the skeleton data corresponding to the plurality of virtual face models respectively comprises: converting the bone rotation data corresponding to the plurality of virtual face models respectively into quaternions, and performing regularization processing on the quaternions corresponding to the plurality of virtual face models respectively to obtain regularized quaternions; and performing interpolation processing on the regularized quaternions corresponding to the plurality of virtual face models respectively based on the fitting coefficients corresponding to the plurality of second real face models respectively, to obtain the target bone rotation data.
- 8. The method for reconstructing a face according to any one of claims 1 to 7, wherein the generating a first real face model based on a target image comprises: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
- 9. The method for reconstructing a face according to any one of claims 1 to 8, wherein, for each second real face model of the plurality of second real face models, the second real face model is generated in the following manner: acquiring multiple reference images including reference faces; and, for each reference image of the multiple reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
- 10. The method according to any one of claims 1 to 9, wherein the performing fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients corresponding to the plurality of second real face models respectively, comprises: performing least-squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients corresponding to the plurality of second real face models respectively.
- 11. An apparatus for reconstructing a face, comprising: a first generating module configured to generate a first real face model based on a target image; a processing module configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models, to obtain fitting coefficients corresponding to the plurality of second real face models respectively; and a second generating module configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients corresponding to the plurality of second real face models respectively and virtual face models with a preset style corresponding to the plurality of second real face models respectively.
- 12. A computer device, comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the method for reconstructing a face according to any one of claims 1 to 10.
- 13. A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is run by a computer device, the computer device performs the method for reconstructing a face according to any one of claims 1 to 10.
Priority Applications (2)
- JP2022519295A (JP 2023507862 A), priority date 2020-11-25, filing date 2021-06-25, title: 顔再構築方法、装置、コンピュータデバイス、及び記憶媒体
- KR1020237021453A (KR 20230110607 A), priority date 2020-11-25, filing date 2021-06-25
Applications Claiming Priority (2)
- CN202011342169.7, priority date 2020-11-25
- CN202011342169.7A (CN 112419485 B), priority date 2020-11-25, filing date 2020-11-25, title: 一种人脸重建方法、装置、计算机设备及存储介质
Publications (1)
- WO2022110790A1, published 2022-06-02
Family
ID=74843538
Family Applications (1)
- PCT/CN2021/102404 (WO2022110790A1), priority date 2020-11-25, filing date 2021-06-25
Country Status (5)
- JP: JP2023507862A
- KR: KR20230110607A
- CN: CN112419485B
- TW: TWI778723B
- WO: WO2022110790A1