CN109978930A - Automatic generation method of a stylized three-dimensional face model from a single image - Google Patents

Automatic generation method of a stylized three-dimensional face model from a single image

Info

Publication number
CN109978930A
CN109978930A · CN201910238031.3A · CN109978930B
Authority
CN
China
Prior art keywords
face
image
network
dimensional
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910238031.3A
Other languages
Chinese (zh)
Other versions
CN109978930B (en)
Inventor
秦昊
李冬平
赵海明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Phase Core Technology Co Ltd
Original Assignee
Hangzhou Phase Core Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Phase Core Technology Co Ltd filed Critical Hangzhou Phase Core Technology Co Ltd
Priority to CN201910238031.3A priority Critical patent/CN109978930B/en
Publication of CN109978930A publication Critical patent/CN109978930A/en
Application granted granted Critical
Publication of CN109978930B publication Critical patent/CN109978930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2024Style variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses an automatic generation method of a stylized three-dimensional face model from a single image, comprising the following steps: a preprocessing and training stage, in which three-dimensional face information is extracted for every image in a portrait image data set while a stylized three-dimensional face model is built manually for every image; training a multi-task face-feature deep convolutional network from the portrait image data set and the corresponding three-dimensional face information; and training a face-geometry style conversion convolutional network and a series of face-texture style conversion convolutional networks from the portrait image data set, the stylized three-dimensional face models and the face feature extraction network. The invention requires only a single input image, is highly versatile, and solves the problem of representing a digital avatar in fields such as virtual reality and online social networking.

Description

Automatic generation method of a stylized three-dimensional face model from a single image
Technical field
The present invention relates to the fields of machine vision, image processing and three-dimensional modeling, and in particular to an automatic generation method of a stylized three-dimensional face model from a single image.
Background art
Three-dimensional face modeling is an important technology with wide application in scenes such as film, games, virtual reality, online social networking, and face recognition and tracking. According to the input data required, face modeling can be divided into three classes: modeling based on professional software, modeling based on professional equipment, and modeling based on a single image. Depending on the concrete application scenario, the target of face modeling can be either a realistic face model or a stylized face model.
Three-dimensional face modeling based on professional software is mostly suited to professional domains such as film and CG. With commercial or open-source packages such as 3DMax, Maya and Blender, a user must spend a great deal of time learning the software, and often only professional artists or designers can use it well; completing one model may take hours or even days of repeated adjustment and testing. So although modeling software can flexibly produce high-quality three-dimensional models, its complexity and required expertise limit its adoption by ordinary users.
Three-dimensional face modeling based on professional equipment generally obtains a point cloud or depth information of the object from a scanning device and then produces a complete model with subsequent alignment and reconstruction algorithms. Such methods can provide an accurate, faithful model (within the error range of the equipment), but they remain difficult to use. On the one hand, high-precision scanning devices such as laser scanners and multi-light-source geometry and appearance capture systems are costly, and the modeling process is time-consuming, laborious and technically demanding. On the other hand, consumer-grade scanning devices are cheaper, but their accuracy is often insufficient and the workflow is still fairly complex. On mobile platforms in particular, only a small number of models such as the iPhone X and Huawei Mate20 currently provide scanning (depth) information, which limits how widely such methods can be applied.
Three-dimensional face modeling based on a single image is the most convenient for ordinary users: a user can take a selfie with a phone or download a photograph of a public figure from the Internet and directly obtain the desired three-dimensional face model. Modeling from a single image, however, involves many difficulties and challenges. Most importantly, a single image cannot capture the complete information of the object, so recovering complete information from incomplete input must be addressed, and work in this area usually requires prior knowledge. In three-dimensional face modeling, a parametric face database covering a variety of face shapes and expressions is often used, and the coefficients of the database are fitted to the input two-dimensional face image.
When three-dimensional face modeling is applied to virtual reality or social applications, many problems arise. First, people are very strict and demanding in their perception of real human faces, and even very small flaws in the modeling result are often not tolerated; with input as limited as a single image, it is difficult to generate a highly realistic face. Second, three-dimensional face modeling may trigger the "uncanny valley" effect: as the human likeness of a character increases, people's affinity for it does not increase monotonically, and a character that is very close to, but not quite identical to, a real human appears distinctly unsettling. Finally, what users want to present is not necessarily their completely real selves, but a digital avatar (possibly a beautified one) that captures their main features.
Stylized face modeling is introduced to solve these problems of face modeling: it lets users present themselves while also expressing the style they wish to highlight. A related example is caricature, where the drawn faces have a definite artistic style yet still convey the main features of different people. In recent years the development of deep neural networks has brought great breakthroughs in the stylization of two-dimensional images, but how to apply stylization to three-dimensional models still lacks a solution.
Summary of the invention
In view of the deficiencies of the prior art, the object of the present invention is to provide an automatic generation method of a stylized three-dimensional face model from a single image, so as to solve the problem of representing a digital avatar in fields such as virtual reality and online social networking.
To achieve the above object, the present invention provides the following technical solution: an automatic generation method of a stylized three-dimensional face model from a single image, comprising the following steps:
(1) Face database preprocessing: extract three-dimensional face information for every image in a portrait image data set, and at the same time manually build a stylized three-dimensional face model for every image.
(2) Face feature extraction network: train a multi-task face-feature deep convolutional network from the portrait image data set of step 1 and the corresponding three-dimensional face information.
(3) Face style conversion network: from the portrait image data set and the stylized three-dimensional face models of step 1, and the face-feature deep convolutional network trained in step 2, train one face-geometry style conversion convolutional network and a series of face-texture style conversion convolutional networks; together they constitute the face style conversion network.
(4) Given an image to be processed, run the face feature extraction network obtained in step 2 and the face style conversion network obtained in step 3 to output the face geometry result and texture results, post-process the output face geometry, fuse the output textures, and finally obtain the stylized three-dimensional face model.
Further, in step (1), extracting three-dimensional face information for every image in the portrait image data set specifically comprises:
First, annotate the two-dimensional information of the face in each image of the input portrait image data set, including face key points and masks for the hair, eyebrow and beard regions. The face key points are feature point positions on the cheek contour, eyes, eyebrows, nose and mouth outline; a mask is a binarized pixel region whose value is 1 inside the region and 0 outside.
Then, a three-dimensional fitting algorithm is run using the face key point information and the image color information to obtain the three-dimensional information of the face, including the three-dimensional geometry and the texture map. The concrete procedure is: each face is represented as a triangular mesh; every mesh vertex has a geometric coordinate and a UV texture coordinate, and each mesh is associated with one texture map. A PCA decomposition of the three-dimensional face information is expressed as:
S = S̄ + W_s·α,  T = T̄ + W_t·β
where S̄ is the mesh vertex geometry of the average face, W_s is the PCA basis matrix of the vertex geometry, and α is the geometry parameter vector; T̄ is the average texture map, W_t is the PCA basis matrix of the texture map, and β is the texture parameter vector.
The following energy is then minimized:
E_fit(M, α, β) = λ_l·E_l(M, α, β) + λ_a·E_a(M, α, β) + λ_r·E_r(α, β)
where M denotes the camera parameters and λ_l, λ_a, λ_r are the weights of the three energy terms E_l, E_a, E_r. E_l(M, α, β) minimizes the difference between the projected positions of the face key points and the annotated image positions:
E_l(M, α, β) = Σ_j ||P(s_j) − l_j||²
where P(·) is the camera projection function, s_j is the vertex geometric coordinate of S at the j-th face key point, and l_j is the corresponding annotated image position. E_a(M, α, β) minimizes the color difference between the rendered face and the image pixels:
E_a(M, α, β) = Σ_{u,v} ||R_{u,v}(M, α, β) − I_{u,v}||²
where R_{u,v}(·) is the rendering function giving the color at pixel (u, v) after the three-dimensional mesh is rendered onto the image, and I_{u,v} is the color of the input image at pixel (u, v). E_r(α, β) is a regularization term that penalizes over-fitting:
E_r(α, β) = ||α||² + ||β||²
Finally, the energy E_fit(M, α, β) is minimized iteratively with the standard Gauss-Newton method.
Further, step 2 specifically comprises: training one multi-task deep convolutional network from the input images and the corresponding annotations. The input is a three-channel RGB image and the outputs include: the camera parameters, the three-dimensional geometry of the face, the texture map of the face, the hair region mask, the eyebrow region mask and the beard region mask. The multi-task deep convolutional network consists of two modules: the first module takes the RGB image as input and applies multiple convolutional layers to extract abstract image features; the second module takes the abstract features extracted by the first module as input and contains multiple branches, each composed of convolution, deconvolution or fully connected operations, which predict the different face attributes respectively.
Further, in step 3, training the face-geometry style conversion convolutional network specifically comprises: for an input image, compute the position and normal vector of every vertex from the three-dimensional face geometry output by the multi-task face-feature deep convolutional network; these positions and normals are rendered into a six-channel image A according to the UV unwrapping of the model. Similarly, the vertex positions and normals of the stylized three-dimensional face model of the input image are unwrapped and rendered into a six-channel image B. The face-geometry style conversion network takes image A as input and, through a series of convolution and deconvolution operations, minimizes the Euclidean distance between the network output image and B.
Training the series of face-texture style conversion convolutional networks specifically comprises: for an input image, using the face texture map output by the multi-task face-feature deep convolutional network, the hair, eyebrow and beard region masks, and the texture map of the stylized three-dimensional face model, train separately a non-hair-region texture style conversion network, an eyebrow-region texture style conversion network and a beard-region texture style conversion network.
Further, in step 4, running the face feature extraction network obtained in step 2 and the face style conversion network obtained in step 3 and outputting the face geometry result and texture results specifically comprises: the user inputs a single image; the face feature extraction network is run first to obtain the three-dimensional geometry of the face, the texture map, and the hair, eyebrow and beard region masks; the face style conversion network is then run to obtain the normal-vector UV unwrapping of the stylized three-dimensional face model and the texture maps of the non-hair region, the eyebrow region and the beard region.
Post-processing the output face geometry specifically comprises: from the vertex positions and normal-vector UV unwrapping of the three-dimensional face geometry output by the stylized three-dimensional face model automatic generation method, extract the average normal of each face of the mesh and the position of each vertex, and jointly optimize the vertex positions and face normals to output the stylized face geometry.
Fusing the output textures specifically comprises: taking the three stylized texture maps output for the non-hair region, the eyebrow region and the beard region, and blending them semi-transparently into a single overall stylized face texture.
The beneficial effects of the present invention are: first, only a single input image is needed, so the method is highly versatile; second, the face feature extraction network and the face style conversion network can be run automatically, the output face geometry is post-processed automatically and the output textures are fused automatically to obtain the final stylized three-dimensional face model, solving the problem of representing a digital avatar in fields such as virtual reality and online social networking.
Detailed description of the invention
Fig. 1 is a flow diagram of the automatic generation method of a stylized three-dimensional face model from a single image;
Fig. 2 illustrates the data preprocessing: given an input image, the feature points and the hair region mask are annotated first, and then the eyebrow and beard region masks;
Fig. 3 shows the three-dimensional face information obtained by three-dimensional fitting, including geometry and texture;
Fig. 4 shows the stylized three-dimensional face model of the input image, including the geometric model and the texture map;
Fig. 5 shows the face feature extraction network;
Fig. 6 shows the face-geometry style conversion convolutional network;
Fig. 7 shows the texture style conversion network of the non-hair face region;
Fig. 8 shows the texture style conversion network of the eyebrow region;
Fig. 9 shows the texture style conversion network of the beard region;
Fig. 10 shows the final stylized three-dimensional face model, including the geometric model, the texture result, and the rendering obtained by combining the geometric model with the texture map.
Specific embodiment
The automatic generation method of a stylized three-dimensional face model from a single image according to the present invention comprises the following steps:
1. Face database preprocessing: extract three-dimensional face information for every image in a portrait image data set, and at the same time manually build a stylized three-dimensional face model for every image.
The portrait image data set may be the FaceWarehouse three-dimensional face database (FaceWarehouse: a 3D Facial Expression Database for Visual Computing. Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, Kun Zhou. IEEE TVCG, 20(3):413-425, 2014); other public face data sets or face images collected by the user can also be used.
Extracting the three-dimensional face information for every image comprises the following sub-steps:
First, annotate the two-dimensional information of the face in each image of the input portrait image data set, including face key points and masks for the hair, eyebrow and beard regions. The face key points are feature point positions on the cheek contour, eyes, eyebrows, nose and mouth outline; a mask is a binarized pixel region whose value is 1 inside the region and 0 outside. The concrete definition of the key points and masks is shown in Fig. 2.
Then, a three-dimensional fitting algorithm is run using the face key point information and the image color information to obtain the three-dimensional information of the face, including the three-dimensional geometry and the texture map (as shown in Fig. 3). The concrete procedure is: each face is represented as a triangular mesh; every mesh vertex has a geometric coordinate and a UV texture coordinate, and each mesh is associated with one texture map. A PCA decomposition of the three-dimensional face information is expressed as:
S = S̄ + W_s·α,  T = T̄ + W_t·β
where S̄ is the mesh vertex geometry of the average face, W_s is the PCA basis matrix of the vertex geometry, and α is the geometry parameter vector, each α corresponding to one face geometry mesh; similarly, T̄ is the average texture map, W_t is the PCA basis matrix of the texture map, and β is the texture parameter vector.
The following energy is then minimized:
E_fit(M, α, β) = λ_l·E_l(M, α, β) + λ_a·E_a(M, α, β) + λ_r·E_r(α, β)
where M denotes the camera parameters and λ_l, λ_a, λ_r are the weights of the three energy terms E_l, E_a, E_r. Specifically, E_l(M, α, β) minimizes the difference between the projected positions of the face key points and the annotated image positions:
E_l(M, α, β) = Σ_j ||P(s_j) − l_j||²
where P(·) is the camera projection function, s_j is the vertex geometric coordinate of S at the j-th face key point, and l_j is the corresponding annotated image position. E_a(M, α, β) minimizes the color difference between the rendered face and the image pixels:
E_a(M, α, β) = Σ_{u,v} ||R_{u,v}(M, α, β) − I_{u,v}||²
where R_{u,v}(·) is the rendering function giving the color at pixel (u, v) after the three-dimensional mesh is rendered onto the image, and I_{u,v} is the color of the input image at pixel (u, v). E_r(α, β) is a regularization term that penalizes over-fitting:
E_r(α, β) = ||α||² + ||β||²
Finally, the energy E_fit(M, α, β) is minimized iteratively with the standard Gauss-Newton method.
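For illustration only, the following is a minimal sketch of the landmark and regularization terms of such a fitting energy, solved with a Gauss-Newton-style least-squares routine. It assumes a weak-perspective camera and omits the photometric term E_a, which would require a differentiable renderer; all array sizes and helper names (project, residuals) are assumptions, not part of the patent.

```python
# Minimal sketch (not the patent's implementation): fit camera + 3DMM geometry
# coefficients to 2D landmarks with a Gauss-Newton-style least-squares solver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_vert, n_alpha, n_lmk = 500, 20, 68           # toy sizes (assumed)
S_mean = rng.normal(size=(n_vert, 3))          # average face vertices (S bar)
W_s = rng.normal(size=(n_vert * 3, n_alpha))   # PCA basis of vertex geometry
lmk_idx = rng.choice(n_vert, n_lmk, False)     # indices of landmark vertices
lmk_2d = rng.normal(size=(n_lmk, 2))           # annotated 2D key points l_j

def geometry(alpha):
    """S = S_mean + W_s * alpha, reshaped to (n_vert, 3)."""
    return S_mean + (W_s @ alpha).reshape(n_vert, 3)

def project(M, pts):
    """Weak-perspective camera (assumption): scale, in-plane rotation, translation."""
    scale, theta, tx, ty = M
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scale * pts[:, :2] @ R.T + np.array([tx, ty])

def residuals(x, lam_r=0.1):
    M, alpha = x[:4], x[4:]
    r_lmk = (project(M, geometry(alpha)[lmk_idx]) - lmk_2d).ravel()   # E_l term
    r_reg = np.sqrt(lam_r) * alpha                                    # E_r term
    return np.concatenate([r_lmk, r_reg])

x0 = np.zeros(4 + n_alpha); x0[0] = 1.0          # init: unit scale, zero pose/coeffs
sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt (Gauss-Newton family)
print("fitted alpha norm:", np.linalg.norm(sol.x[4:]))
```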
Manually building a stylized three-dimensional face model for every image comprises:
First, a stylization type is defined (for example a cartoon style with large eyes, a small mouth and a uniform skin tone), and the extracted three-dimensional face information is then revised in that style (for example widening the eyes, adjusting the mouth proportions and smoothing the facial skin tone) to obtain the corresponding stylized three-dimensional face model. The stylization type can be freely specified according to actual needs; Fig. 4 gives an example of one style. Note that the data processing and pre-training only need to be performed once for each given style. Once data processing and pre-training are complete, inputting an image runs the neural network inference fully automatically and produces the user's stylized three-dimensional face model. It should also be noted that although this method focuses on the facial region, the hair mask has already been segmented, so the hair can be modeled automatically in three dimensions with existing methods based on contour lines and direction-field matching.
2. Face feature extraction network: train a multi-task face-feature deep convolutional network from the portrait image data set of step 1 and the corresponding three-dimensional face information.
Training the multi-task face-feature deep convolutional network means training one multi-task deep convolutional network from the input images and the corresponding three-dimensional face information. The input is a three-channel RGB image and the outputs include: the camera parameters, the three-dimensional geometry of the face, the texture map of the face, the hair region mask, the eyebrow region mask and the beard region mask. The network consists of two modules: the first module takes the RGB image as input and applies multiple convolutional layers to extract abstract image features; the second module takes the abstract features extracted by the first module as input and contains multiple branches, each composed of convolution, deconvolution or fully connected operations, which predict the different face attributes respectively (Fig. 5).
Specifically, the first module takes the RGB image as input and applies multiple convolutional layers to extract abstract image features: a single RGB image is input, the face bounding-box region is first cropped and scaled to 224x224, and the result is then passed in order through a stack of convolutional layers for feature extraction. A ReLU activation function follows each convolutional layer to perform a nonlinear transformation, which is standard practice in deep neural networks.
Specifically, the second module takes the abstract features extracted by the first module as input and contains multiple branches, each composed of convolution, deconvolution or fully connected operations; the branches predict the following face attributes respectively (a minimal code sketch of such a multi-branch head is given after the list):
(1) Camera parameter estimation branch: a single fully connected layer that takes the MaxPool result as input and outputs a 6-dimensional vector representing the rotation and translation of the camera.
(2) Geometry parameter vector α estimation branch: a single fully connected layer that takes the MaxPool result as input and outputs an n_α-dimensional vector, where n_α is the dimension of α.
(3) Texture parameter vector β estimation branch: a single fully connected layer that takes the MaxPool result as input and outputs an n_β-dimensional vector, where n_β is the dimension of β.
(4) Hair region mask segmentation branch: three deconvolution layers that take the Conv33 result as input and apply three 3x3/0.5 deconvolutions with outputs of 56x56x64, 112x112x32 and 224x224x1 respectively; the final output is a mask of the same size as the input image representing the pixels covered by hair.
(5) Beard region mask segmentation branch: three deconvolution layers that take the Conv33 result as input and apply three 3x3/0.5 deconvolutions with outputs of 56x56x64, 112x112x32 and 224x224x1 respectively; the final output is a mask of the same size as the input image representing the pixels covered by the beard.
(6) Eyebrow region mask segmentation branch: three deconvolution layers that take the Conv33 result as input and apply three 3x3/0.5 deconvolutions with outputs of 56x56x64, 112x112x32 and 224x224x1 respectively; the final output is a mask of the same size as the input image representing the pixels covered by the eyebrows.
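For illustration, a minimal PyTorch sketch of such a multi-task head follows. The backbone below is only a placeholder standing in for the convolutional feature extractor of the first module; its layer sizes, the way the pooled and spatial features are shared, and the values of n_α and n_β are assumptions, not the patent's exact network.

```python
# Minimal sketch (assumed architecture details): a shared convolutional feature
# extractor feeding fully connected regression branches and deconvolutional mask branches.
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    def __init__(self, n_alpha=80, n_beta=80):            # n_alpha / n_beta assumed
        super().__init__()
        # Placeholder backbone: produces a 28x28 spatial map and a pooled vector.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),   # 28x28x256
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fully connected regression branches (camera pose, alpha, beta).
        self.fc_cam = nn.Linear(256, 6)
        self.fc_alpha = nn.Linear(256, n_alpha)
        self.fc_beta = nn.Linear(256, n_beta)
        # Deconvolutional mask branches (hair / beard / eyebrow), 28 -> 224.
        def mask_branch():
            return nn.Sequential(
                nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(),   # 56x56x64
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 112x112x32
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 224x224x1
            )
        self.hair, self.beard, self.brow = mask_branch(), mask_branch(), mask_branch()

    def forward(self, x):                                  # x: (B, 3, 224, 224)
        feat = self.backbone(x)
        vec = self.pool(feat).flatten(1)
        return {
            "camera": self.fc_cam(vec), "alpha": self.fc_alpha(vec),
            "beta": self.fc_beta(vec),  "hair": self.hair(feat),
            "beard": self.beard(feat),  "brow": self.brow(feat),
        }

out = MultiTaskFaceNet()(torch.randn(1, 3, 224, 224))
print({k: tuple(v.shape) for k, v in out.items()})
```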
There is much existing work that fits three-dimensional face information with deep convolutional networks. The strategy of this method is to regress the geometry parameter vector α and the texture parameter vector β directly from the input image, i.e. the squared two-norm of the difference of these two vectors is used as the loss function of the neural network. The reason is that the three-dimensional face information can be reconstructed from the two coefficient vectors, and directly regressing the coefficients keeps network training simple and encourages fast convergence. On top of this, a multi-task mechanism is introduced so that the same network produces multiple outputs simultaneously. On the one hand the multi-task mechanism makes fuller use of the information and reduces computation; on the other hand it improves the generalization ability of the model and reduces over-fitting. Similar methods and observations have been widely used and studied in deep-neural-network tasks such as semantic segmentation, object detection and recognition.
3. Face style conversion network: from the portrait image data set and the stylized three-dimensional face models of step 1, and the face-feature deep convolutional network trained in step 2, train one face-geometry style conversion convolutional network and a series of face-texture style conversion convolutional networks; together they constitute the face style conversion network.
Training the face-geometry style conversion convolutional network comprises (Fig. 6):
For an input image, compute the position and normal vector of every vertex from the three-dimensional face geometry output by the face feature extraction network; these positions and normals are rendered into a six-channel image A according to the UV unwrapping of the model. Similarly, the vertex positions and normals of the manually stylized face model of the input image are unwrapped and rendered into a six-channel image B. The face-geometry style conversion network takes image A as input and, through a series of convolution and deconvolution operations, minimizes the Euclidean distance between the network output image and B.
Specifically, rendering the positions and normals into a six-channel image A according to the UV unwrapping of the model means: denote the geometric position of a mesh vertex by (v_x, v_y, v_z), its normal vector by (n_x, n_y, n_z) and its texture coordinate by (t_u, t_v). For all triangular faces of the mesh, rasterize the triangles using (t_u, t_v) as coordinates and v_x as the color to obtain one image channel; the other five channels are obtained in the same way.
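As a purely illustrative sketch (all function and variable names are assumptions, and the patent's own rasterizer is not specified), the channel-by-channel rendering described above can be written as a simple barycentric fill over each triangle in UV space:

```python
# Minimal sketch (assumed, not the patent's renderer): rasterize per-vertex
# attributes (x/y/z position and normal) into a 6-channel image over UV space.
import numpy as np

def rasterize_uv(uv, attrs, faces, res=224):
    """uv: (V,2) in [0,1]; attrs: (V,6) position+normal; faces: (F,3) vertex indices."""
    img = np.zeros((res, res, 6), dtype=np.float32)
    pix = uv * (res - 1)                                  # UV -> pixel coordinates
    for tri in faces:
        p = pix[tri]                                      # (3,2) triangle corners
        a = attrs[tri]                                    # (3,6) corner attributes
        x0, y0 = np.floor(p.min(0)).astype(int)
        x1, y1 = np.ceil(p.max(0)).astype(int) + 1
        xs, ys = np.meshgrid(np.arange(x0, x1), np.arange(y0, y1))
        # Barycentric coordinates of every pixel in the bounding box.
        T = np.array([[p[0, 0] - p[2, 0], p[1, 0] - p[2, 0]],
                      [p[0, 1] - p[2, 1], p[1, 1] - p[2, 1]]])
        if abs(np.linalg.det(T)) < 1e-12:
            continue                                      # skip degenerate triangles
        inv = np.linalg.inv(T)
        d = np.stack([xs - p[2, 0], ys - p[2, 1]], -1)    # offsets from corner 2
        w01 = d @ inv.T                                   # (H,W,2) = (w0, w1)
        w = np.concatenate([w01, 1 - w01.sum(-1, keepdims=True)], -1)
        inside = (w >= 0).all(-1)
        img[ys[inside], xs[inside]] = w[inside] @ a       # interpolate the 6 channels
    return img

# Toy usage: one triangle, position in channels 0-2 and a +z normal in channels 3-5.
uv = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.9]])
attrs = np.hstack([np.eye(3), np.tile([0, 0, 1], (3, 1))]).astype(np.float32)
faces = np.array([[0, 1, 2]])
print(rasterize_uv(uv, attrs, faces).sum(axis=(0, 1))[:3])  # sums of interpolated channels
```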
Specifically, the series of convolution and deconvolution operations comprises:
The input image A is first scaled to 224x224x6 and fed into an "hourglass"-type network model: the image is first downsampled to a lower resolution and then upsampled back to the original resolution, with skip connections between intermediate layers so that information is preserved and the detail loss caused by downsampling is avoided. This kind of model has been successfully applied to semantic segmentation tasks based on convolutional neural networks and achieves very good results in terms of output detail. The concrete network definition is given in the following table:
Layer name    Convolution kernel size/stride    Output size
FConv1 3x3/2 112x112x96
FConv2 3x3/2 56x56x256
FConv3 3x3/2 28x28x384
FConv4 3x3/1 28x28x384
FConv5 3x3/1 28x28x256
RConv5 3x3/1 28x28x384
RConv4 3x3/1 28x28x384
RConv3 3x3/0.5 56x56x256
RConv2 3x3/0.5 112x112x96
RConv1 3x3/0.5 224x224x6
In the table, apart from the skip connections, the output of each layer serves as the input of the next layer; in addition, the output of FConv1 is passed through a 1x1/1 convolution and simultaneously fed as input to RConv1, and FConv2-RConv2, FConv3-RConv3 and FConv4-RConv4 have similar connections.
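The table can be read directly as a small encoder-decoder. Below is a minimal PyTorch sketch of it; how the 1x1 skip features are combined with the decoder features (element-wise addition here) and the transposed-convolution kernel size used to realize the 3x3/0.5 layers are assumptions not fixed by the text.

```python
# Minimal sketch of the hourglass network in the table (skip fusion by addition
# and transposed-convolution details are assumptions).
import torch
import torch.nn as nn

def conv(cin, cout, stride):        # "3x3/stride" layers
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU())

def deconv(cin, cout):              # "3x3/0.5" layers: upsample by a factor of 2
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.ReLU())

class GeometryStyleNet(nn.Module):
    def __init__(self, channels=6):                        # 6-channel position+normal image
        super().__init__()
        self.f1 = conv(channels, 96, 2)   # 224 -> 112
        self.f2 = conv(96, 256, 2)        # 112 -> 56
        self.f3 = conv(256, 384, 2)       # 56  -> 28
        self.f4 = conv(384, 384, 1)
        self.f5 = conv(384, 256, 1)
        self.r5 = conv(256, 384, 1)
        self.r4 = conv(384, 384, 1)
        self.r3 = deconv(384, 256)        # 28 -> 56
        self.r2 = deconv(256, 96)         # 56 -> 112
        self.r1 = nn.ConvTranspose2d(96, channels, 4, 2, 1)  # 112 -> 224, linear output
        # 1x1/1 skip convolutions from FConv1..FConv4 to the decoder side.
        self.s1 = nn.Conv2d(96, 96, 1)    # FConv1 -> RConv1 input
        self.s2 = nn.Conv2d(256, 256, 1)  # FConv2 -> RConv2 input
        self.s3 = nn.Conv2d(384, 384, 1)  # FConv3 -> RConv3 input
        self.s4 = nn.Conv2d(384, 384, 1)  # FConv4 -> RConv4 input

    def forward(self, a):
        f1 = self.f1(a); f2 = self.f2(f1); f3 = self.f3(f2)
        f4 = self.f4(f3); f5 = self.f5(f4)
        x = self.r5(f5)
        x = self.r4(x + self.s4(f4))
        x = self.r3(x + self.s3(f3))
        x = self.r2(x + self.s2(f2))
        return self.r1(x + self.s1(f1))

print(GeometryStyleNet()(torch.randn(1, 6, 224, 224)).shape)  # -> (1, 6, 224, 224)
```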
Specifically, minimizing the Euclidean distance between the network output image and B means defining the loss function of the network as:
L = L_rec + λ_reg·L_reg
with L_rec = Σ_i ||I_i − B_i||² and L_reg = Σ_i Σ_{j∈N(i)} ||I_i − I_j||², where I_i is the value of the RConv1 output at pixel i, B_i is the value of image B at that pixel, and N(i) is the one-ring neighborhood of pixel i. L_rec constrains the image output by the network to be sufficiently similar to the given style image; L_reg is a smoothness regularizer that constrains the output image to be locally smooth, a necessary constraint because face geometry is locally smooth; λ_reg is the regularization weight.
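For reference, a minimal PyTorch sketch of such a loss is given below; the exact neighborhood definition and the weight value are assumptions (here the smoothness term penalizes differences between 4-connected neighboring pixels).

```python
# Minimal sketch of the reconstruction + smoothness loss (neighborhood definition
# and regularization weight are assumptions).
import torch

def geometry_style_loss(pred, target, lam_reg=0.1):
    """pred, target: (B, 6, H, W) six-channel position+normal images."""
    l_rec = ((pred - target) ** 2).sum()                  # L_rec: match the style image B
    dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]           # horizontal neighbor differences
    dy = pred[:, :, 1:, :] - pred[:, :, :-1, :]           # vertical neighbor differences
    l_reg = (dx ** 2).sum() + (dy ** 2).sum()             # L_reg: local smoothness
    return l_rec + lam_reg * l_reg

pred = torch.randn(1, 6, 224, 224, requires_grad=True)
target = torch.randn(1, 6, 224, 224)
geometry_style_loss(pred, target).backward()              # differentiable end to end
```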
Training the series of face-texture style conversion convolutional networks comprises:
For an input image, using the face texture map output by the multi-task face-feature deep convolutional network described in step 1, the hair, eyebrow and beard region masks, and the stylized texture map obtained from the corresponding manual stylized face modeling, the following texture style conversion networks are trained:
(1) The non-hair-region texture style conversion network takes as input the face texture with the hair, eyebrow and beard regions removed and, through a series of convolution and deconvolution operations, minimizes the Euclidean distance between the network output and the corresponding stylized face texture with the hair, eyebrow and beard regions removed (Fig. 7).
(2) The eyebrow-region texture style conversion network takes as input the image pixels inside the eyebrow region mask; otherwise it follows the same strategy as the non-hair-region network (Fig. 8).
(3) The beard-region texture style conversion network takes as input the image pixels inside the beard region mask; otherwise it follows the same strategy as the non-hair-region network (Fig. 9).
Although the three style conversion networks follow the same strategy, they are handled by three separate networks, because the stylizations of the non-hair region, the eyebrow region and the beard region differ greatly; handling them together would make the learning problem too hard for a single network, increasing the difficulty of convergence and reducing generalization.
The concrete structure of the texture style conversion networks is almost identical to the "hourglass" structure of the face-geometry style conversion network, except that the input is no longer a six-channel vertex-normal image but a three-channel color image, and the smoothness regularizer is not added to the loss function, because local smoothness cannot be assumed for color images.
4. Given an image to be processed, run the face feature extraction network obtained in step 2 and the face style conversion network obtained in step 3, output the face geometry result and texture results, post-process the output face geometry, fuse the output textures, and finally obtain the stylized three-dimensional face model (Fig. 10).
Running the face feature extraction network obtained in step 2 and the face style conversion network obtained in step 3 and outputting the face geometry result and texture results comprises:
The user inputs a single image; the face feature extraction network is run first to obtain the three-dimensional face geometry, the texture map, and the hair, eyebrow and beard region masks; the face style conversion network is then run to obtain the normal-vector UV unwrapping of the stylized three-dimensional face geometry and the stylized texture maps of the non-hair region, the eyebrow region and the beard region. Because every network has already been trained in the preceding steps, this step simply runs each network in turn, with no complicated additional operations.
Automatically post-processing the output geometry comprises:
From the vertex positions and normal-vector UV unwrapping output by the stylized three-dimensional face model automatic generation method, extract the average normal of each face of the mesh and the position of each vertex, and jointly optimize the vertex positions and face normals to obtain the final stylized three-dimensional face geometry.
Specifically, extracting the average normal of each mesh face means: for each triangular face f, traverse the pixels of the image output by the face-geometry style conversion convolutional network, average the normal vectors of all pixels belonging to the texture-space region of face f, and normalize the result to obtain the average normal n̄_f of the face.
Specifically, the position of each vertex means: for each triangle vertex j, read the position coordinates directly from the pixel of the image output by the face-geometry style conversion convolutional network at its texture coordinate, obtaining v̂_j.
Specifically, jointly optimizing the vertex positions and face normals to obtain the final stylized three-dimensional face geometry means defining the optimization problem:
min over {v_j}:  Σ_j ||v_j − v̂_j||² + λ Σ_f Σ_{k=0,1,2} ( n̄_f^T (v_{f,(k+1) mod 3} − v_{f,k}) )²
where n̄_f^T denotes the transpose of the face normal vector, v_{f,k} (k = 0, 1, 2) denotes the k-th vertex of triangular face f, and λ is a weight parameter. This optimization constrains, on the one hand, the vertex coordinates of the target triangular mesh to be close to the neural network output and, on the other hand, the face normals of the target mesh to be close to the neural network output. The reason the vertex coordinates output by the network are not used directly, but are combined with the normal information, is that the normals carry more local information, and neural networks are better at processing local information. The above optimization problem is linear, and its closed-form solution can be obtained directly.
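Because both terms are linear in the unknown vertex coordinates, the problem can be assembled as a sparse linear least-squares system. The sketch below is an assumed implementation on a hypothetical small mesh, solving the system with scipy:

```python
# Minimal sketch (assumed): jointly fit vertex positions to the network-predicted
# positions v_hat and per-face average normals n_bar by sparse linear least squares.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def optimize_vertices(v_hat, n_bar, faces, lam=1.0):
    """v_hat: (V,3) predicted positions; n_bar: (F,3) unit face normals; faces: (F,3)."""
    V = v_hat.shape[0]
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    # Position term: v_j ~= v_hat_j  (one row per vertex per coordinate).
    for j in range(V):
        for c in range(3):
            rows.append(r); cols.append(3 * j + c); vals.append(1.0)
            rhs.append(v_hat[j, c]); r += 1
    # Normal term: n_bar_f . (v_{k+1} - v_k) ~= 0 for each edge of each face.
    w = np.sqrt(lam)
    for f, (i, j, k) in enumerate(faces):
        for i0, i1 in ((i, j), (j, k), (k, i)):
            for coord in range(3):
                rows += [r, r]
                cols += [3 * i1 + coord, 3 * i0 + coord]
                vals += [w * n_bar[f, coord], -w * n_bar[f, coord]]
            rhs.append(0.0); r += 1
    A = sp.coo_matrix((vals, (rows, cols)), shape=(r, 3 * V)).tocsr()
    x = lsqr(A, np.array(rhs))[0]
    return x.reshape(V, 3)

# Toy usage: two triangles sharing an edge, with target normals pointing along +z.
v_hat = np.array([[0, 0, 0.1], [1, 0, -0.1], [0, 1, 0.05], [1, 1, 0.0]], float)
faces = np.array([[0, 1, 2], [1, 3, 2]])
n_bar = np.tile([0.0, 0.0, 1.0], (2, 1))
print(optimize_vertices(v_hat, n_bar, faces, lam=10.0))   # z-values flatten toward a plane
```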
Automatically fusing the output textures means: taking the three stylized texture maps output by the stylized three-dimensional face model automatic generation method for the non-hair region, the eyebrow region and the beard region, and blending them semi-transparently into a single overall stylized face texture map.
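A semi-transparent (alpha) blend of the three regional texture maps can be sketched as follows; the per-region alpha maps and their values are assumptions, since the text does not specify the blending weights:

```python
# Minimal sketch (assumed weights): semi-transparent blending of the regional
# stylized texture maps into one overall face texture.
import numpy as np

def blend_textures(base_tex, region_texs, region_alphas):
    """base_tex: (H,W,3) non-hair-region texture; region_texs/alphas: lists of
    (H,W,3) textures and (H,W,1) soft masks in [0,1] for the eyebrow and beard regions."""
    out = base_tex.astype(np.float32)
    for tex, alpha in zip(region_texs, region_alphas):
        out = alpha * tex + (1.0 - alpha) * out            # standard alpha compositing
    return np.clip(out, 0.0, 1.0)

H = W = 224
base = np.full((H, W, 3), 0.8, np.float32)                 # plain skin-colored texture
brow_tex = np.zeros((H, W, 3), np.float32)
brow_alpha = np.zeros((H, W, 1), np.float32); brow_alpha[60:80, 40:180] = 0.7
beard_tex = np.full((H, W, 3), 0.2, np.float32)
beard_alpha = np.zeros((H, W, 1), np.float32); beard_alpha[160:210, 70:150] = 0.5
result = blend_textures(base, [brow_tex, beard_tex], [brow_alpha, beard_alpha])
print(result.shape, result.min(), result.max())
```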

Claims (5)

1. An automatic generation method of a stylized three-dimensional face model from a single image, comprising the following steps:
(1) face database preprocessing: extracting three-dimensional face information for every image in a portrait image data set, and at the same time manually building a stylized three-dimensional face model for every image;
(2) face feature extraction network: training a multi-task face-feature deep convolutional network from the portrait image data set of step 1 and the corresponding three-dimensional face information;
(3) face style conversion network: from the portrait image data set and the stylized three-dimensional face models of step 1, and the face-feature deep convolutional network trained in step 2, training one face-geometry style conversion convolutional network and a series of face-texture style conversion convolutional networks, the face-geometry style conversion convolutional network and the face-texture style conversion convolutional networks constituting the face style conversion network;
(4) after inputting an image to be processed, running the face feature extraction network obtained in step 2 and the face style conversion network obtained in step 3, outputting the face geometry result and texture results, post-processing the output face geometry, fusing the output textures, and finally obtaining the stylized three-dimensional face model.
2. The automatic generation method of a stylized three-dimensional face model from a single image according to claim 1, characterized in that, in step (1), extracting three-dimensional face information for every image in the portrait image data set specifically comprises:
first, annotating the two-dimensional information of the face in each image of the input portrait image data set, including face key points and masks for the hair, eyebrow and beard regions, wherein the face key points are feature point positions on the cheek contour, eyes, eyebrows, nose and mouth outline, and a mask is a binarized pixel region whose value is 1 inside the region and 0 outside;
then, running a three-dimensional fitting algorithm using the face key point information and the image color information to obtain the three-dimensional information of the face, including the three-dimensional geometry and the texture map; concretely, each face is represented as a triangular mesh, every mesh vertex has a geometric coordinate and a UV texture coordinate, each mesh is associated with one texture map, and a PCA decomposition of the three-dimensional face information is expressed as:
S = S̄ + W_s·α,  T = T̄ + W_t·β
where S̄ is the mesh vertex geometry of the average face, W_s is the PCA basis matrix of the vertex geometry, α is the geometry parameter vector, T̄ is the average texture map, W_t is the PCA basis matrix of the texture map, and β is the texture parameter vector;
then minimizing the following energy:
E_fit(M, α, β) = λ_l·E_l(M, α, β) + λ_a·E_a(M, α, β) + λ_r·E_r(α, β)
where M denotes the camera parameters and λ_l, λ_a, λ_r are the weights of the three energy terms E_l, E_a, E_r; E_l(M, α, β) minimizes the difference between the projected positions of the face key points and the annotated image positions:
E_l(M, α, β) = Σ_j ||P(s_j) − l_j||²
where P(·) is the camera projection function, s_j is the vertex geometric coordinate of S at the j-th face key point, and l_j is the corresponding annotated image position; E_a(M, α, β) minimizes the color difference between the rendered face and the image pixels:
E_a(M, α, β) = Σ_{u,v} ||R_{u,v}(M, α, β) − I_{u,v}||²
where R_{u,v}(·) is the rendering function giving the color at pixel (u, v) after the three-dimensional mesh is rendered onto the image, and I_{u,v} is the color of the input image at pixel (u, v); E_r(α, β) is a regularization term that penalizes over-fitting:
E_r(α, β) = ||α||² + ||β||²
finally, the energy E_fit(M, α, β) is minimized iteratively with the standard Gauss-Newton method.
3. The automatic generation method of a stylized three-dimensional face model from a single image according to claim 1, characterized in that step 2 specifically comprises: training one multi-task deep convolutional network from the input images and the corresponding annotations, wherein the input is a three-channel RGB image and the outputs include: the camera parameters, the three-dimensional geometry of the face, the texture map of the face, the hair region mask, the eyebrow region mask and the beard region mask; the multi-task deep convolutional network consists of two modules: the first module takes the RGB image as input and applies multiple convolutional layers to extract abstract image features; the second module takes the abstract features extracted by the first module as input and contains multiple branches, each composed of convolution, deconvolution or fully connected operations, which predict the different face attributes respectively.
4. The automatic generation method of a stylized three-dimensional face model from a single image according to claim 1, characterized in that, in step 3, training the face-geometry style conversion convolutional network specifically comprises: for an input image, computing the position and normal vector of every vertex from the three-dimensional face geometry output by the multi-task face-feature deep convolutional network, these positions and normals being rendered into a six-channel image A according to the UV unwrapping of the model; similarly, the vertex positions and normals of the stylized three-dimensional face model of the input image being unwrapped and rendered into a six-channel image B; the face-geometry style conversion network takes image A as input and, through a series of convolution and deconvolution operations, minimizes the Euclidean distance between the network output image and B;
training the series of face-texture style conversion convolutional networks specifically comprises: for an input image, using the face texture map output by the multi-task face-feature deep convolutional network, the hair, eyebrow and beard region masks, and the texture map of the stylized three-dimensional face model, separately training a non-hair-region texture style conversion network, an eyebrow-region texture style conversion network and a beard-region texture style conversion network.
5. The automatic generation method of a stylized three-dimensional face model from a single image according to claim 1, characterized in that, in step 4, running the face feature extraction network obtained in step 2 and the face style conversion network obtained in step 3 and outputting the face geometry result and texture results specifically comprises: the user inputs a single image; the face feature extraction network is run first to obtain the three-dimensional geometry of the face, the texture map, and the hair, eyebrow and beard region masks; the face style conversion network is then run to obtain the normal-vector UV unwrapping of the stylized three-dimensional face model and the texture maps of the non-hair region, the eyebrow region and the beard region;
post-processing the output face geometry specifically comprises: from the vertex positions and normal-vector UV unwrapping of the three-dimensional face geometry output by the stylized three-dimensional face model automatic generation method, extracting the average normal of each face of the mesh and the position of each vertex, and jointly optimizing the vertex positions and face normals to output the stylized face geometry;
fusing the output textures specifically comprises: taking the three stylized texture maps output by the stylized three-dimensional face model automatic generation method for the non-hair region, the eyebrow region and the beard region, and blending them semi-transparently into a single overall stylized face texture.
CN201910238031.3A 2019-03-27 2019-03-27 Stylized human face three-dimensional model automatic generation method based on single image Active CN109978930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910238031.3A CN109978930B (en) 2019-03-27 2019-03-27 Stylized human face three-dimensional model automatic generation method based on single image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910238031.3A CN109978930B (en) 2019-03-27 2019-03-27 Stylized human face three-dimensional model automatic generation method based on single image

Publications (2)

Publication Number Publication Date
CN109978930A true CN109978930A (en) 2019-07-05
CN109978930B CN109978930B (en) 2020-11-10

Family

ID=67080944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910238031.3A Active CN109978930B (en) 2019-03-27 2019-03-27 Stylized human face three-dimensional model automatic generation method based on single image

Country Status (1)

Country Link
CN (1) CN109978930B (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334679A (en) * 2019-07-11 2019-10-15 厦门美图之家科技有限公司 Face point processing method and processing device
CN110443892A (en) * 2019-07-25 2019-11-12 北京大学 A kind of three-dimensional grid model generation method and device based on single image
CN111354079A (en) * 2020-03-11 2020-06-30 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction network training and virtual face image generation method and device
CN111353470A (en) * 2020-03-13 2020-06-30 北京字节跳动网络技术有限公司 Image processing method and device, readable medium and electronic equipment
CN111369427A (en) * 2020-03-06 2020-07-03 北京字节跳动网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment
CN111368685A (en) * 2020-02-27 2020-07-03 北京字节跳动网络技术有限公司 Key point identification method and device, readable medium and electronic equipment
CN111524207A (en) * 2020-04-21 2020-08-11 腾讯科技(深圳)有限公司 Image generation method and device based on artificial intelligence and electronic equipment
CN111667400A (en) * 2020-05-30 2020-09-15 温州大学大数据与信息技术研究院 Human face contour feature stylization generation method based on unsupervised learning
CN112396693A (en) * 2020-11-25 2021-02-23 上海商汤智能科技有限公司 Face information processing method and device, electronic equipment and storage medium
CN112581481A (en) * 2020-12-30 2021-03-30 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112581358A (en) * 2020-12-17 2021-03-30 北京达佳互联信息技术有限公司 Training method of image processing model, image processing method and device
US20210110557A1 (en) * 2019-10-10 2021-04-15 Andrew Thomas Busey Pattern-triggered object modification in augmented reality system
CN112991523A (en) * 2021-04-02 2021-06-18 福建天晴在线互动科技有限公司 Efficient and automatic hair matching head type generation method and generation device thereof
CN113012282A (en) * 2021-03-31 2021-06-22 深圳市慧鲤科技有限公司 Three-dimensional human body reconstruction method, device, equipment and storage medium
CN113111861A (en) * 2021-05-12 2021-07-13 北京深尚科技有限公司 Face texture feature extraction method, 3D face reconstruction method, device and storage medium
CN113256694A (en) * 2020-02-13 2021-08-13 北京沃东天骏信息技术有限公司 Eyebrow pencil drawing method and device
CN113269888A (en) * 2021-05-25 2021-08-17 山东大学 Hairstyle three-dimensional modeling method, character three-dimensional modeling method and system
CN113284229A (en) * 2021-05-28 2021-08-20 上海星阑信息科技有限公司 Three-dimensional face model generation method, device, equipment and storage medium
WO2021174939A1 (en) * 2020-03-03 2021-09-10 平安科技(深圳)有限公司 Facial image acquisition method and system
CN113409454A (en) * 2021-07-14 2021-09-17 北京百度网讯科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113470182A (en) * 2021-09-03 2021-10-01 中科计算技术创新研究院 Face geometric feature editing method and deep face remodeling editing method
CN113470162A (en) * 2020-03-30 2021-10-01 京东方科技集团股份有限公司 Method, device and system for constructing three-dimensional head model and storage medium
CN113538221A (en) * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Three-dimensional face processing method, training method, generating method, device and equipment
CN113570634A (en) * 2020-04-28 2021-10-29 北京达佳互联信息技术有限公司 Object three-dimensional reconstruction method and device, electronic equipment and storage medium
CN114119154A (en) * 2021-11-25 2022-03-01 北京百度网讯科技有限公司 Virtual makeup method and device
CN114299206A (en) * 2021-12-31 2022-04-08 清华大学 Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN115147578A (en) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Stylized three-dimensional face generation method and device, electronic equipment and storage medium
CN115761855A (en) * 2022-11-23 2023-03-07 北京百度网讯科技有限公司 Face key point information generation, neural network training and three-dimensional face reconstruction method
CN116740300A (en) * 2023-06-16 2023-09-12 广东工业大学 Multi-mode-based prime body and texture fusion furniture model reconstruction method


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303772A (en) * 2008-06-20 2008-11-12 浙江大学 Method for modeling non-linear three-dimensional human face based on single sheet image
CN101751689A (en) * 2009-09-28 2010-06-23 中国科学院自动化研究所 Three-dimensional facial reconstruction method
CN103208133A (en) * 2013-04-02 2013-07-17 浙江大学 Method for adjusting face plumpness in image
CN107358648A (en) * 2017-07-17 2017-11-17 中国科学技术大学 Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN109255827A (en) * 2018-08-24 2019-01-22 太平洋未来科技(深圳)有限公司 Three-dimensional face images generation method, device and electronic equipment
CN109255831A (en) * 2018-09-21 2019-01-22 南京大学 The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AYUSH TEWARI ET AL.: "MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction", 《ICCV》 *
LI FANGMIN ET AL.: "3D Face Reconstruction Based on Convolutional Neural Network", 《2017 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTATION TECHNOLOGY AND AUTOMATION》 *

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334679A (en) * 2019-07-11 2019-10-15 厦门美图之家科技有限公司 Face point processing method and processing device
CN110334679B (en) * 2019-07-11 2021-11-26 厦门美图之家科技有限公司 Face point processing method and device
CN110443892B (en) * 2019-07-25 2021-06-04 北京大学 Three-dimensional grid model generation method and device based on single image
CN110443892A (en) * 2019-07-25 2019-11-12 北京大学 A kind of three-dimensional grid model generation method and device based on single image
US11908149B2 (en) * 2019-10-10 2024-02-20 Andrew Thomas Busey Pattern-triggered object modification in augmented reality system
US20210110557A1 (en) * 2019-10-10 2021-04-15 Andrew Thomas Busey Pattern-triggered object modification in augmented reality system
CN113256694A (en) * 2020-02-13 2021-08-13 北京沃东天骏信息技术有限公司 Eyebrow pencil drawing method and device
CN111368685B (en) * 2020-02-27 2023-09-29 北京字节跳动网络技术有限公司 Method and device for identifying key points, readable medium and electronic equipment
CN111368685A (en) * 2020-02-27 2020-07-03 北京字节跳动网络技术有限公司 Key point identification method and device, readable medium and electronic equipment
WO2021174939A1 (en) * 2020-03-03 2021-09-10 平安科技(深圳)有限公司 Facial image acquisition method and system
CN111369427A (en) * 2020-03-06 2020-07-03 北京字节跳动网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment
CN111369427B (en) * 2020-03-06 2023-04-18 北京字节跳动网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment
CN111354079B (en) * 2020-03-11 2023-05-02 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction network training and virtual face image generation method and device
CN111354079A (en) * 2020-03-11 2020-06-30 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction network training and virtual face image generation method and device
CN111353470A (en) * 2020-03-13 2020-06-30 北京字节跳动网络技术有限公司 Image processing method and device, readable medium and electronic equipment
WO2021197230A1 (en) * 2020-03-30 2021-10-07 京东方科技集团股份有限公司 Three-dimensional head model constructing method, device, system, and storage medium
CN113470162A (en) * 2020-03-30 2021-10-01 京东方科技集团股份有限公司 Method, device and system for constructing three-dimensional head model and storage medium
CN111524207A (en) * 2020-04-21 2020-08-11 腾讯科技(深圳)有限公司 Image generation method and device based on artificial intelligence and electronic equipment
CN113570634A (en) * 2020-04-28 2021-10-29 北京达佳互联信息技术有限公司 Object three-dimensional reconstruction method and device, electronic equipment and storage medium
CN111667400A (en) * 2020-05-30 2020-09-15 温州大学大数据与信息技术研究院 Human face contour feature stylization generation method based on unsupervised learning
CN112396693A (en) * 2020-11-25 2021-02-23 上海商汤智能科技有限公司 Face information processing method and device, electronic equipment and storage medium
CN112581358B (en) * 2020-12-17 2023-09-26 北京达佳互联信息技术有限公司 Training method of image processing model, image processing method and device
CN112581358A (en) * 2020-12-17 2021-03-30 北京达佳互联信息技术有限公司 Training method of image processing model, image processing method and device
CN112581481B (en) * 2020-12-30 2024-04-12 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112581481A (en) * 2020-12-30 2021-03-30 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113012282B (en) * 2021-03-31 2023-05-19 深圳市慧鲤科技有限公司 Three-dimensional human body reconstruction method, device, equipment and storage medium
CN113012282A (en) * 2021-03-31 2021-06-22 深圳市慧鲤科技有限公司 Three-dimensional human body reconstruction method, device, equipment and storage medium
CN112991523A (en) * 2021-04-02 2021-06-18 福建天晴在线互动科技有限公司 Efficient and automatic hair matching head type generation method and generation device thereof
CN112991523B (en) * 2021-04-02 2023-06-30 福建天晴在线互动科技有限公司 Efficient and automatic hair matching head shape generation method and generation device thereof
CN113111861A (en) * 2021-05-12 2021-07-13 北京深尚科技有限公司 Face texture feature extraction method, 3D face reconstruction method, device and storage medium
CN113269888A (en) * 2021-05-25 2021-08-17 山东大学 Hairstyle three-dimensional modeling method, character three-dimensional modeling method and system
CN113284229A (en) * 2021-05-28 2021-08-20 上海星阑信息科技有限公司 Three-dimensional face model generation method, device, equipment and storage medium
CN113409454A (en) * 2021-07-14 2021-09-17 北京百度网讯科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113538221A (en) * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Three-dimensional face processing method, training method, generating method, device and equipment
CN113470182B (en) * 2021-09-03 2022-02-18 中科计算技术创新研究院 Face geometric feature editing method and deep face remodeling editing method
CN113470182A (en) * 2021-09-03 2021-10-01 中科计算技术创新研究院 Face geometric feature editing method and deep face remodeling editing method
CN114119154A (en) * 2021-11-25 2022-03-01 北京百度网讯科技有限公司 Virtual makeup method and device
CN114299206A (en) * 2021-12-31 2022-04-08 清华大学 Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN115147578A (en) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Stylized three-dimensional face generation method and device, electronic equipment and storage medium
CN115147578B (en) * 2022-06-30 2023-10-27 北京百度网讯科技有限公司 Stylized three-dimensional face generation method and device, electronic equipment and storage medium
CN115761855A (en) * 2022-11-23 2023-03-07 北京百度网讯科技有限公司 Face key point information generation, neural network training and three-dimensional face reconstruction method
CN115761855B (en) * 2022-11-23 2024-02-09 北京百度网讯科技有限公司 Face key point information generation, neural network training and three-dimensional face reconstruction method
CN116740300A (en) * 2023-06-16 2023-09-12 广东工业大学 Multi-mode-based prime body and texture fusion furniture model reconstruction method
CN116740300B (en) * 2023-06-16 2024-05-03 广东工业大学 Multi-mode-based prime body and texture fusion furniture model reconstruction method

Also Published As

Publication number Publication date
CN109978930B (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN109978930A (en) A kind of stylized human face three-dimensional model automatic generation method based on single image
CN106067190B (en) A kind of generation of fast face threedimensional model and transform method based on single image
CN108765550B (en) Three-dimensional face reconstruction method based on single picture
US11961200B2 (en) Method and computer program product for producing 3 dimensional model data of a garment
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN113269872A (en) Synthetic video generation method based on three-dimensional face reconstruction and video key frame optimization
CN104008564B (en) A kind of human face expression cloning process
CN108288072A (en) A kind of facial expression synthetic method based on generation confrontation network
CN107506714A (en) A kind of method of face image relighting
CN113298936B (en) Multi-RGB-D full-face material recovery method based on deep learning
CN107610209A (en) Human face countenance synthesis method, device, storage medium and computer equipment
CN110335343A (en) Based on RGBD single-view image human body three-dimensional method for reconstructing and device
CN101826217A (en) Rapid generation method for facial animation
CN106780713A (en) A kind of three-dimensional face modeling method and system based on single width photo
CN108986132A (en) A method of certificate photo Trimap figure is generated using full convolutional neural networks
CN109410119A (en) Mask image distortion method and its system
CN106127818A (en) A kind of material appearance based on single image obtains system and method
CN117496072B (en) Three-dimensional digital person generation and interaction method and system
CN111127642A (en) Human face three-dimensional reconstruction method
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
CN110717978B (en) Three-dimensional head reconstruction method based on single image
CN113870404B (en) Skin rendering method of 3D model and display equipment
CN107564097A (en) A kind of remains of the deceased three-dimensional rebuilding method based on direct picture
Jeong et al. Automatic generation of subdivision surface head models from point cloud data
CN105321205B (en) A kind of parameterized human body model method for reconstructing based on sparse key point

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant