CN104978764B - 3D face mesh model processing method and device - Google Patents

3D face mesh model processing method and device Download PDF

Info

Publication number
CN104978764B
Authority
CN
China
Prior art keywords
human face
expressive features
face
face image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410141093.XA
Other languages
Chinese (zh)
Other versions
CN104978764A (en
Inventor
吕培
周炯
赵寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong smart Polytron Technologies Inc
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201410141093.XA
Publication of CN104978764A
Application granted
Publication of CN104978764B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a 3D face mesh model processing method and device. The method includes: obtaining an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model includes second expression feature points corresponding to the first expression feature points of the original 2D face image; calculating the camera parameter matrix of the initial 3D face mesh model according to formula (1); and mapping the second expression feature points onto the original 2D face image according to the camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points and to adjust the initial 3D face mesh model according to the judgment result. Because the match between the initial 3D face mesh model and the original 2D face image is judged from the camera parameters, and the model is adjusted whenever the match is poor, the adjusted 3D face mesh model is guaranteed to match the original 2D face image more closely.

Description

3D face mesh model processing method and device
Technical field
The present invention belongs to the technical field of image processing, and in particular relates to a 3D face mesh model processing method and device.
Background
Facial expression is a subtle form of body language and an important means of conveying emotional information. Applications such as facial animation and photo editing often involve transferring the facial expression in one picture onto another picture with a different expression. For example, a person who is unsatisfied with the expression in a recent photograph may wish to transfer, by image processing, the satisfactory expression from an earlier photograph onto the current one.
With the development of 3D modeling and acquisition technology, 3D models can provide more detailed information than 2D images. In the process of facial expression transfer, it is therefore usually necessary to build a corresponding 3D face mesh model for the initial target 2D face image and for the reference 2D face image respectively, and then to perform operations such as image warping and fusion on the basis of those models to achieve the transfer. Consequently, how well the reconstructed 3D face mesh model matches its corresponding 2D face image has a significant impact on the quality of the expression processing.
Existing 3D face mesh models are mostly built from a facial expression database: a 3D expression model stored in the database is deformed as a whole to match the expression feature points marked on the initial 2D face image. However, a 3D face mesh model built by such global deformation often matches the corresponding 2D face image poorly.
Summary of the invention
In view of this problem in the prior art, the present invention provides a 3D face mesh model processing method and device, to overcome the low degree of matching between a 3D face mesh model built by global deformation based on a facial expression database and the corresponding 2D face image.
A first aspect of the present invention provides a 3D face mesh model processing method, including:
obtaining an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model includes second expression feature points corresponding to the first expression feature points of the original 2D face image;
calculating the camera parameter matrix of the initial 3D face mesh model according to formula (1):

    P = argmin_P Σ_{i=1}^N ||P·X_i - x_i||^2    (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial 3D face mesh model, x_i is the first expression feature point on the original 2D face image corresponding to X_i, and N is the number of first (and second) expression feature points;
mapping the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points, and adjusting the initial 3D face mesh model according to the judgment result.
In a first possible implementation of the first aspect, the mapping of the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, the judging of the degree of matching between the second expression feature points and the first expression feature points, and the adjusting of the model according to the judgment result include:
calculating the matching error between the second expression feature points and the first expression feature points according to formula (2):

    Err = Σ_{i=1}^N w_i·||P·X_i - x_i||^2    (2)

where Err is the matching error and w_i is the weight coefficient of the i-th feature point pair (X_i, x_i);
judging whether the matching error is greater than or equal to a preset threshold; and
if it is, adjusting the initial 3D face mesh model so that the matching error between the second expression feature points on the adjusted 3D face mesh model and the first expression feature points is smaller than the preset threshold.
According to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the adjusting of the initial 3D face mesh model includes:
calculating the geodesic distance from each second expression feature point X_i to each mesh vertex X_j on the initial 3D face mesh model, where i is not equal to j;
fixing the z coordinate of each second expression feature point X_i on the initial 3D face mesh model, and modifying its x and y coordinates with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to X_i;
determining, with the geodesic distances as constraints, the mesh vertices X_j' corresponding to each third expression feature point X_i' using a second preset algorithm; and
adjusting the initial 3D face mesh model according to the third expression feature points X_i' and their corresponding mesh vertices X_j'.
According to the first aspect, or the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, the original 2D face image includes a target 2D face image and a reference 2D face image;
the obtaining of the initial 3D face mesh model corresponding to the original 2D face image includes:
extracting the facial expression feature points of the target 2D face image and of the reference 2D face image, where the facial expression feature points include face contour feature points and the first expression feature points;
determining a near-frontal face image according to the face contour feature points of the target 2D face image and of the reference 2D face image, where the near-frontal face image is either the target 2D face image or the reference 2D face image;
deforming a target neutral face model selected from a neutral face database according to the face contour feature points and first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
deforming each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain the expression models corresponding to the near-frontal face image;
determining a first weight coefficient for each expression model according to the first expression feature points of the target 2D face image, and a second weight coefficient for each expression model according to the first expression feature points of the reference 2D face image; and
merging the expression models according to the first weight coefficients to obtain the 3D face mesh model corresponding to the target 2D face image, and merging the expression models according to the second weight coefficients to obtain the 3D face mesh model corresponding to the reference 2D face image.
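The weighted merging of expression models in the last step can be sketched as a standard blendshape combination. The array shapes and the normalization of the weights below are illustrative assumptions, not details stated in this description:

```python
import numpy as np

def fuse_expression_models(expression_models, weights):
    """Merge preset expression models with per-model weight coefficients.

    expression_models: (K, V, 3) array, K deformed expression meshes with V
    vertices each; weights: length-K coefficients (the first or second weight
    coefficients of the text). Normalizing the weights is an assumption.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # weighted sum of the K vertex arrays -> one (V, 3) fused mesh
    return np.tensordot(w, np.asarray(expression_models, dtype=float), axes=1)
```

Running the same routine once with the first weight coefficients and once with the second ones yields the two meshes for the target and reference images respectively.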
According to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the preset expression library includes generic blendshape models.
According to the third or fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the determining of the near-frontal face image according to the face contour feature points of the target 2D face image and of the reference 2D face image includes:
calculating the face contour curvature of the target 2D face image according to its face contour feature points, and the face contour curvature of the reference 2D face image according to its face contour feature points; and
determining the image with the smaller face contour curvature to be the near-frontal face image.
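A minimal sketch of this comparison, assuming the unspecified "face contour curvature" is approximated by the mean turning angle per unit arc length along the contour feature points (one plausible discrete curvature measure among several):

```python
import numpy as np

def contour_curvature(points):
    """Mean discrete curvature of a face contour polyline: turning angle at
    each interior feature point divided by the local arc length."""
    p = np.asarray(points, dtype=float)
    v1 = p[1:-1] - p[:-2]                           # incoming segments
    v2 = p[2:] - p[1:-1]                            # outgoing segments
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = np.einsum('ij,ij->i', v1, v2)
    ang = np.abs(np.arctan2(cross, dot))            # turning angle per point
    seg = 0.5 * (np.linalg.norm(v1, axis=1) + np.linalg.norm(v2, axis=1))
    return float(np.mean(ang / seg))

def pick_near_frontal(target_contour, ref_contour):
    """The image whose contour curvature is smaller is the near-frontal one."""
    return ('target' if contour_curvature(target_contour)
            < contour_curvature(ref_contour) else 'reference')
```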
According to the third, fourth or fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, after the adjusting of the initial 3D face mesh model according to the judgment result, the method further includes:
deforming the target 2D face image according to its corresponding 3D face mesh model, and deforming the reference 2D face image according to its corresponding 3D face mesh model; and
merging the deformed target 2D face image and the deformed reference 2D face image, so as to transfer the expression of the reference 2D face image onto the target 2D face image.
A second aspect of the present invention provides a 3D face mesh model processing device, including:
an acquisition module, configured to obtain an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model includes second expression feature points corresponding to the first expression feature points of the original 2D face image;
a computing module, configured to calculate the camera parameter matrix of the initial 3D face mesh model according to formula (1):

    P = argmin_P Σ_{i=1}^N ||P·X_i - x_i||^2    (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial 3D face mesh model, x_i is the first expression feature point on the original 2D face image corresponding to X_i, and N is the number of feature point pairs; and
a judgment module, configured to map the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points, and to adjust the initial 3D face mesh model according to the judgment result.
In a first possible implementation of the second aspect, the judgment module includes:
a computing unit, configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2):

    Err = Σ_{i=1}^N w_i·||P·X_i - x_i||^2    (2)

where Err is the matching error and w_i is the weight coefficient of the i-th feature point pair (X_i, x_i);
a judging unit, configured to judge whether the matching error is greater than or equal to a preset threshold; and
an adjustment unit, configured to, if it is, adjust the initial 3D face mesh model so that the matching error between the second expression feature points on the adjusted 3D face mesh model and the first expression feature points is smaller than the preset threshold.
According to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the adjustment unit includes:
a computing subunit, configured to calculate the geodesic distance from each second expression feature point X_i to each mesh vertex X_j on the initial 3D face mesh model, where i is not equal to j;
a first adjustment subunit, configured to fix the z coordinate of each second expression feature point X_i on the initial 3D face mesh model and modify its x and y coordinates with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to X_i;
a determining subunit, configured to determine, with the geodesic distances as constraints, the mesh vertices X_j' corresponding to each third expression feature point X_i' using a second preset algorithm; and
a second adjustment subunit, configured to adjust the initial 3D face mesh model according to the third expression feature points X_i' and their corresponding mesh vertices X_j'.
According to the second aspect, or the first or second possible implementation of the second aspect, in a third possible implementation of the second aspect, the original 2D face image includes a target 2D face image and a reference 2D face image;
the acquisition module includes:
an extraction unit, configured to extract the facial expression feature points of the target 2D face image and of the reference 2D face image, where the facial expression feature points include face contour feature points and the first expression feature points;
a first determining unit, configured to determine a near-frontal face image according to the face contour feature points of the target 2D face image and of the reference 2D face image, where the near-frontal face image is either the target 2D face image or the reference 2D face image;
a first deformation unit, configured to deform a target neutral face model selected from a neutral face database according to the face contour feature points and first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
a second deformation unit, configured to deform each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain the expression models corresponding to the near-frontal face image;
a second determining unit, configured to determine a first weight coefficient for each expression model according to the first expression feature points of the target 2D face image, and a second weight coefficient for each expression model according to the first expression feature points of the reference 2D face image; and
a merging unit, configured to merge the expression models according to the first weight coefficients to obtain the 3D face mesh model corresponding to the target 2D face image, and to merge the expression models according to the second weight coefficients to obtain the 3D face mesh model corresponding to the reference 2D face image.
According to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the preset expression library includes generic blendshape models.
According to the third or fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the first determining unit is specifically configured to:
calculate the face contour curvature of the target 2D face image according to its face contour feature points, and the face contour curvature of the reference 2D face image according to its face contour feature points; and
determine the image with the smaller face contour curvature to be the near-frontal face image.
According to the third, fourth or fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the device further includes:
a deformation module, configured to deform the target 2D face image according to its corresponding 3D face mesh model, and to deform the reference 2D face image according to its corresponding 3D face mesh model; and
a merging module, configured to merge the deformed target 2D face image and the deformed reference 2D face image, so as to transfer the expression of the reference 2D face image onto the target 2D face image.
With the 3D face mesh model processing method and device provided by the present invention, after the initial 3D face mesh model corresponding to the original 2D face image is obtained, the second expression feature points on the initial 3D face mesh model are mapped onto the original 2D face image according to the camera parameters of the model, the degree of matching between the second expression feature points and the first expression feature points is judged, and the initial 3D face mesh model is adjusted according to the judgment result. Because the match between the initial 3D face mesh model and the original 2D face image is judged from the camera parameters, and the model is adjusted whenever the match is poor, the adjusted 3D face mesh model is guaranteed to match the original 2D face image more closely.
Brief description of the drawings
Fig. 1 is a flowchart of the 3D face mesh model processing method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the processing procedure of step 103 of the embodiment shown in Fig. 1;
Fig. 3 is a flowchart of the 3D face mesh model processing method provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic structural diagram of the 3D face mesh model processing device provided by Embodiment 3 of the present invention;
Fig. 5 is a schematic structural diagram of the 3D face mesh model processing device provided by Embodiment 4 of the present invention;
Fig. 6 is a schematic structural diagram of the processing device provided by Embodiment 5 of the present invention.
Description of embodiments
Fig. 1 is a flowchart of the 3D face mesh model processing method provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method includes:
Step 101: obtain an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model includes second expression feature points corresponding to the first expression feature points of the original 2D face image.
In this embodiment, the 3D face mesh model processing method is performed by a processing device, which is preferably integrated into a terminal device such as a PC or a notebook computer, and can be used to perform facial expression transfer between two input images. The method provided by this embodiment applies both to adjusting a 3D face mesh model obtained by prior-art means and to adjusting one obtained by the method of the embodiment shown in Fig. 3; this embodiment imposes no limitation in that respect.
For simplicity of description, whichever of these ways it is obtained in, the 3D face mesh model is referred to in this embodiment as the initial 3D face mesh model; it corresponds to one original 2D face image. The method provided by this embodiment is particularly suited to expression-transfer scenarios, in which the facial expression of a reference 2D face image needs to be transferred onto a target 2D face image. Expression transfer first requires reconstructing a 3D face mesh model for the target 2D face image and for the reference 2D face image respectively. The original 2D face image described in this embodiment may therefore be either the reference 2D face image or the target 2D face image, and correspondingly the initial 3D face mesh model may be the one corresponding to either image; since the method applies equally to both models, they are not distinguished below.
The processing device first obtains the initial 3D face mesh model corresponding to the original 2D face image, the model including second expression feature points corresponding to the first expression feature points of the original 2D face image. For example, a 3D face mesh model obtained by prior-art means may be input to the processing device as the initial 3D face mesh model, so that the device performs subsequent adjustment according to the second expression feature points the model includes.
The first expression feature points of the original 2D face image mainly capture the changes of the facial features that different expressions cause: the motion and form of the nose, mouth, eyebrows, eyes and so on constitute the first expression feature points of the facial expression. The second expression feature points corresponding to the first expression feature points can be marked on the initial 3D face mesh model either manually or automatically.
Step 102: calculate the camera parameter matrix of the initial 3D face mesh model according to formula (1):

    P = argmin_P Σ_{i=1}^N ||P·X_i - x_i||^2    (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial 3D face mesh model, x_i is the first expression feature point on the original 2D face image corresponding to X_i, and N is the number of feature point pairs.
Step 103: map the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points, and adjust the initial 3D face mesh model according to the judgment result.
In this embodiment, to judge whether the initial 3D face mesh model matches its corresponding original 2D face image, the second expression feature points on the initial 3D face mesh model must first be mapped onto the corresponding original 2D face image; the matching error between them and the first expression feature points on the image is then judged, and the initial 3D face mesh model is adjusted according to the judgment result.
Mapping the second expression feature points on the initial 3D face mesh model onto the corresponding original 2D face image requires a parameter, namely the camera parameter, which is typically represented as a parameter matrix. Specifically, the camera parameter matrix is obtained by solving formula (1), which requires the matrix to minimize, as far as possible, the distances between the projected second expression feature points and the first expression feature points. Once the camera parameter matrix is obtained, it is used to map the second expression feature points on the initial 3D face mesh model onto the corresponding 2D face image, the degree of matching between the second and first expression feature points is judged, and the initial 3D face mesh model is adjusted accordingly.
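If the camera is treated as affine, formula (1) reduces to an ordinary linear least-squares problem. The following is a minimal numerical sketch under that assumption; the 2x4 shape of P is an illustrative choice, not a detail stated in this description:

```python
import numpy as np

def estimate_camera_matrix(X3d, x2d):
    """Solve formula (1) in the affine least-squares sense.

    X3d: (N, 3) second expression feature points X_i on the initial mesh.
    x2d: (N, 2) corresponding first expression feature points x_i.
    Returns a 2x4 camera parameter matrix P with x_i ~= P @ [X_i, 1].
    """
    N = X3d.shape[0]
    Xh = np.hstack([X3d, np.ones((N, 1))])        # homogeneous 3D points, (N, 4)
    # min_P sum_i ||P Xh_i - x_i||^2, solved column-wise by linear least squares
    B, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)  # B is (4, 2)
    return B.T                                    # P is (2, 4)
```

With at least four well-spread feature point pairs the system is determined; additional pairs are averaged out in the least-squares sense.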
In this embodiment, the match between the initial 3D face mesh model and the original 2D face image is judged from the camera parameters, and the initial 3D face mesh model is adjusted when the match is poor, which ensures that the adjusted 3D face mesh model matches the original 2D face image more closely.
Further, Fig. 2 is a flowchart of the processing procedure of step 103 of the embodiment shown in Fig. 1. As shown in Fig. 2, step 103 of Fig. 1, namely mapping the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, judging the degree of matching between the second and first expression feature points, and adjusting the initial 3D face mesh model according to the judgment result, includes:
Step 201: calculate the matching error between the second expression feature points and the first expression feature points according to formula (2):

    Err = Σ_{i=1}^N w_i·||P·X_i - x_i||^2    (2)

where Err is the matching error and w_i is the weight coefficient of the i-th feature point pair (X_i, x_i).
Step 202: judge whether the matching error is greater than or equal to a preset threshold; if it is, perform step 203, otherwise end.
After the camera parameter of initial three-dimensional face wire frame model is obtained, according to the camera parameter square being calculated The second expressive features point on the initial three-dimensional face wire frame model is mapped on the original two dimensional facial image by battle array, with According to formula(2)To judge on the second expressive features point on initial three-dimensional face wire frame model and original two dimensional facial image The matching error of first expressive features point.Formula(2)In, because the depth of each pair expressive features point, pixel grey scale are had nothing in common with each other, Therefore, each pair expressive features point has different weight coefficients.
Then it is judged whether the matching error is greater than or equal to the preset threshold; if so, the initial three-dimensional face mesh model needs to be adjusted according to steps 203 to 206, otherwise no adjustment is needed.
Step 203, calculate the geodesic distance from the second expressive feature point Xi to each grid vertex Xj on the initial three-dimensional face mesh model, wherein i is not equal to j;

Step 204, fix the z coordinate of the second expressive feature point Xi in the initial three-dimensional face mesh model, and change the x and y coordinates of the second expressive feature point Xi using a first preset algorithm, to obtain a third expressive feature point Xi' corresponding to the second expressive feature point Xi;

Step 205, with the geodesic distances as constraints, determine each grid vertex Xj' corresponding to the third expressive feature point Xi' using a second preset algorithm;

Step 206, adjust the initial three-dimensional face mesh model according to the third expressive feature point Xi' and each grid vertex Xj' corresponding to the third expressive feature point Xi'.
When the matching error is judged to be greater than or equal to the preset threshold, the initial three-dimensional face mesh model needs to be adjusted. Specifically, the geodesic distance from each second expressive feature point on the initial three-dimensional face mesh model to the other grid vertices (excluding the current second expressive feature point itself) is calculated first. Since the initial three-dimensional face mesh model is a three-dimensional model composed of individual grid cells, the geodesic distance can be understood as the length of the shortest run of grid lines from the current second expressive feature point to a given grid vertex, i.e. the path distance along the mesh.
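The shortest-path-along-grid-lines distance described above can be computed with Dijkstra's algorithm on the graph of mesh edges. A minimal sketch, assuming the mesh is given as a vertex list and an edge list (this representation and the function name are assumptions of the sketch):

```python
import heapq
import math

def geodesic_distances(vertices, edges, source):
    # Shortest path along mesh edges from `source` to every vertex.
    # vertices: list of (x, y, z) tuples; edges: list of (i, j) index pairs.
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        d = math.dist(vertices[i], vertices[j])   # edge length
        adj[i].append((j, d))
        adj[j].append((i, d))
    dist = [math.inf] * len(vertices)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                               # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

Running this once per second expressive feature point yields the distances used as constraints in step 205.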
Afterwards, the z coordinate of the second expressive feature point Xi on the initial three-dimensional face mesh model is fixed, and the x and y coordinates of the second expressive feature point Xi are changed using a first preset algorithm, to obtain the third expressive feature point Xi' corresponding to the second expressive feature point Xi, wherein the first preset algorithm is, for example, the Nelder-Mead simplex algorithm.
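For illustration, the x and y coordinates can be refined with z held fixed by minimising a cost function of (x, y). The sketch below uses a simple compass (pattern) search as a lightweight stand-in for the Nelder-Mead simplex algorithm named above; the function name and parameters are hypothetical:

```python
def refine_xy(f, x0, y0, step=0.5, tol=1e-6):
    # Minimizes f(x, y); the z coordinate stays fixed inside f.
    # Compass search: try the four axis directions, shrink the step
    # when no direction improves the current best value.
    best = f(x0, y0)
    while step > tol:
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            val = f(x0 + dx, y0 + dy)
            if val < best:
                best, x0, y0, moved = val, x0 + dx, y0 + dy, True
                break
        if not moved:
            step *= 0.5
    return x0, y0, best
```

Here f would be the reprojection error of the single feature point; Nelder-Mead (e.g. `scipy.optimize.minimize(method="Nelder-Mead")`) would replace this loop in practice.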
Then, with the geodesic distances as constraints, each grid vertex Xj' corresponding to the third expressive feature point Xi' is determined using a second preset algorithm, and the initial three-dimensional face mesh model is adjusted according to the third expressive feature point Xi' and each grid vertex Xj' corresponding to it, wherein the second preset algorithm is, for example, a radial basis function or a Laplacian mesh deformation algorithm.
It can be understood that the geodesic distances are used as constraints so that the non-feature grid vertices around the second expressive feature point still keep, as far as possible, their relative positional relation (in terms of geodesic distance) to the third expressive feature point after the second expressive feature point has been changed into the third expressive feature point.
In the present embodiment, when the matching degree between the initial three-dimensional face mesh model and the corresponding two-dimensional face image is low, the initial three-dimensional face mesh model is adjusted with the above geodesic distances as constraints. While the expressive feature points are adjusted, this helps to ensure that the other, non-expressive feature points keep the same relative position to the corresponding expressive feature points before and after the adjustment, so that the adjusted three-dimensional face mesh model matches the corresponding two-dimensional face image better.
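One possible realisation of the second preset algorithm is radial basis function interpolation of the feature-point displacements over all mesh vertices. The sketch below uses a Gaussian kernel and, for brevity, Euclidean rather than geodesic distances; both the kernel choice and this simplification are assumptions of the sketch:

```python
import numpy as np

def rbf_deform(feature_pts, feature_disp, vertices, eps=1.0):
    # Propagates the displacements of the expressive feature points to
    # all mesh vertices with Gaussian radial basis functions.
    # feature_pts: n x 3, feature_disp: n x 3, vertices: V x 3.
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    # Solve for RBF coefficients so the feature points are interpolated exactly.
    A = phi(np.linalg.norm(feature_pts[:, None] - feature_pts[None, :], axis=2))
    coeffs = np.linalg.solve(A, feature_disp)             # n x 3
    # Evaluate the interpolant at every mesh vertex.
    B = phi(np.linalg.norm(vertices[:, None] - feature_pts[None, :], axis=2))
    return vertices + B @ coeffs
```

By construction the deformed mesh passes exactly through the displaced feature points, while the surrounding vertices follow smoothly.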
Fig. 3 is a flow chart of the three-dimensional face mesh model processing method provided by embodiment two of the present invention. As shown in Fig. 3, the processing method improves on the prior-art process of obtaining an initial three-dimensional face mesh model. In prior-art schemes that establish a three-dimensional face mesh model matching the original two-dimensional face image based on a facial expression database, each facial expression model in the database is established according to the age, sex, face shape, mood, expression and so on of different individuals and thus has obvious individual differences; if the expression in the original two-dimensional face image is beyond the scope of the facial expression database, a matching three-dimensional face mesh model cannot be obtained through the database. Therefore, the method provided by the present embodiment is used to establish a three-dimensional face mesh model corresponding to the original two-dimensional face image. In the present embodiment, the original two-dimensional face image described in the embodiments shown in Fig. 1 or Fig. 2 specifically includes a target two-dimensional face image and a reference two-dimensional face image; in the application scenario of facial expression transfer, the facial expression in the reference two-dimensional face image needs to be transferred to the target two-dimensional face image.
The method provided by the present embodiment includes:
Step 301, extract the facial expression feature points of the target two-dimensional face image and the facial expression feature points of the reference two-dimensional face image, the facial expression feature points including face contour feature points and the first expressive feature points;

Step 302, determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image;

Step 303, deform the target neutral face model determined from a neutral face database according to the face contour feature points and the first expressive feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;

Step 304, deform each preset expression model included in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;

Step 305, determine a first weight coefficient of each expression model according to the first expressive feature points of the target two-dimensional face image, and determine a second weight coefficient of each expression model according to the first expressive feature points of the reference two-dimensional face image;

Step 306, merge each expression model according to the first weight coefficients to obtain a three-dimensional face mesh model corresponding to the target two-dimensional face image, and merge each expression model according to the second weight coefficients to obtain a three-dimensional face mesh model corresponding to the reference two-dimensional face image.
This method can still be performed by the above processing device. The two images input to the processing device are now referred to as the target two-dimensional face image and the reference two-dimensional face image, because in the processing procedure of facial expression transfer the facial expression on the reference two-dimensional face image needs to be transferred to the target two-dimensional face image.
First, the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image are extracted respectively. A mature algorithm such as the Active Shape Model (hereinafter referred to as ASM) can be used to accurately detect the facial expression feature points. The facial expression feature points include face contour feature points and the first expressive feature points, wherein the face contour feature points refer to feature points from which the facial contour can be clearly picked out, and the first expressive feature points mainly reflect the fact that the facial organs change differently when a face shows different expressions; the motion forms of these organs, such as the forms of the nose, mouth, eyebrows and eyes, constitute the first expressive feature points of the facial expression.
In the present embodiment, because the face contour feature points can reflect the orientation of the face in the corresponding image, one of the two images can be selected as the near-frontal face image according to the face contour feature points in the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image respectively. Specifically, in the present embodiment the face contour curvature of the target two-dimensional face image is calculated according to its face contour feature points, the face contour curvature of the reference two-dimensional face image is calculated according to its face contour feature points, and the image with the smaller face contour curvature is then selected as the near-frontal face image. A small face contour curvature means the face is oriented more towards the front.
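A rough way to compare face contour curvatures, assuming each contour is an ordered polyline of 2D feature points; the discrete-curvature formula below (turning angle divided by mean segment length) is an illustrative choice, not the one prescribed by the embodiment:

```python
import math

def contour_curvature(points):
    # Mean discrete curvature of a contour polyline: the absolute turning
    # angle at each interior point divided by the mean adjacent segment length.
    total = 0.0
    for i in range(1, len(points) - 1):
        ax, ay = points[i][0] - points[i-1][0], points[i][1] - points[i-1][1]
        bx, by = points[i+1][0] - points[i][0], points[i+1][1] - points[i][1]
        angle = abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
        seg = 0.5 * (math.hypot(ax, ay) + math.hypot(bx, by))
        total += angle / seg
    return total / (len(points) - 2)

def near_frontal(contour_a, contour_b):
    # The image whose contour bends less is treated as near-frontal.
    return 'a' if contour_curvature(contour_a) <= contour_curvature(contour_b) else 'b'
```

A profile view produces a sharply bent contour (large curvature), while a frontal view produces a flatter one, which matches the selection rule above.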
Then, the target neutral face model determined from the neutral face database is deformed according to the face contour feature points and the first expressive feature points of the near-frontal face image, to obtain the neutral face model of the near-frontal face image. For example, if the reference two-dimensional face image is determined to be the near-frontal face image, the target neutral face model determined from the neutral face database is deformed (for example scaled and rotated) according to the face contour feature points and the first expressive feature points of the reference two-dimensional face image, to obtain the neutral face model corresponding to the reference two-dimensional face image. The neutral face database includes multiple three-dimensional neutral face models covering individual differences such as sex, age and race; the target neutral face model determined from the neutral face database can be either a randomly chosen three-dimensional neutral face model or a three-dimensional neutral face model obtained by weighted fusion of all or part of the three-dimensional neutral face models included in the database.
In the present embodiment, the obtained neutral face model of the near-frontal face image has a face contour basically consistent with that of the near-frontal face image, only without detailed expressive features. Therefore, in this implementation the neutral face model serves as an intermediary for the subsequent processing of the expression models.
Then each preset expression model included in the preset expression library is deformed according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image. Specifically, the preset expression library is preferably a general blendshape model library containing a variety of different expression models; in the present embodiment, the general blendshape models are used to add expressive features to the above neutral face model. The various expression models need to be deformed according to the neutral face model of the near-frontal face image, to obtain each blendshape expression model corresponding to the near-frontal face image. Each such blendshape expression model contains both its own expressive features and the face contour features of the near-frontal face image.
Then the first weight coefficient of each blendshape expression model corresponding to the target two-dimensional face image needs to be determined according to the first expressive feature points of the target two-dimensional face image, and the second weight coefficient of each blendshape expression model corresponding to the reference two-dimensional face image needs to be determined according to the first expressive feature points of the reference two-dimensional face image. That is to say, because the expressive features on the different blendshape expression models differ, the proportion of each blendshape expression model must be determined separately for the target two-dimensional face image and for the reference two-dimensional face image. Each blendshape expression model is then merged according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merged according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image. So-called merging means superimposing the blendshape expression models according to their respective weight coefficients, i.e. the organs corresponding to each first expressive feature point on the blendshape expression models are superimposed according to the weight coefficients of the corresponding blendshape expression models.
Taking the determination of the first weight coefficients as an example: for a given first expressive feature point in the target two-dimensional face image, the feature points of the organ corresponding to that first expressive feature point are traversed in each blendshape expression model in turn (these feature points can be marked manually or delimited in advance, for example the eyebrows), and the weight coefficient of each blendshape expression model is then determined according to how close the feature points of the organ are to the first expressive feature point.
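The merging step can be illustrated with the common delta-blendshape convention, where each expression model contributes its weighted offset from the neutral model; reading the merge this way is an assumption of this sketch:

```python
import numpy as np

def merge_blendshapes(neutral, blendshapes, weights):
    # neutral: V x 3 vertex array of the neutral face model;
    # blendshapes: list of V x 3 expression-model vertex arrays;
    # weights: one weight coefficient per blendshape expression model.
    merged = neutral.astype(float).copy()
    for B, w in zip(blendshapes, weights):
        merged += w * (B - neutral)       # weighted superposition of offsets
    return merged
```

Calling this once with the first weight coefficients and once with the second weight coefficients yields the two three-dimensional face mesh models of step 306.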
After step 306 is performed to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image and the three-dimensional face mesh model corresponding to the reference two-dimensional face image, optionally, the method shown in Fig. 1 or Fig. 2 can also be performed to adjust the obtained three-dimensional face mesh models.
Optionally, after step 306 is performed, or after the obtained three-dimensional face mesh models are adjusted according to the method shown in Fig. 1 or Fig. 2, the following steps can also be performed to achieve the purpose of facial expression transfer.
Step 307, deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image;

Step 308, merge the deformed target two-dimensional face image and the deformed reference two-dimensional face image, to transfer the expression on the reference two-dimensional face image to the target two-dimensional face image.
In the present embodiment, the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image are extracted respectively, and one of the two images is selected as the near-frontal face image according to the face contour feature points among the facial expression feature points. The target neutral face model determined from the neutral face database is then deformed according to the near-frontal face image to obtain a neutral face model; the neutral face database does not depend on the features of any specific individual. Afterwards, each blendshape expression model included in the general blendshape models is deformed according to the neutral face model, the first and second weight coefficients of each blendshape expression model are determined according to the first expressive feature points of the target two-dimensional face image and of the reference two-dimensional face image respectively, the blendshape expression models are merged according to the different weight coefficients, and the three-dimensional face mesh models corresponding to the target two-dimensional face image and to the reference two-dimensional face image are finally obtained. Because the neutral face database and the general blendshape models both avoid the individual differences of people, the defect in the prior art that establishing a three-dimensional face mesh model based on a facial expression database easily fails is overcome.
Fig. 4 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by embodiment three of the present invention. As shown in Fig. 4, the processing device includes:
Acquisition module 11, for obtaining an initial three-dimensional face mesh model corresponding to the original two-dimensional face image, the initial three-dimensional face mesh model including second expressive feature points corresponding to the first expressive feature points of the original two-dimensional face image;
Computing module 12, for calculating the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):

min Σ_{i=1}^{N} ||P·X_i − x_i||²  (1)

wherein P is the camera parameter matrix, X_i is the i-th second expressive feature point on the initial three-dimensional face mesh model, x_i is the i-th first expressive feature point on the original two-dimensional face image corresponding to the second expressive feature point X_i, and N is the number of first expressive feature points and second expressive feature points;
Judge module 13, for mapping the second expressive feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, to judge the matching degree between the second expressive feature points and the first expressive feature points, and adjusting the initial three-dimensional face mesh model according to the judgement result.
The processing device of the present embodiment can be used to perform the technical solution of the method embodiment shown in Fig. 1; its realization principle and technical effect are similar and are not repeated here.
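Formula (1) becomes a linear least-squares problem when P is modelled as an affine 2x4 camera matrix, and can then be solved in closed form; the affine model and the function name are assumptions of this sketch:

```python
import numpy as np

def camera_matrix(X, x):
    # Solves min_P sum_i ||P @ X_i - x_i||^2 of formula (1) by linear
    # least squares. X: N x 3 second expressive feature points;
    # x: N x 2 first expressive feature points; returns a 2 x 4 matrix.
    Xh = np.hstack([X, np.ones((len(X), 1))])    # homogeneous N x 4 points
    P, *_ = np.linalg.lstsq(Xh, x, rcond=None)   # 4 x 2 least-squares solution
    return P.T                                    # 2 x 4 camera parameter matrix
```

With at least four non-coplanar point pairs the system is fully determined and the noiseless matrix is recovered exactly.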
Fig. 5 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by embodiment four of the present invention. As shown in Fig. 5, on the basis of the embodiment shown in Fig. 4, the judge module 13 of the processing device includes:
Computing unit 131, for calculating the matching error between the second expressive feature points and the first expressive feature points according to formula (2):

Err = Σ_{i=1}^{N} w_i ||P·X_i − x_i||²  (2)

wherein Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
Judging unit 132, for judging whether the matching error is greater than or equal to a preset threshold;

Adjustment unit 133, for adjusting the initial three-dimensional face mesh model if the matching error is greater than or equal to the preset threshold, so that the matching error between the second expressive feature points on the adjusted three-dimensional face mesh model and the first expressive feature points is less than the preset threshold.
Further, the adjustment unit 133 includes:

Computation subunit 1331, for calculating the geodesic distance from the second expressive feature point Xi to each grid vertex Xj on the initial three-dimensional face mesh model, wherein i is not equal to j;

First adjustment subunit 1332, for fixing the z coordinate of the second expressive feature point Xi on the initial three-dimensional face mesh model and changing the x and y coordinates of the second expressive feature point Xi using the first preset algorithm, to obtain the third expressive feature point Xi' corresponding to the second expressive feature point Xi;

Determination subunit 1333, for determining, with the geodesic distances as constraints, each grid vertex Xj' corresponding to the third expressive feature point Xi' using the second preset algorithm;

Second adjustment subunit 1334, for adjusting the initial three-dimensional face mesh model according to the third expressive feature point Xi' and each grid vertex Xj' corresponding to the third expressive feature point Xi'.
Further, the original two-dimensional face image includes a target two-dimensional face image and a reference two-dimensional face image;

the acquisition module 11 includes:
Extraction unit 111, for extracting the facial expression feature points of the target two-dimensional face image and the facial expression feature points of the reference two-dimensional face image, the facial expression feature points including face contour feature points and the first expressive feature points;

First determining unit 112, for determining a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image;

First deformation unit 113, for deforming the target neutral face model determined from the neutral face database according to the face contour feature points and the first expressive feature points of the near-frontal face image, to obtain the neutral face model corresponding to the near-frontal face image;

Second deformation unit 114, for deforming each preset expression model included in the preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;

Second determining unit 115, for determining the first weight coefficient of each expression model according to the first expressive feature points of the target two-dimensional face image, and determining the second weight coefficient of each expression model according to the first expressive feature points of the reference two-dimensional face image;

Combining unit 116, for merging each expression model according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merging each expression model according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
Specifically, the preset expression library includes the general blendshape models.
Further, the first determining unit 112 is specifically used for:

calculating the face contour curvature of the target two-dimensional face image according to the face contour feature points of the target two-dimensional face image, and calculating the face contour curvature of the reference two-dimensional face image according to the face contour feature points of the reference two-dimensional face image;

determining the image with the smaller face contour curvature to be the near-frontal face image.
Further, the processing device also includes:

Deformation module 21, for deforming the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deforming the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image;

Merging module 22, for merging the deformed target two-dimensional face image and the deformed reference two-dimensional face image, so that the expression on the reference two-dimensional face image is transferred to the target two-dimensional face image.
The processing device of the present embodiment can be used to perform the technical solutions of the method embodiments shown in Fig. 2 or Fig. 3; its realization principle and technical effect are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of the processing device provided by embodiment five of the present invention. As shown in Fig. 6, the processing device includes:
Memory 31 and a processor 32 connected with the memory 31, wherein the memory 31 is used to store a set of program code, and the processor 32 is used to call the program code stored in the memory 31 to perform, in the three-dimensional face mesh model processing method shown in Fig. 1: obtaining an initial three-dimensional face mesh model corresponding to the original two-dimensional face image, the initial three-dimensional face mesh model including second expressive feature points corresponding to the first expressive feature points of the original two-dimensional face image; and calculating the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):

min Σ_{i=1}^{N} ||P·X_i − x_i||²  (1)

wherein P is the camera parameter matrix, X_i is the i-th second expressive feature point on the initial three-dimensional face mesh model, x_i is the i-th first expressive feature point on the original two-dimensional face image corresponding to the second expressive feature point X_i, and N is the number of first expressive feature points and second expressive feature points; mapping the second expressive feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, to judge the matching degree between the second expressive feature points and the first expressive feature points; and adjusting the initial three-dimensional face mesh model according to the judgement result.
Further, the processor 32 is additionally used to calculate the matching error between the second expressive feature points and the first expressive feature points according to formula (2):

Err = Σ_{i=1}^{N} w_i ||P·X_i − x_i||²  (2)

wherein Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;

and to judge whether the matching error is greater than or equal to a preset threshold; if so, to adjust the initial three-dimensional face mesh model so that the matching error between the second expressive feature points on the adjusted three-dimensional face mesh model and the first expressive feature points is less than the preset threshold.
Further, the processor 32 is additionally used to calculate the geodesic distance from the second expressive feature point Xi to each grid vertex Xj on the initial three-dimensional face mesh model, wherein i is not equal to j; fix the z coordinate of the second expressive feature point Xi on the initial three-dimensional face mesh model, and change the x and y coordinates of the second expressive feature point Xi using the first preset algorithm, to obtain the third expressive feature point Xi' corresponding to the second expressive feature point Xi; determine, with the geodesic distances as constraints, each grid vertex Xj' corresponding to the third expressive feature point Xi' using the second preset algorithm; and adjust the initial three-dimensional face mesh model according to the third expressive feature point Xi' and each grid vertex Xj' corresponding to the third expressive feature point Xi'.
Further, the original two-dimensional face image includes a target two-dimensional face image and a reference two-dimensional face image, and the processor 32 is additionally used to extract the facial expression feature points of the target two-dimensional face image and the facial expression feature points of the reference two-dimensional face image, the facial expression feature points including face contour feature points and the first expressive feature points; determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image; deform the target neutral face model determined from the neutral face database according to the face contour feature points and the first expressive feature points of the near-frontal face image, to obtain the neutral face model corresponding to the near-frontal face image; deform each preset expression model included in the preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image; determine the first weight coefficient of each expression model according to the first expressive feature points of the target two-dimensional face image, and determine the second weight coefficient of each expression model according to the first expressive feature points of the reference two-dimensional face image; and merge each expression model according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merge each expression model according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
Further, the processor 32 is additionally used to calculate the face contour curvature of the target two-dimensional face image according to the face contour feature points of the target two-dimensional face image, and calculate the face contour curvature of the reference two-dimensional face image according to the face contour feature points of the reference two-dimensional face image; and to determine the image with the smaller face contour curvature to be the near-frontal face image.
Further, the processor 32 is additionally used to deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image; and to merge the deformed target two-dimensional face image and the deformed reference two-dimensional face image, so that the expression on the reference two-dimensional face image is transferred to the target two-dimensional face image.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions controlling related hardware. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A three-dimensional face mesh model processing method, characterized by comprising:
    obtaining an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to first expression feature points of the original two-dimensional face image;
    calculating a camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
    \min \sum_{i=1}^{N} \| P \cdot X_i - x_i \|^2 \quad (1)
    wherein P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to the second expression feature point X_i, and N is the number of the first expression feature points and of the second expression feature points;
    mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result;
    wherein the mapping of the second expression feature points onto the original two-dimensional face image according to the calculated camera parameter matrix, the judging of the matching degree between the second expression feature points and the first expression feature points, and the adjusting of the initial three-dimensional face mesh model according to the judgment result comprise:
    calculating a matching error between the second expression feature points and the first expression feature points according to formula (2):
    Err = \sum_{i=1}^{N} w_i \| P \cdot X_i - x_i \|^2 \quad (2)
    wherein Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
    judging whether the matching error is greater than or equal to a preset threshold; and
    if the matching error is greater than or equal to the preset threshold, adjusting the initial three-dimensional face mesh model, so that the matching error between the second expression feature points and the first expression feature points on the adjusted three-dimensional face mesh model is less than the preset threshold;
    wherein the adjusting of the initial three-dimensional face mesh model comprises:
    calculating the geodesic distance from the second expression feature point X_i to each mesh vertex X_j on the initial three-dimensional face mesh model, wherein i is not equal to j;
    fixing the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and changing the x and y coordinates of the second expression feature point X_i using a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
    determining, using a second preset algorithm with the geodesic distance as a constraint, each mesh vertex X_j' corresponding to the third expression feature point X_i'; and
    adjusting the initial three-dimensional face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
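For illustration only (not part of the claims): the minimization in formula (1) and the weighted matching error of formula (2) can be sketched under an assumed affine camera model, where P is a 2x4 matrix acting on homogeneous 3D points and the fit reduces to linear least squares. The function names, the affine parameterization, and the use of NumPy are assumptions of this sketch, not the patent's prescribed method.

```python
import numpy as np

def fit_camera_matrix(X, x):
    """Solve min sum_i ||P . X_i - x_i||^2 (formula (1)) for an affine 2x4
    camera matrix P. X: (N, 3) mesh feature points; x: (N, 2) image points."""
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])          # homogeneous 3D points, (N, 4)
    # Least squares: Xh @ P.T ~= x, so P.T is the lstsq solution.
    Pt, *_ = np.linalg.lstsq(Xh, x, rcond=None)
    return Pt.T                                   # (2, 4) camera parameter matrix

def matching_error(P, X, x, w=None):
    """Weighted reprojection error Err = sum_i w_i ||P . X_i - x_i||^2 (formula (2))."""
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])
    proj = Xh @ P.T                               # projected feature points, (N, 2)
    w = np.ones(N) if w is None else np.asarray(w)
    return float(np.sum(w * np.sum((proj - x) ** 2, axis=1)))
```

If the error returned by `matching_error` reaches the preset threshold, the mesh would then be adjusted as the claim describes; the weights w_i let salient feature pairs (e.g. eye or mouth corners) count more heavily.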
  2. The method according to claim 1, characterized in that the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image;
    wherein the obtaining of the initial three-dimensional face mesh model corresponding to the original two-dimensional face image comprises:
    extracting facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
    determining a near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image;
    deforming, according to the face contour feature points and the first expression feature points of the near-frontal face image, a target neutral face model determined from a neutral face library, to obtain a neutral face model corresponding to the near-frontal face image;
    deforming each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
    determining a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and determining a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image; and
    merging the expression models according to the first weight coefficient to obtain a three-dimensional face mesh model corresponding to the target two-dimensional face image, and merging the expression models according to the second weight coefficient to obtain a three-dimensional face mesh model corresponding to the reference two-dimensional face image.
  3. The method according to claim 2, characterized in that the preset expression library comprises generic blendshape models.
  4. The method according to claim 2 or 3, characterized in that the determining of the near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image comprises:
    calculating the face contour curvature of the target two-dimensional face image according to the face contour feature points of the target two-dimensional face image, and calculating the face contour curvature of the reference two-dimensional face image according to the face contour feature points of the reference two-dimensional face image; and
    determining the image with the smaller face contour curvature as the near-frontal face image.
  5. The method according to any one of claims 2 to 3, characterized by further comprising, after the adjusting of the initial three-dimensional face mesh model according to the judgment result:
    deforming the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deforming the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image; and
    merging the deformed target two-dimensional face image with the deformed reference two-dimensional face image, so as to transfer the expression on the reference two-dimensional face image onto the target two-dimensional face image.
  6. A three-dimensional face mesh model processing device, characterized by comprising:
    an acquisition module, configured to obtain an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to first expression feature points of the original two-dimensional face image;
    a computing module, configured to calculate a camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
    \min \sum_{i=1}^{N} \| P \cdot X_i - x_i \|^2 \quad (1)
    wherein P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to the second expression feature point X_i, and N is the number of the first expression feature points and of the second expression feature points;
    a judging module, configured to map the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and to adjust the initial three-dimensional face mesh model according to the judgment result;
    wherein the judging module comprises:
    a computing unit, configured to calculate a matching error between the second expression feature points and the first expression feature points according to formula (2):
    Err = \sum_{i=1}^{N} w_i \| P \cdot X_i - x_i \|^2 \quad (2)
    wherein Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
    a judging unit, configured to judge whether the matching error is greater than or equal to a preset threshold; and
    an adjustment unit, configured to adjust the initial three-dimensional face mesh model if the matching error is greater than or equal to the preset threshold, so that the matching error between the second expression feature points and the first expression feature points on the adjusted three-dimensional face mesh model is less than the preset threshold;
    wherein the adjustment unit comprises:
    a computing subunit, configured to calculate the geodesic distance from the second expression feature point X_i to each mesh vertex X_j on the initial three-dimensional face mesh model, wherein i is not equal to j;
    a first adjustment subunit, configured to fix the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and to change the x and y coordinates of the second expression feature point X_i using a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
    a determining subunit, configured to determine, using a second preset algorithm with the geodesic distance as a constraint, each mesh vertex X_j' corresponding to the third expression feature point X_i'; and
    a second adjustment subunit, configured to adjust the initial three-dimensional face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
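The geodesic distances used as a constraint by the computing subunit can be approximated, for illustration, by running Dijkstra's algorithm over the mesh edge graph with Euclidean edge lengths. The vertex-array and edge-list representation below is an assumed convention; the patent does not specify how the geodesic distances are computed.

```python
import heapq
import numpy as np

def geodesic_distances(vertices, edges, source):
    """Approximate geodesic distance from vertex `source` to every mesh vertex
    by Dijkstra over the edge graph. vertices: (V, 3) array; edges: iterable
    of (a, b) vertex-index pairs; returns a list of V distances."""
    n = len(vertices)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        w = float(np.linalg.norm(vertices[a] - vertices[b]))  # Euclidean edge length
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = [float("inf")] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                                          # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Edge-graph Dijkstra slightly overestimates true surface geodesics (paths must follow edges), which is usually acceptable when the distances only serve as soft constraints on how far neighboring vertices may move.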
  7. The device according to claim 6, characterized in that the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image;
    wherein the acquisition module comprises:
    an extraction unit, configured to extract facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
    a first determining unit, configured to determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image;
    a first deformation unit, configured to deform, according to the face contour feature points and the first expression feature points of the near-frontal face image, a target neutral face model determined from a neutral face library, to obtain a neutral face model corresponding to the near-frontal face image;
    a second deformation unit, configured to deform each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
    a second determining unit, configured to determine a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and to determine a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image; and
    a merging unit, configured to merge the expression models according to the first weight coefficient to obtain a three-dimensional face mesh model corresponding to the target two-dimensional face image, and to merge the expression models according to the second weight coefficient to obtain a three-dimensional face mesh model corresponding to the reference two-dimensional face image.
  8. The device according to claim 7, characterized in that the preset expression library comprises generic blendshape models.
  9. The device according to claim 7 or 8, characterized in that the first determining unit is specifically configured to:
    calculate the face contour curvature of the target two-dimensional face image according to the face contour feature points of the target two-dimensional face image, and calculate the face contour curvature of the reference two-dimensional face image according to the face contour feature points of the reference two-dimensional face image; and
    determine the image with the smaller face contour curvature as the near-frontal face image.
  10. The device according to any one of claims 7 to 8, characterized by further comprising:
    a deformation module, configured to deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and to deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image; and
    a merging module, configured to merge the deformed target two-dimensional face image with the deformed reference two-dimensional face image, so as to transfer the expression on the reference two-dimensional face image onto the target two-dimensional face image.
CN201410141093.XA 2014-04-10 2014-04-10 3 d human face mesh model processing method and equipment Active CN104978764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141093.XA CN104978764B (en) 2014-04-10 2014-04-10 3 d human face mesh model processing method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410141093.XA CN104978764B (en) 2014-04-10 2014-04-10 3 d human face mesh model processing method and equipment

Publications (2)

Publication Number Publication Date
CN104978764A CN104978764A (en) 2015-10-14
CN104978764B true CN104978764B (en) 2017-11-17

Family

ID=54275238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410141093.XA Active CN104978764B (en) 2014-04-10 2014-04-10 3 d human face mesh model processing method and equipment

Country Status (1)

Country Link
CN (1) CN104978764B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934759A (en) * 2015-12-30 2017-07-07 掌赢信息科技(上海)有限公司 The front method and electronic equipment of a kind of human face characteristic point
CN107203962B (en) * 2016-03-17 2021-02-19 掌赢信息科技(上海)有限公司 Method for making pseudo-3D image by using 2D picture and electronic equipment
CN107292812A (en) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment of migration of expressing one's feelings
CN106327482B (en) * 2016-08-10 2019-01-22 东方网力科技股份有限公司 A kind of method for reconstructing and device of the facial expression based on big data
CN106530376B (en) * 2016-10-10 2021-01-26 福建网龙计算机网络信息技术有限公司 Three-dimensional role creating method and system
CN106570931A (en) * 2016-10-10 2017-04-19 福建网龙计算机网络信息技术有限公司 Virtual reality resource manufacturing method and system
CN108305312B (en) * 2017-01-23 2021-08-17 腾讯科技(深圳)有限公司 Method and device for generating 3D virtual image
CN107592449B (en) * 2017-08-09 2020-05-19 Oppo广东移动通信有限公司 Three-dimensional model establishing method and device and mobile terminal
CN108875335B (en) 2017-10-23 2020-10-09 北京旷视科技有限公司 Method for unlocking human face and inputting expression and expression action, authentication equipment and nonvolatile storage medium
CN107993216B (en) * 2017-11-22 2022-12-20 腾讯科技(深圳)有限公司 Image fusion method and equipment, storage medium and terminal thereof
CN108446595A (en) * 2018-02-12 2018-08-24 深圳超多维科技有限公司 A kind of space-location method, device, system and storage medium
CN108564659A (en) * 2018-02-12 2018-09-21 北京奇虎科技有限公司 The expression control method and device of face-image, computing device
CN108564642A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Unmarked performance based on UE engines captures system
CN108491850B (en) * 2018-03-27 2020-04-10 北京正齐口腔医疗技术有限公司 Automatic feature point extraction method and device of three-dimensional tooth mesh model
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN109147037B (en) * 2018-08-16 2020-09-18 Oppo广东移动通信有限公司 Special effect processing method and device based on three-dimensional model and electronic equipment
CN109377445B (en) * 2018-10-12 2023-07-04 北京旷视科技有限公司 Model training method, method and device for replacing image background and electronic system
CN109754467B (en) * 2018-12-18 2023-09-22 广州市百果园网络科技有限公司 Three-dimensional face construction method, computer storage medium and computer equipment
CN116843804A (en) * 2018-12-29 2023-10-03 华为技术有限公司 Method for generating animation expression and electronic equipment
CN110263617B (en) * 2019-04-30 2021-10-22 北京永航科技有限公司 Three-dimensional face model obtaining method and device
CN110111247B (en) 2019-05-15 2022-06-24 浙江商汤科技开发有限公司 Face deformation processing method, device and equipment
CN110135376A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Determine method, equipment and the medium of the coordinate system conversion parameter of imaging sensor
CN112348937A (en) * 2019-08-09 2021-02-09 华为技术有限公司 Face image processing method and electronic equipment
CN111259829B (en) * 2020-01-19 2023-10-20 北京小马慧行科技有限公司 Processing method and device of point cloud data, storage medium and processor
CN112884881B (en) * 2021-01-21 2022-09-27 魔珐(上海)信息科技有限公司 Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
CN112989541B (en) * 2021-05-07 2021-07-23 国网浙江省电力有限公司金华供电公司 Three-dimensional grid model generation method and device, electronic equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593365A (en) * 2009-06-19 2009-12-02 电子科技大学 A kind of method of adjustment of universal three-dimensional human face model
CN101916454A (en) * 2010-04-08 2010-12-15 董洪伟 Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN103093498A (en) * 2013-01-25 2013-05-08 西南交通大学 Three-dimensional human face automatic standardization method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130287294A1 (en) * 2012-04-30 2013-10-31 Cywee Group Limited Methods for Generating Personalized 3D Models Using 2D Images and Generic 3D Models, and Related Personalized 3D Model Generating System
US9224248B2 (en) * 2012-07-12 2015-12-29 Ulsee Inc. Method of virtual makeup achieved by facial tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于二维人脸图像的三维建模研究 (Research on 3D Modeling Based on 2D Face Images); 张怡 (Zhang Yi) et al.; 《兰州工业学院学报》 (Journal of Lanzhou Institute of Technology); 2013-08-31; Vol. 20, No. 4; pp. 1-5 *


Similar Documents

Publication Publication Date Title
CN104978764B (en) 3 d human face mesh model processing method and equipment
US11302064B2 (en) Method and apparatus for reconstructing three-dimensional model of human body, and storage medium
US11127163B2 (en) Skinned multi-infant linear body model
CN104781849B (en) Monocular vision positions the fast initialization with building figure (SLAM) simultaneously
CN109859296A (en) Training method, server and the storage medium of SMPL parametric prediction model
CN107610209A (en) Human face countenance synthesis method, device, storage medium and computer equipment
CN108960020A (en) Information processing method and information processing equipment
CN108229269A (en) Method for detecting human face, device and electronic equipment
CN108960001A (en) Method and apparatus of the training for the image processing apparatus of recognition of face
CN110956071B (en) Eye key point labeling and detection model training method and device
CN102663820A (en) Three-dimensional head model reconstruction method
WO2021063271A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
CN107451950A (en) Face image synthesis method, human face recognition model training method and related device
Yoshizawa et al. Skeleton‐based variational mesh deformations
CN106919899A (en) The method and system for imitating human face expression output based on intelligent robot
CN109144252A (en) Object determines method, apparatus, equipment and storage medium
Choi et al. Animatomy: An animator-centric, anatomically inspired system for 3d facial modeling, animation and transfer
CN114529640B (en) Moving picture generation method, moving picture generation device, computer equipment and storage medium
Le et al. Marker optimization for facial motion acquisition and deformation
JP2008140385A (en) Real-time representation method and device of skin wrinkle at character animation time
Orvalho et al. Transferring the rig and animations from a character to different face models
Chi et al. A new parametric 3D human body modeling approach by using key position labeling and body parts segmentation
CN117152407A (en) Automatic positioning method for head shadow measurement mark points
CN108509924A (en) The methods of marking and device of human body attitude
CN110751026B (en) Video processing method and related device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171208

Address after: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee after: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: Huawei Technologies Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20180209

Address after: 528400 B37 No. N6 B37 of the three phase of the city of Ya Ju music in Zhongshan, Guangdong

Patentee after: Zhongshan micro network technology Co., Ltd.

Address before: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee before: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20180927

Address after: 528463 Guangdong Zhongshan three township Zhenhua Road 3, three rural financial business center 705 cards, 706 cards

Patentee after: Guangdong smart Polytron Technologies Inc

Address before: 528400 B37 three, phase three, N6, three Town, Zhongshan, Guangdong.

Patentee before: Zhongshan micro network technology Co., Ltd.

TR01 Transfer of patent right