CN104376594B - Three-dimensional face modeling method and device - Google Patents
- Publication number
- CN104376594B (application CN201410687577.4A)
- Authority
- CN
- China
- Prior art keywords
- transformation
- standard model
- original portrait
- skin color
- texture map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
To achieve efficient three-dimensional face modeling, the inventors provide a three-dimensional face modeling method, comprising the steps of: obtaining an original portrait and a standard model; obtaining skeleton point information from the standard model; marking feature points on the original portrait and recording their coordinate information; applying a first transformation to the original portrait; applying a second transformation to the standard model; unfolding the surface of the standard model after the second transformation into a texture image by a preset method; and mapping the original portrait after the first transformation onto the texture image. The inventors also provide a corresponding three-dimensional face modeling apparatus that implements the above method. The technical solution is simple to operate, fits well and reproduces the face faithfully. It also simplifies the implementation and computation, runs faster and more efficiently, and streamlines data acquisition, improving the practicality and adaptability of the system; it offers a significant advantage in scenes with modest requirements on face depth information.
Description
Technical field
The present invention relates to the fields of computer graphics and digital image processing, and more particularly to a three-dimensional face modeling method and apparatus.
Background art
With the rapid development of computer graphics and image processing techniques, three-dimensional face modeling, which lies at the intersection of the two disciplines, has been widely applied in industries such as game animation, medicine and cosmetic surgery, film and advertising, video conferencing and video telephony, and has increasingly become a hot research topic.
Current mainstream techniques for three-dimensional face modeling include the following:
Three-dimensional face modeling based on 3D scanners and similar hardware: a 3D digital scanner scans the face to acquire its three-dimensional information, and the face model is then reconstructed in a computer from the acquired three-dimensional data. This method can produce an accurate face model. However, modeling with hardware devices such as 3D scanners has poor versatility and flexibility, is relatively complex to operate, and the hardware is expensive, so it is generally suitable only for special occasions.
Image-based face modeling: the three-dimensional information of facial feature points is obtained from single-view or multi-view face images or video sequences, and this information is used to reconstruct a 3D geometric model of the face. Image-based face modeling methods are broadly divided into modeling from a single face image and modeling from several face images.
Modeling from a single face image: data analysis, fitting and three-dimensional face reconstruction are generally performed on one captured face image. The basic approach is to infer the depth information of the face model from the shading of a frontal or slightly turned face image. However, single-image methods must trade off between the shape and the appearance of the face, and it is difficult to fit a model that accurately matches the real face. Because the two sides of a face are not perfectly symmetric, collecting depth information from a slightly turned face inevitably distorts the reconstructed facial appearance; and computing depth from the shadows and highlights of a frontal image suffers from complex computation, long running time and unreliable results, so good results are hard to obtain.
Modeling from several face images: face images from several angles, such as the front, the left side and the right side, are used to obtain more three-dimensional face data for modeling. This approach can directly and completely acquire the frontal texture, the depth and other information of the face, which helps build a more accurate three-dimensional face model. However, although modeling from several face images captures frontal texture and depth information more completely, it adds manual steps to some extent and reduces operational flexibility. Moreover, on some handheld devices it is difficult for users to conveniently provide side images that meet the requirements, which also greatly affects modeling accuracy.
Summary of the invention
For this reason, it is necessary to provide a three-dimensional face modeling method and apparatus that avoid computing face depth information, are easy to operate, fast and automated, and produce reliable modeling results.
To achieve the above object, the inventors provide a three-dimensional face modeling method, comprising the steps of:
obtaining an original portrait and a standard model;
obtaining skeleton point information from the standard model, the skeleton points being the feature points whose deformation amount falls within a preset interval when the standard model mesh is deformed;
marking feature points on the original portrait and recording their coordinate information;
applying a first transformation to the original portrait, the first transformation aligning a preset standard position on the original portrait with the preset standard position on the standard model, and applying the same coordinate transformation to the skeleton points of the portrait;
applying a second transformation to the standard model, the second transformation including moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait after the first transformation;
unfolding the surface of the standard model after the second transformation into a texture image by a preset method;
mapping the original portrait after the first transformation onto the texture image.
Further, in the three-dimensional face modeling method, "unfolding the surface of the standard model after the second transformation into a texture image by a preset method; mapping the original portrait after the first transformation onto the texture image" specifically includes:
unfolding the surface of the standard model after the second transformation into a UV texture by a preset method;
mapping the original portrait after the first transformation onto the UV texture.
Further, in the three-dimensional face modeling method, before the step "mapping the original portrait after the first transformation onto the UV texture", the method also includes a skin-color matching process applied to the UV texture, the skin-color matching process specifically including:
selecting one or more preset regions on the original portrait and obtaining a skin-color sample value according to a preset algorithm;
recoloring the UV texture using the skin-color sample value.
Further, in the three-dimensional face modeling method, the step "recoloring the UV texture using the skin-color sample value" specifically includes:
building a skin-color sample image of the same size as the UV texture from the skin-color sample value, and performing a graph-cut blend with the UV texture as the source picture and the skin-color sample image as the target picture.
Further, in the three-dimensional face modeling method, "mapping the original portrait after the first transformation onto the UV texture" specifically includes:
using a feathered mask of a preset size as the matte, mapping the original portrait after the first transformation onto the skin-color-matched UV texture by feathered stitching.
Further, in the three-dimensional face modeling method, the preset standard positions are the left and right pupil feature points;
the first transformation of the original portrait includes rotation, scaling or translation;
the rotation specifically includes: rotating the original portrait so that the horizontal positions of its left and right pupils are consistent with those of the left and right pupils in the standard portrait corresponding to the standard model;
the scaling specifically includes: scaling the original portrait so that its interpupillary distance is consistent with the interpupillary distance in the standard portrait corresponding to the standard model;
the translation specifically includes: translating the original portrait so that its left and right pupils align with the left and right pupils in the standard portrait corresponding to the standard model.
Further, in the three-dimensional face modeling method, the second transformation specifically includes the following steps:
projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transformation;
moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait after the first transformation;
applying, after the translation, the inverse of the orthographic projection transformation to the planar projection points of the skeleton points of the standard model.
Further, in the three-dimensional face modeling method, the orthographic projection transformation specifically includes:
converting the skeleton points of the standard model from a custom coordinate system to the world coordinate system using the model matrix Model;
converting the skeleton points of the standard model from the world coordinate system to the view coordinate system using the view matrix View;
removing the Z coordinate of the view coordinate system by an orthographic projection with the projection matrix Projection, so that the points fall on the screen plane.
Further, in the three-dimensional face modeling method, the standard model is a standard model matched to the ethnicity of the original portrait.
Further, in the three-dimensional face modeling method, the original portrait satisfies a preset resolution condition or a preset light-dark difference condition.
The inventors also provide a three-dimensional face modeling apparatus, including an input unit, a skeleton point determining unit, a feature point marking unit, a transformation unit, a texture mapping unit and a mapping unit;
the input unit is used to obtain an original portrait and a standard model;
the skeleton point determining unit is used to obtain skeleton point information from the standard model, the skeleton points being the feature points whose deformation amount falls within a preset interval when the standard model mesh is deformed;
the feature point marking unit is used to mark feature points on the original portrait and record their coordinate information;
the transformation unit is used to apply a first transformation to the original portrait, the first transformation aligning a preset standard position on the original portrait with the preset standard position on the standard model, and applying the same coordinate transformation to the skeleton points of the portrait;
the transformation unit is further used to apply a second transformation to the standard model, the second transformation including moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait after the first transformation;
the texture mapping unit is used to unfold the surface of the standard model after the second transformation into a texture image by a preset method;
the mapping unit is used to map the original portrait after the first transformation onto the texture image.
Further, in the three-dimensional face modeling apparatus, the texture mapping unit is used to unfold the surface of the standard model after the second transformation into a UV texture by a preset method;
the mapping unit is used to map the original portrait after the first transformation onto the UV texture.
Further, the three-dimensional face modeling apparatus also includes a skin-color matching unit, which is used to select one or more preset regions on the original portrait, obtain a skin-color sample value according to a preset algorithm, and recolor the UV texture using the skin-color sample value.
Further, in the three-dimensional face modeling apparatus, the skin-color matching unit recoloring the UV texture using the skin-color sample value specifically includes:
building a skin-color sample image of the same size as the UV texture from the skin-color sample value, and performing a graph-cut blend with the UV texture as the source picture and the skin-color sample image as the target picture.
Further, in the three-dimensional face modeling apparatus, the mapping unit mapping the original portrait after the first transformation onto the UV texture specifically includes:
using a feathered mask of a preset size as the matte, mapping the original portrait after the first transformation onto the skin-color-matched UV texture by feathered stitching.
Further, in the three-dimensional face modeling apparatus, the preset standard positions are the left and right pupil feature points;
the first transformation applied by the transformation unit to the original portrait includes rotation, scaling or translation;
the rotation specifically includes: rotating the original portrait so that the horizontal positions of its left and right pupils are consistent with those of the left and right pupils in the standard portrait corresponding to the standard model;
the scaling specifically includes: scaling the original portrait so that its interpupillary distance is consistent with the interpupillary distance in the standard portrait corresponding to the standard model;
the translation specifically includes: translating the original portrait so that its left and right pupils align with the left and right pupils in the standard portrait corresponding to the standard model.
Further, in the three-dimensional face modeling apparatus, the transformation unit performing the second transformation specifically includes the following steps:
projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transformation;
moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait after the first transformation;
applying, after the translation, the inverse of the orthographic projection transformation to the planar projection points of the skeleton points of the standard model.
Further, in the three-dimensional face modeling apparatus, the orthographic projection transformation performed by the transformation unit specifically includes:
converting the skeleton points of the standard model from a custom coordinate system to the world coordinate system using the model matrix Model;
converting the skeleton points of the standard model from the world coordinate system to the view coordinate system using the view matrix View;
removing the Z coordinate of the view coordinate system by an orthographic projection with the projection matrix Projection, so that the points fall on the screen plane.
Further, in the three-dimensional face modeling apparatus, the standard model is a standard model matched to the ethnicity of the original portrait.
Further, in the three-dimensional face modeling apparatus, the original portrait satisfies a preset resolution condition or a preset light-dark difference condition.
Unlike the prior art, the above technical solution achieves automated three-dimensional face modeling by means of skeleton alignment. It is simple to operate, fits well and reproduces the face faithfully. Compared with traditional three-dimensional modeling from a single image, it simplifies the implementation and computation, runs faster and more efficiently, and can satisfy system designs with certain real-time requirements. Compared with three-dimensional modeling from multiple images, the technical solution of the present invention further simplifies data acquisition and improves the practicality and adaptability of the system; it offers a significant advantage in scenes with modest requirements on face depth information.
Brief description of the drawings
Fig. 1 is a flow chart of the three-dimensional face modeling method according to an embodiment of the present invention;
Fig. 2 is a structural diagram of the three-dimensional face modeling apparatus according to an embodiment of the present invention.
Description of reference numerals:
1 - input unit
2 - skeleton point determining unit
3 - feature point marking unit
4 - transformation unit
5 - skin-color matching unit
6 - mapping unit
7 - texture mapping unit
Detailed description of the embodiments
To describe in detail the technical content, structural features, objects and effects of the technical solution, the following provides a detailed explanation with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 1, a flow chart of the three-dimensional face modeling method according to an embodiment of the present invention. The method includes the following steps:
S1, obtaining an original portrait and a standard model;
Further, the standard model is a standard model matched to the ethnicity of the original portrait; the original portrait is a two-dimensional frontal face photograph satisfying a preset resolution condition and a preset light-dark difference condition.
Because the core of the technical solution of the present invention is to model the three-dimensional face by fitting a deformable mesh model, and different deformable models differ greatly in how closely they fit a face, in this embodiment different standard models are established by analyzing the characteristic information of different ethnic groups. For example, for Asians, a large number of Asian faces are collected, and their head shapes, contour shapes, facial features and other information are comprehensively analyzed to extract features and build an Asian head standard model. Standard models for other ethnic groups are built in the same way. The type of standard model is then determined by the ethnicity of the original portrait for subsequent fitting.
In addition, to ensure that the modeling is accurate and reliable, the input original portrait must also satisfy certain conditions. First, to retain the appearance information of the face as completely as possible, the original portrait should be a frontal face image satisfying a preset resolution condition. Second, note that during acquisition, face images captured in backlight or dim light cannot yield accurate skin-color information and may cause face calibration errors, reducing the similarity of the modeled face. Therefore, the original portrait should satisfy a preset resolution condition and a preset left-right face light-dark difference condition.
S2, obtaining skeleton point information from the standard model, the skeleton points being the feature points whose deformation amount falls within a preset interval when the standard model mesh is deformed;
S3, marking feature points on the original portrait and recording their coordinate information;
The concept of feature points appearing in steps S2 and S3 generally refers, in face detection and recognition technology, to points of prominent significance used to calibrate the face shape and the appearance of the eyebrows, eyes, nose, mouth and so on. Based on different face calibration research results, different feature point models are available. This embodiment uses an 83-feature-point system that, following common facial knowledge, divides the face into 7 salient features (outer face contour, left and right eyebrows, left and right eyes, nose, mouth). In other embodiments, other prior-art feature point calibration methods can also be used.
The concept of skeleton key points refers to those feature points that exert a considerable influence on the facial contour and facial features during mesh deformation. Skeleton key points are typically obtained by running deformation experiments on the face mesh model, analyzing the influence of each feature point on the mesh deformation, and confirming the points that meet a preset deformation amount condition as skeleton key points.
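The selection rule above can be sketched as a simple threshold filter; the influence scores and the interval bounds below are hypothetical stand-ins for values that real deformation experiments on the face mesh would produce:

```python
import numpy as np

def select_skeleton_points(influence, lo, hi):
    """Keep the feature points whose mesh-deformation influence
    falls inside the preset interval [lo, hi]."""
    influence = np.asarray(influence, dtype=float)
    return [i for i, v in enumerate(influence) if lo <= v <= hi]

# Hypothetical per-feature-point influence scores (e.g. mean vertex
# displacement caused by perturbing each feature point in turn).
scores = [0.02, 1.4, 3.8, 0.9, 2.6, 0.1]
skeleton_idx = select_skeleton_points(scores, 0.5, 5.0)
```

Points with near-vanishing influence are discarded, so that only the surviving skeleton key points are driven during fitting.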
S4, applying a first transformation to the original portrait;
the first transformation aligns the preset standard position on the original portrait with the preset standard position on the standard model, and applies the same coordinate transformation to the skeleton points of the portrait;
In this embodiment, the preset standard positions are the left and right pupil feature points;
the first transformation of the original portrait includes rotation, scaling or translation;
the rotation specifically includes: rotating the original portrait so that the horizontal positions of its left and right pupils are consistent with those in the standard portrait corresponding to the standard model. Specifically, the left and right pupil feature point positions are taken from the feature point calibration information of the original portrait, their angle with the horizontal direction is calculated, and the portrait is rotated by the corresponding angle about the midpoint of the pupil line until the pupil line is horizontal.
The scaling specifically includes: scaling the original portrait according to the interpupillary distance of the standard portrait, so that its interpupillary distance is consistent with that in the standard portrait corresponding to the standard model;
the translation specifically includes: translating the original portrait so that its left and right pupils align with those in the standard portrait corresponding to the standard model.
In addition, a cropping step is included when necessary: a face region of a particular size is cut out according to the portrait feature point positions and placed into an image of the same size as the standard portrait.
In fact, the transformation in this step is a preprocessing of the input original portrait: the original portrait is rotated, scaled, translated or cropped so that a preset position (the pupil positions in this embodiment) aligns with the preset position (the pupils) of the standard portrait obtained by unfolding the standard model. The rotation, scaling and translation steps above can be expressed as matrix operations, so the same matrix can also be applied to the feature points calibrated on the portrait to obtain their coordinate positions in the new image.
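The rotation, scaling and translation above can be combined into one similarity matrix and applied both to the image and to the calibrated feature points; a minimal numpy sketch, with all pupil coordinates invented for illustration:

```python
import numpy as np

def pupil_align_matrix(lp, rp, std_lp, std_rp):
    """3x3 homogeneous matrix that rotates, scales and translates the
    portrait so its pupils land on the standard pupil positions."""
    lp, rp, std_lp, std_rp = map(np.asarray, (lp, rp, std_lp, std_rp))
    ang = np.arctan2(rp[1] - lp[1], rp[0] - lp[0])            # tilt of pupil line
    std_ang = np.arctan2(std_rp[1] - std_lp[1], std_rp[0] - std_lp[0])
    s = np.linalg.norm(std_rp - std_lp) / np.linalg.norm(rp - lp)  # interpupillary scale
    c, sn = np.cos(std_ang - ang), np.sin(std_ang - ang)
    R = np.array([[s * c, -s * sn], [s * sn, s * c]])
    t = (std_lp + std_rp) / 2 - R @ ((lp + rp) / 2)           # align pupil midpoints
    M = np.eye(3)
    M[:2, :2], M[:2, 2] = R, t
    return M

def apply(M, pts):
    """Apply the same matrix to any calibrated feature points."""
    pts = np.asarray(pts, dtype=float)
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return (h @ M.T)[:, :2]

# Invented pupil coordinates: portrait pupils vs. standard-portrait pupils.
M = pupil_align_matrix((40, 52), (80, 48), (100, 100), (160, 100))
aligned = apply(M, [(40, 52), (80, 48)])
```

Because the matrix is shared, the portrait pixels and its skeleton and feature points stay mutually consistent after alignment.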
S5, applying a second transformation to the standard model;
the second transformation includes moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait after the first transformation;
Further, the second transformation specifically includes the following sub-steps:
S51, projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transformation;
S52, moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait after the first transformation;
Because the original portrait has already been pupil-aligned against the standard portrait during preprocessing, only the on-screen projection points of the skeleton points of the standard model need to be moved to the skeleton point positions of the standardized portrait.
S53, applying, after the translation, the inverse of the orthographic projection transformation to the planar projection points of the skeleton points of the standard model. In fact, this step computes the inverse of the matrix computation (Model*View*Projection) of S51, i.e. (Model*View*Projection)^-1, and uses this inverse transformation to obtain the three-dimensional coordinates of the aligned skeleton points in model space; the result is the model after skeleton alignment.
Further, the orthographic projection transformation described above specifically includes:
converting the skeleton points of the standard model from a custom coordinate system to the world coordinate system using the model matrix Model;
converting the skeleton points of the standard model from the world coordinate system to the view coordinate system using the view matrix View;
removing the Z coordinate of the view coordinate system by an orthographic projection with the projection matrix Projection, so that the points fall on the screen plane.
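The project, translate and invert sequence of S51 to S53 can be sketched with 4x4 matrices. Here Model and View are toy matrices and the target skeleton position is invented; the orthographic projection is modeled as simply ignoring Z, and the inverse step restores the point from the stored view-space depth:

```python
import numpy as np

def mvp_align(points, model, view, targets):
    """Project skeleton points to the screen plane under an orthographic
    Model*View transform, replace their (x, y) with the target 2D skeleton
    positions of the aligned portrait, then invert back to model space.
    The orthographic projection just drops Z, so the view-space depth of
    each point is kept and reused by the inverse transformation."""
    pts = np.hstack([np.asarray(points, dtype=float),
                     np.ones((len(points), 1))])      # homogeneous coordinates
    mv = view @ model
    view_pts = pts @ mv.T                             # custom -> view space
    view_pts[:, :2] = targets                         # move to portrait skeleton points
    back = view_pts @ np.linalg.inv(mv).T             # inverse transformation
    return back[:, :3]

# Toy setup: Model translates +10 in x, View is identity; the single
# skeleton point must land on the (invented) portrait point (5, 6).
model = np.eye(4); model[0, 3] = 10.0
view = np.eye(4)
moved = mvp_align([[1.0, 2.0, 3.0]], model, view, [[5.0, 6.0]])
```

The returned coordinates are the aligned skeleton point in model space, matching the description of S53.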
S6, unfolding the surface of the standard model after the second transformation into a UV texture by a preset method;
S7, performing skin-color matching on the UV texture;
the skin-color matching specifically includes the following sub-steps:
S71, selecting one or more preset regions on the original portrait and obtaining a skin-color sample value according to a preset algorithm. Because an ordinary portrait is affected by its environment during capture, it contains shadows and highlights of varying degree, which to some extent hinder the faithful rendering of the skin color. To bring the color of the modeled face closer to the true skin tone, the skin color must be sampled. For example, using the result of face calibration, the regions between the two sides of the nose and the face contour feature points are sampled, and the average color of the two regions is taken as the skin-color sample value.
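The sampling step can be sketched as averaging over preset regions; the toy image and the axis-aligned boxes below are assumptions standing in for the real regions between the nose wings and the cheek contour feature points:

```python
import numpy as np

def sample_skin_tone(img, regions):
    """Average RGB over the preset sample regions, given as
    (x0, y0, x1, y1) boxes in pixel coordinates."""
    samples = [img[y0:y1, x0:x1].reshape(-1, 3) for (x0, y0, x1, y1) in regions]
    return np.vstack(samples).mean(axis=0)

# Toy 4x4 "portrait": left half one tone, right half another.
img = np.zeros((4, 4, 3))
img[:, :2] = (200, 150, 120)
img[:, 2:] = (180, 130, 100)
tone = sample_skin_tone(img, [(0, 0, 2, 4), (2, 0, 4, 4)])
```

With equally sized regions the result is simply the mean of the two region tones.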
S72, recoloring the UV texture using the skin-color sample value.
More specifically, the recoloring includes: building a skin-color sample image of the same size as the UV texture from the skin-color sample value, and performing a graph-cut blend with the UV texture as the source picture and the skin-color sample image as the target picture.
S8, mapping the original portrait after the first transformation onto the UV texture.
This step specifically includes: using a feathered mask of a preset size as the matte, mapping the original portrait after the first transformation onto the skin-color-matched UV texture by feathered stitching.
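Feathered stitching can be sketched as a per-pixel soft blend through a mask that is 1 inside the face and falls to 0 at the seam; the tiny arrays below are invented for illustration, and a real mask would fall off smoothly around the face boundary:

```python
import numpy as np

def feathered_paste(uv_tex, portrait, mask):
    """Composite the aligned portrait onto the UV texture through a
    feathered (soft-edged) mask, so the stitch blends instead of
    cutting a hard seam."""
    m = mask[..., None]                      # broadcast over color channels
    return m * portrait + (1 - m) * uv_tex

tex = np.zeros((1, 3, 3))                    # UV texture (1x3 pixels, RGB)
face = np.full((1, 3, 3), 100.0)             # aligned portrait
mask = np.array([[1.0, 0.5, 0.0]])           # feathered edge on the right
out = feathered_paste(tex, face, mask)
```

Inside the mask the portrait dominates, at the feathered edge the two images mix, and outside the skin-color-matched texture is untouched.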
In fact, a UV texture is a defined two-dimensional texture coordinate system used to determine how a texture image is placed onto the surface of a three-dimensional model. In other embodiments, other coordinate systems serving the same purpose can be used to convert the standard model into a skin texture image.
The method of this embodiment uses the coordinates of facial feature points to drive the skeleton key points in the standard model into alignment by means of skeleton alignment, and can effectively fit the real face shape. Establishing differentiated standard head models according to the head and facial feature differences of different ethnic groups reduces fitting error as much as possible and improves the fidelity of the facial contour and facial features. For the skeleton alignment, this embodiment proposes a pupil-based pre-alignment between the standardized face image and the standard texture map, after which the subsequent work only needs to translate the skeleton points to complete the alignment, effectively simplifying the model fitting computation. In addition, skin-color sampling and computation are performed on the face image input by the user, and the color of the texture map is blended according to the sampled skin-color value, matching the sampled color onto the face model while retaining the details of the source image, so that the model as a whole reproduces the face color well. Meanwhile, based on the analysis of the face mesh model, this embodiment extracts the feature points with a large influence on the facial contour and facial features as skeleton key points, and fits the contour and facial features of the face according to these key points. In summary, the three-dimensional face modeling method provided by this embodiment achieves a three-dimensional face modeling function that is highly versatile and flexible, easy to operate, low-cost, accurate and faithful in fitting, fast and automated.
Referring to Fig. 2, a structural diagram of the three-dimensional face modeling apparatus according to an embodiment of the present invention. The apparatus includes an input unit 1, a skeleton point determining unit 2, a feature point marking unit 3, a transformation unit 4, a texture mapping unit 5 and a mapping unit 6;
the input unit 1 is used to obtain an original portrait and a standard model; the standard model is a standard model matched to the ethnicity of the original portrait; the original portrait satisfies a preset resolution condition or a preset light-dark difference condition.
Because the core of the technical solution of the present invention is to model the three-dimensional face by fitting a deformable mesh model, and different deformable models differ greatly in how closely they fit a face, in this embodiment the original portrait and the standard model acquired by the input unit 1 must satisfy certain conditions. First, the standard model is established by analyzing the characteristic information of different ethnic groups. For example, for Asians, a large number of Asian faces are collected, and their head shapes, contour shapes, facial features and other information are comprehensively analyzed to extract features and build an Asian head standard model. Standard models for other ethnic groups are built in the same way. The type of standard model is then determined by the ethnicity of the original portrait for subsequent fitting.
In addition, in order to ensure the accurate reliable of modeling, primitive man is as being also required to meet certain condition.First, in order to the greatest extent may be used
The appearance information of face can intactly be retained, the primitive man meets the front face figure of a default resolution condition as should be
Picture.Secondly, it should be noted that the facial image gathered under the control of light, half-light can not be obtained accurately in the gatherer process of facial image
Obtain Skin Color Information, thereby increases and it is possible to cause face calibration error occur, the similarity of faceform after reduction modeling.Thus, the original
Beginning portrait should meet a default resolution condition and a default left and right face light and shade difference condition.
The skeleton point determining unit 2 is used to obtain skeleton point information from the standard model; a skeleton point is a feature point whose deformation during mesh deformation of the standard model lies within a preset interval.
The feature point marking unit 3 is used to mark feature points on the original portrait and record their coordinate information.
The information processing performed by the skeleton point determining unit 2 and the feature point marking unit 3 involves the concept of feature points. In face detection and recognition, the points used to calibrate visually salient features of the face, such as the face outline, eyebrows, eyes, nose and mouth, are called feature points. Different face-calibration research yields different feature-point models to choose from; this embodiment uses a system of 83 feature points covering seven salient features identified from general facial knowledge (outer face contour, left and right eyebrows, left and right eyes, nose, mouth). Other embodiments may use other feature-point calibration methods from the prior art. Skeleton key points, in turn, are those feature points that exert considerable influence on the facial contour and facial features during mesh deformation. They are typically obtained by running deformation experiments on the face wireframe model, analyzing the influence of each feature point on the mesh deformation, and confirming those that meet a preset deformation condition as skeleton key points.
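The selection rule just described, keeping the feature points whose measured influence on the mesh deformation meets a preset condition, can be sketched as a small thresholding step. This is only an illustration: the influence scores, the threshold value and the function name are all hypothetical, not taken from the patent.

```python
import numpy as np

def select_bone_key_points(deformation_influence, threshold):
    """Return the indices of feature points whose influence on the mesh
    deformation (gathered from deformation experiments on the face
    wireframe model) meets the preset condition, here: >= threshold."""
    influence = np.asarray(deformation_influence, dtype=float)
    return np.nonzero(influence >= threshold)[0]

# Hypothetical per-feature-point influence scores for 8 feature points.
scores = [0.9, 0.1, 0.4, 0.85, 0.05, 0.7, 0.2, 0.95]
key_idx = select_bone_key_points(scores, threshold=0.5)
print(key_idx.tolist())  # → [0, 3, 5, 7]
```

In practice the influence scores would come from repeated deformation experiments rather than a fixed list, and the "preset deformation condition" need not be a simple threshold.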
The transformation unit 4 is used to apply a first transformation to the original portrait; the first transformation aligns a preset standard position on the original portrait with the corresponding preset standard position on the standard model, and applies the same coordinate transform to the skeleton points of the portrait.
In this embodiment, the preset standard positions are the left and right pupils among the feature points.
The first transformation applied by the transformation unit 4 to the original portrait includes rotation, scaling or translation:
The rotation specifically comprises: rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model.
The scaling specifically comprises: scaling the original portrait so that its interpupillary distance matches that of the standard portrait corresponding to the standard model.
The translation specifically comprises: translating the original portrait so that its left and right pupils align with those of the standard portrait corresponding to the standard model.
In fact, the first transformation performed by the transformation unit 4 is a preprocessing of the input portrait: by rotating, scaling, translating or cropping the original portrait, a preset position on it (the pupils in this embodiment) is aligned with the corresponding preset position (the pupils) of the standard portrait obtained by unfolding the standard model. The rotation, scaling and translation steps can all be expressed as matrix operations, so the feature points calibrated on the portrait can undergo the same matrix as a coordinate transform to obtain their positions in the new image.
The transformation unit 4 is further used to apply a second transformation to the standard model; the second transformation moves the plane projection points of the skeleton points on the standard model to the corresponding skeleton points of the original portrait after the first transformation.
Specifically, the transformation unit 4 performs the second transformation in the following steps:
projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transform;
moving the plane projection points of the standard model's skeleton points to the corresponding skeleton points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transform to the translated plane projection points of the standard model's skeleton points.
Further, the orthographic projection transform performed by the transformation unit 4 specifically includes:
converting the skeleton points of the standard model from their local coordinate system to the world coordinate system using the model matrix Model; converting them from the world coordinate system to the eye coordinate system using the view matrix View; and removing the Z coordinate by an orthographic projection with the projection matrix Projection, so that the points fall on the screen plane.
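The Model/View/Projection pipeline described above can be sketched as follows. The matrices here are toy examples (identity Model and View, a trivial orthographic matrix that zeroes Z); they and the function name are assumptions for illustration only.

```python
import numpy as np

def orthographic_screen_points(points, model, view, projection):
    """Project 3D skeleton points to the screen plane: transform through the
    Model (local -> world) and View (world -> eye) matrices, apply the
    orthographic Projection, then drop the Z coordinate."""
    pts = np.asarray(points, float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    clip = homo @ (projection @ view @ model).T       # the MVP pipeline
    return clip[:, :2]                                # keep x, y on the screen plane

# Toy matrices: identity Model/View, orthographic Projection that flattens Z.
I = np.eye(4)
ortho = np.diag([1.0, 1.0, 0.0, 1.0])
pts = [(0.2, -0.1, 0.7), (0.5, 0.4, -0.3)]
print(orthographic_screen_points(pts, I, I, ortho))   # Z is discarded
```

The inverse step of the second transformation would reattach each point's original depth after the 2D translation; since the orthographic projection discards Z, that depth has to be kept alongside the projected points.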
The texture mapping unit 5 is used to unfold the surface of the standard model after the second transformation into a texture image by a preset method; the mapping unit 6 is used to map the original portrait after the first transformation onto the texture image.
Further, the texture mapping unit 5 unfolds the surface of the standard model after the second transformation into a UV texture map by the preset method, and the mapping unit 6 maps the original portrait after the first transformation onto the UV texture map.
Further, the mapping unit 6 maps the original portrait after the first transformation onto the UV texture map as follows: using a feathered mask of preset size as the matte, the original portrait after the first transformation is mapped by feathered stitching onto the UV texture map that has undergone skin-color matching.
Preferably, the three-dimensional face modeling device further includes a skin-color matching unit 7, which selects one or more preset regions on the original portrait, obtains a skin-color sample value according to a preset algorithm, and uses the sample value to adjust the color of the UV texture map.
Further, the skin-color matching unit 7 adjusts the color of the UV texture map using the skin-color sample value as follows:
a skin-color sample image of the same size as the UV texture map is built from the skin-color sample value, and image blending is performed with the UV texture map as the source picture and the skin-color sample image as the target picture.
Because an ordinary portrait is affected by the capture environment, it carries shadows and highlights to varying degrees, which distorts the rendering of the true skin tone. To bring the color of the modeled face closer to the true skin tone, the skin color must be sampled. For example, using the result of face calibration, the regions between the two sides of the nose and the cheek feature points are selected and sampled, the mean color of the two regions is taken as the skin-color sample value, and that value is then used for the color adjustment.
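The sampling step just described, averaging the pixel colors of preset face regions into one skin-color value, might be sketched as below. The rectangle coordinates, the colors and the function name are invented for illustration; the patent only specifies that the regions come from the face-calibration result and that the two regions' mean is taken.

```python
import numpy as np

def skin_color_sample(image, regions):
    """Average the pixel colors of the preset sample regions (e.g. patches
    beside the nose, toward the cheeks) into one skin-color sample value.

    image: HxWx3 array; regions: list of (y0, y1, x0, x1) rectangles.
    """
    patches = [image[y0:y1, x0:x1].reshape(-1, 3) for y0, y1, x0, x1 in regions]
    return np.vstack(patches).mean(axis=0)

# Toy image with two uniform sample patches (hypothetical RGB values).
img = np.zeros((4, 4, 3))
img[0:2, 0:2] = (200, 150, 120)   # first sample region
img[2:4, 2:4] = (220, 170, 140)   # second sample region
print(skin_color_sample(img, [(0, 2, 0, 2), (2, 4, 2, 4)]))  # → [210. 160. 130.]
```

The resulting value would then be tiled into the skin-color sample image used as the blending target for the UV texture map.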
Moreover, a UV texture map is in fact a defined two-dimensional texture coordinate system that determines how a texture image is laid over the surface of a three-dimensional model. Other embodiments may use other coordinate systems or methods serving the same purpose to convert the standard model into a skin texture image.
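As background on how such a coordinate system works, a (u, v) pair in [0,1]² indexes a texel of the texture image for each point on the model's surface. A minimal nearest-neighbour lookup follows; measuring v from the bottom of the image is a common convention assumed here, not something the patent specifies.

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbour lookup: map a (u, v) pair in [0,1]^2 to a texel.
    v is measured from the bottom of the image (assumed convention)."""
    h, w = texture.shape[:2]
    u, v = uv
    x = min(int(u * (w - 1) + 0.5), w - 1)          # column from u
    y = min(int((1.0 - v) * (h - 1) + 0.5), h - 1)  # row from v (flipped)
    return texture[y, x]

tex = np.arange(16 * 3).reshape(4, 4, 3)  # toy 4x4 RGB texture
print(sample_texture(tex, (0.0, 1.0)))    # top-left texel → [0 1 2]
```

Renderers normally also offer bilinear filtering and wrap modes; the point here is only that the UV coordinates, not the 3D geometry, decide which texel lands on which surface point.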
By adopting bone alignment, this embodiment uses the coordinates of the face feature points to drive the skeleton key points on the standard model into alignment, which effectively fits the real face shape. Building differentiated standard head models according to the head and facial-feature differences of different ethnic groups reduces fitting error as far as possible and improves the fidelity of the facial contour and facial features. For the bone alignment, this embodiment takes the pupils as the alignment reference: the normalized face image is pre-aligned by pupil with the standard texture map, after which the skeleton points need only be translated to complete the alignment, effectively simplifying the model-fitting computation. In addition, the face image supplied by the user is sampled for skin color, and the colors of the texture map are blended according to the skin-color sample value, matching the skin color of the face model to the sampled value while preserving the detail of the source image, so that the model reproduces the coloring of the face well overall. Meanwhile, based on an analysis of the face wireframe model, this embodiment extracts the feature points that most strongly influence the facial contour and facial features as skeleton key points, and fits the contour and features of the face from these key points. In summary, the three-dimensional face modeling device provided by this embodiment achieves highly general, flexible, easy-to-operate, low-cost, accurate, realistic, fast and automatic three-dimensional face modeling.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between those entities or operations. Moreover, the terms "comprise", "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or terminal device. In the absence of further limitation, an element qualified by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or terminal device that comprises it. Furthermore, herein, "greater than", "less than", "more than" and the like are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including it.
Those skilled in the art should understand that the embodiments above may be provided as a method, a device, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps of the methods in the embodiments above may be completed by a program instructing the relevant hardware; the program may be stored in a storage medium readable by a computer device and executed to perform all or part of the steps described in the methods of the embodiments above. The computer device includes, but is not limited to: a personal computer, a server, a general-purpose computer, a special-purpose computer, a network device, an embedded device, a programmable device, an intelligent mobile terminal, a smart home device, a wearable smart device, an intelligent vehicle device, etc. The storage medium includes, but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disc, flash memory, USB drive, removable hard disk, memory card, memory stick, network server storage, network cloud storage, etc.
The embodiments above are described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to the processor of a computer device to produce a machine, so that the instructions executed by the processor of the computer device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-device-readable memory capable of directing a computer device to work in a particular manner, so that the instructions stored in that memory produce an article of manufacture including an instruction device which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer device, so that a series of operational steps is performed on the device to produce computer-implemented processing, whereby the instructions executed on the device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the embodiments above have been described, those skilled in the art, once aware of the basic inventive concept, may make further changes and modifications to them; the foregoing is therefore only the embodiments of the invention and does not thereby limit the scope of its patent protection. Every equivalent structure or equivalent process transformation made using the description and drawings of the invention, or applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the invention.
Claims (18)
1. A three-dimensional face modeling method, comprising the steps of:
obtaining an original portrait and a standard model;
obtaining skeleton point information from the standard model, a skeleton point being a feature point whose deformation during mesh deformation of the standard model lies within a preset interval;
marking feature points on the original portrait and recording their coordinate information;
applying a first transformation to the original portrait, the first transformation aligning a preset standard position on the original portrait with the corresponding preset standard position on the standard model, and applying the same coordinate transform to the skeleton points of the portrait;
applying a second transformation to the standard model, the second transformation moving the plane projection points of the skeleton points on the standard model to the corresponding skeleton points of the original portrait after the first transformation;
unfolding the surface of the standard model after the second transformation into a texture image by a preset method, and mapping the original portrait after the first transformation onto the texture image; specifically comprising: unfolding the surface of the standard model after the second transformation into a UV texture map by the preset method, and mapping the original portrait after the first transformation onto the UV texture map.
2. The three-dimensional face modeling method of claim 1, further comprising, before the step of mapping the original portrait after the first transformation onto the UV texture map, a skin-color matching treatment of the UV texture map, the treatment specifically comprising:
selecting one or more preset regions on the original portrait and obtaining a skin-color sample value according to a preset algorithm;
adjusting the color of the UV texture map using the skin-color sample value.
3. The three-dimensional face modeling method of claim 2, wherein the step of adjusting the color of the UV texture map using the skin-color sample value specifically comprises:
building a skin-color sample image of the same size as the UV texture map from the skin-color sample value, and performing image blending with the UV texture map as the source picture and the skin-color sample image as the target picture.
4. The three-dimensional face modeling method of claim 2, wherein mapping the original portrait after the first transformation onto the UV texture map specifically comprises:
using a feathered mask of preset size as the matte, mapping the original portrait after the first transformation by feathered stitching onto the UV texture map that has undergone the skin-color matching treatment.
5. The three-dimensional face modeling method of claim 1, wherein the preset standard positions are the left and right pupils among the feature points;
the first transformation of the original portrait includes rotation, scaling or translation;
the rotation specifically comprises: rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model;
the scaling specifically comprises: scaling the original portrait so that its interpupillary distance matches that of the standard portrait corresponding to the standard model;
the translation specifically comprises: translating the original portrait so that its left and right pupils align with those of the standard portrait corresponding to the standard model.
6. The three-dimensional face modeling method of claim 1, wherein the second transformation specifically comprises the steps of:
projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transform;
moving the plane projection points of the standard model's skeleton points to the corresponding skeleton points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transform to the translated plane projection points of the standard model's skeleton points.
7. The three-dimensional face modeling method of claim 6, wherein the orthographic projection transform specifically comprises:
converting the skeleton points of the standard model from their local coordinate system to the world coordinate system using the model matrix Model;
converting the skeleton points of the standard model from the world coordinate system to the eye coordinate system using the view matrix View;
removing the Z coordinate of the eye coordinate system by an orthographic projection with the projection matrix Projection, so that the points fall on the screen plane.
8. The three-dimensional face modeling method of claim 1, wherein the standard model is a standard model matching the ethnicity of the original portrait.
9. The three-dimensional face modeling method of claim 1, wherein the original portrait meets a preset resolution condition or a preset light-and-shade difference condition.
10. A three-dimensional face modeling device, comprising an input unit, a skeleton point determining unit, a feature point marking unit, a transformation unit, a texture mapping unit and a mapping unit;
the input unit is used to obtain an original portrait and a standard model;
the skeleton point determining unit is used to obtain skeleton point information from the standard model, a skeleton point being a feature point whose deformation during mesh deformation of the standard model lies within a preset interval;
the feature point marking unit is used to mark feature points on the original portrait and record their coordinate information;
the transformation unit is used to apply a first transformation to the original portrait, the first transformation aligning a preset standard position on the original portrait with the corresponding preset standard position on the standard model, and applying the same coordinate transform to the skeleton points of the portrait;
the transformation unit is further used to apply a second transformation to the standard model, the second transformation moving the plane projection points of the skeleton points on the standard model to the corresponding skeleton points of the original portrait after the first transformation;
the texture mapping unit is used to unfold the surface of the standard model after the second transformation into a texture image by a preset method; the mapping unit is used to map the original portrait after the first transformation onto the texture image; the texture mapping unit is further used to unfold the surface of the standard model after the second transformation into a UV texture map by the preset method; the mapping unit is further used to map the original portrait after the first transformation onto the UV texture map.
11. The three-dimensional face modeling device of claim 10, further comprising a skin-color matching unit, the skin-color matching unit being used to select one or more preset regions on the original portrait, obtain a skin-color sample value according to a preset algorithm, and adjust the color of the UV texture map using the skin-color sample value.
12. The three-dimensional face modeling device of claim 11, wherein the skin-color matching unit adjusting the color of the UV texture map using the skin-color sample value specifically comprises:
building a skin-color sample image of the same size as the UV texture map from the skin-color sample value, and performing image blending with the UV texture map as the source picture and the skin-color sample image as the target picture.
13. The three-dimensional face modeling device of claim 11, wherein the mapping unit mapping the original portrait after the first transformation onto the UV texture map specifically comprises:
using a feathered mask of preset size as the matte, mapping the original portrait after the first transformation by feathered stitching onto the UV texture map that has undergone skin-color matching.
14. The three-dimensional face modeling device of claim 10, wherein the preset standard positions are the left and right pupils among the feature points;
the first transformation applied by the transformation unit to the original portrait includes rotation, scaling or translation;
the rotation specifically comprises: rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model;
the scaling specifically comprises: scaling the original portrait so that its interpupillary distance matches that of the standard portrait corresponding to the standard model;
the translation specifically comprises: translating the original portrait so that its left and right pupils align with those of the standard portrait corresponding to the standard model.
15. The three-dimensional face modeling device of claim 10, wherein the transformation unit performing the second transformation specifically comprises the steps of:
projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transform;
moving the plane projection points of the standard model's skeleton points to the corresponding skeleton points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transform to the translated plane projection points of the standard model's skeleton points.
16. The three-dimensional face modeling device of claim 15, wherein the transformation unit performing the orthographic projection transform specifically comprises:
converting the skeleton points of the standard model from their local coordinate system to the world coordinate system using the model matrix Model;
converting the skeleton points of the standard model from the world coordinate system to the eye coordinate system using the view matrix View;
removing the Z coordinate of the eye coordinate system by an orthographic projection with the projection matrix Projection, so that the points fall on the screen plane.
17. The three-dimensional face modeling device of claim 10, wherein the standard model is a standard model matching the ethnicity of the original portrait.
18. The three-dimensional face modeling device of claim 10, wherein the original portrait meets a preset resolution condition or a preset light-and-shade difference condition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410687577.4A CN104376594B (en) | 2014-11-25 | 2014-11-25 | Three-dimensional face modeling method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104376594A CN104376594A (en) | 2015-02-25 |
CN104376594B true CN104376594B (en) | 2017-09-29 |
Family
ID=52555483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410687577.4A Active CN104376594B (en) | 2014-11-25 | 2014-11-25 | Three-dimensional face modeling method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104376594B (en) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809687A (en) * | 2015-04-23 | 2015-07-29 | 上海趣搭网络科技有限公司 | Three-dimensional human face image generation method and system |
CN107452049B (en) * | 2016-05-30 | 2020-09-15 | 腾讯科技(深圳)有限公司 | Three-dimensional head modeling method and device |
CN106570822B (en) * | 2016-10-25 | 2020-10-16 | 宇龙计算机通信科技(深圳)有限公司 | Face mapping method and device |
CN106618734A (en) * | 2016-11-04 | 2017-05-10 | 王敏 | Face-lifting-model-comparison imprinting device |
CN106815568A (en) * | 2016-12-30 | 2017-06-09 | 易瓦特科技股份公司 | For the method and system being identified for destination object |
US10621771B2 (en) * | 2017-03-21 | 2020-04-14 | The Procter & Gamble Company | Methods for age appearance simulation |
CN106934073A (en) * | 2017-05-02 | 2017-07-07 | 成都通甲优博科技有限责任公司 | Face comparison system, method and mobile terminal based on three-dimensional image |
CN108932459B (en) * | 2017-05-26 | 2021-12-10 | 富士通株式会社 | Face recognition model training method and device and face recognition method |
CN107274493B (en) * | 2017-06-28 | 2020-06-19 | 河海大学常州校区 | Three-dimensional virtual trial type face reconstruction method based on mobile platform |
CN107316340B (en) * | 2017-06-28 | 2020-06-19 | 河海大学常州校区 | Rapid face modeling method based on single photo |
CN107578469A (en) * | 2017-09-08 | 2018-01-12 | 明利 | A kind of 3D human body modeling methods and device based on single photo |
CN108171788B (en) * | 2017-12-19 | 2021-02-19 | 西安蒜泥电子科技有限责任公司 | Body change representation method based on three-dimensional modeling |
CN108171789B (en) * | 2017-12-21 | 2022-01-18 | 迈吉客科技(北京)有限公司 | Virtual image generation method and system |
CN108470321B (en) * | 2018-02-27 | 2022-03-01 | 北京小米移动软件有限公司 | Method and device for beautifying photos and storage medium |
CN108596827B (en) * | 2018-04-18 | 2022-06-17 | 太平洋未来科技(深圳)有限公司 | Three-dimensional face model generation method and device and electronic equipment |
CN109118579A (en) * | 2018-08-03 | 2019-01-01 | 北京微播视界科技有限公司 | The method, apparatus of dynamic generation human face three-dimensional model, electronic equipment |
CN109191505A (en) * | 2018-08-03 | 2019-01-11 | 北京微播视界科技有限公司 | Static state generates the method, apparatus of human face three-dimensional model, electronic equipment |
CN109191393B (en) * | 2018-08-16 | 2021-03-26 | Oppo广东移动通信有限公司 | Three-dimensional model-based beauty method |
CN110853147B (en) * | 2018-08-21 | 2023-06-20 | 东方梦幻文化产业投资有限公司 | Three-dimensional face transformation method |
CN109191508A (en) * | 2018-09-29 | 2019-01-11 | 深圳阜时科技有限公司 | A kind of simulation beauty device, simulation lift face method and apparatus |
CN109859305B (en) * | 2018-12-13 | 2020-06-30 | 中科天网(广东)科技有限公司 | Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face |
CN110032927A (en) * | 2019-02-27 | 2019-07-19 | 视缘(上海)智能科技有限公司 | A kind of face identification method |
CN109993689B (en) * | 2019-03-14 | 2023-05-16 | 郑州阿帕斯科技有限公司 | Cosmetic method and device |
CN110111417B (en) * | 2019-05-15 | 2021-04-27 | 浙江商汤科技开发有限公司 | Method, device and equipment for generating three-dimensional local human body model |
CN110111247B (en) * | 2019-05-15 | 2022-06-24 | 浙江商汤科技开发有限公司 | Face deformation processing method, device and equipment |
CN110751078B (en) * | 2019-10-15 | 2023-06-20 | 重庆灵翎互娱科技有限公司 | Method and equipment for determining non-skin color region of three-dimensional face |
CN112949360A (en) * | 2019-12-11 | 2021-06-11 | 广州市久邦数码科技有限公司 | Video face changing method and device |
CN111696184B (en) * | 2020-06-10 | 2023-08-29 | 上海米哈游天命科技有限公司 | Bone skin fusion determination method, device, equipment and storage medium |
CN112418195B (en) * | 2021-01-22 | 2021-04-09 | 电子科技大学中山学院 | Face key point detection method and device, electronic equipment and storage medium |
CN113554745B (en) * | 2021-07-15 | 2023-04-07 | 电子科技大学 | Three-dimensional face reconstruction method based on image |
CN113920282B (en) * | 2021-11-15 | 2022-11-04 | 广州博冠信息科技有限公司 | Image processing method and device, computer readable storage medium, and electronic device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6944320B2 (en) * | 2000-03-09 | 2005-09-13 | Microsoft Corporation | Rapid computer modeling of faces for animation |
JP2011039869A (en) * | 2009-08-13 | 2011-02-24 | Nippon Hoso Kyokai <Nhk> | Face image processing apparatus and computer program |
CN103116902A (en) * | 2011-11-16 | 2013-05-22 | 华为软件技术有限公司 | Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking |
CN103606190A (en) * | 2013-12-06 | 2014-02-26 | 上海明穆电子科技有限公司 | Method for automatically converting single face front photo into three-dimensional (3D) face model |
CN103646416A (en) * | 2013-12-18 | 2014-03-19 | 中国科学院计算技术研究所 | Three-dimensional cartoon face texture generation method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101968891A (en) * | 2009-07-28 | 2011-02-09 | 上海冰动信息技术有限公司 | System for automatically generating three-dimensional figure of picture for game |
- 2014-11-25: Application CN201410687577.4A filed in China (CN); granted as CN104376594B. Status: Active.
Also Published As
Publication number | Publication date |
---|---|
CN104376594A (en) | 2015-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104376594B (en) | Three-dimensional face modeling method and device | |
CN108509848B (en) | The real-time detection method and system of three-dimension object | |
WO2021093453A1 (en) | Method for generating 3d expression base, voice interactive method, apparatus and medium | |
CN108399649B (en) | Single-picture three-dimensional face reconstruction method based on cascade regression network | |
CN113012293B (en) | Stone carving model construction method, device, equipment and storage medium | |
CN102999942B (en) | Three-dimensional face reconstruction method | |
CN104077804B (en) | A kind of method based on multi-frame video picture construction three-dimensional face model | |
CN109285215A (en) | A kind of human 3d model method for reconstructing, device and storage medium | |
CN110675489B (en) | Image processing method, device, electronic equipment and storage medium | |
CN109544677A (en) | Indoor scene main structure method for reconstructing and system based on depth image key frame | |
CN106097348A (en) | A kind of three-dimensional laser point cloud and the fusion method of two dimensional image | |
CN107358648A (en) | Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image | |
CN109360262A (en) | Indoor positioning system and method for generating a three-dimensional model from a CAD drawing |
CN105069751B (en) | A kind of interpolation method of depth image missing data | |
CN104346824A (en) | Method and device for automatically synthesizing three-dimensional expression based on single facial image | |
CN109934847A (en) | The method and apparatus of weak texture three-dimension object Attitude estimation | |
CN110399809A (en) | The face critical point detection method and device of multiple features fusion | |
CN106155299B (en) | A kind of pair of smart machine carries out the method and device of gesture control | |
CN106797458A (en) | The virtual change of real object | |
CN105913444B (en) | Livestock figure profile reconstructing method and Body Condition Score method based on soft laser ranging | |
CN107766851A (en) | A kind of face key independent positioning method and positioner | |
CN106021550A (en) | Hair style designing method and system | |
CN113449570A (en) | Image processing method and device | |
CN108665530A (en) | Three-dimensional modeling implementation method based on single picture | |
Zheng et al. | 4D reconstruction of blooming flowers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||