CN110858411A - Method for generating 3D head model based on single face picture - Google Patents
- Publication number
- CN110858411A CN110858411A CN201810958773.9A CN201810958773A CN110858411A CN 110858411 A CN110858411 A CN 110858411A CN 201810958773 A CN201810958773 A CN 201810958773A CN 110858411 A CN110858411 A CN 110858411A
- Authority
- CN
- China
- Prior art keywords
- model
- face
- models
- dimensional matrix
- head model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 23
- 239000011159 matrix material Substances 0.000 claims abstract description 47
- 230000002194 synthesizing effect Effects 0.000 claims abstract description 5
- 230000001815 facial effect Effects 0.000 claims description 10
- 210000003128 head Anatomy 0.000 description 32
- 210000001508 eye Anatomy 0.000 description 12
- 238000004364 calculation method Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000003491 array Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 210000000697 sensory organ Anatomy 0.000 description 2
- 230000037237 body shape Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 210000004709 eyebrow Anatomy 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides a method for generating a 3D head model from a single face picture, comprising the following steps: step A, extracting key points of the face from the face picture; step B, selecting a plurality of reference points from the key points; step C, forming a target two-dimensional matrix from the distribution of the remaining key points around each reference point; step D, calculating the matrix similarity between each standard two-dimensional matrix in a preset model library and the target two-dimensional matrix; step E, selecting the standard two-dimensional matrix with the highest matrix similarity, and taking its corresponding face model as the face model of the 3D head model; and step F, determining the skin color from the face picture and synthesizing the 3D head model. The technical scheme has a simple algorithm flow and high computational efficiency.
Description
Technical Field
The invention relates to a method for generating a 3D model, in particular to a method for generating a 3D head model based on a single face picture.
Background
With the development of science and technology, 3D technology has gradually entered everyday life: a 3D head portrait, or even a doll model, can be produced from a single picture, meeting the entertainment needs of many people. Prior-art solutions such as ItSee3D, however, are complex, require expensive equipment and have long production cycles, which greatly harms the user experience.
Disclosure of Invention
In view of the above, the present invention provides a method for generating a 3D head model based on a single face picture, the method comprising:
Step A, extracting key points of the face from the face picture;
Step B, selecting a plurality of reference points from the key points;
Step C, forming a target two-dimensional matrix from the distribution of the remaining key points around each reference point;
Step D, calculating the matrix similarity between each standard two-dimensional matrix in a preset model library and the target two-dimensional matrix;
Step E, selecting the standard two-dimensional matrix with the highest matrix similarity, and taking its corresponding face model as the face model of the 3D head model;
Step F, determining the skin color from the face picture, and synthesizing the 3D head model.
Further, the model library comprises standard two-dimensional matrices corresponding to a plurality of face models, the face models comprising a round face model, an oval face model, a heart-shaped face model, a diamond-shaped face model, a square face model, a long face model and a pear-shaped face model.
Further, each face model corresponds to a set of large eye models and a set of small eye models.
Further, step E further comprises: calculating the outer-canthus angle θ from the key points of the face; if θ is larger than a specified angle, selecting the large-eye model as the eye model of the 3D head model, otherwise selecting the small-eye model as the eye model of the 3D head model.
Further, step E further comprises: taking preset nose, ear and lip models as the nose, ear and lip models of the 3D head model.
Further, step F further comprises: determining the thickness of the lips from the face picture and, in combination with the preset lip model, determining the lip model of the 3D head model.
Further, step F further comprises: selecting a cheek region in the face picture and, after de-lighting processing, taking the resulting color as the skin color of the 3D head model.
The technical scheme provided by the invention can produce a 3D head model from a single face photograph; the algorithm is simple, the computational efficiency is high, and the equipment cost is low, effectively solving the problems of high cost and poor flexibility in the prior art.
Drawings
FIG. 1 is a flow chart of a method provided by the present invention;
FIG. 2 is a flow diagram of another method provided by the present invention;
FIG. 3 is a schematic diagram of key point extraction from a face picture;
FIG. 4 is a flow chart for constructing a target two-dimensional matrix;
FIG. 5 is a schematic diagram of calculating the outer-canthus angle from the eye key points;
FIG. 6 is a flow chart for determining skin color;
FIG. 7 is a schematic diagram of a stretched texture map generated from a face picture;
FIG. 8 is a schematic view after an overlay process;
fig. 9 is a schematic diagram of a doll based on a 3D head model.
Detailed Description
Example one
The invention provides a method for generating a 3D head model based on a single face picture; as shown in FIG. 1, the specific steps are as follows:
Step 101, extracting key points of the face from the face picture;
Step 102, selecting a plurality of reference points from the key points;
Step 103, forming a target two-dimensional matrix from the distribution of the remaining key points around each reference point;
Step 104, calculating the matrix similarity between each standard two-dimensional matrix in a preset model library and the target two-dimensional matrix;
Step 105, selecting the standard two-dimensional matrix with the highest matrix similarity, and taking its corresponding face model as the face model of the 3D head model;
Step 106, determining the skin color from the face picture, and synthesizing the 3D head model.
With this technical scheme, the face data in the provided single face picture can be obtained using a graph-similarity algorithm, models of the face shape and facial features are constructed, and the 3D head model is finally synthesized. The method is highly flexible, requires only a single picture, and has a simple algorithm flow and high computational efficiency.
Example two
A method of generating a 3D head model based on a single face picture; as shown in FIG. 2, the method comprises the following steps:
Step 201, extracting key points of the face from the face picture;
The face picture is a single frontal face picture whose left-right rotation does not exceed 15 degrees; besides an ordinary picture, a photograph, a screenshot or the like may be used. After the picture is selected, the open-source Dlib library is used to detect the face in the picture and obtain the key points of the face; 68 key points are usually taken, as shown in FIG. 3.
Step 202, selecting a plurality of reference points from the key points;
After the key points are obtained, several reference points are selected from the 68 key points, usually 6. To better match the face model, the nose tip (point 34), the center of the upper lip (point 52), the chin tip (point 9), the left eyebrow corner (point 18), the left eye corner (point 37) and the left mouth corner (point 49) are generally taken as the reference points.
Specifically, the technical scheme of the present application matches the face model with a graph-similarity algorithm. Because the key points are extracted from the gray values of the face picture, and lighting can make the gray values of the face uneven, the reference points are preferably taken symmetrically and uniformly so that the later calculations are more accurate. Since a larger number of reference points increases the later computation and lowers execution efficiency, 6 points are generally taken.
Step 203, forming a target two-dimensional matrix from the distribution of the remaining key points around each reference point;
as shown in fig. 4, after selecting the reference points, the specific steps of forming the target two-dimensional matrix are as follows:
Step 301, selecting a reference point P1, establishing a coordinate system centered on P1, and dividing the plane into sectors of 10 degrees each, 36 sectors in total;
Step 302, counting the number Wj of key points falling in each sector to form an array of 36 values, where 0 ≤ Wj ≤ 67 and Wj is a non-negative integer;
Step 303, computing the arrays for the other 5 reference points in the same way as steps 301 and 302;
Step 304, combining the 6 arrays of 36 values into the target two-dimensional matrix.
After the above steps, the target two-dimensional matrix is obtained for the subsequent calculation and processing.
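Steps 301-304 can be sketched in plain Python as follows. The 0-based landmark indices used for the six reference points (corresponding to the 1-based points 34, 52, 9, 18, 37 and 49 mentioned earlier) and the helper names are illustrative assumptions, not part of the original disclosure.

```python
import math

def angle_histogram(center, points, bins=36):
    """Count key points falling into each 10-degree sector around `center`."""
    counts = [0] * bins
    cx, cy = center
    for (x, y) in points:
        if (x, y) == (cx, cy):
            continue  # skip the reference point itself
        # Polar angle of the key point relative to the reference point, in [0, 360)
        theta = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
        counts[int(theta // (360 // bins))] += 1
    return counts

def target_matrix(keypoints, ref_indices=(33, 51, 8, 17, 36, 48)):
    """Form the 6x36 target two-dimensional matrix of step 304."""
    return [angle_histogram(keypoints[i], keypoints) for i in ref_indices]
```

With 68 key points, each of the 6 rows sums to 67, since every key point except the reference point itself falls in exactly one sector.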
Step 204, calculating the matrix similarity between each standard two-dimensional matrix in a preset model library and a target two-dimensional matrix;
In general, a plurality of face models are stored in the model library, for example a round face model, an oval face model, a heart-shaped face model, a diamond-shaped face model, a square face model, a long face model, a pear-shaped face model, and so on. For each of these face models, key points and reference points are extracted from a corresponding face picture, and the standard two-dimensional matrix of that face model is obtained by the procedure of steps 301-304. The matrix similarity between the target two-dimensional matrix calculated in the previous step and each standard two-dimensional matrix is then computed one by one to obtain the similarity values.
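The description does not name a specific matrix-similarity measure. Cosine similarity over the flattened matrices is one common choice and is used in this sketch purely as an assumption; the function names are also illustrative.

```python
import math

def matrix_similarity(a, b):
    """Cosine similarity between two equally-shaped 2D matrices (1.0 = identical direction)."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    dot = sum(x * y for x, y in zip(flat_a, flat_b))
    na = math.sqrt(sum(x * x for x in flat_a))
    nb = math.sqrt(sum(x * x for x in flat_b))
    return dot / (na * nb) if na and nb else 0.0

def best_face_model(target, model_library):
    """Pick the face model whose standard matrix is most similar to the target (steps 204-205)."""
    return max(model_library.items(),
               key=lambda kv: matrix_similarity(target, kv[1]))[0]
```

For example, with a library mapping face-shape names to their standard matrices, `best_face_model` returns the name of the most similar face shape.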
Step 205, selecting the standard two-dimensional matrix with the highest matrix similarity, and taking its corresponding face model as the face model of the 3D head model;
According to the matrix similarity values calculated in step 204, the standard two-dimensional matrix most similar to the target two-dimensional matrix is found, and the face model corresponding to it is taken as the face model of the 3D head model being produced.
Step 206, determining an eye model;
As noted in step 204, a plurality of face models are stored in the model library, and each face model corresponds to a set of large-eye models and a set of small-eye models. The key points obtained in step 201 include key points of the eyes, as shown in FIG. 5. From the eye key points, the outer-canthus angle θ can be calculated; if θ is larger than a specified angle, the large-eye model is selected as the eye model of the 3D head model, otherwise the small-eye model is selected.
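The outer-canthus test can be sketched as below. The description does not specify which eyelid key points form the angle or the threshold value; here the angle at the outer eye corner is taken between the rays to its two neighbouring eyelid points, and the 20-degree threshold is an illustrative placeholder.

```python
import math

def corner_angle(corner, upper, lower):
    """Angle in degrees at `corner` between the rays toward `upper` and `lower`."""
    v1 = (upper[0] - corner[0], upper[1] - corner[1])
    v2 = (lower[0] - corner[0], lower[1] - corner[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def pick_eye_model(theta, threshold=20.0):
    """Step 206: choose the large- or small-eye model by the outer-canthus angle."""
    return "large" if theta > threshold else "small"
```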
Step 207, determining nose and ear models;
Because each face model stored in the model library has corresponding nose and ear models, and a single frontal face picture makes it difficult to obtain the height of the nose and impossible to obtain a full view of the ears, the nose and ear models stored in the model library are usually used as the nose and ear models of the 3D head model being produced.
Step 208, determining a lip model;
Because a lip model is stored in the model library, and the thickness of the lips can further be determined from the face picture, the stored lip model is combined with the measured lip thickness to determine the lip model of the 3D head model being produced.
Step 209, determining the skin color according to the face picture, and synthesizing a 3D head model.
Finally, the skin color of the 3D head model is determined; as shown in FIG. 6, the specific steps are as follows:
Step 401, selecting a cheek region in the face picture;
Step 402, performing de-lighting processing;
Step 403, taking the processed color as the skin color of the 3D head model.
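The "de-lighting" processing of step 402 is not detailed in the description. One simple approximation, used here only as an assumption about the intended processing, is to discard the brightest and darkest pixels of the cheek patch (highlights and shadows) before averaging the rest.

```python
def skin_color(cheek_pixels, trim=0.2):
    """Average RGB of a cheek patch after trimming extreme-brightness pixels.

    `cheek_pixels` is a list of (r, g, b) tuples; `trim` is the fraction of
    pixels dropped from each end of the brightness ranking.
    """
    ranked = sorted(cheek_pixels, key=lambda p: sum(p))  # brightness ~ r+g+b
    k = int(len(ranked) * trim)
    kept = ranked[k:len(ranked) - k] or ranked  # keep the middle band
    n = len(kept)
    return tuple(sum(p[i] for p in kept) // n for i in range(3))
```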
After the face model, the facial-feature models and the skin color are determined, texture superposition processing is performed on them to form the 3D head model, as shown in FIGS. 7 and 8.
After the 3D head model is completed, it can be combined with a preferred body shape and clothing to form a doll, as shown in FIG. 9.
In summary, the embodiments of the present invention provide a technical scheme that generates a 3D head model from a single face picture.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall fall within the scope of the present invention.
Claims (7)
1. A method of generating a 3D head model based on a single facial picture, the method comprising:
Step A, extracting key points of the face from the face picture;
Step B, selecting a plurality of reference points from the key points;
Step C, forming a target two-dimensional matrix from the distribution of the remaining key points around each reference point;
Step D, calculating the matrix similarity between each standard two-dimensional matrix in a preset model library and the target two-dimensional matrix;
Step E, selecting the standard two-dimensional matrix with the highest matrix similarity, and taking its corresponding face model as the face model of the 3D head model;
Step F, determining the skin color from the face picture, and synthesizing the 3D head model.
2. The method of claim 1, wherein the model library comprises standard two-dimensional matrices corresponding to a plurality of face models, the face models comprising a round face model, an oval face model, a heart-shaped face model, a diamond-shaped face model, a square face model, a long face model and a pear-shaped face model.
3. The method of claim 2, wherein each face model corresponds to a set of large eye models and a set of small eye models.
4. The method of claim 3, wherein step E further comprises:
calculating the outer-canthus angle θ from the key points of the face; if θ is larger than a specified angle, selecting the large-eye model as the eye model of the 3D head model, otherwise selecting the small-eye model as the eye model of the 3D head model.
5. The method of claim 1, wherein step E further comprises:
taking preset nose, ear and lip models as the nose, ear and lip models of the 3D head model.
6. The method of claim 5, wherein step F further comprises:
determining the thickness of the lips from the face picture and, in combination with the preset lip model, determining the lip model of the 3D head model.
7. The method of claim 1, wherein step F further comprises:
selecting a cheek region in the face picture and, after de-lighting processing, taking the resulting color as the skin color of the 3D head model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810958773.9A CN110858411A (en) | 2018-08-22 | 2018-08-22 | Method for generating 3D head model based on single face picture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810958773.9A CN110858411A (en) | 2018-08-22 | 2018-08-22 | Method for generating 3D head model based on single face picture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110858411A true CN110858411A (en) | 2020-03-03 |
Family
ID=69634805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810958773.9A Pending CN110858411A (en) | 2018-08-22 | 2018-08-22 | Method for generating 3D head model based on single face picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110858411A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012185545A (en) * | 2011-03-03 | 2012-09-27 | Secom Co Ltd | Face image processing device |
CN105719326A (en) * | 2016-01-19 | 2016-06-29 | 华中师范大学 | Realistic face generating method based on single photo |
-
2018
- 2018-08-22 CN CN201810958773.9A patent/CN110858411A/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012185545A (en) * | 2011-03-03 | 2012-09-27 | Secom Co Ltd | Face image processing device |
CN105719326A (en) * | 2016-01-19 | 2016-06-29 | 华中师范大学 | Realistic face generating method based on single photo |
Non-Patent Citations (1)
Title |
---|
Tan Guoxin, Sun Chuanming: "An Interactive Method for Generating Realistic Three-Dimensional Faces" *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111028330B (en) | Three-dimensional expression base generation method, device, equipment and storage medium | |
CN111243093B (en) | Three-dimensional face grid generation method, device, equipment and storage medium | |
CN106648103B (en) | A kind of the gesture tracking method and VR helmet of VR helmet | |
US11900557B2 (en) | Three-dimensional face model generation method and apparatus, device, and medium | |
CN112419487B (en) | Three-dimensional hair reconstruction method, device, electronic equipment and storage medium | |
CN109376582A (en) | A kind of interactive human face cartoon method based on generation confrontation network | |
CN110163054A (en) | A kind of face three-dimensional image generating method and device | |
CN103208133A (en) | Method for adjusting face plumpness in image | |
CN104899563A (en) | Two-dimensional face key feature point positioning method and system | |
CN106778628A (en) | A kind of facial expression method for catching based on TOF depth cameras | |
CN102567716B (en) | Face synthetic system and implementation method | |
CN108921926A (en) | A kind of end-to-end three-dimensional facial reconstruction method based on single image | |
CN108121950B (en) | Large-pose face alignment method and system based on 3D model | |
CN111833236B (en) | Method and device for generating three-dimensional face model for simulating user | |
CN110675475A (en) | Face model generation method, device, equipment and storage medium | |
CN103593870A (en) | Picture processing device and method based on human faces | |
CN112102480B (en) | Image data processing method, apparatus, device and medium | |
CN107610239A (en) | The virtual try-in method and device of a kind of types of facial makeup in Beijing operas | |
WO2021197230A1 (en) | Three-dimensional head model constructing method, device, system, and storage medium | |
CN108717730B (en) | 3D character reconstruction method and terminal | |
CN114360031A (en) | Head pose estimation method, computer device, and storage medium | |
CN107886568B (en) | Method and system for reconstructing facial expression by using 3D Avatar | |
CN112507766B (en) | Face image extraction method, storage medium and terminal equipment | |
WO2023160074A1 (en) | Image generation method and apparatus, electronic device, and storage medium | |
CN110858411A (en) | Method for generating 3D head model based on single face picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20200303 |