CN109064547A - Data-driven single-image hair reconstruction method - Google Patents

Data-driven single-image hair reconstruction method

Info

Publication number
CN109064547A
CN109064547A (application CN201810686955.5A)
Authority
CN
China
Prior art keywords
hair
model
style
image
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810686955.5A
Other languages
Chinese (zh)
Other versions
CN109064547B (en)
Inventor
齐越
包永堂
吴继强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810686955.5A priority Critical patent/CN109064547B/en
Publication of CN109064547A publication Critical patent/CN109064547A/en
Application granted granted Critical
Publication of CN109064547B publication Critical patent/CN109064547B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a data-driven single-image hair reconstruction method comprising four steps. First, a single image is input, and the user draws a number of strokes based on observation of the hairstyle in the image; each stroke must follow the direction of a hair wisp in the image from root to tip, and together these drawn curves capture the overall geometric topology of the hair model. Second, for each drawn stroke, a best-matching strand and its matching hairstyle are obtained by measuring the difference between the drawn stroke and the strands of the hairstyles in the database. Third, to guard against uncertainty in the matching result, a confirmation step is applied to each match to decide whether to adopt the candidate hairstyle. Fourth, the selected target hairstyles are fused by generating a direction field, producing the final hair model. The invention can reconstruct a complete strand-level 3D hair model that resembles the original image.

Description

Data-driven single-image hair reconstruction method
Technical field
The invention belongs to the fields of computer vision and computer graphics; specifically, it is a data-driven single-image hair reconstruction method intended mainly for games, film and television animation, virtual reality, and similar applications.
Background art
In computer graphics, three-dimensional modeling of virtual characters has long been an important research topic. Visually realistic character modeling is widely used in virtual reality, film special effects, video games, and other visually oriented areas of computer graphics. Hair is a key feature of a character: it varies from person to person and is sometimes even an important cue for telling people apart. At the same time, a head of hair consists of a very large number of strands, and its shape, motion, and optical properties are all highly complex, which makes realistic hair modeling both a research hotspot and a difficulty in computer graphics. Geometric modeling of hair of a specific form has always been tedious work; currently, most hair models are still built by hand by artists using interactive three-dimensional modeling tools. In recent years, researchers have increasingly attempted to reconstruct models resembling real hair from image data using automated methods. However, existing image-based 3D hair reconstruction methods concentrate on multi-view capture, which requires a complex acquisition environment and cannot be applied outside the laboratory. Methods based only on a single image give unsatisfactory results: the range of hairstyles they can model is limited, and neither the consistency of hair growth nor the reliability of hair depth can be guaranteed. Because of self-occlusion of the hair and the missing information for the back of the head, most hair modeling techniques can only obtain what are in practice 2.5-dimensional hair models with rather fragmented strands. The present invention can generate a complete strand-level 3D hair model that resembles the hairstyle in the original image and supports downstream applications such as hair rendering and dynamic simulation.
Summary of the invention
The technical problem solved by the invention: overcoming certain limitations of the prior art by providing a data-driven single-image hair reconstruction method that reconstructs a hair model whose appearance resembles the original image while guaranteeing the reliability of strand growth and continuity, and which therefore has high practical value.
The technical solution adopted by the invention to solve the above problem: build a hair model database and perform single-image hair reconstruction based on that database, comprising the following steps:
Step 1: Input an image and automatically locate facial feature points in it using a facial landmark detection algorithm; then, using the standard head model provided in the USC-HairSalon hair model database (to which all hair models are fitted), compute the three-dimensional-to-two-dimensional transformation matrix T. Based on the user's observation, key strokes are then drawn on the input image; each stroke must follow the direction of a hair wisp in the image from root to tip, and together the strokes should capture the geometric topology of the whole hairstyle. The image is a single image;
Step 2: Using the transformation matrix T computed in step 1, project the hairstyle models in the hair model database onto the input image, then compute the difference from the strokes drawn in step 1; for each drawn stroke, obtain a best-matching strand and its matching hairstyle;
Step 3: Confirm each current best-matching strand and matching hairstyle. First obtain the strands surrounding the target strand in the target hairstyle, use a coefficient-weighted Hausdorff distance as the metric between two strands in space, and compute the proportion of surrounding strands similar to the target strand; based on this proportion, decide whether to adopt the target strand and hairstyle. If not, return to step 2 to find the next best-matching strand and matching hairstyle, finally obtaining the target strands and hairstyles;
Step 4: Generate a corresponding direction field from each target strand and hairstyle obtained, and treat the subsequent fusion as a multi-label assignment problem; minimize an energy formula to obtain the fused direction field. Then trace and grow strands from the scalp following this direction field until the final 3D hair model is obtained, which is used for applications such as rendering and dynamic simulation.
Step 1 is implemented as follows:
The three-dimensional-to-two-dimensional transformation matrix T is computed by taking the input single image and extracting 68 facial feature points with a facial landmark detection algorithm; these are then matched against the feature points calibrated in advance on the standard head model provided by the USC-HairSalon hair model database (to which all hair models are fitted), and the transformation matrix T from 3D to 2D is computed using the Gold Standard algorithm. Using the obtained transformation matrix T and based on the user's observation, strokes are drawn on the input image; these strokes need to capture the geometric topology of the whole hairstyle.
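The landmark-based calibration above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it performs only the linear DLT step of the Gold Standard algorithm (the full algorithm adds nonlinear refinement), and the point correspondences and names are hypothetical.

```python
import numpy as np

def estimate_projection_dlt(X3d, x2d):
    """Estimate a 3x4 projection matrix T from 3D-2D point pairs via the
    linear (DLT) step of the Gold Standard algorithm: stack two equations
    per correspondence and take the null vector of the system."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(A)
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(T, X3d):
    """Project 3D points to the image plane with T (homogeneous division)."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    xh = Xh @ T.T
    return xh[:, :2] / xh[:, 2:3]
```

With the 68 detected landmarks as `x2d` and the pre-calibrated head-model points as `X3d`, the recovered T can then project any database hair model onto the image.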
Step 2 is implemented as follows:
(1) Using the transformation matrix T obtained in step 1, project each hair model in the hair model database onto the input image and compare it with the strokes drawn in step 1: for each sample point on a drawn stroke, find the closest sample point on the projected strand, and compute the difference value between the projected strand and the drawn stroke;
(2) By a simple exhaustive search over the entire hair model database, compute for each drawn stroke the best-matching sample strand with the smallest difference and its corresponding sample hair model.
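The exhaustive matching of sub-steps (1)-(2) can be sketched roughly as below. The sampling density, the length-tolerance value, and the use of a plain mean of nearest-point distances are assumptions, since the patent's difference formula image is not reproduced in this text.

```python
import numpy as np

def stroke_difference(stroke, strand2d, length_tol=0.3):
    """Difference between a drawn stroke and a projected strand (both Nx2):
    mean distance from each stroke sample to its closest strand sample.
    Strands whose length differs too much from the stroke are rejected."""
    def polyline_len(p):
        return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()
    lu, ls = polyline_len(stroke), polyline_len(strand2d)
    if abs(lu - ls) > length_tol * lu:          # compare only near-equal lengths
        return np.inf
    d = np.linalg.norm(stroke[:, None, :] - strand2d[None, :, :], axis=2)
    return d.min(axis=1).mean()                 # nearest projected sample per point

def best_match(stroke, database):
    """Exhaustive search; database is a list of (model_id, list_of_2d_strands)."""
    best = (np.inf, None, None)
    for model_id, strands in database:
        for k, strand in enumerate(strands):
            diff = stroke_difference(stroke, strand)
            if diff < best[0]:
                best = (diff, model_id, k)
    return best  # (difference, model id, strand index)
```

A spatial index over projected strands would speed this up; the patent itself only requires the simple traversal shown.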
Step 3 is implemented as follows:
Consider the consistency between the matched sample strand computed in step 2 and the strands surrounding it on the corresponding sample hair model, and confirm the current best-matching strand and matching hairstyle. Use a coefficient-weighted Hausdorff distance as the metric between two curves in space, set a distance threshold, and compute the matching rate between the sample strand and its surrounding strands; set a matching threshold, and if the matching result exceeds this threshold, adopt the best-matching sample strand and its corresponding sample hair model as the current result, otherwise discard the sample strand.
Step 4 is implemented as follows:
(1) For each best-matching sample strand and its corresponding sample hair model, first generate a three-dimensional direction field for the sample hair model; then, with the sample strand as guidance, minimize an energy formula to obtain the fused direction field;
(2) Following the guidance of the fused hairstyle direction field, trace and grow strands starting from the scalp until the final 3D hair model is generated.
Compared with the prior art, the invention has the following advantages:
(1) The invention uses an extended hair model database, so the data-driven algorithm adapts to more target hairstyles, and only a single image is needed to reconstruct a complete hair model resembling the input image.
(2) The invention proposes an enhanced matching algorithm that adds a further confirmation after matching is completed, reducing the randomness present in the resulting strands under naive matching; the algorithm is therefore more robust, and the sample strands and corresponding hair models obtained in the matching step are more accurate and effective. The fusion algorithm also makes the boundaries between fused regions transition smoothly, so the fusion result follows the guidance of the drawn strokes, preserves detail, and maintains continuity as a whole.
(3) The complete strand-level 3D hair model generated by the invention supports follow-up applications such as rendering and motion simulation, and can be used in many scenes that require complex hairstyles.
Brief description of the drawings
Fig. 1 is the data flowchart of the method of the invention;
Fig. 2 is a schematic diagram showing the process of the method of the invention.
Detailed description of the embodiments
The invention is described in detail below with reference to the accompanying drawings and specific implementations.
The main flow of the invention is shown in Fig. 1 and is broadly divided into the following four steps:
(1) Drawing key strands
An image is input, facial feature points are automatically located in the image, and the three-dimensional-to-two-dimensional transformation matrix T is computed from the standard head model in the database. Then, based on the user's observation and simple interaction, strokes are drawn on the input image; each of these key strokes must follow the direction of a hair wisp in the image from root to tip, and together they should capture the geometric topology of the whole hairstyle well.
(2) Matching algorithm
To measure the difference between the 2D user-drawn stroke U and a strand S of a 3D hairstyle, the transformation matrix T computed in step (1) is used to project the hair models of the database onto the input image. For each sample point s_i on the drawn stroke U, the closest sample point s_j on the projected strand is found, and the difference value is computed as follows:
where ||p(s_i) - p(s_j)|| is the Euclidean distance between the two points s_i and s_j on the image plane, and len(·) computes the length of the drawn stroke or of the projected strand. The formula carries one constraint: U and S are compared only when they have roughly the same length.
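The equation image for this difference value is not reproduced in the text. A plausible reconstruction, consistent with the nearest-point matching and length constraint described above (the averaging and the exact form of the constraint are assumptions), is:

```latex
d(U, S) = \frac{1}{|U|} \sum_{s_i \in U} \bigl\| p(s_i) - p(s_{j(i)}) \bigr\|,
\qquad \text{subject to}\quad
\frac{\lvert \mathrm{len}(U) - \mathrm{len}(S) \rvert}{\mathrm{len}(U)} < \varepsilon,
```

where $s_{j(i)}$ denotes the sample point on the projected strand $S$ closest to $s_i$, and $\varepsilon$ is a small length tolerance.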
The difference between U and S is defined by the difference, computed on the image plane, between the drawn information and the hair wisp; the formula is as follows:
By minimizing the above formula through a simple exhaustive search, a best-matching strand and its corresponding database hairstyle are computed for each user input U.
(3) Confirmation algorithm
Since the method of step (2) can only ensure the accuracy of a single strand, the final matched hairstyle may be uncertain. Considering the consistency between the sample strand obtained in step (2) and the strands around it, the approach adopted is to add a confirmation step at matching time that decides whether to adopt the sample hairstyle corresponding to the sample strand.
First, the strands {S} surrounding the target strand S_i in the target hairstyle δ_i are obtained. To distinguish 3D strands of different shapes, a metric is defined that mainly considers spatial position, strand length, and tangent direction:
where D_H(S_i, S_j) is the Hausdorff distance between the two strands, commonly used as a measure of the distance between two curves in space; it is expressed as follows:
where S_i and S_j respectively denote the i-th and j-th strands in the target hairstyle δ, p and q are points distributed on the two strands, and ||p - q|| is the Euclidean distance between two points in space.
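The equation image for D_H is likewise omitted here; its standard symmetric form, matching the term-by-term description above, is:

```latex
D_H(S_i, S_j) = \max\Bigl\{
\max_{p \in S_i} \min_{q \in S_j} \lVert p - q \rVert,\;
\max_{q \in S_j} \min_{p \in S_i} \lVert p - q \rVert
\Bigr\}.
```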
To prevent two strands of different shapes from obtaining a small D_H merely because they lie close together, a coefficient θ_{i,j} is added to the Hausdorff distance formula, where θ_{i,j} is a variable representing the tangent information between the two strands. It is computed from the dot products of the tangents of the two strands and is defined as follows:
tan(p_k) and tan(q_k) respectively denote the unit tangent vectors at p_k on S_i and at q_k on S_j, and n is the number of points compared along the shorter strand. To make the points on every strand uniformly distributed, all strands are resampled in advance using cubic spline interpolation. Formula (3) shows that the resulting distance is smaller for two strands with close spatial distribution, similar length, and similar tangents.
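A rough sketch of the coefficient-weighted distance is given below. How D_H and θ are combined is an assumption (the equation image is omitted from this text); linear interpolation stands in for the cubic spline resampling, and all names are illustrative.

```python
import numpy as np

def resample(strand, n=50):
    """Uniformly resample a polyline to n points by arc length
    (the patent uses cubic spline interpolation; linear is used for brevity)."""
    seg = np.linalg.norm(np.diff(strand, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    ts = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(ts, t, strand[:, k])
                            for k in range(strand.shape[1])])

def unit_tangents(strand):
    d = np.gradient(strand, axis=0)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def weighted_hausdorff(Si, Sj, n=50):
    """Hausdorff distance scaled by a tangent-agreement coefficient theta:
    theta is the mean dot product of unit tangents, ~1 when the strands run
    the same way, so close-but-differently-shaped strands no longer score small.
    (The exact combination of D_H and theta is an assumption.)"""
    Si, Sj = resample(Si, n), resample(Sj, n)
    d = np.linalg.norm(Si[:, None, :] - Sj[None, :, :], axis=2)
    d_h = max(d.min(axis=1).max(), d.min(axis=0).max())       # symmetric Hausdorff
    theta = np.mean(np.sum(unit_tangents(Si) * unit_tangents(Sj), axis=1))
    return d_h / max(theta, 1e-6)   # dissimilar tangents inflate the distance
```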
Using the metric above, the continuity between the target strand S_i in the target hairstyle δ_i and the surrounding strands {S} is assessed: for each strand in {S}, its distance D_s to the target strand S_i is first computed, and if 85% of the values {D_s} satisfy formula (6), the target hairstyle is adopted; otherwise the search continues until the best strand and hairstyle are found.
Here β is a threshold parameter, set to 0.01 in the embodiment of the invention.
(4) Hairstyle fusion algorithm
Next, the best-matching strands and models obtained in the previous steps need to be fused. The fusion result must both follow the guidance of the user-drawn information and preserve the continuity of the hairstyle as a whole, since the matched hairstyles may differ greatly from one another. Therefore a direction field is first generated for each best-matching hairstyle, and the fusion is carried out on the grid cells of the direction field.
This model fusion task can be regarded as a multi-label assignment problem. Specifically, let l_i be the label of the matched target hairstyle δ_i and its best-matching strand S_i; the optimal label is then assigned to each grid cell in 3D space by minimizing the following energy formula:
The first term is the data term, which ensures that the fused direction field follows the guidance of the drawn information; it is obtained by minimizing the distance between the grid cell center and the best-matching strand S_i carrying label l_i:
p(g_i) and p(s_k) are respectively the spatial positions of the grid cell center g_i and of the point s_k on the strand S_i. The second term of formula (7) is the smoothness term, which ensures that the direction field transitions smoothly and consistently at fusion boundaries; N(g) denotes the grid cells neighbouring cell g, and the term is defined as follows:
In the formula above, the labels of two adjacent grid cells are considered compatible if and only if their corresponding direction fields take similar values, i.e., the dot product of the direction field values F_i(g_i) and F_j(g_j) of cells g_i and g_j is greater than the threshold τ = 0.7.
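The label-compatibility rule and a Potts-style smoothness penalty can be sketched as follows; only the τ = 0.7 dot-product test comes from the text, and the penalty values themselves are assumptions.

```python
import numpy as np

TAU = 0.7  # compatibility threshold from the text

def compatible(F_gi, F_gj):
    """Two neighbouring cells' direction-field values are compatible
    iff their dot product exceeds tau (unit vectors assumed)."""
    return float(np.dot(F_gi, F_gj)) > TAU

def smoothness_cost(label_i, label_j, F_gi, F_gj):
    """Potts-style smoothness term: zero when the labels agree or the
    fields are compatible, a unit penalty otherwise."""
    if label_i == label_j or compatible(F_gi, F_gj):
        return 0.0
    return 1.0
```

With the data term added, this pairwise cost is exactly the kind of energy a graph-cut solver (e.g. alpha-expansion) minimizes over all grid cells.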
The multi-label assignment energy function in formula (7) can be solved with a graph cut algorithm. Once the optimal label of each grid cell has been computed, the fused direction field is obtained by taking each cell's value from its assigned label. Then, following the final direction field, strands are traced and grown starting from the scalp; when tracing completes, the final complete strand-level three-dimensional hair model is obtained. The result is shown in Fig. 2: for the input image, three strokes were drawn based on observation (the black strokes in the hair region of the figure). The matching and confirmation algorithms described above yield the three hairstyles matched to the drawn strokes, displayed below the input image; the rightmost image is the final result obtained with the fusion algorithm.
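The final tracing step can be sketched as below; the step size, termination rule, and the `direction_field` callback are illustrative assumptions.

```python
import numpy as np

def trace_strand(root, direction_field, step=0.5, max_steps=200):
    """Grow one strand from a scalp root by repeatedly stepping along the
    fused direction field. `direction_field(p)` returns a unit vector at
    point p, or None outside the hair volume (the sampling scheme is an
    assumption; a real implementation would trilinearly interpolate the grid)."""
    pts = [np.asarray(root, float)]
    for _ in range(max_steps):
        d = direction_field(pts[-1])
        if d is None:             # left the hair volume: strand is complete
            break
        pts.append(pts[-1] + step * np.asarray(d))
    return np.array(pts)
```

Running this from every scalp root sample yields the complete strand-level model.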
Content not described in detail in this specification belongs to the prior art well known to those skilled in the art.
The above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the invention.

Claims (5)

1. A data-driven single-image hair reconstruction method, characterized by comprising the following steps:
Step 1: Input an image, automatically locate facial feature points in the image, and compute the three-dimensional-to-two-dimensional transformation matrix T from the standard head model provided in the USC-HairSalon hair model database, to which all hair models are fitted; then, based on the user's observation, draw key strokes on the input image that follow the direction of the hair wisps in the image from root to tip, the key strokes capturing the geometric topology of the whole hairstyle; the image is a single image;
Step 2: Using the transformation matrix T computed in step 1, project the hairstyle models in the hair model database onto the input image, then compute the difference from the user-drawn strokes; for each user-drawn stroke, obtain a best-matching strand and matching hairstyle;
Step 3: Confirm each current best-matching strand and matching hairstyle: first obtain the strands surrounding the target strand in the target hairstyle, use a coefficient-weighted Hausdorff distance as the distance between two strands in space, compute the proportion of surrounding strands similar to the target strand, and decide whether to adopt the target strand and hairstyle; then continue step 2 to find the next best-matching strand and matching hairstyle, finally obtaining the target strands and hairstyles;
Step 4: Generate a corresponding direction field from the obtained target strands and hairstyles, treat the subsequent fusion as a multi-label assignment problem, and minimize an energy formula to obtain the fused direction field; then trace and grow strands from the scalp following this direction field until the final 3D hair model is obtained, which is used for rendering and dynamic simulation applications.
2. The data-driven single-image hair reconstruction method according to claim 1, characterized in that in step 1 the three-dimensional-to-two-dimensional transformation matrix T is computed as follows: from the input image, 68 facial feature points are extracted with a facial landmark detection algorithm; these are then matched against the feature points calibrated in advance on the standard head model provided by the USC-HairSalon hair model database, to which all hair models are fitted, and the transformation matrix T from 3D to 2D is computed with the Gold Standard algorithm; then, based on the user's observation and simple interaction, strokes are drawn on the input image; these strokes must follow the wisp directions in the image from root to tip and capture the geometric topology of the whole hairstyle well.
3. The data-driven single-image hair reconstruction method according to claim 1, characterized in that step 2 is implemented as follows:
(1) Using the obtained transformation matrix T, project the hair models in the hair model database onto the input image and compare them with the strokes drawn in step 1: for each sample point on a drawn stroke, find the closest sample point on the projected strand, and compute the difference value between the projected strand and the drawn stroke;
(2) By a simple exhaustive search over the entire hair model database, compute for each drawn stroke the best-matching sample strand with the smallest difference and its corresponding sample hair model.
4. The data-driven single-image hair reconstruction method according to claim 1, characterized in that step 3 is implemented as follows: consider the consistency between the matched sample strand computed in step 2 and its surrounding strands in the corresponding sample hair model; use a coefficient-weighted Hausdorff distance as the distance between two curves in space and set a distance threshold; compute the matching rate between the sample strand and its surrounding strands and set a matching threshold; if the matching result exceeds the threshold, adopt the best-matching sample strand and its corresponding sample hair model as the current result, otherwise discard the sample strand.
5. The data-driven single-image hair reconstruction method according to claim 1, characterized in that step 4 is implemented as follows:
(1) For each best-matching sample strand and its corresponding sample hair model, first generate a three-dimensional direction field for the sample hair model; then, with the sample strand as guidance, minimize an energy formula to obtain the fused direction field;
(2) Following the guidance of the fused hairstyle direction field, trace and grow strands from the scalp until the final 3D hair model is generated.
CN201810686955.5A 2018-06-28 2018-06-28 Data-driven single image hair reconstruction method Expired - Fee Related CN109064547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810686955.5A CN109064547B (en) 2018-06-28 2018-06-28 Data-driven single image hair reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810686955.5A CN109064547B (en) 2018-06-28 2018-06-28 Data-driven single image hair reconstruction method

Publications (2)

Publication Number Publication Date
CN109064547A true CN109064547A (en) 2018-12-21
CN109064547B CN109064547B (en) 2021-04-16

Family

ID=64818258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810686955.5A Expired - Fee Related CN109064547B (en) 2018-06-28 2018-06-28 Data-driven single image hair reconstruction method

Country Status (1)

Country Link
CN (1) CN109064547B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419487A (en) * 2020-12-02 2021-02-26 NetEase (Hangzhou) Network Co., Ltd. Three-dimensional hair reconstruction method and device, electronic equipment and storage medium
CN115311403A (en) * 2022-08-26 2022-11-08 Beijing Baidu Netcom Science and Technology Co., Ltd. Deep learning network training method, virtual image generation method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800129A (en) * 2012-06-20 2012-11-28 Zhejiang University Hair modeling and portrait editing method based on a single image
CN106960465A (en) * 2016-12-30 2017-07-18 Beihang University Single-image hair reconstruction method based on direction field and helical strand matching

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800129A (en) * 2012-06-20 2012-11-28 Zhejiang University Hair modeling and portrait editing method based on a single image
CN106960465A (en) * 2016-12-30 2017-07-18 Beihang University Single-image hair reconstruction method based on direction field and helical strand matching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liwen Hu et al.: "Single-view hair modeling using a hairstyle database", ACM Transactions on Graphics *
Tadas Baltrusaitis et al.: "Constrained local neural fields for robust facial landmark detection in the wild", 2013 IEEE International Conference on Computer Vision Workshops *
Zu Chenyang (祖晨阳): "Research on the application of an improved ICP algorithm in fast 3D face recognition", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419487A (en) * 2020-12-02 2021-02-26 NetEase (Hangzhou) Network Co., Ltd. Three-dimensional hair reconstruction method and device, electronic equipment and storage medium
CN112419487B (en) * 2020-12-02 2023-08-22 NetEase (Hangzhou) Network Co., Ltd. Three-dimensional hair reconstruction method, device, electronic equipment and storage medium
CN115311403A (en) * 2022-08-26 2022-11-08 Beijing Baidu Netcom Science and Technology Co., Ltd. Deep learning network training method, virtual image generation method and device
CN115311403B (en) * 2022-08-26 2023-08-08 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method of deep learning network, virtual image generation method and device

Also Published As

Publication number Publication date
CN109064547B (en) 2021-04-16

Similar Documents

Publication Publication Date Title
Hu et al. Single-view hair modeling using a hairstyle database
DeCarlo et al. An anthropometric face model using variational techniques
Pighin et al. Modeling and animating realistic faces from images
Hu et al. Capturing braided hairstyles
Xie et al. Tree modeling with real tree-parts examples
WO2014117447A1 (en) Virtual hairstyle modeling method of images and videos
CN105913416A (en) Method for automatically segmenting three-dimensional human face model area
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
CN101882326A (en) Three-dimensional craniofacial reconstruction method based on overall facial structure shape data of Chinese people
CN106960465A (en) Single-image hair reconstruction method based on direction field and helical strand matching
CN106021550A (en) Hair style designing method and system
KR20090000635A (en) 3d face modeling system and method considering the individual's preferences for beauty
CN103093488A (en) Virtual haircut interpolation and tweening animation producing method
Zhang et al. Styleavatar3d: Leveraging image-text diffusion models for high-fidelity 3d avatar generation
CN109064547A (en) Data-driven single-image hair reconstruction method
Bao et al. A survey of image-based techniques for hair modeling
Cordier et al. Sketch-based modeling
Wither et al. Realistic hair from a sketch
CN114373043A (en) Head three-dimensional reconstruction method and equipment
US11361467B2 (en) Pose selection and animation of characters using video data and training techniques
Teng et al. Image-based tree modeling from a few images with very narrow viewing range
CN104091318B (en) Method for synthesizing transition frames of Chinese Sign Language video
KR100450210B1 (en) System and method for compositing three dimension scan face model and recording medium having program for three dimension scan face model composition function
Vittert et al. A hierarchical curve-based approach to the analysis of manifold data
Zhang et al. Energyhair: Sketch-based interactive guide hair design using physics-inspired energy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210416