CN102419868A - Device and method for modeling 3D (three-dimensional) hair based on 3D hair template - Google Patents


Info

Publication number
CN102419868A
Authority
CN
China
Prior art keywords
hair
model
image
template
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010105013368A
Other languages
Chinese (zh)
Other versions
CN102419868B (en)
Inventor
张辉
万涛
孙讯
林和燮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd and Samsung Electronics Co Ltd
Priority to CN201010501336.8A
Publication of CN102419868A
Application granted
Publication of CN102419868B
Legal status: Active
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a device and method for modeling 3D (three-dimensional) hair based on a 3D hair template. The device comprises a 3D hair template library and a 3D hair model generation unit. The 3D hair template library stores one or more 3D hair templates in advance. The 3D hair model generation unit receives a front-view hair image and a 3D head model, and generates a realistic 3D hair model by processing the one or more 3D hair templates provided by the 3D hair template library according to the characteristics of the front-view hair image in combination with the 3D head model.

Description

Device and method for 3D hair modeling based on a 3D hair template
Technical field
The present invention relates to three-dimensional (3D) hair modeling technology, and more particularly, to a device and method that can perform 3D hair modeling automatically and effectively, in which a head image captured by a simple capture device (for example, an ordinary camera) is used to model 3D hair based on a 3D hair template.
Background art
A person's hair (that is, the hairstyle) is a very important element in photorealistic rendering, personal portraits and digital character animation, and the creation of hair significantly enhances the realism of virtual characters in avatar and entertainment applications. Hair modeling for 3D characters is therefore a key issue in 3D computer graphics and computer vision: it allows the real appearance of hair to be reproduced in virtual or augmented reality environments.
In 3D computer graphics, 3D modeling refers to the process of generating a mathematical representation of any three-dimensional object; the result of this process is called a 3D model. Most current 3D hair modeling is performed by creating the model manually, which is not only time-consuming but also requires a large amount of human involvement. Other typical 3D hair modeling approaches usually require special capture equipment; for example, the apparatus adopted in "Hair Photobooth: Geometric and Photometric Acquisition of Real Hairstyles" (Paris, Adobe, SIGGRAPH 2008) is a very complicated rig comprising 16 digital cameras, 150 LED light sources and 3 DLP projectors fixed on a large dome, a large number of images must be captured, and a considerable amount of time (for example, several hours) is needed to create a 3D hair model. In addition, the inputs required by some hair generation methods, such as the hair modeling method disclosed in US Patent No. 7,609,261 (for example, a surface model, guide hairs and clump hairs), are difficult to compute, and the approach of repeatedly processing existing hair to produce more hair detail in the final hairstyle may make the appearance of the hair unrealistic, because all the hair is generated from the same basis. The algorithm adopted by other hairstyle generation methods, such as US Patent No. 7,418,371, is very complicated (including a hair rendering module) and requires human participation in the concrete processing.
It should also be noted that some 3D modeling methods aimed at objects other than hair cannot be applied directly to 3D hair modeling; this is determined by the particular nature of human hair. Compared with other modeling objects, hair differs greatly between individuals. In other words, different individuals may have very similar face or nose shapes, but their hairstyles each have their own characteristics and personality, and such differences are difficult to capture with a simple modeling approach.
Therefore, it can be seen that the prior art cannot automatically and effectively perform 3D hair modeling based on images captured by an ordinary capture device.
Summary of the invention
An object of the present invention is to provide a device and method for 3D hair modeling based on a 3D hair template, with which effective 3D hair modeling can be performed automatically, based on a 3D hair template, from a head image captured by a simple capture device (for example, an ordinary camera).
According to an aspect of the present invention, a device for 3D hair modeling based on a 3D hair template is provided. The device may comprise: a 3D hair template library for storing one or more 3D hair templates in advance; and a 3D hair model generation unit for receiving a front-view hair image and a 3D head model, and for operating on the one or more 3D hair templates provided by the 3D hair template library according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing a realistic 3D hair model.
The 3D hair model generation unit may comprise: a hair classification unit for receiving the front-view hair image, deriving the hairstyle class of the front-view hair image from its characteristics, and finding, among the plurality of 3D hair templates provided by the 3D hair template library, the 3D hair template corresponding to that hairstyle class; and a hair calibration unit for calibrating the 3D hair template found by the hair classification unit in combination with the 3D head model, thereby producing the realistic 3D hair model.
The hair classification unit may comprise: a hair shape detector for detecting the hair shape of the front-view hair image; a boundary refiner for refining the boundary of the hair shape detected by the hair shape detector; a shape feature extractor for extracting shape features from the hair boundary refined by the boundary refiner; a class definer for dividing the plurality of 3D hair templates provided by the 3D hair template library into a plurality of hairstyle classes; and a classifier for determining the hairstyle class of the front-view hair image based on the shape features extracted by the shape feature extractor, and for finding, among the 3D hair templates of the hairstyle classes divided by the class definer, the 3D hair template corresponding to the hairstyle class of the front-view hair image.
The hair calibration unit may calculate the deviation between the 3D hair template and the reference head model on which the 3D hair template is based, then calculate the transformation parameters that make the reference head model become the 3D head model, transform the calculated deviation between the reference head model and the 3D hair template into a deviation with respect to the 3D head model, and apply the transformed deviation to the 3D head model.
The hair shape detector may output the contour of the hair shape through face detection, skin color modeling, hair color modeling and standard image processing.
The shape features extracted by the shape feature extractor may comprise: the amount of hair, the symmetry of the hair, the position of the hair parting line, and the length of the hair.
The 3D hair template library may store in advance a common 3D hair template applicable to all front-view hair images, and the 3D hair model generation unit may comprise a hair deformation unit for receiving the front-view hair image and the 3D head model and for deforming the common 3D hair template provided by the 3D hair template library according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing the realistic 3D hair model.
The hair deformation unit may comprise: a knowledge data definer for defining the key points of the common 3D hair template and the regions divided according to the key points; a hair shape detector for detecting the hair shape of the front-view hair image; a hair divider for dividing the hair shape detected by the hair shape detector into a common part and an individual part, wherein the common part indicates the portion of the hair that is relatively similar between different individuals and the individual part indicates the portion that differs greatly between different individuals; a hair patch approximator for approximating the individual part divided off by the hair divider; a hair patch modeler for 3D modeling the hair patches of the individual part processed by the hair patch approximator; a key-point 2D matcher for matching, in combination with the 3D head model, the boundary key points of the common 3D hair template defined by the knowledge data definer with the boundary of the common part divided off by the hair divider; a key-point 3D determiner for determining the 3D coordinates of the key points based on the matching result of the key-point 2D matcher, in combination with the 3D head model and the regions defined by the knowledge data definer; a 3D data interpolator for performing 3D data interpolation based on the 3D coordinates of the key points determined by the key-point 3D determiner, in combination with the regions defined by the knowledge data definer; a model synthesizer for combining each 3D hair patch output by the hair patch modeler with the interpolation result output by the 3D data interpolator; and a texture generator for generating the corresponding texture for the result synthesized by the model synthesizer, thereby producing the realistic 3D hair model.
The key points may comprise: key points corresponding to the outer boundary and the inner boundary of the hair, foremost key points that protrude the most in the front view of the 3D hair template, and static-line key points serving as the boundary points of the fixed region in the hair at the back of the head of the 3D hair template.
The regions divided according to the key points may comprise: a front-top region, between the key points on the inner boundary and the foremost key points; a front region, between the foremost key points and the key points on the outer boundary; a transitional region, between the key points on the outer boundary and the static-line key points; and a fixed region, consisting of the remainder of the 3D hair template other than the front-top region, the front region and the transitional region.
The 3D data interpolator may adopt different interpolation methods for different regions.
The hair deformation unit may also receive the front-view hair image and the 3D head model and deform the produced realistic 3D hair model according to the characteristics of the front-view hair image in combination with the 3D head model, thereby outputting the deformed realistic 3D hair model.
According to another aspect of the present invention, a method for 3D hair modeling based on a 3D hair template is provided. The method may comprise: storing one or more 3D hair templates in advance; and receiving a front-view hair image and a 3D head model, and operating on the one or more 3D hair templates according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing a realistic 3D hair model.
The step of receiving the front-view hair image and the 3D head model, operating on the one or more 3D hair templates according to the characteristics of the front-view hair image in combination with the 3D head model, and thereby producing the realistic 3D hair model may comprise: receiving the front-view hair image, deriving the hairstyle class of the front-view hair image from its characteristics, and finding, among the plurality of 3D hair templates, the 3D hair template corresponding to that hairstyle class; and calibrating the found 3D hair template in combination with the 3D head model, thereby producing the realistic 3D hair model.
The step of deriving the hairstyle class of the front-view hair image may comprise: detecting the hair shape of the front-view hair image; refining the boundary of the detected hair shape; extracting shape features from the refined hair boundary; dividing the plurality of provided 3D hair templates into a plurality of hairstyle classes; and determining the hairstyle class of the front-view hair image based on the extracted shape features, and finding, among the 3D hair templates of the divided hairstyle classes, the 3D hair template corresponding to the hairstyle class of the front-view hair image.
The calibration step may comprise: calculating the deviation between the 3D hair template and the reference head model on which it is based; calculating the transformation parameters that make the reference head model become the 3D head model; transforming the calculated deviation between the reference head model and the 3D hair template into a deviation with respect to the 3D head model; and applying the transformed deviation to the 3D head model.
The contour of the hair shape may be output through face detection, skin color modeling, hair color modeling and standard image processing.
What is stored in advance may be a common 3D hair template applicable to all front-view hair images, and the step of receiving the front-view hair image and the 3D head model, operating on the one or more 3D hair templates according to the characteristics of the front-view hair image in combination with the 3D head model, and thereby producing the realistic 3D hair model may comprise: receiving the front-view hair image and the 3D head model, and deforming the common 3D hair template according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing the realistic 3D hair model.
The deformation process may comprise: defining the key points of the common 3D hair template and the regions divided according to the key points; detecting the hair shape of the front-view hair image; dividing the detected hair shape into a common part and an individual part, wherein the common part indicates the portion of the hair that is relatively similar between different individuals and the individual part indicates the portion that differs greatly between different individuals; approximating the divided individual part; 3D modeling the hair patches of the approximated individual part; matching, in combination with the 3D head model, the boundary key points of the defined common 3D hair template with the boundary of the divided common part; determining the 3D coordinates of the key points based on the matching result, in combination with the 3D head model and the defined regions; performing 3D data interpolation based on the determined 3D coordinates of the key points, in combination with the defined regions; combining each modeled 3D hair patch with the interpolation result; and generating the corresponding texture for the combined result, thereby producing the realistic 3D hair model.
Description of drawings
The above and/or other objects and advantages of the present invention will become apparent from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a 3D hair modeling device according to an exemplary embodiment of the present invention;
Fig. 2 illustrates an exemplary detailed structure of the 3D hair model generation unit in the 3D hair modeling device shown in Fig. 1;
Fig. 3 illustrates an exemplary detailed structure of the hair classification unit in the 3D hair model generation unit shown in Fig. 2;
Fig. 4 is a flowchart of a method of 3D hair modeling using the 3D hair modeling device shown in Fig. 3, according to an exemplary embodiment of the present invention;
Fig. 5 is a flowchart of a hair shape detection method according to an exemplary embodiment of the present invention;
Fig. 6 illustrates the positions of the facial feature points detected according to an exemplary embodiment of the present invention;
Fig. 7 illustrates the rectangular regions used for skin color modeling and hair color modeling according to an exemplary embodiment of the present invention;
Fig. 8 is a diagram of the detected hair shape according to an exemplary embodiment of the present invention;
Fig. 9 is a diagram of the hair outer boundary after refinement according to an exemplary embodiment of the present invention;
Fig. 10 is a diagram of the approximate head model obtained using anthropometric information according to an exemplary embodiment of the present invention;
Fig. 11 is a diagram illustrating the amount of hair used as a hair shape feature according to an exemplary embodiment of the present invention;
Fig. 12 illustrates an example of the position of the hair parting line according to an exemplary embodiment of the present invention;
Fig. 13 is a diagram of the side hair length according to an exemplary embodiment of the present invention;
Fig. 14 illustrates the hairstyles corresponding to each feature according to an exemplary embodiment of the present invention;
Fig. 15 is a diagram of selecting the corresponding 3D hair template according to an exemplary embodiment of the present invention;
Fig. 16 is a diagram of calibrating the 3D hair template according to an exemplary embodiment of the present invention;
Fig. 17 illustrates another exemplary detailed structure of the 3D hair model generation unit in the 3D hair modeling device shown in Fig. 1;
Fig. 18 illustrates an exemplary detailed structure of the hair deformation unit in the 3D hair model generation unit shown in Fig. 17;
Fig. 19 is a flowchart of a method of 3D hair modeling using the 3D hair modeling device shown in Fig. 18, according to another exemplary embodiment of the present invention;
Fig. 20 is a diagram of the common 3D hair template according to an exemplary embodiment of the present invention;
Figs. 21 to 23 illustrate the key points defined on the common 3D hair template shown in Fig. 20;
Fig. 24 illustrates the regions into which the 3D hair template is divided according to the key points, according to an exemplary embodiment of the present invention;
Fig. 25 illustrates a hair shape detection result according to an exemplary embodiment of the present invention;
Fig. 26 is a diagram of dividing the hair shape into a common part and an individual part according to an exemplary embodiment of the present invention;
Fig. 27 illustrates the result of approximating the individual part of the hair according to an exemplary embodiment of the present invention; and
Figs. 28 to 30 show 3D hair models obtained with the 3D hair modeling method and device according to the present invention.
Embodiment
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like parts throughout. The embodiments are described below with reference to the drawings in order to explain the present invention.
Fig. 1 is a block diagram of a 3D hair modeling device according to an exemplary embodiment of the present invention. As shown in Fig. 1, the 3D hair modeling device comprises a 3D hair model generation unit 10 and a 3D hair template library 20. The 3D hair template library 20 stores one or more 3D hair templates in advance. The 3D hair model generation unit 10 receives a front-view hair image and a 3D head model, and operates on the one or more 3D hair templates provided by the 3D hair template library 20 according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing a realistic 3D hair model. Here, the 3D hair templates may be designed manually in advance, and the 3D head model refers to the 3D head model of the person in the front-view hair image. The front-view hair image of that person can be obtained with an ordinary capture device (for example, a digital camera). As can be appreciated from Fig. 1, the 3D hair modeling approach proposed by the present invention relies mainly on 3D hair templates designed in advance: the templates are processed according to the characteristics of the front-view hair image, and the characteristics of the person's 3D head model are reflected in that processing at the same time. As a result, the actual 3D hair modeling can be carried out fully automatically without human participation; this greatly reduces the computational complexity and makes the modeling process more efficient.
Based on the above general concept of the present invention, the 3D hair model generation unit can be constructed in various ways. An exemplary detailed structure of the 3D hair model generation unit 10 in the 3D hair modeling device shown in Fig. 1 will now be described with reference to Fig. 2. The 3D hair model generation unit 10 comprises: a hair classification unit 100 for receiving the front-view hair image, deriving the hairstyle class of the front-view hair image from its characteristics, and finding, among the plurality of 3D hair templates provided by the 3D hair template library 20, the 3D hair template corresponding to that hairstyle class; and a hair calibration unit 110 for calibrating the 3D hair template found by the hair classification unit 100 in combination with the 3D head model, thereby producing the realistic 3D hair model.
The hair classification unit 100 shown in Fig. 2 can likewise be constructed in various ways. Fig. 3 illustrates an exemplary detailed structure of the hair classification unit 100 in the 3D hair model generation unit 10 shown in Fig. 2. The hair classification unit 100 comprises: a hair shape detector 101 for detecting the hair shape of the front-view hair image; a boundary refiner 102 for refining the boundary of the hair shape detected by the hair shape detector 101; a shape feature extractor 103 for extracting shape features from the hair boundary refined by the boundary refiner 102; a class definer 104 for dividing the plurality of 3D hair templates provided by the 3D hair template library 20 into a plurality of hairstyle classes; and a classifier 105 for determining the hairstyle class of the front-view hair image based on the shape features extracted by the shape feature extractor 103, and for finding, among the 3D hair templates of the hairstyle classes divided by the class definer 104, the 3D hair template corresponding to the hairstyle class of the front-view hair image.
Fig. 4 is a flowchart of a method of 3D hair modeling using the 3D hair modeling device shown in Fig. 3, according to an exemplary embodiment of the present invention. Referring to Fig. 4, in step S100 the hair shape detector 101 detects the hair shape of the front-view hair image.
Here, the hair shape can be detected automatically. For example, when the input front-view hair image is an RGB image or is converted into an RGB image, the hair shape can be detected automatically by the hair shape detection method shown in Fig. 5. Referring to Fig. 5, in step S101 facial feature points are detected using the active shape model (ASM) method. For example, the positions of the eyes, eyebrows, nose and chin can be detected. The coordinates of these feature points can be expressed as
X = [x1, y1, ..., xn, yn], where n is the number of feature points.
Fig. 6 illustrates the positions of the facial feature points detected according to an exemplary embodiment of the present invention. The feature points at these positions are used for skin color modeling and hair color modeling.
Referring back to Fig. 5, in step S102 skin color modeling is performed. Specifically, skin data can first be trained on three rectangular regions selected within the skin area of a large number of sample images, namely two rectangular regions below the eyes and one region on the forehead (as shown in Fig. 7). For example, a skin model <E_i, S_i, a_i> can be built for each pixel in these rectangular regions, where E_i denotes the mean of the pixel's color values (for example, the R, G, B components), S_i denotes the standard deviation of the pixel's color values, and a_i denotes the deviation of the pixel's brightness. Through this skin modeling process, the statistical distribution of a_i (for example, a Gaussian distribution) can be obtained, and a threshold on a_i for judging whether a pixel belongs to the skin can thereby be determined.
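For illustration only, the skin modeling of step S102 may be sketched as follows; this is a simplified, single-image reading of the per-pixel model, and the function names and the 2-sigma threshold are assumptions rather than part of the disclosure:

```python
import numpy as np

def fit_skin_model(image, skin_rects):
    """Fit a simple skin colour model from rectangles assumed to contain skin.

    image      : H x W x 3 float array (RGB).
    skin_rects : list of (x0, y0, x1, y1) boxes below the eyes / on the forehead.
    Returns the mean colour, its standard deviation and a brightness-deviation
    threshold derived from a Gaussian assumption, as in step S102.
    """
    samples = np.concatenate(
        [image[y0:y1, x0:x1].reshape(-1, 3) for (x0, y0, x1, y1) in skin_rects], axis=0)
    mean_color = samples.mean(axis=0)      # E: mean R, G, B of the skin samples
    std_color = samples.std(axis=0)        # S: per-channel standard deviation
    brightness = samples.mean(axis=1)      # per-pixel brightness
    dev = brightness - brightness.mean()   # a: brightness deviation per pixel
    # assume a Gaussian distribution of the deviations; use 2 sigma as the threshold
    threshold = 2.0 * dev.std()
    return mean_color, std_color, threshold

def is_skin(pixel, mean_color, std_color, threshold):
    """Classify one RGB pixel as skin if its normalised colour deviation is small."""
    z = np.abs((pixel - mean_color) / (std_color + 1e-6))
    return float(z.mean()) < threshold
```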
In step S103, hair color modeling is performed for the front-view hair image. Specifically, assuming that hair appears in specific regions of the front-view image, the regions in which hair mainly occurs are determined automatically from the detected facial feature points. For example, the three rectangles located at the forehead and the temples in Fig. 7 can be set as regions in which hair appears. For each of these three regions, the mean and standard deviation of each pixel's color are calculated, and the variance of each pixel is then computed; by comparing this variance with the threshold on a_i, it is determined whether the pixel belongs to skin or not, and the non-skin pixels are taken as hair pixels, which serve as seeds for modeling the hair color of each region separately.
In step S104, standard image processing is applied to the result obtained above. For example, erosion and dilation operations are used to fill holes in the hair region, thereby creating a continuous hair region. In step S105, the contour of the hair shape detected in this way is output, as shown in Fig. 8.
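A minimal sketch of the hole-filling in step S104, assuming a binary hair mask; the structuring element size and keeping only the largest connected component are implementation assumptions:

```python
import numpy as np
from scipy import ndimage

def clean_hair_mask(hair_mask):
    """Close small holes in a binary hair mask and keep the largest component."""
    closed = ndimage.binary_closing(hair_mask, structure=np.ones((5, 5)))  # dilation then erosion
    filled = ndimage.binary_fill_holes(closed)
    labels, n = ndimage.label(filled)
    if n == 0:
        return filled
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)   # largest connected region = continuous hair region
```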
Referring back to Fig. 4, after the hair shape is detected in step S100, the boundary refiner 102 refines the boundary of the hair shape detected by the hair shape detector 101 in step S200. Specifically, the hair outer boundary produced in step S100 may be degraded by unsuitable imaging conditions (such as poor illumination, noise and a cluttered background), so boundary refinement is needed to adjust the position of the boundary. The refinement can start from the lower-left point of the outer contour as the starting point. First, a point on the outer contour close to the starting point is selected as the end point, and the minimum-cost search path between the two points is computed (Dijkstra's algorithm can be adopted); the new path obtained in this way replaces the original boundary segment between the two points. Then the former end point on the new path is taken as the new starting point, another nearby point on the outer contour is selected as the new end point, and the minimum-cost path between these two points is computed again to obtain a new path. This is repeated until the lower-right point of the outer contour has been used as the end point and the corresponding path replacement has been carried out. Through this processing, the concave regions appearing on the outer contour gradually converge to the real boundary. The improvement is shown in Fig. 9, where the refined hair outer boundary is smoother and more natural than before.
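As an illustration of the minimum-cost path computation in step S200, the following sketch runs Dijkstra's algorithm on a per-pixel cost image; the choice of cost (for example, inverse gradient magnitude) and the 4-connectivity are our assumptions:

```python
import heapq
import numpy as np

def min_cost_path(cost, start, end):
    """Dijkstra shortest path between two pixels on a per-pixel cost image.

    cost       : H x W array, low along strong edges so the path hugs the hair boundary.
    start, end : (row, col) tuples on the current outer contour.
    Returns the cheapest pixel path, which replaces the old boundary segment.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk back from the end point to recover the path
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```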
In step S300, the shape feature extractor 103 extracts shape features from the hair boundary refined by the boundary refiner 102. Specifically, anthropometric information is used to adopt an approximate model of the head (see Fig. 10). As shown in Fig. 10, assuming that the head can be represented by an ellipse in 2D space, the midpoint SE between the two eyes (which can be derived from the detected feature points) is used to determine the position of the ellipse center G: the abscissa of G is the same as that of SE, and the ordinate of G lies above SE at a distance of 1.8 times the eye length. In addition, the semi-minor axis r of the ellipse is calculated as
r = (e × (151.1 / 31.3)) × 0.5, where e is the length of the eyes.
The semi-major axis R of the ellipse is calculated as
R = 1.35 × r.
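The elliptical head approximation of step S300 can be sketched as follows; taking e as the inter-ocular distance and the image y axis growing downwards are our reading of the text, not explicit in it:

```python
import numpy as np

def head_ellipse(eye_left, eye_right):
    """Approximate the head by a 2D ellipse from the two detected eye centres."""
    eye_left, eye_right = np.asarray(eye_left, float), np.asarray(eye_right, float)
    e = np.linalg.norm(eye_right - eye_left)        # eye length (assumed: inter-ocular distance)
    se = (eye_left + eye_right) / 2.0               # midpoint SE between the eyes
    g = np.array([se[0], se[1] - 1.8 * e])          # centre G, 1.8 * e above SE
    r = 0.5 * e * (151.1 / 31.3)                    # semi-minor axis
    R = 1.35 * r                                    # semi-major axis
    return g, r, R
```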
After the above elliptical approximate model of the head is obtained, the following four types of hair shape feature can be calculated:
● Amount of hair: this type of shape feature mainly comprises five values: the left hair volume, the middle hair volume, the right hair volume, the left hair transition angle and the right hair transition angle. Fig. 11 is a diagram illustrating the amount of hair used as a hair shape feature according to an exemplary embodiment of the present invention. As shown in Fig. 11, the lines connecting the intersection points of the inner boundary with the ellipse to the ellipse center divide the whole hair into a left part, a middle part and a right part. The left hair volume is represented by the mean distance between the outer boundary and the inner boundary in the left region; the middle hair volume is represented by the mean distance between the outer boundary and the elliptical arc in the middle region; the right hair volume is represented by the mean distance between the outer boundary and the inner boundary in the right region. The corresponding transition angles are represented by the angles between the ellipse center line and the lines connecting the intersection points to the ellipse center.
● Symmetry of the hair: this type of shape feature mainly refers to the ratio of the left hair volume to the whole hair volume, which reflects how symmetric the hair is.
● Position of the hair parting line: the position of the parting is usually determined from the concave points of the outer boundary and/or the concave points of the inner boundary. Fig. 12 illustrates an example of the position of the hair parting line according to an exemplary embodiment of the present invention.
● Length of the hair: this type of shape feature mainly comprises three values: the maximum and minimum distance values involved when calculating the amount of hair, and the side hair length. Fig. 13 is a diagram of the side hair length according to an exemplary embodiment of the present invention; the side hair length refers to the vertical height between the eyes and the lower-left/lower-right points of the corresponding inner boundary.
After the above hair shape features have been extracted, they can be used to describe the corresponding hairstyle class; a compact representation of this feature vector is sketched below.
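The following container is an illustrative sketch only; the field names are ours and merely mirror the four feature groups described above:

```python
from dataclasses import dataclass

@dataclass
class HairShapeFeatures:
    """The four feature groups of step S300 (field names assumed)."""
    volume_left: float    # mean outer/inner boundary distance, left region
    volume_middle: float  # mean outer boundary to ellipse-arc distance, middle region
    volume_right: float   # mean outer/inner boundary distance, right region
    angle_left: float     # left transition angle against the ellipse centre line
    angle_right: float    # right transition angle against the ellipse centre line
    symmetry: float       # volume_left / total volume
    parting: str          # 'left', 'right', 'middle' or 'none'
    length_max: float     # largest boundary distance used for the volume
    length_min: float     # smallest boundary distance used for the volume
    side_length: float    # vertical drop from the eyes to the inner-boundary corners

    def as_vector(self):
        return [self.volume_left, self.volume_middle, self.volume_right,
                self.angle_left, self.angle_right, self.symmetry,
                self.length_max, self.length_min, self.side_length]
```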
In addition, the class definer 104 divides the plurality of 3D hair templates provided in the 3D hair template library 20 into a plurality of hairstyle classes. As an example, classification can be carried out according to the following rules. Specifically, for male short hairstyles, 26 hairstyles can be defined (the cases of long hair and baldness are not considered here). It should be understood that these 26 hairstyles are merely an example; different hairstyle classes can also be defined for different groups of people.
The 26 hairstyles can be divided according to the following four features:
● S: position of the hair parting line (left, right, middle, none)
● C: hair corner angle (rounded, angled)
● V: amount of hair (thick, thin)
● T: hair-skin transition line (high, low)
When the transition line (T) is low, the feature "hair corner angle (C)" is ignored.
Therefore, for the different values of the above four features, the following 26 hairstyles can be defined:
For a side parting: 12 classes = 2 (S) × 2 (C) × 2 (V) × (T, high) + 2 (S) × 2 (V) × (T, low)
For a middle parting: 10 classes = 2 (C, left) × 2 (C, right) × 2 (V) × (T, high) + 2 (V) × (T, low)
For no parting: 4 classes = 2 (V) × 2 (T)
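For illustration only, the 26 classes implied by these rules can be enumerated programmatically; the string labels in the following sketch are our own and simply reproduce the counting above:

```python
from itertools import product

def enumerate_hairstyle_classes():
    """Enumerate the 26 short male hairstyle classes implied by the S/C/V/T rules."""
    V = ['thick', 'thin']
    C = ['round', 'angled']
    classes = []
    # side parting: 2(S) * 2(C) * 2(V) when the transition line is high, 2(S) * 2(V) when low
    for s in ['left', 'right']:
        classes += [(s, c, v, 'high') for c, v in product(C, V)]
        classes += [(s, None, v, 'low') for v in V]
    # middle parting: the corner angle is judged on each side when the transition line is high
    classes += [('middle', (cl, cr), v, 'high') for cl, cr, v in product(C, C, V)]
    classes += [('middle', None, v, 'low') for v in V]
    # no parting: only volume and transition line matter
    classes += [('none', None, v, t) for v, t in product(V, ['high', 'low'])]
    return classes

assert len(enumerate_hairstyle_classes()) == 26   # 12 + 10 + 4
```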
Fig. 14 illustrates the hairstyles corresponding to each feature according to an exemplary embodiment of the present invention. In Fig. 14, (a) shows the position of the hair parting line, with the parting located on the left, on the right and absent, from left to right; (b) shows the hair corner angle, angled and rounded from left to right; (c) shows the amount of hair, thick and thin from left to right; and (d) shows the hair-skin transition line, at a low position and at a high position from left to right.
In step S400, the classifier 105 determines the hairstyle class of the front-view hair image based on the shape features extracted by the shape feature extractor 103, and finds, among the 3D hair templates of the hairstyle classes divided by the class definer 104, the 3D hair template corresponding to the hairstyle class of the front-view hair image, for the subsequent calibration processing. Fig. 15 is a diagram of selecting the corresponding 3D hair template according to an exemplary embodiment of the present invention. Referring to Fig. 15, since the correspondence of hairstyle classes is established by matching between 2D images, the 3D hair templates in the 3D hair template library are projected into 2D form; the extracted shape features and the projected hair templates are then fed to a commonly used classifier, such as a support vector machine (SVM), so that the 3D hair template corresponding to the hairstyle class of the front-view hair image is found among the 3D hair templates of the hairstyle classes.
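A sketch of such an SVM-based selection, assuming scikit-learn as the classifier implementation; the patent only names an SVM, so the kernel and its parameters here are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def train_hairstyle_classifier(template_features, template_labels):
    """Train an SVM over shape features computed on the 2D-projected 3D templates.

    template_features : N x D array of step-S300 shape features of the projected templates.
    template_labels   : N hairstyle class ids.
    """
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')   # kernel choice is an assumption
    clf.fit(np.asarray(template_features), np.asarray(template_labels))
    return clf

def pick_template(clf, image_features, templates_by_class):
    """Return the 3D hair template of the class the classifier assigns to the image."""
    label = clf.predict(np.asarray(image_features).reshape(1, -1))[0]
    return templates_by_class[label]
```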
In step S500, the hair calibration unit 110 calibrates the 3D hair template found by the hair classification unit 100 in combination with the 3D head model, thereby producing the realistic 3D hair model. Specifically, since the 3D hair template selected in step S400 was not created with reference to the specific 3D head model of the person whose hair is being modeled, but only on the basis of a reference head model, it is necessary to calibrate the selected 3D hair template in combination with the person's 3D head model.
Fig. 16 is a diagram of calibrating the 3D hair template according to an exemplary embodiment of the present invention. Referring to Fig. 16, first, the deviation between the reference head model and the 3D hair template is calculated; then, the transformation parameters that make the reference head model become the 3D head model are calculated (approximated by a scale transformation). Next, the calculated deviation between the reference head model and the 3D hair template is transformed (for example, by the scale transformation) into a deviation with respect to the 3D head model, and the transformed deviation is applied to the 3D head model (for example, using the existing "Laplacian surface editing" technique), thereby creating the calibrated 3D hair model.
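The following is a much-simplified sketch of this calibration under stated assumptions: the reference and target head meshes share vertex order, the nearest reference vertex is used for each hair vertex, and the transformation is a pure per-axis scale; the patent's final Laplacian surface editing step is not reproduced here:

```python
import numpy as np

def calibrate_hair_template(hair_verts, ref_head_verts, target_head_verts):
    """Transfer a hair template built around a reference head onto a new 3D head."""
    ref_c, tgt_c = ref_head_verts.mean(axis=0), target_head_verts.mean(axis=0)
    # scale transform that maps the reference head onto the target head
    scale = (target_head_verts - tgt_c).std(axis=0) / (ref_head_verts - ref_c).std(axis=0)
    calibrated = np.empty_like(hair_verts)
    for i, v in enumerate(hair_verts):
        j = np.argmin(np.linalg.norm(ref_head_verts - v, axis=1))   # closest reference vertex
        deviation = v - ref_head_verts[j]                           # hair offset on the reference head
        calibrated[i] = target_head_verts[j] + deviation * scale    # re-apply the scaled offset
    return calibrated
```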
From the above description, it can be seen that this exemplary embodiment of the 3D hair modeling device of the present invention can use a head image captured by a simple capture device (for example, an ordinary camera) to perform 3D hair modeling based on 3D hair templates created in advance. It should be noted that the structure of the hair classification unit 100 is not limited to the concrete structure shown in Fig. 3; with an understanding of the basic concept of the present invention, those skilled in the art can implement the 3D hair modeling device of the present invention in various ways. For example, the different units may be further merged or further divided, or may be implemented in a single processor.
Another exemplary detailed structure of the 3D hair model generation unit 10 in the 3D hair modeling device shown in Fig. 1 will now be described with reference to Fig. 17. In this structure, the 3D hair template library 20 stores in advance a common 3D hair template applicable to all front-view hair images, and the 3D hair model generation unit 10 comprises a hair deformation unit 200 for receiving the front-view hair image and the 3D head model and for deforming the common 3D hair template provided by the 3D hair template library according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing the realistic 3D hair model.
The hair deformation unit 200 shown in Fig. 17 can be constructed in various ways. Fig. 18 illustrates an exemplary detailed structure of the hair deformation unit 200 in the 3D hair model generation unit 10 shown in Fig. 17. The hair deformation unit 200 comprises: a knowledge data definer 201 for defining the key points of the common 3D hair template and the regions divided according to the key points; a hair shape detector 202 for detecting the hair shape of the front-view hair image; a hair divider 203 for dividing the hair shape detected by the hair shape detector 202 into a common part and an individual part, wherein the common part indicates the portion of the hair that is relatively similar between different individuals and the individual part indicates the portion that differs greatly between different individuals; a hair patch approximator 204 for approximating the individual part divided off by the hair divider 203; a hair patch modeler 205 for 3D modeling the hair patches of the individual part processed by the hair patch approximator 204; a key-point 2D matcher 208 for matching, in combination with the 3D head model, the boundary key points of the common 3D hair template defined by the knowledge data definer 201 with the boundary of the common part divided off by the hair divider 203; a key-point 3D determiner 209 for determining the 3D coordinates of the key points based on the matching result of the key-point 2D matcher 208, in combination with the 3D head model and the regions defined by the knowledge data definer 201; a 3D data interpolator 210 for performing 3D data interpolation based on the 3D coordinates of the key points determined by the key-point 3D determiner 209, in combination with the regions defined by the knowledge data definer 201; a model synthesizer 206 for combining each 3D hair patch output by the hair patch modeler 205 with the interpolation result output by the 3D data interpolator 210; and a texture generator 207 for generating the corresponding texture for the result synthesized by the model synthesizer 206, thereby producing the realistic 3D hair model.
Fig. 19 is a flowchart of a method of 3D hair modeling using the 3D hair modeling device shown in Fig. 18, according to another exemplary embodiment of the present invention. Referring to Fig. 19, in step S10 the knowledge data definer 201 defines the key points of the common 3D hair template and the regions divided according to the key points. Specifically, in this embodiment the 3D hair template library 20 contains only the common 3D hair template. As an example, for the short hairstyle of an ordinary man, this common 3D hair template is shown in Fig. 20, where (a), (b) and (c) show the hair portion on the head template and (d) shows the texture of the template hair. In this step, the knowledge data definer 201 defines certain points of the 3D hair template as key points, as shown in Figs. 21 to 23: Fig. 21 shows the key points corresponding to the outer boundary and the inner boundary of the hair; Fig. 22 shows the most protruding "foremost" key points in the front view; and Fig. 23 shows the "static line" key points serving as the boundary points of the fixed region in the hair at the back of the head. The 3D coordinates of this last group of key points remain unchanged during the modeling process.
The key points shown in Figs. 21 to 23 divide the whole 3D hair template into different regions. Fig. 24 illustrates the regions into which the 3D hair template is divided according to the key points, according to an exemplary embodiment of the present invention. Referring to Fig. 24, the key points divide the 3D hair model into four regions: the front-top region, between the inner-boundary key points and the foremost key points; the front region, between the foremost key points and the outer-boundary key points; the transitional region, between the outer-boundary key points and the static-line key points; and the fixed region, the remainder of the 3D hair template other than the front-top region, the front region and the transitional region.
In step S20, the hair shape detector 202 detects the hair shape of the front-view hair image. The structure of the hair shape detector 202 can be the same as that of the hair shape detector 101 in Fig. 3, so step S20 is similar to step S100 in Fig. 4. Fig. 25 illustrates a hair shape detection result according to an exemplary embodiment of the present invention. Referring to Fig. 25, the detected boundary shape comprises: the left and right end points, marked with circles in Fig. 25; the hair outer boundary, that is, the upper curve between the hair and the background; and the hair inner boundary, that is, the lower curve between the hair and the facial skin.
In step S30, the hair divider 203 divides the hair shape detected by the hair shape detector 202 into a common part and an individual part. It should be noted that this division can be carried out in different ways. One feasible way is to compute the convex hull of the hair inner boundary: the points lying on the convex hull form the common part, and the remaining points form the individual part, as shown by the light lines in Fig. 26.
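A minimal sketch of this convex-hull split, assuming the inner boundary is given as a list of 2D points:

```python
import numpy as np
from scipy.spatial import ConvexHull

def split_common_individual(inner_boundary):
    """Split the detected hair shape into a common and an individual part.

    Points of the inner boundary lying on its convex hull form the common part;
    the remaining (concave) points form the individual part, as in step S30.
    """
    pts = np.asarray(inner_boundary, float)
    hull = ConvexHull(pts)
    on_hull = np.zeros(len(pts), dtype=bool)
    on_hull[hull.vertices] = True
    return pts[on_hull], pts[~on_hull]     # (common part, individual part)
```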
Different modeling processes are then applied to the common part and the individual part, respectively.
In step S40, the key-point 2D matcher 208, in combination with the 3D head model, matches the boundary key points of the common 3D hair template defined by the knowledge data definer 201 with the boundary of the common part divided off by the hair divider 203. Specifically, the key-point 2D matcher 208 perspectively projects the 3D hair template into a 2D image. The leftmost and rightmost points of the 3D hair template and of the 2D hair boundary can then be matched, respectively. After that, for each 3D key point on the hair outer/inner boundary (as shown in Fig. 21), the direction from a predetermined center point of the 3D head model toward the 2D image plane is calculated, and the key point is projected onto the 2D image plane along this direction, thereby finding the point on the 2D hair boundary corresponding to that outer/inner boundary key point.
In step S50, the key-point 3D determiner 209 determines the 3D coordinates of the key points based on the matching result of the key-point 2D matcher 208, in combination with the 3D head model and the regions defined by the knowledge data definer 201. Specifically, for the inner-boundary key points, the corresponding key points on the 2D image are back-projected along the above direction onto the 3D head model, and the intersections give the 3D coordinates of the inner-boundary key points. For the outer-boundary key points, back-projection is also adopted, but the depth values of the outer-boundary key points in the 3D hair template are kept. For the foremost key points, their relative positions with respect to the outer boundary and the inner boundary remain unchanged, which determines their 3D coordinates. For the static-line key points, their 3D coordinates in the 3D hair template remain unchanged. In this way, the 3D coordinates of all the key points are obtained in combination with the 3D head model.
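A sketch of the outer-boundary rule of step S50 under a simple pinhole camera assumption (the intrinsic matrix K is not part of the disclosure); the ray-mesh intersection used for the inner-boundary key points is omitted:

```python
import numpy as np

def backproject_keypoint(pt2d, depth, K):
    """Lift a matched 2D boundary key point back to 3D, keeping the template depth.

    pt2d  : (u, v) pixel position on the refined hair boundary.
    depth : z value of the original template key point (kept, as for the outer boundary).
    K     : assumed 3x3 pinhole intrinsic matrix.
    """
    u, v = pt2d
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray through the pixel
    return ray / ray[2] * depth                       # scale the ray to the kept depth

def head_centre_direction(key3d, head_centre):
    """Direction from the predetermined head-centre point through a 3D key point,
    used in step S40 to project template key points onto the 2D image plane."""
    d = np.asarray(key3d, float) - np.asarray(head_centre, float)
    return d / np.linalg.norm(d)
```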
In step S60, the 3D data interpolator 210 performs 3D data interpolation based on the 3D coordinates of the key points determined by the key-point 3D determiner 209, in combination with the regions defined by the knowledge data definer 201. As a preferred approach, different interpolation methods are adopted for the different regions. For the points in the front-top region and the transitional region, their directions relative to the outer boundary are kept through a simple interpolation algorithm. For the other points, a common interpolation algorithm (for example, radial basis function (RBF) interpolation) is used to carry out the 3D data interpolation. The benefit of this approach is that it helps to produce the hair parting effect while producing a smooth shape. This completes the 3D modeling of the common part of the hair.
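For the RBF-interpolated regions, a minimal sketch using SciPy's radial basis function interpolator; the thin-plate-spline kernel is our choice, since the patent only names RBF interpolation in general:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def interpolate_region(template_pts, key_src, key_dst):
    """Deform the vertices of one template region from its key-point displacements.

    key_src : original 3D positions of the region's key points in the template.
    key_dst : positions of those key points determined in step S50.
    The displacement field is interpolated with an RBF and added to every vertex.
    """
    key_src, key_dst = np.asarray(key_src, float), np.asarray(key_dst, float)
    rbf = RBFInterpolator(key_src, key_dst - key_src, kernel='thin_plate_spline')
    pts = np.asarray(template_pts, float)
    return pts + rbf(pts)
```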
In addition, for the individual part of the hair, in step S70 the hair patch approximator 204 approximates the individual part divided off by the hair divider 203. For example, a piecewise fitting algorithm can be used to approximate the individual part, thereby discarding the smaller patches in the individual part. The result of the patch approximation is shown in Fig. 27; referring to Fig. 27, the inner boundary is approximately re-determined by line segments. The approximated individual part is then expressed as a set of triangles using a triangulation method.
In step S80, the hair patch modeler 205 carries out 3D modeling of the hair patches of the individual part processed by the hair patch approximator 204, thereby determining the 3D coordinates of each point in the individual part: the 2D points of the individual part are back-projected into 3D space, their intersections with the face are computed, and the 3D coordinates are determined accordingly. At the same time, texture coordinates are produced for each point in the individual part. In fact, each hair patch in the individual part is contained, as a polygon, in a larger triangle, and this triangle is mapped into the texture coordinate space; the texture coordinates of each point are then produced by keeping the barycentric coordinates of each point of the individual part on the front-view hair image. This completes the 3D modeling of the individual part of the hair.
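Keeping barycentric coordinates, as used in step S80 to carry a point's position over to texture space, can be sketched with the standard formula below; the function name is ours:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c),
    so that p = u*a + v*b + w*c.  Applying the same weights to the triangle's
    texture coordinates gives the texture coordinate of p."""
    a, b, c, p = (np.asarray(x, float) for x in (a, b, c, p))
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

# the texture coordinate of p is then u*ta + v*tb + w*tc, where ta, tb, tc are
# the texture coordinates of the triangle's corners
```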
In step S90, the model synthesizer 206 combines each 3D hair patch output by the hair patch modeler 205 with the interpolation result output by the 3D data interpolator 210, and deletes the duplicated points.
In step S95, the texture generator 207 generates the corresponding texture for the result synthesized by the model synthesizer 206, thereby producing the realistic 3D hair model. Specifically, a simple rasterization algorithm is used to map the front-view hair image onto the texture coordinate plane, based on the texture coordinates and on the image positions projected onto the 2D plane. Texture combination is then carried out to combine the mapped texture with the template texture. As a preferred approach, a common color correction algorithm can be used to make the texture appearance of the hair template resemble the mapped texture, and an alpha transition is used to blend the two textures.
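A sketch of the final blending, assuming both textures are H x W x 3 arrays in the 0 to 255 range; the mean-shift color correction and the shape of the alpha map are our simplifications, not the patent's specific algorithms:

```python
import numpy as np

def blend_textures(mapped_tex, template_tex, alpha):
    """Combine the texture mapped from the front hair image with the template texture.

    alpha = 1 keeps the mapped photo texture, alpha = 0 keeps the template texture.
    """
    alpha = np.clip(np.asarray(alpha, float), 0.0, 1.0)[..., None]
    # simple colour correction: shift the template mean colour towards the mapped mean
    corrected = template_tex + (mapped_tex.mean(axis=(0, 1)) - template_tex.mean(axis=(0, 1)))
    return alpha * mapped_tex + (1.0 - alpha) * np.clip(corrected, 0.0, 255.0)
```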
For the above exemplary embodiment, it should be noted that the structure of the hair deformation unit 200 is not limited to the concrete structure shown in Fig. 18; with an understanding of the basic concept of the present invention, those skilled in the art can implement the 3D hair modeling device of the present invention in various ways. For example, the different units may be further merged or further divided, or may be implemented in a single processor.
In addition, the common 3D hair template used in the device of Figs. 17 and 18 may be the 3D hair template selected by the hair classification unit in the device of Figs. 2 and 3 as matching the hairstyle class of the front-view hair image, thereby further improving the effectiveness of the 3D hair modeling.
According to the 3D hair modeling device and method of the present invention, no human participation in the concrete modeling process is needed: relying only on a simple front-view hair image, 3D hair modeling can be performed based on 3D hair templates created in advance. Since the person's hair is divided into a common part and an individual part and different processing is applied to each, 3D hair modeling can be carried out automatically and effectively on a relatively simple hardware and software platform while reducing the computational complexity. Figs. 28 to 30 show 3D hair models obtained with the 3D hair modeling method and device according to the present invention, in which both the computation speed and the degree of similarity are significantly improved.
It should be noted that the 3D hair modeling method and device according to exemplary embodiments of the present invention can be included in a generation apparatus for animation or avatars, and can also be applied to a photo synthesizer, a scene generator or other special-effects generators. In addition to the 3D hair modeling device according to an exemplary embodiment of the present invention, such an apparatus also comprises a data input unit, a data analysis unit and a result generation unit for the related object (such as an animation, avatar, photo or scene); since these units belong to the prior art outside the present invention, they are not described in detail here, in order to avoid obscuring the subject matter of the present invention.
The above embodiments of the present invention are merely exemplary, and the present invention is not limited thereto. Those skilled in the art should understand that any approach that uses 3D hair templates created in advance and performs 3D hair modeling in combination with an input front-view hair image and a 3D head model falls within the scope of the present invention. These embodiments may be changed without departing from the principle and spirit of the present invention, the scope of which is defined in the claims and their equivalents.

Claims (23)

1. A device for 3D hair modeling based on a 3D hair template, the device comprising:
a 3D hair template library for storing one or more 3D hair templates in advance; and
a 3D hair model generation unit for receiving a front-view hair image and a 3D head model, and for operating on the one or more 3D hair templates provided by the 3D hair template library according to the characteristics of the front-view hair image in combination with the 3D head model, thereby producing a realistic 3D hair model.
2. The device of claim 1, wherein the 3D hair model generation unit comprises:
a hair classification unit for receiving the front-view hair image, deriving the hairstyle class of the front-view hair image from its characteristics, and finding, among the plurality of 3D hair templates provided by the 3D hair template library, the 3D hair template corresponding to that hairstyle class; and
a hair calibration unit for calibrating the 3D hair template found by the hair classification unit in combination with the 3D head model, thereby producing the realistic 3D hair model.
3. The device of claim 2, wherein the hair classification unit comprises:
a hair shape detector for detecting the hair shape of the front-view hair image;
a boundary refiner for refining the boundary of the hair shape detected by the hair shape detector;
a shape feature extractor for extracting shape features from the hair boundary refined by the boundary refiner;
a class definer for dividing the plurality of 3D hair templates provided by the 3D hair template library into a plurality of hairstyle classes; and
a classifier for determining the hairstyle class of the front-view hair image based on the shape features extracted by the shape feature extractor, and for finding, among the 3D hair templates of the hairstyle classes divided by the class definer, the 3D hair template corresponding to the hairstyle class of the front-view hair image.
4. The device of claim 3, wherein the hair calibration unit calculates the deviation between the 3D hair template and the reference head model on which the 3D hair template is based, then calculates the transformation parameters that make the reference head model become the 3D head model, transforms the calculated deviation between the reference head model and the 3D hair template into a deviation with respect to the 3D head model, and applies the transformed deviation to the 3D head model.
5. The device of claim 3, wherein the hair shape detector outputs the contour of the hair shape through face detection, skin color modeling, hair color modeling and standard image processing.
6. The device of claim 3, wherein the shape features extracted by the shape feature extractor comprise: the amount of hair, the symmetry of the hair, the position of the hair parting line, and the length of the hair.
7. The device of claim 1, wherein the 3D hair template library stores in advance a common 3D hair template applicable to all front-view hair images, and the 3D hair model generation unit comprises:
a hair deformation unit, configured to receive the front-view hair image and the 3D head model, and to perform deformation processing on the common 3D hair template provided by the 3D hair template library based on characteristics of the front-view hair image in combination with the 3D head model, thereby generating the realistic 3D hair model.
8. The device of claim 7, wherein the hair deformation unit comprises:
a knowledge data definer, configured to define key points of the common 3D hair template and regions divided according to the key points;
a hair shape detector, configured to detect the hair shape of the front-view hair image;
a hair partitioner, configured to divide the hair shape detected by the hair shape detector into a common part and an individual part, wherein the common part indicates the portion of the hair that is relatively similar between different individuals, and the individual part indicates the portion of the hair that differs greatly between different individuals;
a hair patch approximator, configured to perform approximation processing on the individual part divided out by the hair partitioner;
a hair patch modeler, configured to perform 3D modeling on the hair patches of the individual part processed by the hair patch approximator;
a key point 2D matcher, configured to match, in combination with the 3D head model, the boundary key points of the common 3D hair template defined by the knowledge data definer with the boundary of the common part divided out by the hair partitioner;
a key point 3D determiner, configured to determine the 3D coordinates of the key points based on the matching result of the key point 2D matcher, in combination with the 3D head model and the regions defined by the knowledge data definer;
a 3D data interpolator, configured to perform 3D data interpolation based on the 3D coordinates of the key points determined by the key point 3D determiner, in combination with the regions defined by the knowledge data definer;
a model synthesizer, configured to combine each 3D hair patch output by the hair patch modeler with the interpolation result output by the 3D data interpolator; and
a texture generator, configured to generate a corresponding texture for the synthesis result of the model synthesizer, thereby generating the realistic 3D hair model.
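Within claim 8's pipeline, the key point 2D matcher is the step that ties the generic template to this particular image: each boundary key point of the common template is paired with a point on the detected boundary of the common hair part. A minimal nearest-neighbour sketch, assuming both sets are already expressed as 2D pixel coordinates, is:

```python
import numpy as np

def match_keypoints_2d(template_kp_2d: np.ndarray, common_boundary_2d: np.ndarray) -> np.ndarray:
    """
    template_kp_2d:     (K, 2) projected boundary key points of the common 3D hair template
    common_boundary_2d: (B, 2) boundary pixels of the detected common hair part
    Returns (K, 2): the boundary point matched to each key point (nearest-neighbour assumption).
    """
    # pairwise squared distances between every key point and every boundary pixel
    d2 = ((template_kp_2d[:, None, :] - common_boundary_2d[None, :, :]) ** 2).sum(axis=-1)
    return common_boundary_2d[d2.argmin(axis=1)]
```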
9. The device of claim 8, wherein the key points comprise: key points corresponding to the outer boundary and the inner boundary of the hair, the most prominent foremost key point in the front view of the 3D hair template, and static line key points serving as the boundary points of the fixed region of the hair at the back of the head of the 3D hair template.
10. The device of claim 9, wherein the regions divided according to the key points comprise: a front top region, between the key points on the inner boundary and the foremost key point; a front forward region, between the foremost key point and the key points on the outer boundary; a transition region, between the key points on the outer boundary and the static line key points; and a fixed region, being the remainder of the 3D hair template other than the front top region, the front forward region and the transition region.
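Claims 9 and 10 together describe the "knowledge data" as named groups of key points and the regions they bound. Written out as plain data it could look like the following; the concrete indices are placeholders and only the structure mirrors the claims.

```python
# Illustrative knowledge data for the common 3D hair template.
# The key point indices are placeholders, not values from the patent.
KEY_POINTS = {
    "inner_boundary": [0, 1, 2, 3],    # key points along the inner hair boundary
    "outer_boundary": [4, 5, 6, 7],    # key points along the outer hair boundary
    "foremost":       [8],             # most prominent foremost point in the front view
    "static_line":    [9, 10, 11],     # boundary of the fixed region at the back of the head
}

REGIONS = {
    "front_top":     ("inner_boundary", "foremost"),     # between inner boundary and foremost point
    "front_forward": ("foremost", "outer_boundary"),     # between foremost point and outer boundary
    "transition":    ("outer_boundary", "static_line"),  # between outer boundary and static line
    "fixed":         (),                                  # remainder of the template
}
```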
11. The device of claim 8, wherein the 3D data interpolator applies different interpolation methods to different regions.
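Claim 11 leaves the per-region interpolation rules open. One simple way to organize "different methods for different regions" is a dispatch table keyed by region name; the linear blends and weights below are chosen purely for illustration (the fixed region keeps the template geometry, the transition region eases between the deformed front and the untouched back).

```python
import numpy as np

def blend(template_xyz, target_xyz, weight):
    """Linear blend between the template position and the key-point-driven target position."""
    return (1.0 - weight) * np.asarray(template_xyz) + weight * np.asarray(target_xyz)

# Region name -> interpolation rule; the weights are purely illustrative.
INTERPOLATORS = {
    "front_top":     lambda t, g: blend(t, g, 1.0),  # follow the image-driven key points fully
    "front_forward": lambda t, g: blend(t, g, 1.0),
    "transition":    lambda t, g: blend(t, g, 0.5),  # ease between deformed front and fixed back
    "fixed":         lambda t, g: blend(t, g, 0.0),  # keep the template geometry unchanged
}

def interpolate_vertex(region: str, template_xyz, target_xyz):
    return INTERPOLATORS[region](template_xyz, target_xyz)
```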
12. The device of claim 3, further comprising:
a hair deformation unit, configured to receive the front-view hair image and the 3D head model, and to perform deformation processing on the generated realistic 3D hair model based on characteristics of the front-view hair image in combination with the 3D head model, thereby outputting the realistic 3D hair model after deformation processing.
13. An animation generation device, comprising an animation data input unit, an animation data analysis unit and an animation result generation unit, characterized in that it further comprises the device for 3D hair modeling based on a 3D hair template according to any one of claims 1 to 12.
14. A virtual image generation device, comprising a virtual image data input unit, a virtual image data analysis unit and a virtual image result generation unit, characterized in that it further comprises the device for 3D hair modeling based on a 3D hair template according to any one of claims 1 to 12.
15. A photo synthesis device, comprising a photo data input unit, a photo data analysis unit and a photo result generation unit, characterized in that it further comprises the device for 3D hair modeling based on a 3D hair template according to any one of claims 1 to 12.
16. A scene generation device, comprising a scene data input unit, a scene data analysis unit and a scene result generation unit, characterized in that it further comprises the device for 3D hair modeling based on a 3D hair template according to any one of claims 1 to 12.
17. A method for 3D hair modeling based on a 3D hair template, the method comprising:
storing one or more 3D hair templates in advance; and
receiving a front-view hair image and a 3D head model, and operating on the one or more 3D hair templates based on characteristics of the front-view hair image in combination with the 3D head model, thereby generating a realistic 3D hair model.
18. The method of claim 17, wherein the step of receiving the front-view hair image and the 3D head model, operating on the one or more 3D hair templates based on characteristics of the front-view hair image in combination with the 3D head model, and thereby generating the realistic 3D hair model comprises:
receiving the front-view hair image, determining the hairstyle category of the front-view hair image based on characteristics of the front-view hair image, and finding, among the plurality of 3D hair templates, the 3D hair template corresponding to the hairstyle category; and
calibrating the found 3D hair template in combination with the 3D head model, thereby generating the realistic 3D hair model.
19. The method of claim 18, wherein the step of determining the hairstyle category of the front-view hair image comprises:
detecting the hair shape of the front-view hair image;
performing boundary refinement on the detected hair shape;
extracting shape features from the refined hair boundary;
dividing the provided plurality of 3D hair templates into a plurality of hairstyle categories; and
determining the hairstyle category of the front-view hair image based on the extracted shape features, and finding, among the 3D hair templates of the defined hairstyle categories, the 3D hair template corresponding to the hairstyle category of the front-view hair image.
20. The method of claim 19, wherein the calibration step comprises: calculating the deviation between the 3D hair template and the reference head model on which the 3D hair template is based, then calculating transformation parameters that turn the reference head model into the 3D head model, transforming the calculated deviation between the reference head model and the 3D hair template into a deviation with respect to the 3D head model, and applying the transformed deviation to the 3D head model.
21. The method of claim 19, wherein the contour of the hair shape is output through face detection, skin color modeling, hair color modeling, and standard image processing.
22. The method of claim 17, wherein what is stored in advance is a common 3D hair template applicable to all front-view hair images, and the step of receiving the front-view hair image and the 3D head model, operating on the one or more 3D hair templates based on characteristics of the front-view hair image in combination with the 3D head model, and thereby generating the realistic 3D hair model comprises:
receiving the front-view hair image and the 3D head model, and performing deformation processing on the common 3D hair template based on characteristics of the front-view hair image in combination with the 3D head model, thereby generating the realistic 3D hair model.
23. The method of claim 22, wherein the deformation processing comprises:
defining key points of the common 3D hair template and regions divided according to the key points;
detecting the hair shape of the front-view hair image;
dividing the detected hair shape into a common part and an individual part, wherein the common part indicates the portion of the hair that is relatively similar between different individuals, and the individual part indicates the portion of the hair that differs greatly between different individuals;
performing approximation processing on the divided-out individual part;
performing 3D modeling on the hair patches of the individual part after the approximation processing;
matching, in combination with the 3D head model, the boundary key points of the defined common 3D hair template with the boundary of the divided-out common part;
determining the 3D coordinates of the key points based on the matching result, in combination with the 3D head model and the defined regions;
performing 3D data interpolation based on the determined 3D coordinates of the key points, in combination with the defined regions;
combining each modeled 3D hair patch with the interpolation result; and
generating a corresponding texture for the synthesis result, thereby generating the realistic 3D hair model.
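The final step of claim 23 (the texture generator of claim 8 plays the same role) attaches a texture to the synthesized geometry. A common and simple choice, used here only as an illustrative assumption, is to project each hair vertex into the front-view image and sample its color; vertices not visible from the front would need in-painting or mirroring, which is omitted.

```python
import numpy as np

def project_texture(vertices, camera_matrix, front_image):
    """
    vertices:      (N, 3) synthesized hair vertices in the front-view camera's coordinates
    camera_matrix: (3, 3) intrinsic matrix of the front-view camera
    front_image:   (H, W, 3) front-view hair image
    Returns one color per vertex, sampled from the front view.
    """
    h, w = front_image.shape[:2]
    proj = vertices @ camera_matrix.T    # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide -> pixel coordinates
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return front_image[v, u]
```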
CN201010501336.8A 2010-09-28 2010-09-28 Device and method for 3D hair modeling based on 3D hair template Active CN102419868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010501336.8A CN102419868B (en) 2010-09-28 2010-09-28 Device and method for 3D hair modeling based on 3D hair template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010501336.8A CN102419868B (en) 2010-09-28 2010-09-28 Device and method for 3D hair modeling based on 3D hair template

Publications (2)

Publication Number Publication Date
CN102419868A true CN102419868A (en) 2012-04-18
CN102419868B CN102419868B (en) 2016-08-03

Family

ID=45944269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010501336.8A Active CN102419868B (en) Device and method for 3D hair modeling based on 3D hair template

Country Status (1)

Country Link
CN (1) CN102419868B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1164220C (en) * 2000-01-07 2004-09-01 法伊鲁特股份有限公司 Hair styling method
US20080170069A1 (en) * 2004-05-17 2008-07-17 Pacific Data Images Llc Modeling hair using interpolation and clumping in an iterative process
CN1787012A (en) * 2004-12-08 2006-06-14 索尼株式会社 Method,apparatua and computer program for processing image
US20060224366A1 (en) * 2005-03-30 2006-10-05 Byoungwon Choe Method and system for graphical hairstyle generation using statistical wisp model and pseudophysical approaches
CN1979556A (en) * 2005-12-05 2007-06-13 林锦育 Hair-style virtual design method and apparatus generated by computer software

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ALI MD. HAIDER et al.: "Realistic 3D Head Modeling from Video Captured Images and CT Data", 2000 IEEE EMBS INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY APPLICATIONS IN BIOMEDICINE, 2000 PROCEEDINGS, 10 November 2000 (2000-11-10), pages 238 - 243 *
JAMIE WITHER et al.: "Realistic Hair from a Sketch", SMI’07 IEEE INTERNATIONAL CONFERENCE ON SHAPE MODELING INTERNATIONAL, 30 June 2007 (2007-06-30), pages 33 - 42, XP031116731 *
LIEU-HEN CHEN et al.: "A system of 3D hair style synthesis based on the wisp model", THE VISUAL COMPUTER, vol. 15, 31 December 1999 (1999-12-31), pages 159 - 170 *
TOMAS LAY HERRERA et al.: "Toward Image-Based Facial Hair Modeling", PROCEEDINGS OF THE 26TH SPRING CONFERENCE ON COMPUTER GRAPHICS (SCCG 2010), 31 May 2010 (2010-05-31), pages 1 - 8 *
LI XIAOLAN et al.: "Realistic Head Modeling with Hairstyle Reconstruction" (重建发型的真实感头部建模), JOURNAL OF COMPUTER-AIDED DESIGN & COMPUTER GRAPHICS (计算机辅助设计与图形学学报), vol. 18, no. 8, 31 August 2006 (2006-08-31), pages 1117 - 1122 *
LIN YUAN et al.: "Fully Automatic 3D Head Reconstruction Based on Orthogonal Images" (基于正交图像的全自动三维头部重建), IMAGE AND GRAPHICS TECHNOLOGY RESEARCH AND APPLICATIONS (2010) (图像图形技术研究与应用(2010)), 2 April 2010 (2010-04-02), pages 241 - 347 *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800129B (en) * 2012-06-20 2015-09-30 浙江大学 Hair modeling and portrait editing method based on single image
CN102800129A (en) * 2012-06-20 2012-11-28 浙江大学 Hair modeling and portrait editing method based on single image
US9367940B2 (en) 2012-06-20 2016-06-14 Zhejiang University Method for single-view hair modeling and portrait editing
WO2014117447A1 (en) * 2013-02-02 2014-08-07 浙江大学 Virtual hairstyle modeling method of images and videos
CN103093488B (en) * 2013-02-02 2015-11-04 浙江大学 Virtual hairstyle interpolation and tweening animation generation method
US9792725B2 (en) 2013-02-02 2017-10-17 Zhejiang University Method for image and video virtual hairstyle modeling
CN103093488A (en) * 2013-02-02 2013-05-08 浙江大学 Virtual haircut interpolation and tweening animation producing method
CN103366400B (en) * 2013-07-24 2017-09-12 深圳市华创振新科技发展有限公司 Method for automatically generating a three-dimensional head portrait
CN103366400A (en) * 2013-07-24 2013-10-23 深圳市华创振新科技发展有限公司 Method for automatically generating three-dimensional head portrait
CN104867173A (en) * 2014-02-03 2015-08-26 梦工厂动画公司 Efficient And Stable Approach To Elasticity And Collisions For Hair Animation
CN104867173B (en) * 2014-02-03 2019-10-18 梦工厂动画公司 Method, readable storage medium and system for animating hair
US9691172B2 (en) 2014-09-24 2017-06-27 Intel Corporation Furry avatar animation
WO2016045016A1 (en) * 2014-09-24 2016-03-31 Intel Corporation Furry avatar animation
CN104376597A (en) * 2014-12-05 2015-02-25 北京航空航天大学 Multi-direction constrained hair reconstruction method
CN104376597B (en) * 2014-12-05 2017-03-29 北京航空航天大学 Hair reconstruction method based on multi-directional constraints
CN105045968A (en) * 2015-06-30 2015-11-11 青岛理工大学 Hair style design method and system
CN105117445A (en) * 2015-08-13 2015-12-02 北京建新宏业科技有限公司 Automatic hairstyle matching method, device and system
US10699463B2 (en) 2016-03-17 2020-06-30 Intel Corporation Simulating the motion of complex objects in response to connected structure motion
WO2017156746A1 (en) * 2016-03-17 2017-09-21 Intel Corporation Simulating motion of complex objects in response to connected structure motion
WO2017181332A1 (en) * 2016-04-19 2017-10-26 浙江大学 Single image-based fully automatic 3d hair modeling method
US10665013B2 (en) 2016-04-19 2020-05-26 Zhejiang University Method for single-image-based fully automatic three-dimensional hair modeling
CN107615337B (en) * 2016-04-28 2020-08-25 华为技术有限公司 Three-dimensional hair modeling method and device
CN107615337A (en) * 2016-04-28 2018-01-19 华为技术有限公司 Three-dimensional hair modeling method and device
CN106504063A (en) * 2016-11-01 2017-03-15 广州增强信息科技有限公司 Virtual hair try-on video display system
CN108463823A (en) * 2016-11-24 2018-08-28 华为技术有限公司 Method, device and terminal for reconstructing a user's hair model
CN106815883A (en) * 2016-12-07 2017-06-09 珠海金山网络游戏科技有限公司 Hair processing method and system for a game character
CN106815883B (en) * 2016-12-07 2020-06-30 珠海金山网络游戏科技有限公司 Method and system for processing hair of a game character
CN107122791A (en) * 2017-03-15 2017-09-01 国网山东省电力公司威海供电公司 Hairstyle compliance detection method for electric power business hall employees based on hair color and texture matching
CN107392099A (en) * 2017-06-16 2017-11-24 广东欧珀移动通信有限公司 Method, apparatus and terminal device for extracting hair detail information
CN107392099B (en) * 2017-06-16 2020-01-10 Oppo广东移动通信有限公司 Method and device for extracting hair detail information and terminal equipment
CN108340405A (en) * 2017-11-10 2018-07-31 广东康云多维视觉智能科技有限公司 Robot three-dimensional scanning system and method
CN108340405B (en) * 2017-11-10 2021-12-07 广东康云多维视觉智能科技有限公司 Robot three-dimensional scanning system and method
CN109408653A (en) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation
CN109408653B (en) * 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation
CN111182350A (en) * 2019-12-31 2020-05-19 广州华多网络科技有限公司 Image processing method, image processing device, terminal equipment and storage medium
CN111182350B (en) * 2019-12-31 2022-07-26 广州方硅信息技术有限公司 Image processing method, device, terminal equipment and storage medium
CN111583367A (en) * 2020-05-22 2020-08-25 构范(厦门)信息技术有限公司 Hair simulation method and system
CN111583367B (en) * 2020-05-22 2023-02-10 构范(厦门)信息技术有限公司 Hair simulation method and system
CN113269822A (en) * 2021-05-21 2021-08-17 山东大学 Person hair style portrait reconstruction method and system for 3D printing
CN113269822B (en) * 2021-05-21 2022-04-01 山东大学 Person hair style portrait reconstruction method and system for 3D printing
CN113713387A (en) * 2021-08-27 2021-11-30 网易(杭州)网络有限公司 Virtual curling model rendering method, device, equipment and storage medium
CN117389676A (en) * 2023-12-13 2024-01-12 成都白泽智汇科技有限公司 Intelligent hairstyle adaptive display method based on display interface
CN117389676B (en) * 2023-12-13 2024-02-13 成都白泽智汇科技有限公司 Intelligent hairstyle adaptive display method based on display interface

Also Published As

Publication number Publication date
CN102419868B (en) 2016-08-03

Similar Documents

Publication Publication Date Title
CN102419868A (en) Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
US11423556B2 (en) Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN106920274B (en) Face modeling method for rapidly converting 2D key points of mobile terminal into 3D fusion deformation
US8537155B2 (en) Image processing apparatus and method
US8624901B2 (en) Apparatus and method for generating facial animation
CN103366400B (en) Method for automatically generating a three-dimensional head portrait
KR102386642B1 (en) Image processing method and apparatus, electronic device and storage medium
CN105719326A (en) Realistic face generating method based on single photo
KR101759188B1 (en) The automatic 3D modeling method using 2D facial image
CN101968892A (en) Method for automatically adjusting three-dimensional face model according to one face picture
CN109903377B (en) Three-dimensional face modeling method and system without phase unwrapping
CN103208133A (en) Method for adjusting face plumpness in image
CN101968891A (en) System for automatically generating three-dimensional figure of picture for game
CN103854301A (en) 3D reconstruction method of visual hull in complex background
CN106652037B (en) Face mapping processing method and device
CN112784621A (en) Image display method and apparatus
WO2020134925A1 (en) Illumination detection method and apparatus for facial image, and device and storage medium
CN111127642A (en) Human face three-dimensional reconstruction method
KR20220054955A (en) Device, method and computer program for replacing user face with another face
CN109448093B (en) Method and device for generating style image
JP5419773B2 (en) Face image synthesizer
JP5419777B2 (en) Face image synthesizer
KR20010084996A (en) Method for generating 3 dimension avatar using one face image and vending machine with the same
JP2002015310A (en) Method for fitting face to point cloud and modeling device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant