CN108510500B - Method and system for processing hair image layer of virtual character image based on human face skin color detection - Google Patents


Info

Publication number
CN108510500B
CN108510500B (application CN201810138228.5A)
Authority
CN
China
Prior art keywords
skin color
area
face
processing
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810138228.5A
Other languages
Chinese (zh)
Other versions
CN108510500A (en
Inventor
陈嘉莉 (Chen Jiali)
蒋念娟 (Jiang Nianjuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wang Conghai
Original Assignee
Shenzhen Cloudream Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cloudream Information Technology Co ltd filed Critical Shenzhen Cloudream Information Technology Co ltd
Priority to CN201810138228.5A priority Critical patent/CN108510500B/en
Publication of CN108510500A publication Critical patent/CN108510500A/en
Application granted granted Critical
Publication of CN108510500B publication Critical patent/CN108510500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for processing the hair layer of a virtual character based on face skin color detection comprises the steps of: detecting the face region and feature points; processing the skin color region; obtaining a face mask from the skin color region, the feature point contour, the head-matting mask, and other information; processing the face mask so that its edges transition naturally; and rendering each layer to generate the final image. Because hair matting is the most difficult part of head matting, and varied backgrounds and clothing make the matted hair look unnatural, the invention places the head behind the body and clothing layers so that the unnatural hair region is covered and the generated character image is more attractive.

Description

Method and system for processing hair image layer of virtual character image based on human face skin color detection
Technical Field
The invention relates to the field of computer image processing, and in particular to a method and a system for processing the hair layer of a virtual character based on face skin color detection.
Background
With the continuous development of artificial intelligence, image processing plays an increasingly important role in daily life. Hair processing for virtual character images is an important area within image processing; before it can be performed, the face, its feature points, and the skin color region must be detected. Face and feature point detection is mainly based on machine learning. Skin color detection selects the pixels whose colors fall within the range of human skin tones, that is, the pixels belonging to the region of the image where skin is located.
Existing approaches fall into two main schemes. Scheme one: detect the face information in the current frame of the picture, obtain an approximate face contour with an Active Shape Model (ASM) algorithm, estimate the facial skin region from that contour while avoiding easily misclassified areas (such as the eyes, eyebrows, and lips), threshold the estimated skin region using pre-set empirical skin thresholds, uniformly select a number of skin color seeds in the different skin regions, and grow the selected seeds into the surrounding connected regions so that all connected skin color regions are detected. Scheme two: obtain the face region from the grayscale image of the current frame, compute the histogram of the face region, find its approximate valley points, and use them to divide the face region into skin and non-skin areas. Both schemes, however, struggle with hair matting: the influence of varied backgrounds and clothing makes the matted hair look unnatural.
Disclosure of Invention
The invention provides a method for processing a hair layer of a virtual character based on human face skin color detection, which comprises the following steps,
detecting a face area and feature points;
processing a skin color area;
obtaining a face mask from the skin color area, the feature point outline, the head-matting mask, and other information;
processing the face mask so that its edges transition naturally;
and rendering each layer to generate a final image.
The face region of the picture is detected, and the feature points of the five sense organs and the face contour are obtained using machine-learning-based face region detection and feature point detection.
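The patent does not name a specific detector. As an illustration, the widely used 68-point facial landmark convention (the scheme used by dlib's shape predictor, among others) matches the feature point numbering that appears later in this document; this mapping is an assumption, not stated in the patent. A minimal sketch of that index layout:

```python
# Index ranges of the standard 68-point facial landmark layout (as used by
# dlib-style shape predictors). The patent's feature point numbers (0-16 jaw,
# 17-26 eyebrows, 36-47 eyes, 48-59 outer mouth) are consistent with it.
LANDMARKS = {
    "jaw": range(0, 17),            # face outer contour, points 0-16
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth_outer": range(48, 60),
    "mouth_inner": range(60, 68),
}
```

With such a detector, the right-eye contour used below would be `points[36:42]`, the mouth polygon `points[48:60]`, and so on.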
The skin color area processing comprises the following steps of,
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating an ellipsoid space of skin color from the skin color area, following the skin color modeling method of "Skin Color Modeling of Digital Photographic Images";
and solving by a maximum flow minimum cut (max flow/min cut) method in graph cuts (graph cuts) based on the spatial distance of the skin color ellipsoid to obtain an optimized skin color area.
The initial skin color region is obtained by combining the feature points and the brightness information, including,
the polygonal region enclosed by feature points 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,45,46,47,42,27,39,40,41,36 is filled with 255, and the polygonal region enclosed by feature points 48,49,50,51,52,53,54,55,56,57,58,59 is filled with 0; regions with luminance value L < 30 are also filled with 0, yielding the initial skin color region.
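As an illustrative sketch (not the patent's implementation; a real pipeline would likely use a library routine such as OpenCV's `fillPoly`), the two polygon fills and the luminance cut can be written with a simple even-odd rasterizer:

```python
import numpy as np

def fill_polygon(mask, polygon, value):
    """Fill a closed polygon (list of (x, y) vertices) in `mask` in place,
    using an even-odd crossing test evaluated for every pixel."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if y0 == y1:                 # horizontal edges never cross a scanline
            continue
        crosses = (ys >= min(y0, y1)) & (ys < max(y0, y1))
        xint = x0 + (ys - y0) * (x1 - x0) / (y1 - y0)  # edge x at each scanline
        inside ^= crosses & (xs < xint)
    mask[inside] = value
    return mask

# Initial region per the text: fill 255 inside the face-contour polygon,
# fill 0 inside the mouth polygon, then set region[luma < 30] = 0.
```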
The combination of luminance and color information in the initial skin tone region yields an accurate skin tone region including,
counting the luminance point set of the skin color area, recording the luminance and position of each pixel in the region;
sorting the luminance point set in ascending order of luminance, and removing the darkest 15%-20% and the brightest 10%-15% of the pixels;
selecting the mid-luminance points of the skin color area as the seed point set: locate the median of the sorted point set and take the points within plus or minus 5 positions of it as seeds;
selecting a new candidate skin color area, selecting feature points 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,27,26,25,24,23,22,21,20,19,18 as outer contours, pulling down portions of 18, 19, 20, 21, 22 (left eyebrow) and 23, 24, 25, 26, 27 (right eyebrow), removing portions surrounded by 31,32,33,34,35, 36 (nose), and removing portions surrounded by 37,38,39,40,41, 42 (right eye) and 43,44,45,46,47, 48 (left eye);
traversing the seed point set, and taking as the new skin color region all pixels in the candidate skin color region of the previous step whose BGR color distance to a seed point is less than 50.
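The steps above can be sketched as follows. The trim fractions (17.5% dark, 12.5% bright) are the midpoints of the ranges quoted in the text, and the function name is illustrative:

```python
import numpy as np

def refine_skin_region(luma, candidate, bgr, color_thresh=50):
    """Sort candidate skin pixels by luminance, drop the darkest ~17.5% and
    brightest ~12.5%, take ~10 points around the median as seeds, then keep
    the candidate pixels whose BGR distance to some seed is below threshold."""
    ys, xs = np.nonzero(candidate)
    order = np.argsort(luma[ys, xs], kind="stable")
    n = len(order)
    trimmed = order[int(n * 0.175): n - int(n * 0.125)]   # drop extremes
    mid = len(trimmed) // 2
    sel = trimmed[max(mid - 5, 0): mid + 5]               # +/-5 around median
    seeds = bgr[ys[sel], xs[sel]].astype(float)
    pix = bgr[ys, xs].astype(float)
    # Euclidean BGR distance of every candidate pixel to every seed
    dist = np.linalg.norm(pix[:, None, :] - seeds[None, :, :], axis=2)
    keep = (dist < color_thresh).any(axis=1)
    out = np.zeros_like(candidate, dtype=np.uint8)
    out[ys[keep], xs[keep]] = 255
    return out
```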
Based on the skin color area, the ellipsoid space of skin color is estimated following the skin color modeling method of "Skin Color Modeling of Digital Photographic Images". The ellipsoid space model of skin color is:

Φ(X) = [X − Ψ]^T Λ^{-1} [X − Ψ]

where

N = Σ_{i=1}^{n} f(X_i)

Ψ = (1/n) Σ_{i=1}^{n} X_i

μ = (1/N) Σ_{i=1}^{n} f(X_i) X_i

Λ = (1/N) Σ_{i=1}^{n} f(X_i) (X_i − μ)(X_i − μ)^T

X_1, ..., X_n are the distinct colors appearing in the skin color area, and f(X_i) is the number of times color X_i appears.
Estimating an ellipsoid spatial model of skin color based on the skin color region;
and substituting each pixel of the matting mask into the ellipsoid space model of the skin color to solve to obtain the ellipsoid space distance between each pixel and the skin color.
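A NumPy sketch of the ellipsoid model and the per-pixel distance Φ(X): Ψ is the unweighted mean of the distinct skin colors, μ the count-weighted mean, and Λ the count-weighted covariance, following the cited Lee-Yoo-style elliptical boundary model (this normalization is an assumption, and the function names are illustrative):

```python
import numpy as np

def fit_ellipsoid(skin_pixels):
    """Fit Psi and Lambda^{-1} from the distinct colors X_i and their
    counts f(X_i) in the skin color region (array of shape (..., 3))."""
    X, f = np.unique(skin_pixels.reshape(-1, 3), axis=0, return_counts=True)
    X = X.astype(float)
    N = f.sum()
    psi = X.mean(axis=0)                    # unweighted mean of distinct colors
    mu = (f[:, None] * X).sum(axis=0) / N   # count-weighted mean
    d = X - mu
    Lam = (f[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / N
    return psi, np.linalg.inv(Lam)

def phi(x, psi, Lam_inv):
    """Ellipsoid-space distance Phi(X) = (X - Psi)^T Lambda^{-1} (X - Psi)."""
    v = np.asarray(x, float) - psi
    return float(v @ Lam_inv @ v)
```

Evaluating `phi` at every pixel of the matting mask gives the ellipsoid space distance used by the graph cut step below; small values mean skin-like colors.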
Based on the ellipsoid space distance of skin color, a smoother skin color area is obtained by solving with the max-flow/min-cut method of graph cuts, including:

setting the smoothness term, the weight between adjacent points:

w_smooth(p, q) = α · exp(−dist1(p, q)² / (2σ1²))

where dist1 is the color difference between adjacent pixels, σ1 is set to 15, and α is set to 20;

setting the data terms, the weights between each point and the source/sink:

w_source(p) = β · exp(−dist2(p)² / (2δ2²)),  w_sink(p) = β · (1 − exp(−dist2(p)² / (2δ2²)))

where dist2 is the ellipsoid space distance between the pixel and skin color, β is set to 1, and δ2 is set to 15.
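The two weight kernels can be sketched as Gaussians of the respective distances. These functional forms are the standard graph-cut choice and are an assumption here; the parameters σ1 = 15, α = 20, β = 1, δ2 = 15 are taken from the text. The resulting capacities would then be handed to a max-flow solver (for example PyMaxflow's grid-graph API):

```python
import numpy as np

SIGMA1, ALPHA = 15.0, 20.0   # smoothness parameters from the text
DELTA2, BETA = 15.0, 1.0     # data-term parameters from the text

def smooth_weight(dist1):
    """Assumed Gaussian form of the smoothness term between adjacent pixels:
    large weight (strong coupling) when neighboring colors are similar."""
    return ALPHA * np.exp(-np.square(dist1) / (2 * SIGMA1 ** 2))

def source_sink_weights(dist2):
    """Assumed data terms: affinity to the skin source decays with the
    ellipsoid distance; the sink weight is its complement."""
    src = BETA * np.exp(-np.square(dist2) / (2 * DELTA2 ** 2))
    return src, BETA - src
```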
Processing the face mask so that the edge transition is natural includes:
dilating the face mask by 7 iterations and then eroding it by 7 iterations (a morphological closing);
applying a 5x5 Gaussian blur twice so that the face edges transition naturally;
obtaining the processed face mask and the four-channel face picture.
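A sketch of the close-then-blur step, using `scipy.ndimage` as a stand-in library (the patent does not name one), with a blur sigma chosen to roughly match a 5x5 kernel (an assumption):

```python
import numpy as np
from scipy import ndimage

def soften_mask(mask, rounds=7, blur_sigma=1.0, blur_passes=2):
    """Dilate the binary face mask `rounds` times, erode it back `rounds`
    times (a morphological closing that seals small holes and gaps),
    then blur so the mask edge falls off smoothly."""
    m = mask > 0
    m = ndimage.binary_dilation(m, iterations=rounds)
    m = ndimage.binary_erosion(m, iterations=rounds)
    out = m.astype(float) * 255
    for _ in range(blur_passes):
        out = ndimage.gaussian_filter(out, blur_sigma)
    return out
```

The blurred mask then serves as the alpha channel of the four-channel face picture.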
Rendering each layer to generate the final image includes,
pasting the real head layer, the four-channel picture obtained from the head-matting mask;
pasting virtual body and clothes layers;
pasting a real face layer on the top layer;
and generating the final image.
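The render order (head first, then body and clothes, then the real face on top) is a standard alpha "over" composite; a minimal sketch, with illustrative names:

```python
import numpy as np

def compose(body_clothes_rgba, head_rgba, face_rgba, canvas_rgb):
    """Composite back to front: head layer first (so it sits *behind* the
    body/clothes), then body and clothes, then the real face on top."""
    out = canvas_rgb.astype(float)
    for layer in (head_rgba, body_clothes_rgba, face_rgba):  # back to front
        rgb = layer[..., :3].astype(float)
        a = layer[..., 3:4].astype(float) / 255.0            # alpha in [0, 1]
        out = a * rgb + (1 - a) * out                        # standard "over"
    return out.astype(np.uint8)
```

Because the head layer is drawn first, any ragged matted-hair edges are covered by the opaque body and clothing pixels composited after it, which is exactly the effect the invention relies on.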
The invention provides a hair layer processing system of a virtual character based on human face skin color detection, which comprises,
the detection module is used for detecting a face area and feature points;
the area processing module is used for processing the skin color area;
the head-matting module is used for obtaining a face mask from the skin color area, the feature point outline, the head-matting mask, and other information;
the human face mask processing module is used for human face mask processing and natural edge transition;
and the rendering module is used for rendering each layer to generate a final image.
The invention further provides a product for processing the hair layer of a virtual character based on face skin color detection, applicable to virtual reality, virtual fitting, virtual social networking, the display of clothes, shoes and accessories, and contactless body measurement.
Advantageous effects:
the invention provides a method and a system for processing a hair layer of a virtual character based on human face skin color detection. Because the difficulty of hair matting in head matting is higher, the effect of hair matting becomes unnatural due to the influence of various backgrounds and clothes, so the invention adopts the scheme that the head is distributed behind the body and the clothes layer, and the unnatural hair part is covered, so that the generated figure image is more beautiful.
Description of the drawings:
FIG. 1 is a schematic diagram of feature points of the five sense organs and facial contours
FIG. 2 is a schematic diagram of a mask
FIG. 3 is a schematic view of an initial skin tone region
FIG. 4 is a schematic diagram of an accurate skin tone region
FIG. 5 is a schematic view of a new skin tone region
FIG. 6 is a schematic view of the ellipsoid spatial distance between each pixel and skin tone
FIG. 7 is a schematic diagram of optimizing skin tone regions
FIG. 8 is a schematic diagram of face mask
FIG. 9 is a schematic diagram of processed face mask
FIG. 10 is a four-channel schematic diagram of a processed face
FIG. 11 is a final image generation effect diagram
Detailed Description
The embodiment provides a method for processing a hair layer of a virtual character based on human face skin color detection, which comprises the following steps,
detecting a face area and feature points;
processing a skin color area;
obtaining a face mask from the skin color area, the feature point outline, the head-matting mask, and other information;
processing the face mask so that its edges transition naturally;
and rendering each layer to generate a final image.
In a preferred embodiment, the face region of the picture is detected using machine-learning-based face region detection and feature point detection to obtain the feature points of the five sense organs and the face contour.
In a preferred embodiment, the skin color area processing includes,
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating an ellipsoid space of skin color from the skin color area, following the skin color modeling method of "Skin Color Modeling of Digital Photographic Images";
and solving by a maximum flow minimum cut (max flow/min cut) method in graph cuts (graph cuts) based on the spatial distance of the skin color ellipsoid to obtain an optimized skin color area.
In a preferred embodiment, the initial skin color region is obtained by combining the feature points and the luminance information in this embodiment, including filling 255 in a polygonal region surrounded by the feature points 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,45,46,47,42,27,39,40,41,36, and filling 0 in a polygonal region surrounded by the feature points 48,49,50,51,52,53,54,55,56,57,58, 59; the region with luminance value L <30 is filled with 0 to obtain the initial skin color region.
In the preferred embodiment, the present embodiment combines the luminance and color information in the initial skin color region to obtain the accurate skin color region, including,
counting the luminance point set of the skin color area, recording the luminance and position of each pixel in the region;
sorting the luminance point set in ascending order of luminance, and removing the darkest 15%-20% and the brightest 10%-15% of the pixels;
selecting the mid-luminance points of the skin color area as the seed point set: locate the median of the sorted point set and take the points within plus or minus 5 positions of it as seeds;
selecting a new candidate skin color area, selecting feature points 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,27,26,25,24,23,22,21,20,19,18 as outer contours, pulling down portions of 18, 19, 20, 21, 22 (left eyebrow) and 23, 24, 25, 26, 27 (right eyebrow), removing portions surrounded by 31,32,33,34,35, 36 (nose), and removing portions surrounded by 37,38,39,40,41, 42 (right eye) and 43,44,45,46,47, 48 (left eye);
traversing the seed point set, and taking as the new skin color region all pixels in the candidate skin color region of the previous step whose BGR color distance to a seed point is less than 50.
In a preferred embodiment, based on the skin color area, the ellipsoid space of skin color is estimated following the skin color modeling method of "Skin Color Modeling of Digital Photographic Images"; the ellipsoid space model of skin color is:

Φ(X) = [X − Ψ]^T Λ^{-1} [X − Ψ]

where

N = Σ_{i=1}^{n} f(X_i)

Ψ = (1/n) Σ_{i=1}^{n} X_i

μ = (1/N) Σ_{i=1}^{n} f(X_i) X_i

Λ = (1/N) Σ_{i=1}^{n} f(X_i) (X_i − μ)(X_i − μ)^T

X_1, ..., X_n are the distinct colors appearing in the skin color area, and f(X_i) is the number of times color X_i appears.
Estimating an ellipsoid spatial model of skin color based on the skin color region;
and substituting each pixel of the matting mask into the ellipsoid space model of the skin color to solve to obtain the ellipsoid space distance between each pixel and the skin color.
In a preferred embodiment, based on the ellipsoid space distance of skin color, a smoother skin color area is obtained by solving with the max-flow/min-cut method of graph cuts, including:

setting the smoothness term, the weight between adjacent points:

w_smooth(p, q) = α · exp(−dist1(p, q)² / (2σ1²))

where dist1 is the color difference between adjacent pixels, σ1 is set to 15, and α is set to 20;

setting the data terms, the weights between each point and the source/sink:

w_source(p) = β · exp(−dist2(p)² / (2δ2²)),  w_sink(p) = β · (1 − exp(−dist2(p)² / (2δ2²)))

where dist2 is the ellipsoid space distance between the pixel and skin color, β is set to 1, and δ2 is set to 15.
In a preferred embodiment, processing the face mask so that the edge transition is natural includes:
dilating the face mask by 7 iterations and then eroding it by 7 iterations (a morphological closing);
applying a 5x5 Gaussian blur twice so that the face edges transition naturally;
obtaining the processed face mask and the four-channel face picture.
In a preferred embodiment, rendering each layer to generate the final image includes,
pasting the real head layer, the four-channel picture obtained from the head-matting mask;
pasting virtual body and clothes layers;
pasting a real face layer on the top layer;
and generating the final image.
The embodiment provides a hair layer processing system of virtual character based on human face skin color detection, which comprises,
the detection module is used for detecting a face area and feature points;
the area processing module is used for processing the skin color area;
the head-matting module is used for obtaining a face mask from the skin color area, the feature point outline, the head-matting mask, and other information;
the human face mask processing module is used for human face mask processing and natural edge transition;
and the rendering module is used for rendering each layer to generate a final image.
This embodiment further provides a product for processing the hair layer of a virtual character based on face skin color detection, applicable to virtual reality, virtual fitting, virtual social networking, the display of clothes, shoes and accessories, and contactless body measurement.

Claims (6)

1. A method for processing the hair layer of virtual figure based on human face skin color detection is characterized by comprising the following steps,
detecting a face area and feature points;
processing a skin color area;
obtaining a face mask according to the skin color area, the feature point outline and the head-matting mask;
face mask processing, and natural edge transition;
rendering each layer to generate a final image;
the skin color area processing comprises the following steps:
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating an ellipsoid space of the skin color according to a skin color modeling method of the digital photographic image based on the skin color area;
solving by using a maximum flow minimum cut method in graph cutting based on the space distance of the skin color ellipsoid to obtain an optimized skin color area;
the combination of the feature points and the brightness information to obtain an initial skin color area comprises the following steps:
filling 255 in a polygonal area formed by enclosing the feature points corresponding to the outer contour of the face and the feature points corresponding to the lower parts of the left eye and the right eye, and filling 0 in a polygonal area formed by enclosing the feature points corresponding to the mouth;
filling 0 in the region with the brightness value L <30 to obtain an initial skin color region;
rendering layers to generate a final avatar, including,
pasting a real head map layer, and matting a four-channel picture obtained by a head mask;
pasting virtual body and clothes layers;
pasting the real face layer on the top layer, with the head arranged behind the body and clothing layers, and generating the final image.
2. The method as claimed in claim 1, wherein the facial region of the detected picture is processed by using facial region detection and feature point detection based on machine learning to obtain feature points of five sense organs and facial contour.
3. The method as claimed in claim 1, wherein said combining brightness and color information in the initial skin color region to obtain the accurate skin color region comprises,
counting to obtain a brightness point set of a skin color area, wherein the brightness and position information of each pixel in the skin color area are recorded;
sorting the brightness point sets from small to large according to the brightness values, and removing 15% -20% of over-black and 10% -15% of over-bright pixel points;
selecting a middle brightness point set of a skin color area as a seed point set, searching the position of a middle point according to the point set of the skin color area, and selecting a left-right plus-minus 5 range as the seed point set;
selecting a new candidate skin color area, selecting outer contour feature points, pulling down parts of the left eyebrow feature points and the right eyebrow feature points, removing a part surrounded by the nose feature points, and removing a part surrounded by the right eye feature points and the left eye feature points;
and traversing the seed point set, and taking all pixels with the BGR color distance less than 50 from the seed points in the candidate skin color region as a new skin color region.
4. The method for processing the hair layer of a virtual character based on face skin color detection as claimed in claim 1, wherein processing the face mask so that the edge transition is natural comprises,
performing operation of expanding the face mask for 7 circles and then reducing the face mask for 7 circles;
performing Gaussian blur of 5x5 for 2 times to ensure the transition of the edges of the human face to be natural;
and obtaining the processed face mask and the face four-channel picture.
5. A hair layer processing system of virtual character based on human face skin color detection is characterized in that the system comprises,
the detection module is used for detecting a face area and feature points;
the area processing module is used for processing the skin color area;
the head scratching processing module is used for obtaining a face mask according to the skin color area, the feature point outline and the head scratching mask;
the human face mask processing module is used for processing the human face mask and enabling the edge transition to be natural;
the rendering module is used for rendering each layer to generate a final image;
the skin color area processing comprises the following steps:
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating an ellipsoid space of the skin color according to a skin color modeling method of the digital photographic image based on the skin color area;
solving by using a maximum flow minimum cut method in graph cutting based on the space distance of the skin color ellipsoid to obtain an optimized skin color area;
the combination of the feature points and the brightness information to obtain an initial skin color area comprises the following steps:
filling 255 in a polygonal area formed by enclosing the feature points corresponding to the outer contour of the face and the feature points corresponding to the lower parts of the left eye and the right eye, and filling 0 in a polygonal area formed by enclosing the feature points corresponding to the mouth area;
filling 0 in the region with the brightness value L <30 to obtain an initial skin color region;
the rendering module, comprising,
pasting a real head map layer, and matting a four-channel picture obtained by a head mask;
pasting virtual body and clothes layers;
pasting the real face layer on the top layer, with the head arranged behind the body and clothing layers, and generating the final image.
6. A product for processing the hair layer of a virtual character based on face skin color detection, characterized in that it is applicable to virtual reality, virtual fitting, virtual social networking, the display of clothes, shoes and accessories, and contactless body measurement, and uses the method or system for processing the hair layer of a virtual character based on face skin color detection of any one of claims 1 to 4.
CN201810138228.5A 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection Active CN108510500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810138228.5A CN108510500B (en) 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810138228.5A CN108510500B (en) 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Publications (2)

Publication Number Publication Date
CN108510500A CN108510500A (en) 2018-09-07
CN108510500B true CN108510500B (en) 2021-02-26

Family

ID=63374657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810138228.5A Active CN108510500B (en) 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Country Status (1)

Country Link
CN (1) CN108510500B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685876B (en) * 2018-12-21 2020-11-03 北京达佳互联信息技术有限公司 Hair rendering method and device, electronic equipment and storage medium
CN111862290B (en) * 2020-07-03 2021-05-11 完美世界(北京)软件科技发展有限公司 Radial fuzzy-based fluff rendering method and device and storage medium
CN111931908B (en) * 2020-07-23 2024-06-11 北京电子科技学院 Face image automatic generation method based on face contour
CN112270735B (en) * 2020-10-27 2023-07-28 北京达佳互联信息技术有限公司 Virtual image model generation method, device, electronic equipment and storage medium
CN112465734A (en) * 2020-10-29 2021-03-09 星业(海南)科技有限公司 Method and device for separating picture layers
CN113426138B (en) * 2021-05-28 2023-03-31 广州三七极创网络科技有限公司 Edge description method, device and equipment of virtual role
CN114155324B (en) * 2021-12-02 2023-07-25 北京字跳网络技术有限公司 Virtual character driving method and device, electronic equipment and readable storage medium
CN114565507A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Hair processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236786A (en) * 2011-07-04 2011-11-09 北京交通大学 Light adaptation human skin colour detection method
CN104902189A (en) * 2015-06-24 2015-09-09 小米科技有限责任公司 Picture processing method and picture processing device
CN105719234A (en) * 2016-01-26 2016-06-29 厦门美图之家科技有限公司 Automatic gloss removing method and system for face area and shooting terminal
CN107562963A (en) * 2017-10-12 2018-01-09 杭州群核信息技术有限公司 Method and apparatus for screening rendered images of home decoration designs
CN107730573A (en) * 2017-09-22 2018-02-23 西安交通大学 Feature-extraction-based method for generating cartoon-style personal portraits

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN103456010B (en) * 2013-09-02 2016-03-30 电子科技大学 Face cartoon generation method based on feature point location
US9928601B2 (en) * 2014-12-01 2018-03-27 Modiface Inc. Automatic segmentation of hair in images
CN106652037B (en) * 2015-10-30 2020-04-03 深圳超多维光电子有限公司 Face mapping processing method and device

Also Published As

Publication number Publication date
CN108510500A (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN108510500B (en) Method and system for processing hair image layer of virtual character image based on human face skin color detection
CN112669447B (en) Model head portrait creation method and device, electronic equipment and storage medium
US8831379B2 (en) Cartoon personalization
US9013489B2 (en) Generation of avatar reflecting player appearance
Liao et al. Automatic caricature generation by analyzing facial features
US8913847B2 (en) Replacement of a person or object in an image
Arbel et al. Shadow removal using intensity surfaces and texture anchor points
US20090263038A1 (en) Method for creating photo cutouts and collages
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
US20080309662A1 (en) Example Based 3D Reconstruction
JP4979033B2 (en) Saliency estimation of object-based visual attention model
WO2022143645A1 (en) Three-dimensional face reconstruction method and apparatus, device, and storage medium
CN104794693B (en) Portrait optimization method with automatic masking of key face areas
CN108197533A (en) Human-computer interaction method based on user's expression, electronic device and storage medium
US20220245912A1 (en) Image display method and device
KR20230097157A (en) Method and system for personalized 3D head model transformation
CN113628327A (en) Head three-dimensional reconstruction method and equipment
KR20230085931A (en) Method and system for extracting color from face images
JP2024506170A (en) Methods, electronic devices, and programs for forming personalized 3D head and face models
CN111243051A (en) Stroke generation method, system and storage medium based on portrait photos
KR101112142B1 (en) Apparatus and method for cartoon rendering using reference image
CN113870404B (en) Skin rendering method of 3D model and display equipment
Guo Digital anti-aging in face images
Aizawa et al. Do you like sclera? Sclera-region detection and colorization for anime character line drawings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231117

Address after: Gao Lou Zhen Hong Di Cun, Rui'an City, Wenzhou City, Zhejiang Province, 325200

Patentee after: Wang Conghai

Address before: 10/F, Yihua Financial Technology Building, 2388 Houhai Avenue, High-tech Park, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: SHENZHEN CLOUDREAM INFORMATION TECHNOLOGY CO.,LTD.