CN102682420A - Method and device for converting real character image to cartoon-style image - Google Patents


Info

Publication number
CN102682420A
CN102682420A (application CN201210093647A)
Authority
CN
China
Prior art keywords
style
image
cartoon
describing
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100936474A
Other languages
Chinese (zh)
Inventor
任晓倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baishunhuanian Culture Communication Co Ltd
Original Assignee
Beijing Baishunhuanian Culture Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baishunhuanian Culture Communication Co Ltd filed Critical Beijing Baishunhuanian Culture Communication Co Ltd
Priority to CN2012100936474A priority Critical patent/CN102682420A/en
Publication of CN102682420A publication Critical patent/CN102682420A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for converting a real-person image into a cartoon-style image. The method comprises the following steps: acquiring a first image that contains the real-person image; analyzing the first image to obtain parameter information describing the facial features of the person in the real-person image; converting the facial-feature parameter information into position parameter information of the corresponding facial features in the cartoon style; and generating, according to the facial-feature position parameter information, a second image that has the cartoon style and contains the real person. The scheme provided by the invention can convert a real-person image into a cartoon-style image, so that the person's likeness matches the style of the cartoon scene.

Description

Method and device for converting a real-person image into a cartoon-style image
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and a device for converting a real-person image into a cartoon-style image.
Background art
In the prior art there are various cartoon images, some static and some animated, but all of them are cartoon characters of non-real persons generated by software.
The prior art provides no method for converting a real person into a cartoon-style image.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method and a device for converting a real-person image into a cartoon-style image, so that a real-person image can be converted into an image of a given cartoon style and the person's likeness matches the style of the cartoon scene.
To solve the above technical problem, an embodiment of the invention provides a method for converting a real-person image into a cartoon-style image, comprising:
acquiring a first image that contains the real-person image;
analyzing the first image to obtain parameter information describing the facial features of the person in the real-person image;
converting the facial-feature parameter information into position parameter information of the corresponding facial features in the cartoon style;
generating, according to the facial-feature position parameter information, a second image that has the cartoon style and contains the real person.
The step of analyzing the first image to obtain the facial-feature parameter information comprises:
analyzing the first image by means of face recognition to obtain the facial-feature parameter information of the person in the real-person image; the facial-feature parameter information comprises: the position parameters of the two pupils, the position parameter of the center point between the upper and lower lips, and the angle between the line joining the two pupils and the horizontal.
The step of converting the facial-feature parameter information into the corresponding facial-feature position parameters in the cartoon style comprises:
converting the facial-feature parameter information, according to a preset uniform scaling parameter and/or rotation parameter, into the corresponding facial-feature position parameters in the cartoon style; these position parameters likewise comprise: the positions of the two pupils, the center point between the upper and lower lips, and the angle between the pupil line and the horizontal, all in the cartoon style.
The step of generating, according to the facial-feature position parameters, the second image that has the cartoon style and contains the real person comprises:
rendering the real-person image, according to the facial-feature position parameters and the line rendering style specified for the second image, into a second image that has the cartoon style and contains the real person; the line rendering style comprises: line color, line thickness, thickness-variation pattern, transparency, and line texture.
Alternatively, this step comprises:
generating, according to the facial-feature position parameters and the line rendering style specified for the second image, a first intermediate image with that line rendering style (line color, thickness, thickness-variation pattern, transparency, and texture);
rendering the first intermediate image, according to the color rendering style specified for the second image, into a second image with that color rendering style; the color rendering style comprises the way hues are combined in the picture.
Alternatively, this step comprises:
generating, according to the facial-feature position parameters and the line rendering style specified for the second image, a second intermediate image with that line rendering style (line color, thickness, thickness-variation pattern, transparency, and texture);
generating from the second intermediate image, according to the color rendering style specified for the second image, a third intermediate image with that color rendering style (the way hues are combined in the picture);
rendering the third intermediate image, according to the brush-stroke rendering style specified for the second image, into a second image with that brush-stroke style; the brush-stroke rendering style comprises the style of the pen-stroke traces in the picture.
The step of generating from the second intermediate image the third intermediate image with the specified color rendering style comprises:
creating a layer that carries the color rendering style;
compositing the layer containing the second intermediate image with the color-style layer to obtain the third intermediate image with that color rendering style.
The step of rendering the third intermediate image into the second image with the specified brush-stroke style comprises:
creating a layer that carries the brush-stroke rendering style;
compositing the layer containing the third intermediate image with the brush-stroke-style layer to obtain the second image with that brush-stroke style.
Alternatively, the step of generating the second image comprises:
rendering the first image directly, according to the facial-feature position parameters and the color and/or brush-stroke rendering styles, to obtain the second image.
The method may further comprise: performing local stylization processing on one or more of the facial features of the real person in the second image.
The local stylization processing of one or more facial features of the real person in the second image comprises:
applying a fixed stylized design to one or more of the facial features, or applying effect make-up. Effect make-up comprises: adding a pupil layer at the pupil position; adding a transparent or translucent eye-shadow layer above the uppermost line of the eye; adding an eyelash layer above the eye-shadow layer at the eyeliner position; adding an opaque or translucent blush layer on the cheeks; adding an eyebrow layer at the eyebrow position; changing the hue, lightness and saturation of the lips; adding a shadow layer in the nose region; facial optimization; hair processing; face-shape selection; and generating personalized expressions on the face.
Generating personalized expressions on the face comprises:
generating on the face personalized expressions consistent with the background style of the cartoon style in the second image; or
directly generating on the face personalized expressions that fit the cartoon style in the second image.
An embodiment of the invention also provides a device for converting a real-person image into a cartoon-style image, comprising:
an acquisition module for acquiring a first image that contains the real-person image;
an analysis module for analyzing the first image to obtain the facial-feature parameter information of the person in the real-person image;
a conversion module for converting the facial-feature parameter information into the corresponding facial-feature position parameters in the cartoon style;
a generation module for generating, according to the facial-feature position parameters, a second image that has the cartoon style and contains the real person.
The above technical scheme has the following beneficial effects:
by analyzing the first image containing the real-person image, parameter information describing the person's facial features is obtained; that information is converted into the corresponding facial-feature position parameters in the cartoon style; and a second image that has the cartoon style and contains the real person is generated from those parameters. A real person can thus become a character in a cartoon scene of a specific style, with his or her likeness matching the drawing style of the scene: the computer simulates the rendering techniques of the specific cartoon style, rendering and retouching the person's facial features so that they match the visual style of the picture.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention for converting a real-person image into a cartoon-style image;
Fig. 2 is an example of processing the first image in the method of Fig. 1 according to a line rendering style;
Fig. 3 is an example of processing the image obtained in Fig. 2 according to a color rendering style;
Fig. 4 is an example of processing the first image of the method of Fig. 1 according to the color and brush-stroke rendering styles after the line rendering style has been applied;
Fig. 5 is an example of processing the first image containing the real-person image directly according to the color and brush-stroke rendering styles;
Fig. 6 is a first example of applying a specific illustrative style to particular features of the real person in the generated cartoon-style second image;
Fig. 7 is a second example of applying a specific illustrative style to particular features of the real person in the generated cartoon-style second image;
Fig. 8 is an example of local fixed feature patterns set for the real person in the generated cartoon-style second image according to different picture styles;
Fig. 9 is an example of the effect obtained after adjusting the transparency of the layer containing the real person in the generated cartoon-style second image and compositing it with the original first image;
Fig. 10 is an example of color identification and simulation applied to a designated region of the real person in the generated cartoon-style second image;
Fig. 11 is an example of optimizing the face of the real person in the generated cartoon-style second image;
Fig. 12 is an example of changing the face shape of the real person in the generated cartoon-style second image;
Fig. 13 is an example of selecting the face shape of the real person in the generated cartoon-style second image;
Fig. 14 is an example of changing the hairstyle of the real person in the generated cartoon-style second image;
Fig. 15 is a first example of adding an expression to the real person in the generated cartoon-style second image;
Fig. 16 is a second example of adding an expression to the real person in the generated cartoon-style second image;
Fig. 17 is a third example of adding an expression to the real person in the generated cartoon-style second image.
Detailed description of the embodiments
To make the technical problem, technical scheme and advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, an embodiment of the invention provides a method for converting a real-person image into a cartoon-style image, comprising:
Step 11: acquire a first image that contains the real-person image;
Step 12: analyze the first image to obtain parameter information describing the facial features of the person in the real-person image;
Step 13: convert the facial-feature parameter information into the corresponding facial-feature position parameters in the cartoon style;
Step 14: generate, according to the facial-feature position parameters, a second image that has the cartoon style and contains the real person.
In this embodiment, the first image containing the real-person image is analyzed to obtain the facial-feature parameter information; that information is converted into the corresponding facial-feature position parameters in the cartoon style; and a second image with the cartoon style containing the real person is generated from those parameters. The real person thus becomes a character in a cartoon scene of a specific style, with his or her likeness matching the drawing style of the scene: the computer simulates the rendering techniques of the specific cartoon style, rendering and retouching the person's facial features so that they match the visual style of the picture.
In the method, Step 12 comprises: analyzing the first image by means of face recognition to obtain the facial-feature parameter information, which comprises: the positions of the two pupils, the center point between the upper and lower lips, and the angle between the pupil line and the horizontal.
Step 13 comprises: converting the facial-feature parameter information, according to a preset uniform scaling parameter and/or rotation parameter, into the corresponding facial-feature position parameters in the cartoon style, which comprise the same three quantities expressed in the cartoon style.
Each style of cartoon has a software program of its corresponding style. The method of turning the first image containing the real-person image into the particular cartoon style is as follows: the real-person image is scanned to obtain the positions of the facial features in the photo, including the positions of the two pupils, the center point between the upper and lower lips, and the angle between the pupil line and the horizontal; then, according to preset designated parameters, adjustments such as uniform scaling and rotation bring the facial-feature positions to the size required by the program of the given cartoon style.
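The scan-and-adjust procedure above amounts to a similarity transform: the pupil distance fixes the uniform scale, the pupil line fixes the rotation, and both are applied to the detected feature positions. The sketch below is only an illustration under assumed inputs, not the patent's actual program; all names and the template parameter are invented for the example.

```python
import math

def align_to_template(left_pupil, right_pupil, lip_center, template_eye_dist):
    """Compute the uniform scale and rotation that bring the detected
    facial-feature positions to a cartoon template's required size
    (Step 13 of the method); names here are illustrative."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    eye_dist = math.hypot(dx, dy)
    scale = template_eye_dist / eye_dist      # preset uniform scaling parameter
    angle = math.atan2(dy, dx)                # pupil line vs. the horizontal

    def transform(p):
        # rotate by -angle about the left pupil, then scale uniformly
        px, py = p[0] - left_pupil[0], p[1] - left_pupil[1]
        c, s = math.cos(-angle), math.sin(-angle)
        return ((px * c - py * s) * scale, (px * s + py * c) * scale)

    return scale, math.degrees(angle), transform(lip_center)

# pupils 60 px apart, template wants 120 px: scale 2, no rotation needed
scale, angle_deg, lip = align_to_template((100, 100), (160, 100), (130, 150), 120)
```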
In the method, Step 14 specifically comprises: Step 14A, rendering the real-person image, according to the facial-feature position parameters and the line rendering style specified for the second image, into a second image that has the cartoon style and contains the real person; the line rendering style comprises: line color, line thickness, thickness-variation pattern, transparency, and line texture.
When the real-person image undergoes stylized image processing for a specified cartoon style, rendering can follow the line rendering style specified for the generated image. The line rendering style refers to the color, thickness, thickness-variation pattern, texture and so on of the lines. The part of the photo containing the person is rendered in line-drawing form. The line color, thickness, thickness-variation pattern, transparency and texture pattern are specified in the software so that they match the line rendering style used elsewhere in the picture. Fig. 2 shows an example of processing the real-person image according to the line rendering style specified for the generated image.
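As one way to picture the line-rendering pass, a gradient threshold can pull stroke pixels out of a grayscale photo. This is a minimal sketch of the idea only; the real software additionally controls line color, thickness, weight variation, transparency and texture, none of which this example attempts.

```python
import numpy as np

def line_render(gray, threshold=30, line_value=0, paper_value=255):
    """Minimal line-style rendering sketch: pixels whose local gradient
    magnitude exceeds a threshold become line pixels, the rest become
    blank paper.  Illustrative only."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    out = np.where(mag > threshold, line_value, paper_value)
    return out.astype(np.uint8)

# a vertical step edge in a flat image becomes a vertical line
img = np.zeros((5, 6))
img[:, 3:] = 200
lines = line_render(img)
```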
In the method, Step 14 can also specifically comprise:
Step 14B1: generating, according to the facial-feature position parameters and the line rendering style specified for the second image, a first intermediate image with that line rendering style (line color, thickness, thickness-variation pattern, transparency, and texture);
Step 14B2: rendering the first intermediate image, according to the color rendering style specified for the second image, into a second image with that color rendering style; the color rendering style comprises the way hues are combined in the picture.
That is, after the first image has been processed with the line rendering style above, it is further rendered according to the specified color rendering style. Concretely: the color rendering style refers to the combination of hues in the picture, the expression of saturation and lightness, the treatment of light-dark relations, the pattern of brush texture, and the pattern of color gradients. The color rendering style of the generated image is specified in the software so that it is consistent with the color rendering of the whole picture it belongs to. For example, if the highlight color of the characters' skin in the picture is #f5bb9d and the shadow color is #db916a, then the highlight color of the generated image (the character's face) is also #f5bb9d and the shadow color is also #db916a. If the overall picture style uses flat color blocks with no gradients, the generated face is likewise flat-colored with no gradient, i.e. the entire skin is a single color value. Photo sources of different hues thus yield results with identical or similar color values. Users' photos differ in lighting, so the facial colors in the photos differ; when the image is generated, the picture is rendered with a specific color value, e.g. #f2c79f, so that image sources of different hues produce identical or similar skin color in the result. Fig. 3 shows an example of processing according to the color rendering style after the line rendering style has been applied.
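The fixed-palette idea above (every source photo mapping onto the same highlight and shadow colors) can be illustrated by remapping luminance onto a two-color ramp, using the patent's example values #f5bb9d and #db916a. This is the simplest possible version; a real color-style pass would be per-region and style-driven.

```python
import numpy as np

HIGHLIGHT = (0xf5, 0xbb, 0x9d)   # #f5bb9d, the patent's example highlight
SHADOW    = (0xdb, 0x91, 0x6a)   # #db916a, the patent's example shadow

def render_skin_colour(gray):
    """Sketch of the colour-style pass: whatever the source photo's hue,
    luminance is remapped onto a fixed highlight/shadow ramp so every
    face comes out with the same palette.  Illustrative only."""
    t = gray.astype(float) / 255.0            # 0 = darkest, 1 = brightest
    out = np.empty(gray.shape + (3,), dtype=np.uint8)
    for c in range(3):
        out[..., c] = np.round(SHADOW[c] + t * (HIGHLIGHT[c] - SHADOW[c]))
    return out

# brightest pixels become the highlight colour, darkest the shadow colour
face = render_skin_colour(np.array([[0, 255]], dtype=np.uint8))
```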
In the method, Step 14 can also specifically comprise:
Step 14C1: generating, according to the facial-feature position parameters and the line rendering style specified for the second image, a second intermediate image with that line rendering style (line color, thickness, thickness-variation pattern, transparency, and texture);
Step 14C2: generating from the second intermediate image, according to the color rendering style specified for the second image, a third intermediate image with that color rendering style (the way hues are combined in the picture);
Step 14C3: rendering the third intermediate image, according to the brush-stroke rendering style specified for the second image, into a second image with that brush-stroke style; the brush-stroke rendering style comprises the style of the pen-stroke traces in the picture.
That is, after the first image has been processed with the line and color rendering styles above, it is further processed according to the brush-stroke rendering style specified for the generated image. The brush-stroke rendering style refers to the style of the pen-stroke traces in the picture, i.e. the surface texture: the transparency contrast of colors, the solid-and-light variation produced by stroke pressure, the gradient style of colors, the texture pattern, and the color and pattern of shadows together achieve the expressive effect of the specific style. The brush-stroke style of the generated image is specified so that it is consistent with the brush strokes of the whole picture it belongs to, and the generated image carries the brush texture of the specific style.
Further, in Step 14B2, or in Steps 14C2 and 14C3, the color and brush-stroke rendering styles are implanted into the picture as follows:
creating a layer that carries the color rendering style;
compositing the layer containing the second intermediate image with the color-style layer to obtain the third intermediate image with the color rendering style;
and further:
creating a layer that carries the brush-stroke rendering style;
compositing the layer containing the third intermediate image with the brush-stroke-style layer to obtain the second image with the brush-stroke style.
Specifically, Fig. 4 shows an image in which the first image has been processed with the line, color and brush-stroke rendering styles. The concrete method is: a layer with the specific color and brush-stroke rendering styles is prepared in advance and then combined with the rendered layer that carries the real person's features (such as the line-rendered or color-rendered facial-feature image mentioned above), compositing a result that both has the real person's features and carries the specific color and brush-stroke rendering styles, namely the second image.
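The layer-merging described above is ordinary alpha ("over") compositing. A minimal version, with all names assumed for the example:

```python
import numpy as np

def composite(base_rgb, overlay_rgb, overlay_alpha):
    """'Over' compositing of a style layer onto the rendered feature
    layer, as in the layer-merging steps above.  overlay_alpha is a
    float array in [0, 1], where 1 means the overlay is fully opaque."""
    a = overlay_alpha[..., None]                      # broadcast over RGB
    out = overlay_rgb.astype(float) * a + base_rgb.astype(float) * (1.0 - a)
    return np.round(out).astype(np.uint8)

# a half-transparent style layer mixes equally with the base layer
base = np.zeros((2, 2, 3), dtype=np.uint8)
style = np.full((2, 2, 3), 200, dtype=np.uint8)
merged = composite(base, style, np.full((2, 2), 0.5))
```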
Another way is: rendering the first image directly, according to the facial-feature position parameters and the color and/or brush-stroke rendering styles, to obtain the second image.
Fig. 5 shows an example of rendering the original image directly (i.e. the first image containing the real-person image, without line rendering) into a second image whose color and brush-stroke rendering styles match those of the other parts of the picture.
Further, after the second image with the given cartoon style has been generated, the method may further comprise: performing local stylization processing on one or more of the facial features of the real person in the second image.
This specifically comprises region-by-region processing. By scanning the first image, the facial features are located, and different positions receive different region-specific treatment. (Region processing can occur in three situations: 1. the first image (the original image) containing the real-person image is processed directly, i.e. together with the line rendering; 2. the generated line-rendered image is processed locally; 3. local processing is applied after the line, color and brush-stroke rendering.) Examples include enlarging the eyes or widening their lines, blurring the eyebrows, and rendering any feature with a designated color, or changing and filtering its saturation, lightness and brightness contrast.
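The region-by-region idea (locate a feature, transform only its pixels) can be sketched generically. The `func` argument stands in for whichever effect (blur, line widening, recoloring) the style calls for; all names are illustrative.

```python
import numpy as np

def local_adjust(img, region, func):
    """Region-by-region processing sketch: apply a transform only inside
    a rectangular facial region (x0, y0, x1, y1), leaving the rest of
    the image untouched.  Illustrative only."""
    x0, y0, x1, y1 = region
    out = img.copy()
    out[y0:y1, x0:x1] = func(img[y0:y1, x0:x1])
    return out

# brighten only a 2x2 "eye" region of a blank image
img = np.zeros((4, 4), dtype=np.uint8)
out = local_adjust(img, (1, 1, 3, 3), lambda r: r + 5)
```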
Further, the local stylization of one or more facial features of the real person in the second image comprises:
applying a fixed stylized design to one or more of the facial features, or applying effect make-up. Effect make-up comprises: adding a pupil layer at the pupil position; adding a transparent or translucent eye-shadow layer above the uppermost line of the eye; adding an eyelash layer above the eye-shadow layer at the eyeliner position; adding an opaque or translucent blush layer on the cheeks; adding an eyebrow layer at the eyebrow position; changing the hue, lightness and saturation of the lips; and adding a shadow layer in the nose region.
Fig. 6 shows the image produced by processing particular features of the real person in the cartoon-style second image with a specific illustrative technique. There are two situations: 1. the feature-specific processing is performed together with generating the line-rendered second image; 2. local processing is applied after the line, color and brush-stroke rendering. This means applying a specific stylized drawing technique to a designated feature (e.g. the mouth). A feature-specific drawing technique refers to the fixed way a given drawing style expresses a particular feature: the length of specific lines, the way lines are combined, the patterns of thickness variation, depth variation and solid-light variation, the patterns of shading and brush texture, and the colors and color combinations. Take, for example, the technique the Tsukasa Hojo (北条司) style uses for eyebrows: as shown in Fig. 7, when the image is processed, the position and region of the user's eyebrows are found and the technique of that style is implanted into the region as a fixed pattern; or an eyebrow material file in that style is prepared in advance and implanted into the image at the designated position. The implementation for the nose, mouth, eyes, facial shading and other features of a specific style is the same.
Fig. 8 shows local fixed feature patterns set according to different picture styles, in three situations: 1. performed together with the line rendering; 2. performed after the line, color and brush-stroke rendering; 3. performed between any of the line, color and brush-stroke rendering passes. Face shape, nose type, mouth type, eyebrow type and hairstyle can all be fixed features in a given style, with unchanging patterns: whatever the corresponding feature of the original image looks like, the result always shows the fixed pattern. For example, when images are made for different users, some facial features are uniformly rendered into the same shape, while other features keep their original characteristics.
As shown in Fig. 9, the method can also make the result look more realistic: the facial region of the original image is extracted and placed above or below the generated image; the transparency of the generated image or of the original image is adjusted, and the original image is blended with the generated layer, so that the characteristics of the original image show through in the generated result.
Further, effect make-up also comprises: facial optimization, hair processing, face-shape selection, and generating personalized expressions on the face.
Specifically: according to the facial-feature positions determined by the computer, cosmetics can be added region by region after the line, color and brush-stroke rendering is complete, and the user can choose the colors and patterns of the cosmetics according to personal preference. For example: the pupil position is determined and a pupil layer is added over it in the generated image; this layer can have different colors and patterns for the user to choose from. Adding eye shadow: the uppermost line of the eye is found and an opaque or translucent eye-shadow layer is added above it. Adding eyelashes: the eyeliner position is found and, starting from it, an eyelash layer is added above the eye-shadow layer; the user can adjust the curvature and length of the lashes. Adding blush: an opaque or translucent blush layer is added on the cheeks of the generated image. Changing the eyebrow shape: once the software recognizes the eyebrow region, the user can change its color, size, length, thickness and curvature. Adding eyebrows: the eyebrow position is found and a pre-designed eyebrow-shape material is added, changing the eyebrow pattern; because the added eyebrow shape differs from the original, its surrounding area is given a color consistent with the skin tone so that the original eyebrow does not show through. Adding lipstick: the computer finds the mouth region, and the user can change the hue, lightness and saturation of the lips through a panel; a transparent or opaque color layer can also be added over the region to change the lip color, and patterns can be loaded into the lip region for different shading effects. Beautifying the nose: the nose region is found and a shadow layer is added to change the apparent height of the nose, or a pre-designed nose shape is added directly.
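One of the make-up steps above, changing the lips' hue while keeping their shading, amounts to blending a chosen color over a masked region with partial opacity. A sketch under assumed inputs (mask detection is taken as given; all names are illustrative):

```python
import numpy as np

def apply_lipstick(img, lip_mask, colour, opacity=0.6):
    """Sketch of the 'add lipstick' effect: blend a chosen colour over
    the lip region as a translucent layer, shifting its hue while the
    underlying shading still shows through.  lip_mask is boolean."""
    out = img.astype(float).copy()
    target = np.array(colour, dtype=float)
    out[lip_mask] = out[lip_mask] * (1 - opacity) + target * opacity
    return np.round(out).astype(np.uint8)

# tint one masked pixel toward red at 50% opacity
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
tinted = apply_lipstick(img, mask, (200, 0, 0), opacity=0.5)
```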
Color recognition and simulation are performed on a designated region. On the file generated after the line drawing and coloring, new material layers (eyebrows, eyes, nose, mouth) are overlaid; the new material covers the corresponding position of the original image. So that the original feature leaves no visible trace, a band of a designated color is added around the edge of the material. The color value of this band is the same as that of a designated area of the generated file (for example, when an eyebrow is added, the eyebrow is surrounded by a ring of color whose value depends on the skin color around the eyebrow in the generated image file, as shown in Figure 10).
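One way to build the concealing ring described above: find the cells just outside the pasted material's region, then paint them with the average of the nearby skin pixels. A sketch on a small boolean grid (the grid layout and helper names are illustrative, not from the patent):

```python
def ring_cells(mask):
    """Cells 4-adjacent to the masked (material) region but outside it."""
    h, w = len(mask), len(mask[0])
    ring = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]:
                        ring.add((ny, nx))
    return ring

def mean_colour(pixels):
    """Average RGB of the sampled skin pixels; this becomes the ring colour."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

mask = [[False, False, False],
        [False, True,  False],
        [False, False, False]]
print(sorted(ring_cells(mask)))   # the four cells around the centre
```

Painting every ring cell with `mean_colour` of its skin neighborhood hides the seam between the pasted eyebrow and the original one.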
Preferably, the cosmetics can also be animated: a cosmetic layer file can be an animation file, so that, for example, the pattern and color of the eye shadow, blush, or lipstick twinkle like stars.
Preferably, face optimization can also be performed. Positions that affect facial appearance are defined according to aesthetic principles; shadows, wrinkle lines, and colors that detract from the appearance are identified and deleted or faded. For example: shadows other than those defining the face shape, such as lighting-induced shadows on the bridge of the nose, the nostril wings, the mouth, the eye sockets, eye bags, and wrinkles, as shown in Figure 11.
Preferably, hair treatment. A real person's hair may cover part of the face, causing hair to appear on the face. Treatment methods: 4.1 Eliminate the hair: determine the position of the hair and fill the hair region with the skin color of the surrounding face, thereby removing the hair. 4.2 Change the hair color: determine the region where hair appears on the face and adjust its color to the color of the user-selected hairstyle, so that it blends into that hairstyle.
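Both hair removal (4.1) and the shadow fading of the face-optimization step can be expressed as pulling a pixel toward the surrounding skin tone: a strength of 1.0 replaces the pixel outright (erase), while a smaller strength only fades it. A sketch with hypothetical pixel values:

```python
def toward_skin(pixel, skin, strength):
    """Move a hair/shadow/wrinkle pixel toward the local skin tone.

    strength = 1.0 fills the pixel with the skin colour (hair removal);
    strength < 1.0 only fades it (shadow/wrinkle softening).
    """
    return tuple(round(p + strength * (s - p)) for p, s in zip(pixel, skin))

shadow = (150, 110, 95)   # hypothetical eye-socket shadow pixel
skin = (224, 172, 150)    # hypothetical surrounding skin tone
print(toward_skin(shadow, skin, 1.0))  # fully replaced by skin colour
print(toward_skin(shadow, skin, 0.5))  # faded halfway toward skin
```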
Further, the method by which the user selects his or her own face shape, as shown in Figure 12, comprises the following functions:
1. Select a face shape; 2. Generate the image in the background; 3. Extract; 4. Generate multiple avatars; 5. Modify.
Functions 1, 2, and 3, together with function 4, can be performed in either the foreground or the background; functions 3 and 4 can swap places, and function 1 can also be placed after function 4.
All functions can be completed entirely at the front end (the terminal's interface) or entirely at the back end (the terminal's background or the server).
Function 1 (select a face shape): choose a satisfactory face shape from the several face shapes provided.
Function 2 (generate the image in the background): scan the real-person image to obtain the positions of the facial features in the photograph, including the positions of the two pupils, the center of the upper and lower lips, and the angle between the line joining the two pupils and the horizontal; apply adjustments such as uniform zoom and rotation according to preset parameters; then perform a series of rendering operations on the image, or render any part in a designated color, change the saturation, lightness, brightness, and contrast, or apply filters, to generate the facial features.
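The scan step above reduces the face to a few numbers: the two pupil centres give both the roll angle relative to the horizontal and, via their distance, the uniform zoom needed to match the cartoon template. A sketch (the template's 64-px pupil spacing is an assumed example, not a value from the patent):

```python
import math

def pose_from_pupils(left, right):
    """Return (angle_deg, inter_pupil_distance) from two (x, y) pupil centres."""
    dx, dy = right[0] - left[0], right[1] - left[1]
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)

angle, ipd = pose_from_pupils((100, 120), (160, 130))
zoom = 64 / ipd   # uniform zoom so the pupils match the template's assumed spacing
print(round(angle, 1), round(zoom, 3))
```

Rotating the photo by `-angle` and scaling by `zoom` aligns the detected features with the cartoon template before rendering.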
Function 3 (extract): the face-shape template layer is a set of cut-out face-shape masks; when the user selects a face shape at the terminal, the cut-out mask changes with the selection. After the user has chosen a face shape, the software merges the shape of the mask with the generated image and then extracts it.
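The extraction step is a per-pixel mask test: pixels under the opaque part of the cut-out face-shape mask are kept, and everything else is dropped. A grid sketch (None marks transparency; the tiny image is illustrative):

```python
def extract(image, mask):
    """Keep image pixels where the face-shape mask is opaque (True)."""
    return [[px if keep else None
             for px, keep in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

image = [["a", "b"], ["c", "d"]]
mask = [[True, False], [False, True]]
print(extract(image, mask))  # [['a', None], [None, 'd']]
```

Swapping in a different mask for each selectable face shape yields the per-shape cutouts that function 4 then merges.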
Function 4 (generate multiple avatars): merge several different face-shape masks with the generated image (that is, the image after the coloring or brush-stroke rendering described above) to produce images with several different face shapes, which are then displayed together on the user terminal.
Function 5 (modify): the user modifies any unsatisfactory avatar selected in the above functions, as shown in Figure 13.
The user can change his or her hairstyle at the terminal at any time. Because face shapes are not fixed (some are wide and some are narrow) and some hairstyles must reveal the width of the face, all hairstyle widths are sized to reveal the widest face shape. A gap would therefore appear between a hairstyle and a narrow face, so the software inserts a hairstyle underlay matching the hairstyle's color, placed behind the hairstyle layer. In this way, whichever face shape the user selects, the face is fully shown and no gap appears between the hairstyle and the face, as shown in Figure 14. Further, the hairstyle color can be changed during this hairstyle processing. Every hairstyle has its own corresponding underlay, whose shape and color vary with the shape and color of the hairstyle.
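The gap-filling trick above is a question of layer order: the underlay sits behind the face and hairstyle layers, so it only shows where both are transparent. A sketch of bottom-to-top compositing over one scan-line, with None as transparency (the layer contents are illustrative):

```python
def composite(layers):
    """Paint layers bottom-to-top; a higher layer wins wherever it is opaque."""
    out = list(layers[0])
    for layer in layers[1:]:
        for i, px in enumerate(layer):
            if px is not None:
                out[i] = px
    return out

# One scan-line across the head: underlay everywhere, a narrow face in the
# middle, hair only at the sides; the underlay fills the gap between them.
underlay = ["u", "u", "u", "u", "u"]
face     = [None, None, "f", None, None]
hair     = ["h", None, None, None, "h"]
print(composite([underlay, face, hair]))  # ['h', 'u', 'f', 'u', 'h']
```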
In addition, adding personalized emoticons to the generated second image can be realized through a "personalized emoticons" button. If no cartoon avatar exists yet, clicking the button starts the creation of a cartoon avatar in the designated style; once the avatar is finished, the user selects the style of the dynamic expression, and dynamic expressions based on the finished cartoon avatar then appear on the client. If a cartoon avatar already exists, the user clicks "personalized emoticons", selects a pattern, and directly sees the result of the selected expression. The back end holds several ready-made expression folders, each representing one expression and containing several expression files. The generated cartoon avatar serves as the bottom layer and the expression is placed on top of it, producing the final expression. For a dynamic expression, the folder contains several expression images, which are merged in order with the originally generated image and played in sequence to generate an animation file.
The user can send an expression in a scene by clicking a generated personalized-emoticon icon. Alternatively, the system can monitor the text or voice the user inputs, automatically retrieve the expression corresponding to the text, and send it directly.
The personalized emoticons generated for the face comprise:
personalized emoticons whose background style is consistent with the cartoon style of said second image; that is, the rendering style of the expression is consistent with that of the cartoon avatar. If the cartoon style is black-and-white line art, the generated dynamic expression is also black-and-white line art; if the cartoon avatar is colored, the generated dynamic expression is also colored, as shown in Figure 16.
Further, the above expressions can also be produced in the cloud and returned to the terminal's foreground after processing: after the cartoon avatar is finished, the user selects the dynamic-expression style at the terminal, the data are passed to the back end, and the back end generates the corresponding dynamic-expression file from the selected data and sends it back to the user terminal.
Alternatively, the user selects an expression pattern at the terminal and the animation file is generated directly.
Personalized emoticons can be static or dynamic.
Generating a static personalized emoticon: first produce the cartoon avatar in the designated style, then add an expression layer above, below, or both above and below the generated image, producing a file that carries the expression layer's colors. Generating a dynamic personalized emoticon: first produce the cartoon avatar in the designated style, then add an expression layer above, below, or both above and below the generated image, where the expression layer holds several expression files with different content. Each is merged in turn with the cartoon-avatar layer into a frame, and the frames are shown for their designated durations, finally forming an animation file. One or more frames may show the original cartoon avatar (that is, with the expression layer hidden), so that the avatar's original appearance is visible for a moment while the animation plays, as shown in Figure 15.
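The dynamic-expression assembly above can be sketched as merging each expression file with the avatar layer into one frame, with optional bare-avatar frames so the original head shows during playback. The frame representation and duration below are illustrative assumptions:

```python
def build_animation(avatar, expressions, bare_frames=1, frame_ms=100):
    """Return (frame, duration_ms) pairs: bare-avatar frames first
    (expression layer hidden), then one merged frame per expression file."""
    frames = [(avatar, frame_ms)] * bare_frames
    frames += [((avatar, expr), frame_ms) for expr in expressions]
    return frames

anim = build_animation("head", ["smile", "wink", "laugh"])
print(len(anim))  # 4 frames: 1 bare + 3 merged
```

In a real implementation each `(avatar, expr)` pair would be a composited bitmap, and the pair list would be encoded as an animation file.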
When generating personalized emoticons, an emoticon matching the cartoon style can also be generated directly on the face in said second image. As shown in Figure 17, the personalized emoticon is based on the head image generated by the above method; that is, the second image itself contains no body or background, only a head image.
When a personalized emoticon is added to the real person's face in the second image, the newly added emoticon covers the corresponding part of the original image. So that the original part leaves no visible trace, a band of a designated color is added around the expression; the color value of this band is the same as that of a designated area of the generated file (for example, if the added expression includes a mouth and eyes, the mouth and eyes are surrounded by a ring of color whose value depends on the skin color around the mouth and around the eyes in the generated image file).
An embodiment of the invention also provides a device for converting a real-person image into a cartoon-style image, comprising:
an acquisition module, configured to acquire a first image containing the real-person image;
an analysis module, configured to analyze said first image to obtain parameter information representing the facial features of the person in said real-person image;
a conversion module, configured to convert the parameter information representing the person's facial features into position parameter information of the corresponding facial features in the cartoon style;
a generation module, configured to generate, according to said facial-feature position parameter information, a second image that has said cartoon style and contains said real person.
It should be noted that the embodiments of this device correspond fully to the embodiments of the method described above; therefore, all implementations in the method embodiments also apply to the device embodiments and achieve the same technical effects, and they are not repeated here.
The above is a preferred embodiment of the invention. It should be pointed out that those skilled in the art can make further improvements and refinements without departing from the principle of the invention, and such improvements and refinements shall also fall within the protection scope of the invention.

Claims (13)

1. A method for converting a real-person image into a cartoon-style image, characterized by comprising:
acquiring a first image containing the real-person image;
analyzing said first image to obtain parameter information representing the facial features of the person in said real-person image;
converting the parameter information representing the person's facial features into position parameter information of the corresponding facial features in a cartoon style;
generating, according to said facial-feature position parameter information, a second image that has said cartoon style and contains said real person.
2. The method for converting a real-person image into a cartoon-style image according to claim 1, characterized in that the step of analyzing said first image to obtain the parameter information representing the person's facial features in said real-person image comprises:
analyzing said first image through face-recognition technology to obtain the parameter information representing the person's facial features in said real-person image, wherein said parameter information comprises: position parameter information of the two pupils, parameter information of the center of the upper and lower lips, and parameter information of the angle between the line joining the two pupils and the horizontal.
3. The method for converting a real-person image into a cartoon-style image according to claim 2, characterized in that the step of converting the parameter information representing the person's facial features into the position parameter information of the corresponding facial features in the cartoon style comprises:
converting, according to a preset uniform zoom parameter and/or rotation parameter, the parameter information representing the person's facial features into the position parameter information of the corresponding facial features in said cartoon style, wherein said facial-feature position parameter information comprises: position parameter information of the two pupils, parameter information of the center of the upper and lower lips, and parameter information of the angle between the line joining the two pupils and the horizontal in said cartoon style.
4. The method for converting a real-person image into a cartoon-style image according to claim 3, characterized in that the step of generating, according to said facial-feature position parameter information, the second image that has said cartoon style and contains said real person comprises:
rendering, according to said facial-feature position parameter information and the line-drawing style designated for the second image, said real-person image into the second image that has said cartoon style and contains said real person, wherein said line-drawing style comprises: the color of the lines, the thickness of the lines, the way the line weight varies, the transparency, and the texture of the lines.
5. The method for converting a real-person image into a cartoon-style image according to claim 3, characterized in that the step of generating, according to said facial-feature position parameter information, the second image that has said cartoon style and contains said real person comprises:
generating, according to said facial-feature position parameter information and the line-drawing style designated for the second image, a first intermediate image having said line-drawing style, wherein said line-drawing style comprises: the color of the lines, the thickness of the lines, the way the line weight varies, the transparency, and the texture of the lines;
rendering said first intermediate image, according to the coloring style designated for the second image, into a second image having said coloring style, wherein said coloring style comprises: the way hues are combined in the picture.
6. The method for converting a real-person image into a cartoon-style image according to claim 3, characterized in that the step of generating, according to said facial-feature position parameter information, the second image that has said cartoon style and contains said real person comprises:
generating, according to said facial-feature position parameter information and the line-drawing style designated for the second image, a second intermediate image having said line-drawing style, wherein said line-drawing style comprises: the color of the lines, the thickness of the lines, the way the line weight varies, the transparency, and the texture of the lines;
generating from said second intermediate image, according to the coloring style designated for the second image, a third intermediate image having said coloring style, wherein said coloring style comprises: the way hues are combined in the picture;
rendering said third intermediate image, according to the brush-stroke style designated for the second image, into a second image having said brush-stroke style, wherein said brush-stroke style comprises: the style of the brush marks in the picture.
7. The method for converting a real-person image into a cartoon-style image according to claim 6, characterized in that the step of generating from said second intermediate image, according to the coloring style designated for the second image, the third intermediate image having said coloring style comprises:
establishing a layer carrying said coloring style;
compositing the layer containing said second intermediate image with the layer carrying said coloring style to obtain the third intermediate image having said coloring style.
8. The method for converting a real-person image into a cartoon-style image according to claim 6 or 7, characterized in that the step of rendering said third intermediate image, according to the brush-stroke style designated for the second image, into the second image having said brush-stroke style comprises:
establishing a layer carrying said brush-stroke style;
compositing the layer containing said third intermediate image with the layer carrying said brush-stroke style to obtain the second image having said brush-stroke style.
9. The method for converting a real-person image into a cartoon-style image according to claim 3, characterized in that the step of generating, according to said facial-feature position parameter information, the second image that has said cartoon style and contains said real person comprises:
rendering said first image directly, according to said facial-feature position parameter information, in the coloring style and/or the brush-stroke style to obtain said second image.
10. The method for converting a real-person image into a cartoon-style image according to claim 1, characterized by further comprising:
performing localized stylization on one or more of the facial features of the real person in said second image.
11. The method for converting a real-person image into a cartoon-style image according to claim 10, characterized in that performing localized stylization on one or more of the facial features of the real person in said second image comprises:
performing stylized shaping on one or more of the facial features of the real person in said second image, or applying makeup effects, wherein the makeup effects comprise: adding a pupil layer at the pupil position; adding a transparent or translucent eye-shadow layer above the uppermost line of the eye; adding an eyelash layer above said eye-shadow layer at the eyeliner position; adding an opaque or translucent blush layer at the cheek position; adding an eyebrow layer at the eyebrow position; changing the hue, lightness, and saturation of the lips at the mouth position; adding a shadow layer in the nose region; facial optimization; hair treatment; face-shape selection; and generating personalized emoticons for the face.
12. The method for converting a real-person image into a cartoon-style image according to claim 11, characterized in that the personalized emoticons generated for the face comprise:
personalized emoticons whose background style is consistent with the cartoon style of said second image; or
personalized emoticons matching the cartoon style generated directly on the face in said second image.
13. A device for converting a real-person image into a cartoon-style image, characterized by comprising:
an acquisition module, configured to acquire a first image containing the real-person image;
an analysis module, configured to analyze said first image to obtain parameter information representing the facial features of the person in said real-person image;
a conversion module, configured to convert the parameter information representing the person's facial features into position parameter information of the corresponding facial features in the cartoon style;
a generation module, configured to generate, according to said facial-feature position parameter information, a second image that has said cartoon style and contains said real person.
CN2012100936474A 2012-03-31 2012-03-31 Method and device for converting real character image to cartoon-style image Pending CN102682420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100936474A CN102682420A (en) 2012-03-31 2012-03-31 Method and device for converting real character image to cartoon-style image


Publications (1)

Publication Number Publication Date
CN102682420A true CN102682420A (en) 2012-09-19

Family

ID=46814291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100936474A Pending CN102682420A (en) 2012-03-31 2012-03-31 Method and device for converting real character image to cartoon-style image

Country Status (1)

Country Link
CN (1) CN102682420A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1516078A (en) * 2003-01-09 2004-07-28 春水堂科技娱乐股份有限公司 Virtual portrait production system
US20060082579A1 (en) * 2004-10-18 2006-04-20 Reallusion Inc. Caricature generating system and method
CN1906631A (en) * 2004-01-30 2007-01-31 数码时尚株式会社 Makeup simulation program, makeup simulation device, and makeup simulation method
CN1972274A (en) * 2006-11-07 2007-05-30 搜图科技(南京)有限公司 System and method for processing facial image change based on Internet and mobile application
CN101847268A (en) * 2010-04-29 2010-09-29 北京中星微电子有限公司 Cartoon human face image generation method and device based on human face images
CN101887366A (en) * 2010-06-01 2010-11-17 云南大学 Digital simulation and synthesis technology with artistic style of Yunnan heavy-color painting


Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096353A (en) * 2014-05-05 2015-11-25 腾讯科技(深圳)有限公司 Image processing method and device
CN104463779A (en) * 2014-12-18 2015-03-25 北京奇虎科技有限公司 Portrait caricature generating method and device
CN106327539B (en) * 2015-07-01 2019-06-28 北京大学 Image rebuilding method and device based on sample
CN106327539A (en) * 2015-07-01 2017-01-11 北京大学 Image reconstruction method and device based on example
CN106791347A (en) * 2015-11-20 2017-05-31 比亚迪股份有限公司 A kind of image processing method, device and the mobile terminal using the method
CN106887024A (en) * 2015-12-16 2017-06-23 腾讯科技(深圳)有限公司 The processing method and processing system of photo
US10354125B2 (en) 2015-12-16 2019-07-16 Tencent Technology(Shenzhen) Company Limited Photograph processing method and system
CN106127841A (en) * 2016-06-22 2016-11-16 丁焱 A kind of method generating individual cartoon Dynamic Graph based on human face photo
CN106570911A (en) * 2016-08-29 2017-04-19 上海交通大学 DAISY descriptor-based facial caricature synthesis method
CN106570911B (en) * 2016-08-29 2020-04-10 上海交通大学 Method for synthesizing facial cartoon based on daisy descriptor
CN106815309A (en) * 2016-12-20 2017-06-09 北京奇虎科技有限公司 A kind of image method for pushing, device and mobile terminal
CN107516290A (en) * 2017-07-14 2017-12-26 北京奇虎科技有限公司 Image switching network acquisition methods, device, computing device and storage medium
CN107516290B (en) * 2017-07-14 2021-03-19 北京奇虎科技有限公司 Image conversion network acquisition method and device, computing equipment and storage medium
CN107464271A (en) * 2017-07-18 2017-12-12 山东捷瑞数字科技股份有限公司 A kind of method for the shade that added drop shadow for displaying content dynamic
CN109509141A (en) * 2017-09-15 2019-03-22 阿里巴巴集团控股有限公司 Image processing method, head portrait setting method and device
CN108012091A (en) * 2017-11-29 2018-05-08 北京奇虎科技有限公司 Image processing method, device, equipment and its storage medium
CN107967667A (en) * 2017-12-21 2018-04-27 广东欧珀移动通信有限公司 Generation method, device, terminal device and the storage medium of sketch
CN108614994A (en) * 2018-03-27 2018-10-02 深圳市智能机器人研究院 A kind of Human Head Region Image Segment extracting method and device based on deep learning
CN108734754B (en) * 2018-05-28 2022-05-06 北京小米移动软件有限公司 Image processing method and device
CN108734754A (en) * 2018-05-28 2018-11-02 北京小米移动软件有限公司 Image processing method and device
CN110580677A (en) * 2018-06-08 2019-12-17 北京搜狗科技发展有限公司 Data processing method and device and data processing device
CN108986018A (en) * 2018-07-02 2018-12-11 陈超 Automatic U.S. figure platform based on the beautification of the face cheek
CN109299658A (en) * 2018-08-21 2019-02-01 腾讯科技(深圳)有限公司 Face area detecting method, face image rendering method, device and storage medium
CN109299658B (en) * 2018-08-21 2022-07-08 腾讯科技(深圳)有限公司 Face detection method, face image rendering device and storage medium
CN109461118A (en) * 2018-11-12 2019-03-12 泰普智能有限公司 A kind of image processing method and device
CN110648384A (en) * 2019-06-19 2020-01-03 北京巴别时代科技股份有限公司 Cartoon stylized rendering method
CN110648384B (en) * 2019-06-19 2023-01-03 北京巴别时代科技股份有限公司 Cartoon stylized rendering method
CN110298326A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 A kind of image processing method and device, storage medium and terminal
US11593983B2 (en) * 2019-11-01 2023-02-28 Beijing Bytedance Network Technology Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
WO2021083028A1 (en) * 2019-11-01 2021-05-06 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device and storage medium
US20220172418A1 (en) * 2019-11-01 2022-06-02 Beijing Bytedance Network Technology Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN111583103A (en) * 2020-05-14 2020-08-25 北京字节跳动网络技术有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN111583103B (en) * 2020-05-14 2023-05-16 抖音视界有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN111862290A (en) * 2020-07-03 2020-10-30 完美世界(北京)软件科技发展有限公司 Radial fuzzy-based fluff rendering method and device and storage medium
CN112270735A (en) * 2020-10-27 2021-01-26 北京达佳互联信息技术有限公司 Virtual image model generation method and device, electronic equipment and storage medium
CN112270735B (en) * 2020-10-27 2023-07-28 北京达佳互联信息技术有限公司 Virtual image model generation method, device, electronic equipment and storage medium
CN113052757A (en) * 2021-03-08 2021-06-29 Oppo广东移动通信有限公司 Image processing method, device, terminal and storage medium

Similar Documents

Publication Publication Date Title
CN102682420A (en) Method and device for converting real character image to cartoon-style image
CN108292423B (en) Partial makeup making, partial makeup utilizing device, method, and recording medium
US9424811B2 (en) Digital collage creation kit
CN101779218B (en) Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
US5960099A (en) System and method for creating a digitized likeness of persons
JP5324031B2 (en) Beauty simulation system
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
JP4753025B2 (en) Makeup simulation method
CN106709781A (en) Personal image design and collocation purchasing device and method
JP2010507854A (en) Method and apparatus for virtual simulation of video image sequence
JP2011209887A (en) Method and program for creating avatar, and network service system
JP2004537901A (en) Automatic frame selection and layout of one or more images and generation of images bounded by frames
CN108805090A (en) A kind of virtual examination cosmetic method based on Plane Gridding Model
CN107705240A (en) Virtual examination cosmetic method, device and electronic equipment
Hopkins Fashion drawing
TW200805175A (en) Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
CN106447739A (en) Method for generating makeup region dynamic image and beauty makeup assisting method and device
JP2021144582A (en) Makeup simulation device, makeup simulation method and program
CN112465606A (en) Cosmetic customization system
CN103258271A (en) System and method of natural person digitalized image design
JP2000151985A (en) Picture processing method and recording medium
Zhao et al. Research on the application of computer image processing technology in painting creation
JP2013178789A (en) Beauty simulation system
JP2017228901A (en) Image processing apparatus and computer program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120919