WO2022230298A1 - Face image generation device - Google Patents
Face image generation device
- Publication number
- WO2022230298A1 (application PCT/JP2022/005194)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06T 11/00 — 2D [Two Dimensional] image generation
- G06T 11/80 — Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard
- G06T 1/00 — General purpose image data processing
- G06T 7/0002 — Inspection of images, e.g. flaw detection (under G06T 7/00 — Image analysis)
- G06V 40/168 — Feature extraction; face representation (under G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions)
- G06T 2207/30201 — Face (indexing scheme: subject of image — human being/person)
Definitions
- One aspect of the present invention relates to a face image generation device.
- Conventionally, there has been known a device that acquires a user's face image and generates a post-makeup image by applying makeup to the face image.
- For example, Japanese Patent Laid-Open No. 2002-200000 discloses a method in which a user selects a situation such as business, shopping, outdoor activities, a wedding, or a funeral, and a post-makeup image to which makeup corresponding to the selected situation is applied is automatically generated.
- In such a method, a post-makeup image is generated by applying makeup prepared in advance for each situation to a face image of the user's bare face without makeup.
- In this case, the makeup for each situation is determined according to a uniform standard prepared in advance, so the features of the user's own makeup (for example, the user's preferences and tendencies regarding makeup) are not reflected in the post-makeup image.
- On the other hand, when generating a face image of an avatar defined in a virtual space, for example, there is a need on the user's side to incorporate the user's preferences, tendencies, and the like into the avatar's face image.
- Accordingly, one aspect of the present invention aims to provide a face image generation device capable of automatically generating a face image that reflects features of a user's makeup when generating a face image with makeup applied corresponding to a predetermined pattern.
- A face image generation device according to one aspect of the present invention includes: a first face image acquisition unit that acquires a first face image to which the user has applied makeup corresponding to a first pattern;
- a feature extraction unit that extracts features of the user's makeup based on the first face image and first standard data relating to features of standard makeup corresponding to the first pattern;
- a second face image generation unit that generates a second face image with makeup corresponding to a second pattern different from the first pattern, by reflecting the features of the user's makeup extracted by the feature extraction unit in second standard data relating to features of standard makeup corresponding to the second pattern; and
- a presentation unit that presents the second face image generated by the second face image generation unit to the user.
- In the face image generation device, the features of the makeup applied by the user for a certain pattern (first pattern) are extracted based on the first face image and the first standard data relating to features of standard makeup corresponding to the first pattern. Then, by reflecting the extracted features of the user's makeup in the second standard data relating to features of standard makeup corresponding to the second pattern, a second face image with makeup corresponding to the second pattern is generated. The second face image generated in this manner is presented to the user. That is, according to the face image generation device, a face image (second face image) is obtained in which the user's makeup for one pattern (first pattern) is used as a base and customized for another pattern (second pattern), so that the features of the user's makeup are reflected. Therefore, when generating a face image with makeup applied corresponding to a predetermined pattern, the face image generation device can automatically generate a face image that reflects the features of the user's makeup.
- According to one aspect of the present invention, it is possible to provide a face image generation device capable of automatically generating a face image that reflects features of a user's makeup when generating a face image with makeup corresponding to a predetermined pattern.
- FIG. 1 is a block diagram showing the functional configuration of a face image generation device according to an embodiment.
- FIG. 2 is a diagram showing an example of makeup target parts and parameters.
- FIG. 3 is a diagram showing an example of part of the standard data.
- FIG. 4 is a diagram schematically showing an example of processing of the feature extraction unit.
- FIG. 5 is a diagram schematically showing an example of processing of the second face image generation unit.
- FIG. 6 is a flowchart showing an example of processing of the face image generation device.
- FIG. 7 is a flowchart showing an example of the processing procedure in step S3 of FIG. 6.
- FIG. 8 is a flowchart showing an example of the processing procedure in step S4 of FIG. 6.
- FIG. 9 is a flowchart showing a first modified example of processing of the face image generation device.
- FIG. 10 is a flowchart showing a second modified example of processing of the face image generation device.
- FIG. 11 is a flowchart showing a third modified example of processing of the face image generation device.
- FIG. 12 is a diagram showing an example of the hardware configuration of the face image generation device.
- FIG. 1 is a block diagram showing the functional configuration of the facial image generation device 10 according to one embodiment.
- In the present embodiment, the face image generation device 10 automatically generates a face image to be applied to a user's avatar defined in a virtual space shared by a plurality of users.
- Virtual space refers to a virtual two-dimensional or three-dimensional space represented by images displayed on a computer.
- An "avatar” is a user's alter ego in a virtual space represented by a computer.
- An avatar is a type of virtual object defined in virtual space.
- a "virtual object” is an object that does not exist in the real world and is represented on a computer system.
- Avatars (virtual objects) may be represented two-dimensionally or three-dimensionally. For example, by providing a user with a content image containing a virtual space and a virtual object (including an avatar), the user can experience Augmented Reality (AR), Virtual Reality (VR), or Mixed Reality (MR).
- The face image generation device 10 automatically generates a face image (second face image) of the avatar to which makeup for another use (second pattern) is applied, by extracting the user's makeup features (for example, preferences and tendencies) from the face image (first face image) of the avatar to which the user has applied makeup for a certain use (first pattern).
- The face image generation device 10 may be, for example, a mobile terminal such as a high-performance mobile phone (smartphone), a tablet terminal, a wearable terminal (e.g., a head-mounted display (HMD) or smart glasses), or a laptop personal computer.
- the face image generation device 10 may be a stationary terminal such as a desktop personal computer.
- the face image generation device 10 may be a user terminal possessed by each user as described above, or may be a server device configured to be able to communicate with each user's user terminal.
- the face image generation device 10 may be configured by a combination of a user terminal and a server device. That is, the face image generation device 10 may be configured by a single computer device, or may be configured by a plurality of computer devices that can communicate with each other.
- The face image generation device 10 includes a storage unit 11, a standard data generation unit 12, a makeup input unit 13, a first face image acquisition unit 14, a feature extraction unit 15, a second face image generation unit 16, and a presentation unit 17.
- The storage unit 11 stores various data used or generated by the face image generation device 10.
- For example, the storage unit 11 stores the standard data for each pattern generated by the standard data generation unit 12, generated face images of the avatar (including face images with makeup applied for each pattern), correction information described later, and the like.
- The standard data generation unit 12 generates standard data (first standard data, second standard data) corresponding to each of a plurality of patterns prepared in advance.
- The generated standard data is stored in the storage unit 11.
- a “pattern” is a predetermined classification based on usage (situation), attributes of a person to whom makeup is applied (for example, gender, age, etc.), and the like. Examples of uses include when going out (shopping, etc.), going to school, working, dating, weddings, funerals, and the like.
- the pattern is not limited to the above-described classification, and classification based on various viewpoints can be used as the pattern.
- For example, patterns may include classifications based on climate (temperature, weather, etc.), time of year (e.g., season), and the like.
- Standard data corresponding to a certain pattern is data relating to makeup features of a face image with standard makeup (reference makeup) corresponding to the pattern.
- the standard data corresponding to each pattern includes information (for example, information on average, variance, etc.) regarding a predetermined distribution of standard makeup corresponding to each pattern.
- The distribution may be arbitrarily determined by the service provider, or may be determined based on the results of statistical analysis of a plurality of face images (samples) to which makeup corresponding to each pattern is applied.
- the standard data corresponding to each pattern includes information on distribution of makeup features based on a face of a predetermined standard size.
- the information on the makeup feature distribution included in the standard data corresponding to each pattern includes the mean and variance of the makeup-related parameters of one or more predetermined makeup target regions.
- One or more parameters are associated with each makeup target part, and an average and a variance are set for each parameter.
- FIG. 2 is a diagram showing an example of makeup target parts and parameters.
- the make-up target site is a portion (part) that can be a make-up target.
- mascara, eyeshadow, eyebrows, lips (lipstick), powder applied to the face (makeup powder), makeup base, foundation, cheeks, and eyeliner are set as makeup target areas.
- one or more parameters are set for each make-up target part.
- a parameter is information indicating an attribute that can be represented by a numerical value among makeup features for a makeup target site.
- four parameters of length, thickness, density, and position are set for the makeup target part "mascara".
- one or a plurality of parameters are set for other makeup target parts.
- FIG. 3 is a diagram showing an example of part of the standard data.
- FIG. 3A shows part of the standard data (makeup data) corresponding to the usage "when going out” and the pattern of the attribute "female in her teens” (that is, the pattern corresponding to "female in her teens going out”).
- the data includes information indicating that the mean of the parameter "length” of the make-up target part "eyebrows” for the above pattern is 5 cm, and the variance is 1.0 cm.
- the data includes information indicating the average and variance of the parameters "position” and "thickness" of the makeup target part "eyebrows" with respect to the pattern.
- the parameter "position” is the center of gravity of the left (or right) eyebrow in the XY coordinates (the X-axis direction is the horizontal direction and the Y-axis direction is the vertical direction) with the center of the face as the origin (0, 0). position coordinates. It should be noted that when expressing the face image of the avatar in three dimensions, the parameter "position” may be expressed by XYZ coordinates including the Z-axis direction (depth direction). In this embodiment, XY coordinates are used for the sake of simplicity.
- (B) of FIG. 3 shows part of the standard data corresponding to a pattern different from that shown in (A) of FIG. 3, namely the pattern "when a female in her teens goes to school" (values of each parameter of the makeup target part "eyebrows"). In the example of FIG. 3, lighter makeup than when going out (that is, eyebrows shorter and thinner than when going out) is set as the standard eyebrow makeup for going to school.
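The standard data described above can be sketched as a simple nested mapping. This is a hypothetical illustration only: the pattern keys, field names, and the thickness spread are assumptions, while the numeric values follow FIG. 3 and the worked example later in the description (the text's "variance" σ is used as a unit of deviation).

```python
# Hypothetical sketch of per-pattern standard data: for each makeup target
# part, each parameter holds an average and a spread ("variance" sigma).
standard_data = {
    "female_teens_going_out": {                # pattern of FIG. 3(A)
        "eyebrows": {
            "length": {"mean": 5.0, "sigma": 1.0},            # cm
            "position": {"mean": (2.0, 10.0), "sigma": 0.5},  # XY, cm
            "thickness": {"mean": 1.0, "sigma": 0.2},         # sigma assumed
        },
    },
    "female_teens_going_to_school": {          # pattern of FIG. 3(B)
        "eyebrows": {
            "length": {"mean": 4.5, "sigma": 0.8},            # lighter makeup
            "position": {"mean": (2.0, 10.0), "sigma": 0.5},
        },
    },
}

def lookup(pattern: str, part: str, param: str):
    """Return (mean, sigma) for one parameter of one makeup target part."""
    entry = standard_data[pattern][part][param]
    return entry["mean"], entry["sigma"]
```

A lookup such as `lookup("female_teens_going_out", "eyebrows", "length")` then yields the (average, spread) pair used by the feature extraction described below.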
- the standard data generation unit 12 generates standard data for each pattern (data including the average and variance of each parameter of each makeup target part) as shown in FIG. 3, for example, by the following method.
- In a first example, the standard data generation unit 12 may generate avatars corresponding to each of a plurality of face images (actually photographed images) to which makeup has been applied in the real world, and generate standard data for each pattern based on the makeup features of the generated plurality of avatars.
- the standard data generation unit 12 can generate an avatar from the photographed image using a known method.
- Specifically, the standard data generation unit 12 executes image processing (image analysis) on the photographed image to identify a specific makeup target part (for example, eyebrows) in the face image, and extracts features (for example, length, thickness, density, position, etc.) of the identified makeup target part.
- The standard data generation unit 12 may also acquire information about the pattern of the makeup in the photographed image (for example, a pattern such as "when a female in her teens goes out") and associate the pattern with the generated avatar.
- Such makeup pattern information may be registered in advance by the owner of the photographed image, for example.
- a person who acquires a photographed image may register a pattern that is inferred from the shooting time of the photographed image, the scenery, or the like.
- Through the above processing, the standard data generation unit 12 can collect, for each pattern, a plurality of avatars with real-world makeup (that is, avatars generated based on face images with makeup applied in the real world).
- The standard data generation unit 12 may then generate standard data for each pattern based on the plurality of avatars collected for each pattern. For example, the standard data generation unit 12 may convert each face image of the plurality of avatars collected for each pattern into a predetermined standard size, and then calculate each parameter value (average and variance) of each makeup target part from the features (that is, each parameter value) of each makeup target part of each converted face image. Through such processing, standard data as shown in FIG. 3 is obtained for each combination of pattern and makeup target part.
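The aggregation step above (measure each parameter on samples already converted to the standard size, then take the average and spread) can be sketched as follows. The function name and data shapes are assumptions for illustration, not the patent's implementation:

```python
import statistics

def build_standard_data(samples):
    """Aggregate per-parameter values measured on avatars (already converted
    to the standard face size) into {parameter: {"mean", "sigma"}}."""
    result = {}
    for param, values in samples.items():
        result[param] = {
            "mean": statistics.mean(values),
            "sigma": statistics.pstdev(values),  # spread used as the sigma unit
        }
    return result

# e.g. eyebrow lengths (cm) measured on three avatars collected for one pattern
eyebrow_data = build_standard_data({"length": [4.0, 5.0, 6.0]})
```

Running this per pattern and per makeup target part yields tables in the shape of FIG. 3.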
- the standard data generator 12 may generate standard data for each pattern based on makeup features of a plurality of avatars generated by a plurality of users in the virtual space. For example, the standard data generator 12 acquires, as a sample, an avatar to which makeup selected by the user is applied in the virtual space. An example of the avatar is an avatar to which makeup is applied by the makeup input unit 13, which will be described later, to the face image of the avatar generated based on the face image of the real face of the user. Subsequently, the standard data generator 12 may generate standard data for each pattern based on a plurality of avatars (samples) collected for each pattern, as in the first example.
- the makeup that is preferred in the real world and the makeup that is preferred in the virtual space do not necessarily match.
- The standard data generation unit 12 may update the standard data for each pattern at regular intervals (for example, quarterly or yearly) based on the makeup of a plurality of avatars generated by a plurality of users within the period. According to the above configuration, it is possible to prepare standard data that follows changes in users' makeup trends in the virtual space.
- In a third example, the standard data generation unit 12 may generate standard data for each pattern by receiving a manual operation by a service provider or the like. For example, an expert who is familiar with general makeup, or a person who has gained knowledge from such an expert, may manually input (set) each parameter value (average and variance) of each makeup target part for each pattern to generate standard data. According to the above configuration, the standard data can be updated in a timely and flexible manner when there is a change in users' makeup trends.
- the standard data generation unit 12 may generate standard data for each region (division such as country, region, etc.). That is, the standard data generation unit 12 may generate standard data for users belonging to a certain area based only on the makeup of the avatars of users belonging to the area. According to the above configuration, it is possible to prepare standard data that reflects makeup trends for each region.
- the standard data generation unit 12 may also generate standard data based on makeup features recommended by industries such as cosmetics companies, apparel companies, and the like that create trends in makeup. Further, the standard data generation unit 12 may generate standard data based on makeup features of a particular entertainer, model, or the like.
- the makeup input unit 13 provides the user with a user interface for applying makeup to the avatar.
- the makeup input unit 13 displays the face image of the avatar on the display of the face image generation device 10 (for example, a touch panel display when the face image generation device 10 is a smart phone, a tablet terminal, or the like).
- For example, the makeup input unit 13 displays a plurality of icons representing various makeup tools such as brushes on the display, prompts the user to select a makeup tool, and allows the user to select an area of the avatar's face image to which makeup is to be applied with the selected tool. Then, the makeup input unit 13 applies makeup using the makeup tool selected by the user to the area selected by the user.
- the make-up input unit 13 may determine the make-up condition (for example, darkness) based on the user's touch operation pressure on the touch panel display, the number of touch operations, the touch time (time during which the touch is continued), and the like. In this case, the user can apply makeup to the avatar in the same way as applying makeup in the real world.
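As an illustration only, the touch-based determination above might combine pressure, touch count, and touch duration into a single darkness value. The linear weights and clamping below are arbitrary assumptions, not the patent's method:

```python
def makeup_darkness(pressure: float, touch_count: int, touch_seconds: float) -> float:
    """Map touch attributes to a makeup darkness in [0, 1].
    The weights are illustrative assumptions."""
    raw = 0.5 * pressure + 0.1 * touch_count + 0.2 * touch_seconds
    return min(1.0, max(0.0, raw))  # clamp so darkness stays in range

light = makeup_darkness(0.2, 1, 0.5)   # brief, light touch
heavy = makeup_darkness(1.0, 5, 4.0)   # repeated, sustained pressure (clamped)
```

The design point is simply that more pressure, more strokes, or longer contact yields darker makeup, mirroring real-world application.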
- the makeup input unit 13 receives input from the user of a corresponding pattern (for example, "When a teenage girl goes to school").
- the makeup input unit 13 stores the face image of the avatar to which the makeup is applied in the storage unit 11 as the face image of the avatar to which makeup corresponding to the input pattern is applied.
- The first face image acquisition unit 14 acquires the first face image, to which the user has applied makeup corresponding to the first pattern.
- the first pattern is one pattern arbitrarily selected by the user from among a plurality of predetermined patterns. In the following description, as an example, it is assumed that the user is a "female in her teens" and has selected "when a female in her teens goes out” as the first pattern.
- In a first acquisition example, the first face image acquisition unit 14 may acquire, as the first face image, the face image of the avatar (the face image with makeup corresponding to the first pattern) generated by the above-described processing of the makeup input unit 13. That is, the first face image acquisition unit 14 may acquire, as the first face image, the face image of the avatar to which makeup has been applied through the user interface provided by the makeup input unit 13.
- In a second acquisition example, the first face image acquisition unit 14 may acquire a face image (a photograph or the like) of the user captured in the real world after the user has applied makeup for going out. Then, the first face image acquisition unit 14 may perform the known conversion process described in the first example of the standard data generation process above on the user's face image to obtain a face image of an avatar corresponding to the user's face image. The first face image acquisition unit 14 may acquire the face image of the avatar thus obtained as the first face image.
- In a third acquisition example, the user may use the user interface provided by the makeup input unit 13 to further modify the makeup of the face image of the avatar. That is, the first face image acquisition unit 14 may acquire, as the first face image, the face image of the avatar that was acquired as in the second acquisition example and then had makeup applied via the makeup input unit 13.
- The feature extraction unit 15 extracts features of the user's makeup based on the first face image acquired by the first face image acquisition unit 14 and the first standard data, which is the standard data corresponding to the first pattern.
- An example of processing of the feature extraction unit 15 will be described with reference to FIG.
- FIG. 4 is a diagram schematically showing an example of processing of the feature extraction unit 15. FIG. 4 shows an example of the first face image (first face image 21), an example of a first converted face image (first converted face image 22) described later, and an example of the first standard data (here, only data corresponding to the makeup target part "eyebrows") (first standard data 23).
- the feature extraction unit 15 converts the first face image 21 using a conversion rate corresponding to the first face image 21 to generate a first converted face image 22 of standard size.
- the "conversion rate corresponding to the first face image” is a conversion rate for making the size of the first face image a standard size that is predetermined as a reference for standard data. Such a conversion ratio can be calculated by comparing the actual size of the first facial image with a standard size.
- the "size of face image” is defined by the vertical and horizontal lengths, for example, when the face image is expressed in two dimensions, and when the face image is expressed in three dimensions, the vertical and horizontal , and depth.
- a conversion rate (enlargement rate or reduction rate) is calculated for each of height, width, and depth.
- the feature extraction unit 15 converts the size of the first face image to the standard size by converting the dimensions of height, width, and depth using each conversion rate. At this time, the feature extraction unit 15 changes not only the size of the face but also the size of each makeup target part (that is, each part included in the face image) according to the conversion rate.
- the length of the eyebrows is "5.5 cm"
- the position of the eyebrows (coordinates of the center of gravity (center) of the eyebrows relative to the origin (center of the face)) is (2 cm, 11 cm)
- the eyebrow thickness is "1 cm”.
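The conversion step above can be sketched as follows: a per-axis conversion rate (standard length divided by actual length) is computed, and each measured value is scaled by the rate of its axis. The function name and data shapes are assumptions for illustration:

```python
def to_standard_size(face_size, standard_size, measurements):
    """Scale measured lengths to the standard face size.
    face_size / standard_size: (width, height) tuples in cm;
    measurements: {name: (axis_index, value_cm)}."""
    # one conversion rate per axis (enlargement or reduction rate)
    rates = tuple(s / f for s, f in zip(standard_size, face_size))
    return {name: value * rates[axis]
            for name, (axis, value) in measurements.items()}

# A face twice the standard width and height: an eyebrow length of 11 cm
# measured along the horizontal axis becomes 5.5 cm after conversion.
converted = to_standard_size(
    face_size=(40.0, 60.0), standard_size=(20.0, 30.0),
    measurements={"length": (0, 11.0)},
)
```

For a three-dimensional avatar the same sketch extends to a third (depth) axis, as the description notes.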
- Subsequently, the feature extraction unit 15 compares the first converted face image 22 with the first standard data 23 to extract the features of the user's makeup. For example, the feature extraction unit 15 extracts, as a feature of the user's makeup, a degree of divergence indicating how far the first converted face image 22 deviates from the first standard data 23 for each makeup target part. The feature extraction unit 15 calculates the degree of divergence based on the value of each parameter in the first converted face image 22 and the average and variance of each parameter in the first standard data 23.
- FIG. 4 shows a processing example of the feature extraction unit 15 for the makeup target part "eyebrows".
- In the example of FIG. 4, the eyebrow length "5.5 cm" in the first converted face image 22 is 0.5 cm greater than the average eyebrow length "5 cm" in the first standard data 23. Since the variance σ of the eyebrow length in the first standard data 23 is "1.0 cm", the eyebrow length in the first converted face image 22 is greater than the average by "0.5σ". Therefore, the feature extraction unit 15 acquires "+0.5σ" (that is, information indicating that the parameter "length" is larger than the standard value by 0.5σ) as the degree of divergence.
- The eyebrow position (2 cm, 11 cm) in the first converted face image 22 is 1.0 cm greater in the height direction than the average eyebrow position (2 cm, 10 cm) in the first standard data 23. Since the variance σ of the eyebrow position in the first standard data 23 is "0.5 cm", the eyebrow position in the first converted face image 22 is greater than the average by "2σ" in the height direction. Therefore, the feature extraction unit 15 acquires "+2σ in the height direction" (that is, information indicating that the position is larger by 2σ in the height direction) as the degree of divergence for the parameter "position".
- The eyebrow thickness "1 cm" in the first converted face image 22 matches the average eyebrow thickness "1 cm" in the first standard data 23. Therefore, the feature extraction unit 15 acquires "±0σ" (that is, information indicating no deviation) as the degree of divergence for the parameter "thickness".
- the feature extraction unit 15 can calculate the degree of divergence of each parameter of the makeup target part "eyebrows".
- the feature extraction unit 15 can also calculate the degree of divergence of each parameter of each makeup target part by executing the same processing as described above for makeup target parts other than the eyebrows.
- The degree of divergence of each parameter of each makeup target part obtained in this way is information indicating the degree of difference from standard makeup; in other words, it can be said to be information indicating the user's preferences and tendencies relative to standard makeup. Therefore, the feature extraction unit 15 extracts the degree of divergence of each parameter of each makeup target part acquired in this way as the features of the user's makeup.
- The features of the user's makeup extracted by the feature extraction unit 15 (that is, the degree of divergence of each parameter of each makeup target part) are stored in the storage unit 11.
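The degree-of-divergence calculation above reduces to expressing a converted parameter value's offset from the standard average in units of the standard spread σ. A minimal sketch using the eyebrow example values (the thickness σ is an assumption, but any value works since the deviation is zero):

```python
def divergence(value: float, mean: float, sigma: float) -> float:
    """Degree of divergence of one parameter, in sigma units."""
    return (value - mean) / sigma

# The eyebrow example (first converted face image 22 vs. first standard data 23)
d_length = divergence(5.5, 5.0, 1.0)    # +0.5 sigma
d_height = divergence(11.0, 10.0, 0.5)  # +2 sigma (height component of position)
d_thick = divergence(1.0, 1.0, 0.2)     # +-0 sigma (sigma value assumed)
```

Expressing the offset in σ units rather than centimeters is what lets the same feature be carried over to a second pattern whose spread differs.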
- The second face image generation unit 16 generates a second face image with makeup corresponding to a second pattern different from the first pattern by reflecting the features of the user's makeup extracted by the feature extraction unit 15 in the second standard data, which is the standard data corresponding to the second pattern.
- the second pattern may be arbitrarily selected by the user, for example.
- As the second pattern, only one pattern may be selected from among a plurality of predetermined patterns, or a plurality of patterns may be selected. All patterns other than the first pattern may also be selected as second patterns.
- Hereinafter, the processing of the second face image generation unit 16 will be described focusing on one second pattern. In the following description, as an example, it is assumed that "when a female in her teens goes to school" is selected as the second pattern. Note that when a plurality of second patterns are selected, the processing described below may be executed for each second pattern.
- FIG. 5 is a diagram schematically showing an example of processing of the second face image generation unit 16.
- FIG. 5 shows an example of the second standard data (here, only data corresponding to the makeup target part "eyebrows") (second standard data 24), an example of a second converted face image (second converted face image 25) described later, and an example of the second face image (second face image 26).
- the second face image generation unit 16 generates a second converted face image 25 of standard size by reflecting the features of the user's makeup extracted by the feature extraction unit 15 in the second standard data 24 .
- the second face image generation unit 16 generates a correction value based on the user's makeup feature (degree of deviation) extracted by the feature extraction unit 15 and the variance of the parameters in the second standard data 24 for each makeup target part.
- the second face image generator 16 determines the parameter values in the second converted face image 25 by correcting the average of the parameters in the second standard data using the correction value.
- the average of the eyebrow parameter "length" in the second standard data 24 is "4.5 cm", and the variance σ is "0.8 cm".
- the feature (degree of divergence) of the user's makeup for the eyebrow parameter "length" is "+0.5σ" (see FIG. 4).
- the average of the eyebrow parameter "position" in the second standard data 24 is (2 cm, 10 cm), and the variance σ is "0.5 cm".
- the feature (degree of divergence) of the user's makeup for the eyebrow parameter "position" is "+2σ in the height direction" (see FIG. 4).
- the second face image generation unit 16 adds the correction value to the average of the parameter "position" in the second standard data 24, thereby determining the height value of the parameter "position" in the second converted face image 25 as "11 cm".
- the parameter "position" in the second transformed face image 25 is determined to be (2 cm, 11 cm).
- the user's makeup feature (degree of divergence) for the eyebrow parameter "thickness" is "±0σ" (see FIG. 4). Therefore, the second face image generation unit 16 uses the average "0.8 cm" of the eyebrow parameter "thickness" in the second standard data 24 as it is as the value of the parameter "thickness" in the second converted face image 25.
- the second face image generation unit 16 can determine the values of each parameter of the makeup target part "eyebrows" in the second converted face image 25.
- the second face image generation unit 16 performs the same processing as described above for the makeup target parts other than the eyebrows, thereby determining the value of each parameter of each makeup target part in the second converted face image 25.
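The parameter correction described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the σ value used for "thickness" are assumptions.

```python
def corrected_value(mean, sigma, divergence):
    """Correct a parameter's mean in the second standard data by the
    user's degree of divergence, expressed in units of sigma."""
    return mean + divergence * sigma

# Eyebrow "length": mean 4.5 cm, sigma 0.8 cm, user divergence +0.5 sigma
length = corrected_value(4.5, 0.8, 0.5)     # 4.5 + 0.4 = 4.9 cm
# Eyebrow "position" (height): mean 10 cm, sigma 0.5 cm, divergence +2 sigma
height = corrected_value(10.0, 0.5, 2.0)    # 10 + 1.0 = 11 cm
# Eyebrow "thickness": divergence is 0, so the mean is used as is
thickness = corrected_value(0.8, 0.2, 0.0)  # stays 0.8 cm (sigma assumed)
```

These values match the worked eyebrow example in FIG. 5: the corrected length is 4.9 cm and the corrected position height is 11 cm.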
- the second face image generation unit 16 inversely transforms the second converted face image 25 using the conversion rate (that is, the conversion rate used when the feature extraction unit 15 converted the first face image 21 into the first converted face image 22), thereby generating a second face image 26 having a size corresponding to the first face image 21.
- the presentation unit 17 presents the second face image 26 generated by the second face image generation unit 16 to the user.
- the presentation unit 17 may present the second face image 26 to the user by displaying it on an output device, such as a display, provided in the user terminal.
- alternatively, the presentation unit 17 may present the second face image 26 to the user by transmitting data of the second face image 26 to the user terminal and causing the user terminal to display it.
- the operation of the face image generation device 10 (including the face image generation method according to one embodiment) will be described with reference to the examples of FIGS. 4 and 5 and the flowchart shown in FIG. 6.
- in step S1, the standard data generation unit 12 generates standard data for each of a plurality of predetermined patterns.
- the generated standard data is stored in the storage unit 11 .
- in step S2, the first face image acquisition unit 14 acquires the first face image 21 with makeup corresponding to the user's first pattern (in the example of FIG. 4, "when a teenage girl goes out").
- in step S3, the feature extraction unit 15 extracts features of the user's makeup based on the first face image 21 acquired in step S2 and the first standard data 23, which is the standard data corresponding to the first pattern. Details of the processing in step S3 will be described with reference to the flowchart shown in FIG. 7.
- in step S31, the feature extraction unit 15 converts the first face image 21 using a conversion rate corresponding to the first face image 21 to generate the first converted face image 22 of standard size.
- in step S32, the feature extraction unit 15 selects one makeup target part to be processed.
- the feature extraction unit 15 selects "eyebrows" as a makeup target part to be processed.
- in step S33, the feature extraction unit 15 selects one parameter included in the selected makeup target part "eyebrows".
- the makeup target part "eyebrows" includes three parameters of length, position, and thickness, and the feature extraction unit 15 selects "length".
- in step S34, the feature extraction unit 15 acquires the value of the parameter "length" in the first converted face image 22 ("5.5 cm" in the example of FIG. 4).
- in step S35, the feature extraction unit 15 acquires the average and variance of the parameter "length" in the first standard data ("5 cm" and "1.0 cm" in the example of FIG. 4).
- in step S36, the feature extraction unit 15 calculates the degree of divergence for the selected parameter "length" based on the information acquired in steps S34 and S35.
- in the example of FIG. 4, the length of the eyebrows in the first converted face image 22 is larger than the average eyebrow length in the first standard data 23 by "0.5σ", so the feature extraction unit 15 obtains "+0.5σ" as the degree of divergence for the parameter "length".
- the feature extraction unit 15 repeats the processing of steps S33 to S36 until all the parameters included in the makeup target part to be processed (the makeup target part "eyebrows" selected in step S32) have been processed (step S37: NO).
- when all the parameters (here, length, position, and thickness) included in the makeup target part "eyebrows" have been processed and the degrees of divergence of all parameters have been obtained (step S37: YES), the feature extraction unit 15 repeats the processes of steps S32 to S37 until all makeup target parts have been processed (step S38: NO). For example, when the nine makeup target parts shown in FIG. 2 are defined, the feature extraction unit 15 executes the processes of steps S33 to S37 for each of these nine makeup target parts. When the processing is completed for all makeup target parts (step S38: YES), the processing of the feature extraction unit 15 (step S3 in FIG. 6) is completed.
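The divergence computed in steps S34 to S36 can be sketched as follows. This is a minimal illustration; the patent's "variance σ" is treated here as a scale value in cm, and the function name is an assumption.

```python
def degree_of_divergence(value, mean, sigma):
    """Degree of divergence of a parameter, in units of sigma."""
    return (value - mean) / sigma

# Example from FIG. 4: eyebrow "length" is 5.5 cm in the first converted
# face image; the first standard data has mean 5 cm and sigma 1.0 cm.
d = degree_of_divergence(5.5, 5.0, 1.0)  # -> +0.5, i.e. "+0.5 sigma"
```

The same computation is repeated for every parameter of every makeup target part, and the resulting divergences are stored as the user's makeup features.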
- in step S4, the second face image generation unit 16 reflects the features of the user's makeup in the second standard data 24, which is the standard data corresponding to a second pattern different from the first pattern (in the example of FIG. 5, "when a teenage girl goes to school"), thereby generating a second face image 26 with makeup corresponding to the second pattern. Details of the processing in step S4 will be described with reference to the flowchart shown in FIG. 8.
- in step S41, the second face image generation unit 16 selects one makeup target part to be processed.
- the second face image generation unit 16 selects "eyebrows" as the makeup target part to be processed.
- in step S42, the second face image generation unit 16 selects one parameter included in the selected makeup target part "eyebrows". As an example, the second face image generation unit 16 selects "length".
- in step S43, the second face image generation unit 16 calculates the correction value based on the user's makeup feature for the parameter "length" (the degree of divergence "+0.5σ") and the variance σ ("0.8 cm") of the parameter "length" in the second standard data 24.
- in this example, the second face image generation unit 16 calculates the correction value "0.4 cm" (= 0.5 × 0.8 cm).
- in step S44, the second face image generation unit 16 corrects the average of the parameter "length" in the second standard data 24 using the correction value calculated in step S43, thereby determining the value of the parameter "length" in the second converted face image 25.
- specifically, the parameter "length" of the second converted face image 25 is determined to be "4.9 cm", the value obtained by adding the correction value "0.4 cm" to the average "4.5 cm" of the parameter "length" in the second standard data 24.
- the second facial image generation unit 16 repeats the processing of steps S42 to S44 until all parameters included in the makeup target region to be processed (the makeup target region "eyebrows" selected in step S41) are completed (step S45: NO).
- when all the parameters included in the makeup target part have been processed (step S45: YES), the second face image generation unit 16 repeats the processes of steps S41 to S45 until all makeup target parts have been processed (step S46: NO).
- when the processing is completed for all makeup target parts (step S46: YES), the process proceeds to step S47.
- in step S47, the second face image generation unit 16 inversely transforms the second converted face image 25 using the conversion rate (that is, the conversion rate used in step S31 of FIG. 7) to generate a second face image 26 having a size corresponding to the first face image 21.
- the processing of the second face image generation unit 16 (step S4 in FIG. 6) is completed. That is, a face image (second face image 26) is obtained, which is a face image to which makeup corresponding to the second pattern is applied and in which features of the user's makeup are reflected.
- in step S5, the presentation unit 17 presents the second face image 26 generated by the second face image generation unit 16 to the user.
- the second face image generation unit 16 may perform the process of step S4 individually for each of the plurality of second patterns.
- the presentation unit 17 may present a plurality of second face images 26 corresponding to each of the plurality of second patterns (a list of face images of each pattern) to the user.
- as described above, in the face image generation device 10, the features of the makeup applied by the user for a certain first pattern (for example, "when a teenage girl goes out") are extracted based on the first face image 21 (see FIG. 4) and the first standard data 23 (see FIG. 4). Then, by reflecting the extracted features of the user's makeup in the second standard data 24 (see FIG. 5) corresponding to the second pattern (for example, "when a teenage girl goes to school"), a second face image 26 (see FIG. 5) with makeup corresponding to the second pattern is generated. The second face image 26 generated in this manner is then presented to the user.
- according to the face image generation device 10, when generating a face image with makeup applied in accordance with a predetermined pattern, it is possible to automatically generate a face image that reflects the features of the user's makeup.
- therefore, the user does not need to individually create makeup for the avatar for each of a plurality of predetermined patterns. That is, the user can create avatar makeup for one arbitrary pattern and have the face image generation device 10 automatically generate avatar makeup for the other patterns (makeup that reflects the features of the user's makeup). As a result, the user's burden is dramatically reduced.
- the makeup made by the user for one pattern can be reflected not only in the one pattern but also in the makeup of other patterns. User convenience and satisfaction can be improved.
- the standard data prepared for each pattern includes information on the distribution of makeup features based on a face of a predetermined standard size.
- the feature extraction unit 15 converts the first face image 21 using a conversion rate corresponding to the first face image 21 to generate a standard-sized first converted face image 22 .
- the feature extraction unit 15 compares the first converted face image 22 and the first standard data 23 to extract features of the user's makeup.
- the second facial image generation unit 16 generates a standard-sized second converted facial image 25 by reflecting the features of the user's makeup in the second standard data 24 .
- the second facial image generation unit 16 generates a second facial image 26 having a size corresponding to the first facial image 21 by inversely transforming the second transformed facial image 25 using the conversion rate.
- according to the above configuration, the features of the user's makeup are extracted after the size of the face image created by the user (first face image 21) is matched to the size of the face image defined in the standard data (standard size). Therefore, the features of the user's makeup can be extracted accurately.
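The size normalization and its inverse can be sketched as follows, under the assumption that the "conversion rate" is a single uniform scale factor; the function names and the example rate are hypothetical.

```python
def to_standard_size(value, rate):
    """Convert a measurement in the user's face image to the standard size."""
    return value * rate

def to_user_size(value, rate):
    """Inverse conversion back to the size of the user's first face image."""
    return value / rate

rate = 100.0 / 125.0  # hypothetical: the user's image is 1.25x the standard
standardized = to_standard_size(5.5, rate)
round_trip = to_user_size(standardized, rate)  # recovers 5.5 within float error
```

Because the same rate is used for both directions, parameters extracted at the standard size map back consistently onto the user's image.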
- the information on the makeup feature distribution included in the standard data prepared for each pattern includes the mean and variance of the makeup-related parameters of one or more predetermined makeup target regions.
- for each makeup target part, the feature extraction unit 15 extracts the degree of divergence of the first converted face image 22 from the first standard data 23 as a feature of the user's makeup, based on the values of the parameters in the first converted face image 22 and the mean and variance of the parameters in the first standard data 23.
- for each makeup target part, the second face image generation unit 16 calculates a correction value based on the degree of divergence and the variance of the parameters in the second standard data 24, and corrects the average of the parameters in the second standard data 24 using the correction value, thereby determining the values of the parameters in the second converted face image 25.
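As a concrete illustration, the per-pattern standard data could be organized as below, a hypothetical layout using the eyebrow figures from FIG. 5; the dictionary keys are assumptions, not from the patent.

```python
# pattern -> makeup target part -> parameter -> (mean, sigma), in cm
standard_data = {
    "teenage girl, going to school": {
        "eyebrows": {
            "length": (4.5, 0.8),
            "position_height": (10.0, 0.5),
        },
    },
}

mean, sigma = standard_data["teenage girl, going to school"]["eyebrows"]["length"]
```

Any layout works as long as each parameter of each makeup target part can be looked up as a (mean, variance) pair per pattern.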
- there may be cases where the difference between the average of a parameter in the first pattern and the average of that parameter in the second pattern is large, or where the scale of the variance of the parameter differs greatly between the first pattern and the second pattern.
- when the difference in the scale of the parameter values between the first pattern and the second pattern is large, if the difference in the parameter value in the first pattern (the difference from the average of the first standard data) is used as it is as the correction value for the second pattern, the makeup applied to the second face image 26 may be out of balance.
- by calculating the correction value based on the variance σ, the scale difference described above can be absorbed, and the above problem can be avoided.
- the first facial image acquiring section 14 acquires a plurality of first facial images corresponding to the first pattern.
- the feature extraction unit 15 extracts features of the user's makeup for each first face image (for example, the degree of divergence for each parameter described above).
- the second face image generation unit 16 generates a plurality of second face images corresponding to each of the plurality of first face images, based on the makeup features of the user for each of the first face images extracted by the feature extraction unit 15. Generate an image.
- the presentation unit 17 presents the plurality of second face images generated by the second face image generation unit 16 to the user.
- the process of step S101 is the same as the process of step S1 in FIG. 6.
- the first facial image acquisition unit 14 acquires a plurality of first facial images corresponding to the first pattern.
- the first face image acquisition unit 14 acquires a plurality of different first face images corresponding to the pattern (first pattern) of "teenage female going out”.
- in step S103, the feature extraction unit 15 performs the same processing as in step S3 of FIG. 6 to extract the features of the user's makeup for each of the first face images.
- in step S104, the second face image generation unit 16 performs the same processing as in step S4 of FIG. 6, based on the features of the user's makeup extracted for each first face image, to generate a plurality of second face images corresponding to each of the first face images.
- in step S105, the presentation unit 17 presents the plurality of second face images generated in step S104 to the user.
- the user may prepare a plurality of avatars with a plurality of variations (types) of makeup for the same pattern (for example, "when a teenage girl goes out") and may wish to select which avatar to use (that is, which makeup variation to apply to the avatar). That is, even for the same purpose, there are times when the user wants to select slightly modest makeup and times when the user wants to select slightly flashy makeup.
- according to the first modification, a second face image corresponding to each of the plurality of makeup variations created by the user for the first pattern can be generated and presented to the user. As a result, the user's range of choices can be widened for second patterns other than the first pattern for which the user created makeup.
- the first facial image acquisition unit 14 acquires a plurality of first facial images corresponding to the first pattern, as in the first modified example. Further, the feature extraction unit 15 extracts features of makeup of the user based on weighting according to the frequency of use of each first face image.
- the second modification matches the first modification in that a plurality of first face images corresponding to the same pattern (first pattern) are prepared, but differs in that only one second face image is generated (that is, a second face image is not generated for each first face image).
- the processes of steps S201 and S202 are the same as the processes of steps S101 and S102 in FIG. 9.
- in step S203, the feature extraction unit 15 extracts features of the user's makeup based on weighting according to the frequency of use of each first face image.
- the frequency of use of the first face image is, for example, the number of times or time the user applied the first face image to the face image of the avatar.
- it is assumed that three types of first face images (hereinafter referred to as "first face image a", "first face image b", and "first face image c") are acquired.
- a processing example of step S203 will be described. As an example, it is assumed that the ratio of the frequency of use of the first facial image a, the first facial image b, and the first facial image c is "F1:F2:F3".
- first, the feature extraction unit 15 converts each of the first face image a, the first face image b, and the first face image c into a standard-size first converted face image A, first converted face image B, and first converted face image C, respectively.
- let the values of the parameter "length" of the makeup target part "eyebrows" in the first converted face image A, the first converted face image B, and the first converted face image C be V1, V2, and V3, respectively. In this case, for example, the feature extraction unit 15 weights the parameter values V1, V2, and V3 of the first converted face images A, B, and C according to the frequency of use using the following (Equation 1).
- V = (F1 × V1 + F2 × V2 + F3 × V3) / (F1 + F2 + F3) … (Equation 1)
- the feature extraction unit 15 executes the same processing as described above for the parameter "length" of the makeup target part "eyebrows" for all parameters of all makeup target parts, thereby determining a representative value for each parameter of each makeup target part.
- the feature extraction unit 15 uses each parameter value (representative value) of the first converted face image determined as described above to calculate the degree of divergence of each parameter. That is, the feature extraction unit 15 uses each parameter value (representative value) of the first transformed face image determined as described above to perform the same processing as the processing in steps S35 and S36 in FIG. , to calculate the deviation of each parameter.
- the feature extraction unit 15 calculates the user's makeup feature (degree of divergence) for each parameter of each makeup target site by executing the above-described processing for each parameter of each makeup target site.
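Equation 1 above amounts to a frequency-weighted average, which can be sketched as follows; the function name and the example usage ratio are assumptions.

```python
def representative_value(values, freqs):
    """Frequency-weighted average of a parameter over the first face
    images, per (Equation 1): V = sum(Fi * Vi) / sum(Fi)."""
    return sum(f * v for f, v in zip(freqs, values)) / sum(freqs)

# Hypothetical usage ratio F1:F2:F3 = 2:1:1 for eyebrow "length" values
v = representative_value([5.5, 5.0, 6.0], [2, 1, 1])  # -> 5.5 cm
```

The representative value then feeds into the same divergence calculation as in steps S35 and S36.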
- the processes of steps S204 and S205 are the same as the processes of steps S4 and S5 in FIG. 6.
- according to the second modification, the features of the user's makeup can be extracted while comprehensively taking into account the characteristics of each first face image according to its frequency of use.
- the makeup input unit 13 accepts makeup correction by the user for the second face image 26 (see FIG. 5) presented to the user by the presentation unit 17 .
- the storage unit 11 also stores correction information regarding corrections accepted by the makeup input unit 13 . Further, when correction information is stored in the storage unit 11, the second face image generation unit 16 generates the second face image 26 further based on the correction information.
- the processes of steps S301 to S304 are the same as the processes of steps S1 to S4 in FIG. 6.
- in step S305, the second face image generation unit 16 determines whether or not correction information is stored in the storage unit 11. For example, suppose that after a second face image corresponding to a predetermined pattern was presented to the user in the past, the user corrected the length of the eyebrows in that second face image to the same length as in the first face image.
- in this case, the storage unit 11 may store correction information indicating that "the length of the eyebrows has been corrected to the same length as in the first face image". Then, if such correction information is stored in the storage unit 11 (step S305: YES), the second face image generation unit 16 may correct the second face image generated in step S304 based on the correction information (for example, correct the eyebrow length to that of the first face image) (step S306).
- if no correction information is stored in the storage unit 11 (step S305: NO), the process of step S306 is skipped.
- the correction information and correction processing based on the correction information are not limited to the above example.
- for example, the correction information may be information indicating the amount of modification (change amount) of the amount of makeup applied to a makeup target part (e.g., length, thickness, width, density, etc.), or information indicating the amount of position correction (displacement amount) of the makeup target part.
- in this case, the second face image generation unit 16 may correct the amount of makeup or the position of the makeup target part based on the amount of correction indicated in the correction information.
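One way such correction information could be stored and applied is sketched below; the record format, keys, amounts, and function name are all hypothetical assumptions, not from the patent.

```python
# Hypothetical stored corrections: modification amounts in cm per
# (makeup target part, parameter), learned from the user's past edits.
stored_corrections = {
    ("eyebrows", "length"): 0.3,           # lengthen by 0.3 cm
    ("eyebrows", "position_height"): 0.2,  # move up by 0.2 cm
}

def apply_corrections(params, corrections):
    """Apply stored correction amounts to the generated parameter values."""
    out = dict(params)
    for key, delta in corrections.items():
        if key in out:
            out[key] += delta
    return out

generated = {("eyebrows", "length"): 4.9, ("eyebrows", "position_height"): 11.0}
corrected = apply_corrections(generated, stored_corrections)
# e.g. length becomes 5.2 cm, position height becomes 11.2 cm
```

The generated second face image is then rendered from the corrected parameter values instead of the raw ones.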
- the process of step S307 is the same as the process of step S5 in FIG. 6.
- when the user corrects the makeup of the second face image via the user interface provided by the makeup input unit 13 (step S308: YES), the storage unit 11 stores the correction information about the correction (step S309).
- if the user has not corrected the makeup of the second face image (step S308: NO), the process of step S309 is skipped.
- the second face image presented to the user by the processing of the above embodiment is not necessarily accepted (or liked) by the user as it is.
- the user may correct the second face image via the user interface provided by the makeup input unit 13 .
- for example, the user may always make the same correction to the presented second face image (for example, making the length of the eyebrows the same as in the first face image). In this case, it can be inferred from the content of the correction that the user has a strong preference regarding the length of the eyebrows.
- in the third modification, the correction information indicating that such a correction has been made is stored in the storage unit 11 and used for subsequent processing (processing by the second face image generation unit 16). For example, consider the case where, after the correction information is stored in the storage unit 11, a second face image of another second pattern is generated based on the first pattern "when a teenage girl goes out".
- in this case, by executing the correction process of step S306 based on the stored correction information, the corrected second face image can be presented to the user in step S307. According to the above configuration, the user does not have to make the same correction each time for a generated and presented second face image, so the user's convenience can be effectively improved.
- in step S307, the presentation unit 17 may present to the user not only the second face image corrected in step S306 but also the second face image immediately after being generated in step S304 (that is, the second face image before the correction of step S306). According to this process, even if the correction process of step S306 has corrected the makeup in a direction the user does not like, the user can select the second face image before correction (the second face image immediately after being generated in step S304).
- in the above embodiment, the face image generation device 10 is used to generate the face image of an avatar defined in a virtual space, but the face image generation device 10 may also be used to generate other face images.
- the face image generation device 10 may be used to generate a user's face image set in a social networking service (SNS) or a membership site such as a matchmaking site.
- the face image generation device 10 may be used to generate a user's face image set in services such as video calls and chats.
- the face image generated by the face image generation device 10 is not limited to the face image of the user (including the avatar).
- the face image generation device 10 may be used to generate a face image of a character operated by a user in virtual space.
- in the above embodiment, information (mean and variance) regarding the distribution of makeup features for each makeup target part was exemplified as the standard data corresponding to each pattern, but the standard data is not limited to such information.
- the standard data corresponding to each pattern may be any data that allows comparison of makeup features with the face image to which the user has applied makeup (first face image), and does not have to include the distribution information exemplified in the above embodiment.
- the face image generation device 10 does not necessarily have to include all of the functional units described above (the storage unit 11, the standard data generation unit 12, the makeup input unit 13, the first face image acquisition unit 14, the feature extraction unit 15, the second face image generation unit 16, and the presentation unit 17).
- standard data corresponding to each pattern may be prepared in advance by another device different from the face image generation device 10 .
- in this case, the standard data of each pattern generated by the other device may be stored in advance in the storage unit 11, or the face image generation device 10 may download the standard data of each pattern from the other device as necessary.
- in this case, the face image generation device 10 does not have to include the standard data generation unit 12 described above, and the standard data generation processing (step S1 (see FIG. 6), step S101 (see FIG. 9), step S201 (see FIG. 10), and step S301 (see FIG. 11)) may be omitted.
- each functional block may be implemented using one device that is physically or logically coupled, or using two or more devices that are physically or logically separated and connected directly or indirectly (for example, by wire or wirelessly).
- a functional block may be implemented by combining software in the one device or the plurality of devices.
- Functions include, but are not limited to, judging, determining, calculating, computing, processing, deriving, investigating, searching, confirming, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assigning.
- the facial image generation device 10 may function as a computer that performs the facial image generation method of the present disclosure.
- FIG. 12 is a diagram showing an example of the hardware configuration of the face image generation device 10 according to one embodiment of the present disclosure.
- the facial image generation device 10 may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
- the term "apparatus” can be read as a circuit, device, unit, or the like.
- the hardware configuration of the face image generation device 10 may be configured to include one or more of the devices shown in FIG. 12, or may be configured without some of the devices.
- Each function of the face image generation device 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, with the processor 1001 performing computations and controlling communication by the communication device 1004 and at least one of reading and writing of data in the memory 1002 and the storage 1003.
- the processor 1001 for example, operates an operating system and controls the entire computer.
- the processor 1001 may be configured by a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like.
- the processor 1001 reads programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 to the memory 1002, and executes various processes according to them.
- as the program, a program that causes a computer to execute at least part of the operations described in the above embodiment is used.
- each functional unit (e.g., the feature extraction unit 15) may be realized by a control program stored in the memory 1002 and operating on the processor 1001. The processor 1001 may be implemented by one or more chips.
- the program may be transmitted from a network via an electric communication line.
- the memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), and RAM (Random Access Memory).
- the memory 1002 may also be called a register, cache, main memory (main storage device), or the like.
- the memory 1002 can store executable programs (program code), software modules, etc. for implementing the facial image generation method according to an embodiment of the present disclosure.
- the storage 1003 is a computer-readable recording medium, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disc, a magneto-optical disc (for example, a compact disc, a digital versatile disc, a Blu-ray disk), smart card, flash memory (eg, card, stick, key drive), floppy disk, magnetic strip, and/or the like.
- Storage 1003 may also be called an auxiliary storage device.
- the storage medium described above may be, for example, a database, server, or other suitable medium including at least one of memory 1002 and storage 1003 .
- the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also called a network device, a network controller, a network card, a communication module, or the like.
- the input device 1005 is an input device (for example, keyboard, mouse, microphone, switch, button, sensor, etc.) that receives input from the outside.
- the output device 1006 is an output device (eg, display, speaker, LED lamp, etc.) that outputs to the outside. Note that the input device 1005 and the output device 1006 may be integrated (for example, a touch panel).
- Each device such as the processor 1001 and the memory 1002 is connected by a bus 1007 for communicating information.
- the bus 1007 may be configured using a single bus, or may be configured using different buses between devices.
- the face image generation device 10 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array), and part or all of each functional block may be realized by such hardware.
- processor 1001 may be implemented using at least one of these pieces of hardware.
- Input/output information may be stored in a specific location (for example, memory) or managed using a management table. Input/output information and the like can be overwritten, updated, or appended. The output information and the like may be deleted. The entered information and the like may be transmitted to another device.
- The determination may be made by a value represented by one bit (0 or 1), by a true/false value (Boolean: true or false), or by comparison of numerical values (for example, comparison with a predetermined value).
- Notification of predetermined information is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
- Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or by any other name, should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like.
- software, instructions, information, etc. may be transmitted and received via a transmission medium.
- For example, when software is transmitted from a website, a server, or another remote source using at least one of wired technologies (coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), etc.) and wireless technologies (infrared, microwave, etc.), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
- Data, instructions, commands, information, signals, bits, symbols, chips, and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
- Information, parameters, and the like described in the present disclosure may be expressed using absolute values, using relative values from a predetermined value, or using other corresponding information.
- Any reference to elements using designations such as "first" and "second" as used in the present disclosure does not generally limit the quantity or order of those elements. These designations may be used in the present disclosure as a convenient way of distinguishing between two or more elements. Thus, a reference to first and second elements does not mean that only two elements may be employed, or that the first element must precede the second element in some way.
- The phrase "A and B are different" may mean "A and B are different from each other."
- The phrase may also mean "A and B are each different from C."
- Terms such as "separate" and "coupled" may be interpreted in the same manner as "different."
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
The standard data generation unit 12 may generate an avatar corresponding to each of a plurality of real-world face images with makeup applied (photographed images), and generate the standard data for each pattern based on the makeup features of the generated avatars. For example, the standard data generation unit 12 can generate an avatar from such a photographed image using a known technique. For example, by performing image processing (image analysis) on the photographed image, the standard data generation unit 12 extracts a specific makeup target part (for example, the eyebrows) in the face image and extracts features of the extracted part (for example, length, thickness, darkness, and position). The standard data generation unit 12 then performs such feature extraction for all predetermined makeup target parts, and generates an avatar reflecting the extracted features of each part.
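The per-part aggregation described above can be sketched as follows. This is a minimal illustration only: the dictionary-based feature records and the eyebrow parameter names are assumptions, not the embodiment's actual data format.

```python
from statistics import mean, pvariance

# Hypothetical per-part features extracted from each sample avatar;
# the parameter names (eyebrow length, thickness) are illustrative.
samples = [
    {"eyebrow_length": 4.1, "eyebrow_thickness": 0.50},
    {"eyebrow_length": 3.8, "eyebrow_thickness": 0.55},
    {"eyebrow_length": 4.4, "eyebrow_thickness": 0.48},
]

def build_standard_data(samples):
    """Aggregate, per parameter, the mean and variance over the collected avatars."""
    standard = {}
    for key in samples[0]:
        values = [s[key] for s in samples]
        standard[key] = {"mean": mean(values), "variance": pvariance(values)}
    return standard

standard_data = build_standard_data(samples)
```

Standard data of this shape, a mean and a variance per parameter, is what the later feature extraction and reflection steps compare against.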
The standard data generation unit 12 may generate the standard data for each pattern based on the makeup features of a plurality of avatars generated by a plurality of users in the virtual space. For example, the standard data generation unit 12 acquires, as a sample, an avatar to which makeup selected by a user in the virtual space has been applied. An example of such an avatar is one in which makeup has been applied, via the makeup input unit 13 described later, to an avatar face image generated from an image of the user's bare face. The standard data generation unit 12 may then, as in the first example, generate the standard data for each pattern based on the plurality of avatars (samples) collected for that pattern.
The standard data generation unit 12 may generate the standard data for each pattern by accepting manual input from, for example, a service provider. For example, an expert familiar with typical makeup, or a person informed by such an expert, may manually enter (set) each parameter value (mean and variance) for each makeup target part in each pattern. With this configuration, the standard data can be updated in a timely and flexible manner when, for example, users' makeup trends change.
The standard data generation unit 12 may generate the standard data per region (country, district, or other division). That is, the standard data generation unit 12 may generate the standard data for users belonging to a given region based only on the avatar makeup of users belonging to that region. With this configuration, standard data reflecting each region's makeup tendencies can be prepared. The standard data generation unit 12 may also generate the standard data based on makeup features recommended by trendsetting industries such as cosmetics companies and apparel companies, or based on the makeup features of particular celebrities, models, and the like.
The first face image acquisition unit 14 may acquire, as the first face image, the avatar face image generated by the processing of the makeup input unit 13 described above (a face image with makeup corresponding to the first pattern). That is, the first face image acquisition unit 14 may acquire, as the first face image, an avatar face image to which makeup has been applied via the user interface provided by the makeup input unit 13.
The first face image acquisition unit 14 may acquire a face image (such as a photograph) of the user captured after the user has applied going-out makeup to his or her own face in the real world. The first face image acquisition unit 14 may then obtain an avatar face image corresponding to the user's face image by applying to it a known conversion process such as that described in the "first example of the standard data generation process" above, and may acquire the avatar face image obtained in this way as the first face image.
In the second acquisition example above, noise caused by the resolution, glossiness, and so on of the user's face image (photograph) may arise, and an avatar face image matching the user's intent may not be generated. In such a case, the user may correct the makeup on the avatar face image using the user interface provided by the makeup input unit 13. That is, the first face image acquisition unit 14 may acquire, as the first face image, the face image obtained after makeup has been added by the makeup input unit 13 to the avatar face image obtained by the second acquisition example.
In the first modification, the first face image acquisition unit 14 acquires a plurality of first face images corresponding to the first pattern. The feature extraction unit 15 extracts the user's makeup features (for example, the per-parameter deviation degree described above) for each first face image. The second face image generation unit 16 generates a plurality of second face images, each corresponding to one of the first face images, based on the user's makeup features extracted by the feature extraction unit 15 for each first face image. The presentation unit 17 presents the plurality of second face images generated by the second face image generation unit 16 to the user.
In the second modification, the first face image acquisition unit 14 acquires a plurality of first face images corresponding to the first pattern, as in the first modification. The feature extraction unit 15 extracts the user's makeup features based on weighting according to the usage frequency of each first face image. The second modification matches the first modification in that a plurality of first face images corresponding to the same pattern (the first pattern) are prepared, but differs from it in that only one second face image is generated (that is, a second face image is not generated for each first face image).
(Equation 1) V = (F1 × V1 + F2 × V2 + F3 × V3) / (F1 + F2 + F3)
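Equation 1, a usage-frequency-weighted average of a parameter value V over three first face images, can be written directly as code; generalizing it to any number of images is a straightforward assumption.

```python
def weighted_feature(values, frequencies):
    """Equation 1 generalized: V = sum(Fi * Vi) / sum(Fi),
    where Fi is the usage frequency of the i-th first face image."""
    total = sum(frequencies)
    if total == 0:
        raise ValueError("at least one image must have nonzero usage frequency")
    return sum(f * v for f, v in zip(frequencies, values)) / total

# Three first face images with parameter values V1..V3, used 5, 3, and 2 times:
v = weighted_feature([1.0, 2.0, 4.0], [5, 3, 2])  # (5*1.0 + 3*2.0 + 2*4.0) / 10
```

A frequently used first face image thus pulls the extracted feature toward its own value, matching the second modification's intent.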
In the third modification, the makeup input unit 13 accepts the user's makeup corrections to the second face image 26 (see FIG. 5) presented to the user by the presentation unit 17. The storage unit 11 stores correction information relating to the corrections accepted by the makeup input unit 13. When correction information is stored in the storage unit 11, the second face image generation unit 16 generates the second face image 26 further based on that correction information.
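The third modification's store-and-reapply flow can be sketched as below. Representing the correction information as a per-parameter delta is an assumption made for illustration, not the embodiment's actual data format.

```python
# A minimal sketch of the third modification: user corrections to a presented
# second face image are stored, then re-applied on later generations.
class CorrectionStore:
    def __init__(self):
        self._corrections = {}  # parameter name -> user-applied delta

    def record(self, parameter, delta):
        """Remember how the user adjusted one parameter of the second face image."""
        self._corrections[parameter] = delta

    def apply(self, generated):
        """Adjust newly generated parameter values by any stored corrections."""
        return {k: v + self._corrections.get(k, 0.0) for k, v in generated.items()}

store = CorrectionStore()
store.record("eyebrow_thickness", -0.02)  # the user thinned the eyebrows slightly
adjusted = store.apply({"eyebrow_thickness": 0.42, "eyebrow_length": 4.0})
```

Parameters the user never corrected pass through unchanged, so stored corrections only refine the parts the user actually touched.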
In the above embodiment, the face image generation device 10 was used to generate the face image of an avatar defined in a virtual space, but the face image generation device 10 may also be used to generate face images other than the above. For example, the face image generation device 10 may be used to generate a user's face image set on a social networking service (SNS) or a membership site such as a matchmaking site, or set in services such as video calls and chat. The face images generated by the face image generation device 10 are also not limited to those of the user (including avatars); for example, the face image generation device 10 may be used to generate the face image of a character operated by the user in a virtual space.

In the above embodiment, information on the distribution of makeup features per makeup target part (mean and variance) was given as an example of the standard data corresponding to each pattern, but the standard data corresponding to each pattern may be information other than the above. That is, the standard data corresponding to each pattern need only allow makeup features to be compared with a face image to which the user has applied makeup (the first face image), and need not include the distribution information exemplified in the embodiment.
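One way to read the deviation-degree and correction-value computation of the embodiment (and claim 4) is as a z-score transfer between patterns: measure how far the user's parameter value lies from the first pattern's standard mean in units of its spread, then shift the second pattern's standard mean by a correction value scaled to the second pattern's spread. The formulas below are an illustrative interpretation under that assumption, not necessarily the exact calculation used.

```python
import math

def deviation_degree(value, first_mean, first_variance):
    """How far the user's parameter value deviates from the first standard
    data, normalized by the first pattern's spread (assumed z-score form)."""
    return (value - first_mean) / math.sqrt(first_variance)

def reflect_onto_second(deviation, second_mean, second_variance):
    """Correction value scaled by the second pattern's spread, applied to
    the second standard data's mean (assumed form)."""
    correction = deviation * math.sqrt(second_variance)
    return second_mean + correction

# The user draws eyebrows thicker than the first pattern's standard,
# so the generated second-pattern value is thickened proportionally as well.
d = deviation_degree(0.60, 0.50, 0.0025)   # two standard deviations above the mean
v2 = reflect_onto_second(d, 0.40, 0.0001)
```

Because the correction is scaled by the second pattern's own variance, a pattern whose standard makeup varies little between users is perturbed only slightly, while a looser pattern absorbs more of the user's personal style.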
Claims (10)
- A face image generation device comprising: a first face image acquisition unit that acquires a first face image to which a user has applied makeup corresponding to a first pattern; a feature extraction unit that extracts features of the user's makeup based on the first face image and first standard data relating to standard makeup features corresponding to the first pattern; a second face image generation unit that generates a second face image with makeup corresponding to a second pattern different from the first pattern, by reflecting the user's makeup features extracted by the feature extraction unit onto second standard data relating to standard makeup features corresponding to the second pattern; and a presentation unit that presents the second face image generated by the second face image generation unit to the user.
- The face image generation device according to claim 1, wherein the first standard data includes information on a predetermined distribution of standard makeup features corresponding to the first pattern, and the second standard data includes information on a predetermined distribution of standard makeup features corresponding to the second pattern.
- The face image generation device according to claim 2, wherein each of the first standard data and the second standard data includes information on a distribution of makeup features based on a face of a predetermined standard size; the feature extraction unit generates a first converted face image of the standard size by transforming the first face image using a conversion rate corresponding to the first face image, and extracts the user's makeup features by comparing the first converted face image with the first standard data; and the second face image generation unit generates a second converted face image of the standard size by reflecting the user's makeup features onto the second standard data, and generates the second face image, of a size corresponding to the first face image, by inversely transforming the second converted face image using the conversion rate.
- The face image generation device according to claim 3, wherein the information on the distribution of makeup features included in each of the first standard data and the second standard data includes a mean and a variance of a parameter relating to the makeup of one or more predetermined makeup target parts; the feature extraction unit extracts, for each makeup target part, as the user's makeup feature, a deviation degree indicating how far the first converted face image deviates from the first standard data, based on the value of the parameter in the first converted face image and the mean and variance of the parameter in the first standard data; and the second face image generation unit calculates, for each makeup target part, a correction value based on the deviation degree and the variance of the parameter in the second standard data, and determines the value of the parameter in the second converted face image by correcting the mean of the parameter in the second standard data using the correction value.
- The face image generation device according to any one of claims 1 to 4, wherein the first face image acquisition unit acquires a plurality of the first face images corresponding to the first pattern; the feature extraction unit extracts the user's makeup features for each first face image; the second face image generation unit generates a plurality of the second face images, each corresponding to one of the plurality of first face images, based on the user's makeup features extracted by the feature extraction unit for each first face image; and the presentation unit presents the plurality of second face images generated by the second face image generation unit to the user.
- The face image generation device according to any one of claims 1 to 4, wherein the first face image acquisition unit acquires a plurality of the first face images corresponding to the first pattern, and the feature extraction unit extracts the user's makeup features based on weighting according to the usage frequency of each first face image.
- The face image generation device according to any one of claims 1 to 6, further comprising: a makeup input unit that accepts the user's makeup corrections to the second face image presented to the user by the presentation unit; and a storage unit that stores correction information relating to the corrections accepted by the makeup input unit, wherein the second face image generation unit, when the correction information is stored in the storage unit, generates the second face image further based on the correction information.
- The face image generation device according to any one of claims 1 to 7, wherein the first face image and the second face image are face images applied to an avatar of the user in a virtual space shared by a plurality of users, the device further comprising a standard data generation unit that generates the first standard data and the second standard data, wherein the standard data generation unit generates avatars corresponding to each of a plurality of real-world face images with makeup applied, and generates the first standard data and the second standard data based on the makeup features of the plurality of generated avatars.
- The face image generation device according to any one of claims 1 to 7, wherein the first face image and the second face image are face images applied to an avatar of the user in a virtual space shared by a plurality of users, the device further comprising a standard data generation unit that generates the first standard data and the second standard data, wherein the standard data generation unit generates the first standard data and the second standard data based on the makeup features of a plurality of avatars generated by a plurality of users in the virtual space.
- The face image generation device according to claim 9, wherein the standard data generation unit updates the first standard data and the second standard data every fixed period, based on the makeup of a plurality of avatars generated by a plurality of users within that fixed period.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023517070A JPWO2022230298A1 (ja) | 2021-04-30 | 2022-02-09 | |
US18/557,877 US20240221238A1 (en) | 2021-04-30 | 2022-02-09 | Face image generation device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-077491 | 2021-04-30 | ||
JP2021077491 | 2021-04-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022230298A1 (ja) | 2022-11-03 |
Family
ID=83846865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/005194 WO2022230298A1 (ja) | 2021-04-30 | 2022-02-09 | 顔画像生成装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240221238A1 (ja) |
JP (1) | JPWO2022230298A1 (ja) |
WO (1) | WO2022230298A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014149696A (ja) * | 2013-02-01 | 2014-08-21 | Panasonic Corp | Makeup support device, makeup support method, and makeup support program |
JP2018151769A (ja) * | 2017-03-10 | 2018-09-27 | Fujitsu Limited | Makeup support program, makeup support device, and makeup support method |
US20200082158A1 (en) * | 2018-09-10 | 2020-03-12 | Algomus, Inc. | Facial image makeup transfer system |
WO2020261531A1 (ja) * | 2019-06-28 | 2020-12-30 | Shiseido Co., Ltd. | Information processing device, method for generating trained model for makeup simulation, method for executing makeup simulation, and program |
2022
- 2022-02-09: WO application PCT/JP2022/005194 (WO2022230298A1), active, Application Filing
- 2022-02-09: JP application 2023517070 (JPWO2022230298A1), active, Pending
- 2022-02-09: US application 18/557,877 (US20240221238A1), active, Pending
Also Published As
Publication number | Publication date |
---|---|
US20240221238A1 (en) | 2024-07-04 |
JPWO2022230298A1 (ja) | 2022-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200234034A1 (en) | Systems and methods for face reenactment | |
- CN109389427B (zh) | | Questionnaire push method and apparatus, computer device, and storage medium |
US11089238B2 (en) | Personalized videos featuring multiple persons | |
US10874196B2 (en) | Makeup part generating apparatus and makeup part generating method | |
- JP2010055424A (ja) | | Apparatus, method, and program for processing images |
- CN110599393A (zh) | | Image style transfer method, apparatus, device, and computer-readable storage medium |
- CN108038760B (zh) | | Product display control *** based on AR technology |
- CN111580788A (zh) | | Template matching information recommendation method, apparatus, and electronic device |
- JP2020112907A (ja) | | Image style conversion device, image style conversion method, and program |
WO2020150690A2 (en) | Systems and methods for providing personalized videos | |
- JP2019537397A (ja) | | Effect sharing method and system for video |
US20170024918A1 (en) | Server and method of providing data | |
- CN113453027A (zh) | | Image processing method and apparatus for live video and virtual makeup, and electronic device |
- WO2022230298A1 (ja) | | Face image generation device |
- CN113284229A (zh) | | Three-dimensional face model generation method, apparatus, device, and storage medium |
- CN116664781A (zh) | | Virtual IP character generation *** and method based on tag selection, and storage medium |
- KR20220069390A (ko) | | Cosmetics recommendation device, cosmetics provision method using the cosmetics recommendation device, and cosmetics recommendation system |
- JP6534168B1 (ja) | | Makeup support system and makeup support method |
- WO2024024727A1 (ja) | | Image processing device, image display device, image processing method, image display method, and program |
- KR102408325B1 (ko) | | System for producing and managing content and operation method thereof |
- JP2020048945A (ja) | | Server, terminal, and skin color diagnosis method |
- WO2023139961A1 (ja) | | Information processing device |
- KR20180092159A (ko) | | Makeup product search device and method |
US20240177225A1 (en) | Methods and systems to generate customized virtual try on (vto) components providing model-based and user-based experiences | |
- KR20230106809A (ko) | | Service provision method for generating a face image of a virtual person by synthesizing face images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22795221; Country of ref document: EP; Kind code of ref document: A1 |
 | WWE | Wipo information: entry into national phase | Ref document number: 2023517070; Country of ref document: JP |
 | WWE | Wipo information: entry into national phase | Ref document number: 18557877; Country of ref document: US |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 22795221; Country of ref document: EP; Kind code of ref document: A1 |