WO2019100282A1 - Face skin color recognition method and device, and smart terminal (一种人脸肤色识别方法、装置和智能终端) - Google Patents

Publication number
WO2019100282A1
WO2019100282A1 (application PCT/CN2017/112533; CN2017112533W)
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
skin color
face
avgcr
Application number
PCT/CN2017/112533
Other languages
English (en)
French (fr)
Inventor
林丽梅
Original Assignee
深圳和而泰智能控制股份有限公司
Application filed by 深圳和而泰智能控制股份有限公司
Priority to CN201780009028.3A (publication CN108701217A)
Priority to PCT/CN2017/112533
Publication of WO2019100282A1

Classifications

    • G (PHYSICS) › G06 (COMPUTING; CALCULATING OR COUNTING):
    • G06F 18/22 Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06V 10/751 Image or video pattern matching; Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 40/162 Human faces; Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V 40/168 Human faces; Feature extraction; Face representation
    • G06V 40/172 Human faces; Classification, e.g. identification
    • G06V 40/179 Human faces; metadata assisted face recognition

Definitions

  • the present application relates to the field of face recognition technologies, and in particular, to a face color recognition method, apparatus, and smart terminal.
  • Face recognition technology is a technology for identifying and comparing facial visual feature information. Its research fields include: identification, expression recognition, gender recognition, nationality recognition, and skin care.
  • At present, mainstream skin color detection methods include fixed skin color distribution detection methods and joint detection methods combining a skin color probability distribution with Bayesian decision.
  • these methods can only determine which areas of the image belong to the skin area, and cannot accurately identify the specific color of the human face skin, thus making it difficult to provide an effective reference for people's personal image design.
  • the embodiment of the present application provides a method, a device, and a smart terminal for recognizing facial skin color, which can solve the problem of how to accurately identify the specific color of the human face skin.
  • In a first aspect, an embodiment of the present application provides a method for recognizing a facial skin color, including: acquiring a face image; intercepting an image of an area to be detected from the face image; and obtaining a mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space, wherein:
  • avgY represents the mean of the area image in the Y color channel;
  • avgCr represents the mean of the area image in the Cr color channel;
  • avgCb represents the mean of the area image in the Cb color channel;
  • and determining a facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template includes a plurality of skin color patches, and the facial skin color is one of the plurality of skin color patches.
  • In some embodiments, before the step of obtaining the mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space, the method further includes:
  • the color space of the area image is converted to the YCrCb color space.
  • In some embodiments, before the step of converting the color space of the area image to the YCrCb color space, the method further includes: eliminating a color shift of the area image.
  • In some embodiments, the number of area images is n, n being a positive integer, and obtaining the mean vector [avgY, avgCr, avgCb] of the area images in the YCrCb color space includes:
  • obtaining the mean vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space by the following formulas: avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n); avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n); avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n); where:
  • sumY_i represents the sum of the pixel values of the i-th region image in the Y color channel
  • sumCr_i represents the sum of the pixel values of the i-th region image in the Cr color channel
  • sumCb_i represents the sum of the pixel values of the i-th area image in the Cb color channel
  • S_i represents the area of the image of the i-th region
  • In some embodiments, determining the facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template includes: obtaining a standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template; and selecting, as the facial skin color, the skin color patch whose standard vector has the smallest Euclidean distance to the mean vector, wherein:
  • avgY_j represents the mean value of the jth skin color patch in the skin color template in the Y color channel
  • avgCr_j represents the mean value of the jth skin color patch in the skin color template in the Cr color channel
  • avgCb_j represents the mean value of the jth skin color patch in the skin color template in the Cb color channel.
  • the area image includes any one or more of a left cheek area image, a nose area image, and a right cheek area image.
  • an embodiment of the present application provides a face color recognition device, including:
  • a face image obtaining unit configured to acquire a face image
  • An intercepting unit configured to intercept an image of the area to be detected from the face image
  • a data processing unit configured to obtain a mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space, wherein avgY represents the mean of the area image in the Y color channel, avgCr represents the mean of the area image in the Cr color channel, and avgCb represents the mean of the area image in the Cb color channel; and
  • an analyzing unit configured to determine a facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template includes a plurality of skin color patches, and the facial skin color is one of the plurality of skin color patches.
  • In some embodiments, the number of area images is n, n being a positive integer, and the data processing unit is specifically configured to:
  • obtain the mean vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space by the following formulas: avgY = (sumY_1 + … + sumY_n) / (S_1 + … + S_n); avgCr = (sumCr_1 + … + sumCr_n) / (S_1 + … + S_n); avgCb = (sumCb_1 + … + sumCb_n) / (S_1 + … + S_n); where:
  • sumY_i represents the sum of the pixel values of the i-th region image in the Y color channel
  • sumCr_i represents the sum of the pixel values of the i-th region image in the Cr color channel
  • sumCb_i represents the sum of the pixel values of the i-th area image in the Cb color channel
  • S_i represents the area of the image of the i-th region
  • the analyzing unit is specifically configured to:
  • avgY_j represents the mean value of the jth skin color patch in the skin color template in the Y color channel
  • avgCr_j represents the mean value of the jth skin color patch in the skin color template in the Cr color channel
  • avgCb_j represents the mean value of the jth skin color patch in the skin color template in the Cb color channel.
  • the area image includes any one or more of a left cheek area image, a nose area image, and a right cheek area image.
  • In a third aspect, an embodiment of the present application provides a smart terminal, including:
  • at least one processor; and a memory communicatively connected to the at least one processor, wherein
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform a facial skin color recognition method as described above.
  • In a fourth aspect, an embodiment of the present application provides a storage medium storing executable instructions that, when executed by a smart terminal, cause the smart terminal to perform the facial skin color recognition method described above.
  • In a fifth aspect, an embodiment of the present application further provides a program product, the program product including a program stored on a storage medium, the program including program instructions that, when executed by a smart terminal, cause the smart terminal to perform the facial skin color recognition method described above.
  • The beneficial effects of the embodiments of the present application are as follows. In the facial skin color recognition method, device, and smart terminal provided by the embodiments of the present application, when a face image is acquired, the image of the area to be detected is intercepted from the face image; the color space of the area image is then converted into the YCrCb color space and the mean vector [avgY, avgCr, avgCb] of the area image is obtained; finally, the skin color patch matching the mean vector [avgY, avgCr, avgCb] is selected, from a preset skin color template comprising a plurality of skin color patches, as the facial skin color of the face image. The specific color of the facial skin can thus be accurately recognized, which makes it convenient to provide an effective reference for personal image design.
  • Moreover, the method determines the facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template; it does not require a large amount of training data, and the recognition process is simple and feasible.
  • FIG. 1 is a schematic flowchart of a facial skin color recognition method according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a method for obtaining an average vector [avgY, avgCr, avgCb] of a region image in a YCrCb color space according to an embodiment of the present application;
  • FIG. 3 is a diagram showing an example of gray scale of a skin color template according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for determining a face color of a face image based on a mean vector [avgY, avgCr, avgCb] and a preset skin color template according to an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of another facial skin color recognition method according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a face color recognition device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of hardware of an intelligent terminal according to an embodiment of the present application.
  • Embodiments of the present application provide a method, an apparatus, a smart terminal, and a storage medium for recognizing facial skin color.
  • The facial skin color recognition method is a recognition scheme that matches facial skin color against a preset skin color template. When a face image is acquired, the image of the area to be detected is intercepted from the face image; the mean vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space is then obtained; finally, the skin color patch matching the mean vector [avgY, avgCr, avgCb] is selected, from a preset skin color template comprising a plurality of skin color patches, as the facial skin color of the face image. The color of the human face can thus be accurately identified, providing an effective reference for personal image design.
  • The method determines the facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template; it does not require a large amount of training data, and the recognition process is simple and feasible.
  • the face color recognition method, the smart terminal, and the storage medium provided by the embodiments of the present application can be applied to any technical field related to face recognition, for example, portrait nationality recognition, and the like, and is particularly suitable for the fields of beauty application, personal image design, and the like.
  • a beauty-like application can be developed based on the inventive concept of the face color recognition method provided by the embodiment of the present application, and the application can automatically recognize the face color of the face image when the user inputs the face image, and further Based on the face color, the user recommends a suitable foundation color, makeup, accessories, skin care products, and the like.
  • The facial skin color recognition method provided by the embodiments of the present application may be performed by any type of smart terminal having an image processing function, and the smart terminal may include any suitable type of storage medium for storing data, such as a magnetic disk, a compact disc read-only memory (CD-ROM), a read-only memory, or a random access memory.
  • the smart terminal may also include one or more logical computing modules that perform any suitable type of function or operation in parallel, such as viewing a database, image processing, etc., in a single thread or multiple threads.
  • the logic operation module may be any suitable type of electronic circuit or chip-type electronic device capable of performing logical operation operations, such as a single core processor, a multi-core processor, a graphics processing unit (GPU), or the like.
  • the smart terminal may include, but is not limited to, a beauty authentication instrument, a personal computer, a tablet computer, a smart phone, a server, and the like.
  • FIG. 1 is a schematic flowchart of a face color recognition method according to an embodiment of the present application. Referring to FIG. 1 , the method includes but is not limited to the following steps:
  • Step 110 Acquire a face image.
  • the "face image” refers to an image including a face of a detected person, by which the face feature of the detected person can be acquired.
  • The specific manner of acquiring the face image may be: collecting the face image of the detected person in real time; or directly retrieving an existing image containing the detected person's face from the smart terminal's local storage or the cloud. Different ways of acquiring the face image may be chosen for different application scenarios or detected persons. For example, suppose a smart terminal for recommending suitable cosmetics to customers is provided in a cosmetics store; in order to promptly recommend colors of cosmetics such as foundation, concealer, and lipstick based on a customer's skin color, the smart terminal may acquire the face image by collecting the detected person's face in real time through a camera device.
  • In another scenario, a user wants to design suitable makeup through his or her own smart terminal, for example a smartphone. Since such a terminal generally already stores personal face images, in this scenario the face image may also be acquired by directly retrieving an existing image containing the detected person's face from the terminal's local storage or the cloud.
  • Certainly, the manner of acquiring the face image is not limited to the above description, and the embodiments of the present application do not specifically limit it.
  • Step 120 Intercept the image of the area to be detected from the face image.
  • the "area image” refers to an image as a reference area for detecting a skin color of a human face, and thus, in the present embodiment, the skin color represented by the image of the area represents the skin color of the entire face.
  • The area image may be an image of any one or more areas located within the facial contour of the face image, such as a forehead area image, a nose area image, a left cheek area image, a right cheek area image, or a chin area image. Considering that some areas of the face may contain noise (for example, a woman's forehead area may be covered by bangs, and a man's chin area may have a beard),
  • the images corresponding to the three relatively "clean" areas of the left cheek, the right cheek, and the nose may be taken as the area images to be detected; that is, in this embodiment, the area image includes any one or more of the left cheek area image, the nose area image, and the right cheek area image. When multiple area images are included, the reliability of the recognition result is enhanced.
  • A specific implementation of intercepting the image of the area to be detected from the face image may be: when the face image is acquired, first perform face key point localization on it, for example by applying a third-party toolkit such as dlib or face++; then, based on the coordinates of the located face key points, intercept the image of the area to be detected, for example the left cheek area image, and/or the nose area image, and/or the right cheek area image.
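  • As a concrete illustration of this interception step, the sketch below crops fixed-size patches around already-located landmark coordinates. The landmark names, the coordinates, and the patch size are hypothetical placeholders standing in for the output of a key point locator such as dlib or face++; this is not either toolkit's actual API.

```python
# Sketch of step 120: intercepting region images from a face image using
# located landmark coordinates. The image is stored as a list of pixel rows;
# the landmark dictionary keys are assumed names, not a real dlib interface.

def crop_region(image, top_left, size):
    """Crop a (width x height) rectangle from an image stored as a list of rows."""
    x, y = top_left          # column, row of the rectangle's top-left corner
    w, h = size
    return [row[x:x + w] for row in image[y:y + h]]

def intercept_regions(image, landmarks, half=2):
    """Cut small square patches centred on the left-cheek, nose and
    right-cheek landmark coordinates (each a (column, row) pair)."""
    regions = {}
    for name in ("left_cheek", "nose", "right_cheek"):
        cx, cy = landmarks[name]
        regions[name] = crop_region(image, (cx - half, cy - half),
                                    (2 * half + 1, 2 * half + 1))
    return regions
```

  • In practice the three landmark coordinates would come from the key point locator, and the rectangle sizes would be scaled to the detected face.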
  • Step 130 Acquire an average vector [avgY, avgCr, avgCb] of the area image in the YCrCb color space.
  • Generally, the color space of the acquired face image, and thus of the area image, is the RGB color space.
  • However, the RGB color space is neither intuitive nor perceptually uniform,
  • and changes in the illumination environment easily cause the RGB values to change, which in turn introduces a large error into the skin color recognition result. Therefore, in this embodiment, in order to reduce the influence of the illumination environment on the recognition result, the color of the intercepted area image is represented by its mean vector in the YCrCb color space.
  • In the YCrCb color space, the Y color channel characterizes the brightness of a pixel, that is, its gray-scale value, obtained by superimposing specific portions of the RGB signal, while the Cr and Cb color channels characterize the chromaticity of the pixel, describing its hue and saturation and specifying its color.
  • Cr reflects the difference between the red portion of the RGB input signal and the luminance value Y of the RGB signal.
  • Cb reflects the difference between the blue portion of the RGB input signal and the luminance value Y of the RGB signal, used to characterize the saturation of the color.
  • The "mean vector" is composed of the means avgY, avgCr, and avgCb of the three color channels Y, Cr, and Cb respectively, denoted [avgY, avgCr, avgCb], and represents the skin color presented in the area image, where avgY represents the mean of the area image in the Y color channel, avgCr represents the mean of the area image in the Cr color channel, and avgCb represents the mean of the area image in the Cb color channel.
  • In this embodiment, the area images may number n, where n may be any positive integer. After the n area images to be detected are intercepted, their mean vector may be acquired by the method shown in FIG. 2.
  • the method may include, but is not limited to, the following steps:
  • Step 131 Calculate the sum of the pixel values sumY_i of the Y-color channel of each area image, the sum of the pixel values sumCr_i of the Cr color channel, the sum of the pixel values sumCb_i of the Cb color channel, and the area S_i of each area image.
  • Specifically, Y, Cr, and Cb color channel division is first performed for each area image, that is, the pixels of each area image are divided into the three color channels Y, Cr, and Cb.
  • Then, for each area image, the sum of the pixel values in the Y color channel, sumY_i, the sum in the Cr color channel, sumCr_i, and the sum in the Cb color channel, sumCb_i, are calculated.
  • sumY_i represents the sum of the pixel values of the i-th area image in the Y color channel
  • sumCr_i represents the sum of the pixel values of the i-th area image in the Cr color channel
  • sumCb_i represents the sum of the pixel values of the i-th area image in the Cb color channel
  • S_i represents the area of the i-th area image.
  • the “area” is the area of the image space, and refers to the total number of pixels of a single color channel in the area image.
  • For example, suppose the area images intercepted from the acquired face image include a left cheek area image, a nose area image, and a right cheek area image.
  • Calculate the sums of the pixel values of the left cheek area image in the Y, Cr, and Cb color channels, sumY_1, sumCr_1, and sumCb_1, and its area S_1; calculate the sums for the nose area image, sumY_2, sumCr_2, and sumCb_2, and its area S_2; and calculate the sums for the right cheek area image, sumY_3, sumCr_3, and sumCb_3, and its area S_3, thereby obtaining the parameters sumY_i, sumCr_i, sumCb_i, and S_i for i = 1, 2, 3.
  • Step 132 Acquire an average vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space.
  • The mean vector [avgY, avgCr, avgCb] of the n area images in the YCrCb color space may be obtained by the following formulas: avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n); avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n); avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n).
  • For the three area images in the above example, the mean vector can be obtained according to these formulas with n = 3.
  • In other embodiments, the mean vector [avgY, avgCr, avgCb] of the n area images may also be acquired in other ways. For example, first obtain the mean vector [avgY_i, avgCr_i, avgCb_i] of each area image in the YCrCb color space, where avgY_i represents the mean of the i-th area image in the Y color channel, avgCr_i the mean in the Cr color channel, and avgCb_i the mean in the Cb color channel; then average (or weighted-average) the n mean vectors [avgY_i, avgCr_i, avgCb_i] to obtain the mean vector [avgY, avgCr, avgCb].
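  • The per-channel averaging of step 130 can be sketched as follows. For simplicity, each area image is represented here as a flat list of (Y, Cr, Cb) pixel tuples rather than a real image buffer; the function name and data layout are illustrative assumptions, not part of the method as claimed.

```python
# Minimal sketch of step 130: the channel-wise means avgY, avgCr, avgCb over
# n region images are obtained by dividing the summed pixel values of each
# channel (sumY_i, sumCr_i, sumCb_i) by the total area S_1 + ... + S_n.

def mean_vector(region_images):
    """region_images: list of regions, each a list of (Y, Cr, Cb) tuples."""
    sum_y = sum_cr = sum_cb = 0
    total_area = 0                # S_1 + S_2 + ... + S_n (pixel counts)
    for pixels in region_images:  # one entry per region image
        for y, cr, cb in pixels:
            sum_y += y            # accumulates sumY_i across all regions
            sum_cr += cr          # accumulates sumCr_i
            sum_cb += cb          # accumulates sumCb_i
        total_area += len(pixels)
    return (sum_y / total_area, sum_cr / total_area, sum_cb / total_area)
```

  • Pooling the raw sums before dividing, as here, weights each region by its area, which matches the formulas above; averaging per-region means instead gives the unweighted alternative the text describes.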
  • It should be noted that before the step of obtaining the mean vector, the method further includes: converting the color space of the area image to the YCrCb color space.
  • Generally, the color space of the acquired face image is the RGB color space,
  • so the color space of the area image intercepted from it is also the RGB color space; accordingly, the area image can be converted according to a conversion algorithm between the RGB color space and the YCrCb color space.
  • The embodiments of the present application do not specifically limit the conversion algorithm between the RGB and YCrCb color spaces.
  • If the color space of the acquired face image is another color space, such as the HSV color space or the CMY color space, the color space of the area image may likewise be converted to the YCrCb color space by a corresponding conversion algorithm.
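  • As one conventional choice of such a conversion algorithm (the document deliberately leaves the algorithm open), the sketch below uses the 8-bit BT.601 full-range formulas, the same constants OpenCV's COLOR_RGB2YCrCb conversion uses; these coefficients are a common convention, not one mandated by the method.

```python
# One conventional RGB -> YCrCb conversion for 8-bit pixels (BT.601 full-range).

def rgb_to_ycrcb(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma (gray-scale value)
    cr = (r - y) * 0.713 + 128              # red-difference chroma, offset to 128
    cb = (b - y) * 0.564 + 128              # blue-difference chroma, offset to 128
    return y, cr, cb
```

  • Note that a neutral gray pixel maps to Cr = Cb = 128, the chroma midpoint, which is why Cr and Cb isolate color information from brightness.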
  • Step 140 Determine a face color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template.
  • The "preset skin color template" may be any skin color template commonly used in daily life; the template includes a plurality of skin color patches, each skin color patch representing one facial skin color.
  • FIG. 3 shows a gray-scale example of a skin color template provided by an embodiment of the present application; the template includes 66 skin color patches.
  • In practical applications such as foundation purchase, a skin color template is generally set in correspondence with foundation color numbers (each skin color patch on the template has a corresponding foundation color number), so that a shop assistant can compare the customer's skin color against the template to determine a foundation color number suitable for the customer. Therefore, in this embodiment, the facial skin color of the face image may be determined from the actually obtained mean vector [avgY, avgCr, avgCb], which represents the skin color of the face image, and the preset
  • skin color template; that is, the skin color patch that best matches the mean vector [avgY, avgCr, avgCb] is selected from the plurality of skin color patches of the template as the facial skin color of the face image. Thus, face skin color recognition can be performed simply by choosing an appropriate skin color template, without training on a large amount of sample data, which saves the time and cost of recognition.
  • the face skin color of the face image can be determined based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template by the method as shown in FIG. 4.
  • the method may include but is not limited to the following steps:
  • Step 141 Acquire a standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template.
  • the “standard vector” refers to the mean vector of the skin color patch in the skin color template, which is a standard for matching the skin color of the face image actually acquired, and one skin color patch corresponds to a standard vector.
  • where avgY_j represents the mean value of the jth skin color patch in the skin color template in the Y color channel,
  • avgCr_j represents the mean value of the jth skin color patch in the skin color template in the Cr color channel, and
  • avgCb_j represents the mean value of the jth skin color patch in the skin color template in the Cb color channel.
  • the specific implementation manner of obtaining the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template may be: first converting the color space of each skin color block into the YCrCb color space, and then performing The division of the three color channels of Y, Cr and Cb, and obtaining the mean value of each color channel, thereby calculating the standard vector corresponding to each skin color patch [avgY_j, avgCr_j, avgCb_j].
  • In practical applications, the known standard vectors [avgY_j, avgCr_j, avgCb_j] corresponding to each skin color patch may be stored on the smart terminal in advance, so that when performing face skin color recognition the standard vector of each patch is retrieved directly from the smart terminal, saving time and data throughput.
  • Step 142 Select, as the facial skin color of the face image, the skin color patch in the skin color template whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb].
  • The Euclidean distance between the mean vector [avgY, avgCr, avgCb] and a standard vector [avgY_j, avgCr_j, avgCb_j] represents the degree of similarity between the actually acquired facial skin color and the corresponding skin color patch in the template; the smaller the Euclidean distance, the greater the similarity.
  • Therefore, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] can be selected from the skin color template as the facial skin color of the face image.
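  • Steps 141 and 142 together amount to a nearest-neighbour search in the YCrCb space, which can be sketched as below; the template entries here are made-up illustrative values, not the 66-patch template of the figure.

```python
import math

# Sketch of step 140: choose, from a preset skin color template, the patch
# whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean
# distance to the measured mean vector [avgY, avgCr, avgCb].

def match_skin_color(mean_vec, template):
    """template: list of (label, standard_vector) pairs; returns best label."""
    def dist(v, w):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, w)))
    return min(template, key=lambda patch: dist(mean_vec, patch[1]))[0]
```

  • Because the template is small (66 patches in the example), a linear scan is entirely adequate and no training step is needed, which is the cost advantage the text claims over learned classifiers.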
  • In practical applications, the facial skin color recognition method provided by the embodiments of the present application may be extended according to the actual application scenario; for example, after the facial skin color of the face image is determined, cosmetic color numbers, accessories, and the like matching that skin color can be recommended to the user. These extensions are not listed here one by one.
  • In summary, in the facial skin color recognition method of the embodiments of the present application, when a face image is acquired, the image of the area to be detected is intercepted from the face image; the color space of the area image is then converted into the YCrCb color space and the mean vector [avgY, avgCr, avgCb] of the area image is obtained; finally, the skin color patch matching the mean vector [avgY, avgCr, avgCb] is selected, from a preset skin color template comprising a plurality of skin color patches, as the facial skin color of the face image. The specific facial skin color can thus be accurately recognized, conveniently providing an effective reference for personal image design.
  • Moreover, the method determines the facial skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template; it does not require a large amount of training data, and the recognition process is simple and feasible.
  • In practice, the color of a face image is greatly affected by the illumination environment at acquisition time: face images acquired under different illumination environments, especially under light sources of different colors, exhibit different degrees of color shift, and this color shift introduces a large error into the final skin color recognition result.
  • the second embodiment of the present application proposes another method for recognizing facial skin color based on the first embodiment.
  • The difference from the first embodiment is that, before converting the color space of the intercepted area images, the facial skin color recognition method of this embodiment first eliminates the color shift of these area images.
  • As shown in FIG. 5, a flowchart of another facial skin color recognition method according to an embodiment of the present application is provided.
  • the method for recognizing facial skin color includes but is not limited to the following steps:
  • Step 210 Acquire a face image.
  • Step 220 Intercept the image of the area to be detected from the face image.
  • Step 230 Eliminate the color shift of the area image.
  • In this embodiment, to reduce the recognition error, the color shift of the region image to be detected is eliminated before its color space is converted. Any color equalization method, such as the Gray World algorithm or the White Patch Retinex algorithm, may be used to eliminate the color shift of the region image.
  • Step 240: Convert the color space of the color-corrected region image to the YCrCb color space, and obtain the mean vector [avgY', avgCr', avgCb'] of the corrected region image.
  • Step 250: Determine the face skin color of the face image based on the mean vector [avgY', avgCr', avgCb'] and a preset skin color template.
  • Steps 210, 220, 240, and 250 share the same or similar technical features as steps 110, 120, 130, and 140 of the first embodiment; for their specific implementations, refer to the corresponding descriptions of steps 110, 120, 130, and 140, which are not repeated in this embodiment.
  • In this embodiment, the region image to be detected is cropped from the face image first, and its color shift is removed afterwards, in order to reduce the amount of data the system must process. In other embodiments, the color shift of the whole face image may be removed first, and the region image to be detected then cropped from the corrected face image.
  • The face skin color recognition method provided by this embodiment removes the color shift of the cropped region images before converting their color space to the YCrCb color space. This reduces the influence of the illumination environment on the color of the face image and thereby further improves the accuracy of face skin color recognition.
  • Referring to FIG. 6, the face skin color recognition device 6 includes, but is not limited to, a face image acquisition unit 61, a cropping unit 62, a data processing unit 63, and an analysis unit 64.
  • The face image acquisition unit 61 is configured to acquire a face image.
  • The cropping unit 62 is configured to crop the region image to be detected from the face image; in some embodiments, the region image includes any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image.
  • The data processing unit 63 is configured to obtain the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, where avgY is the mean of the region image in the Y color channel, avgCr is the mean in the Cr color channel, and avgCb is the mean in the Cb color channel.
  • The analysis unit 64 is configured to determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the skin color template includes multiple skin color patches and the face skin color is one of the multiple skin color patches.
  • When the face image acquisition unit 61 acquires a face image, the cropping unit 62 crops the region image to be detected from it; the data processing unit 63 then obtains the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space; finally, the analysis unit 64 determines the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the skin color template includes multiple skin color patches and the face skin color is one of them.
  • In some embodiments, the face skin color recognition device further includes a conversion unit 65, configured to convert the color space of the region image to the YCrCb color space.
  • In some embodiments, there are n region images, where n is a positive integer greater than 0.
  • The data processing unit 63 is specifically configured to: calculate, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i; and obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
    avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
    avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
    avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
  • where 1 ≤ i ≤ n, sumY_i represents the sum of pixel values of the i-th region image in the Y color channel, sumCr_i represents its sum in the Cr color channel, sumCb_i represents its sum in the Cb color channel, and S_i represents the area of the i-th region image.
  • The analysis unit 64 is specifically configured to: obtain the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template; and select, in the skin color template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image, where avgY_j represents the mean of the j-th skin color patch of the template in the Y color channel, avgCr_j represents its mean in the Cr color channel, and avgCb_j represents its mean in the Cb color channel.
  • In still other embodiments, the face skin color recognition device 6 further includes an image pre-processing unit 66. The image pre-processing unit 66 eliminates the color shift of the region image, which reduces the influence of the illumination environment on the color of the face image and further improves the accuracy of face skin color recognition.
  • In the face skin color recognition device provided by this embodiment, when the face image acquisition unit 61 acquires a face image, the cropping unit 62 crops the region image to be detected from it; the data processing unit 63 then obtains the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space; finally, the analysis unit 64 selects, from a preset skin color template containing multiple skin color patches, the patch matching the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image. The device can thus accurately identify the specific face skin color and provide an effective reference for personal image design. Moreover, because the analysis unit 64 determines the face skin color from the mean vector [avgY, avgCr, avgCb] and a preset skin color template, no large amount of training data is required, and the recognition process is simple and practical.
  • FIG. 7 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
  • The smart terminal 700 can be any type of smart terminal, such as a mobile phone, a tablet computer, or a beauty appraisal instrument, and can execute any face skin color recognition method provided by the embodiments of this application.
  • the smart terminal 700 includes:
  • one or more processors 701 and a memory 702; one processor 701 is taken as an example in FIG. 7.
  • The processor 701 and the memory 702 may be connected by a bus or in another manner; connection by a bus is taken as an example in FIG. 7.
  • The memory 702, as a non-transitory computer-readable storage medium, can store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the face skin color recognition method in the embodiments of this application (for example, the face image acquisition unit 61, the cropping unit 62, the data processing unit 63, the analysis unit 64, the conversion unit 65, and the image pre-processing unit 66 shown in FIG. 6).
  • The processor 701 runs the non-transitory software programs, instructions, and modules stored in the memory 702 to execute the various functional applications and data processing of the face skin color recognition device, that is, to implement the face skin color recognition method of any of the above method embodiments.
  • The memory 702 may include a program storage area and a data storage area: the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the smart terminal 700, and the like.
  • The memory 702 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701; such remote memory may be connected to the smart terminal 700 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The one or more modules are stored in the memory 702 and, when executed by the one or more processors 701, perform the face skin color recognition method of any of the above method embodiments, for example method steps 110 to 140 in FIG. 1, method steps 131 to 132 in FIG. 2, method steps 141 to 142 in FIG. 4, and method steps 210 to 250 in FIG. 5, implementing the functions of units 61-66 in FIG. 6.
  • The embodiments of this application further provide a storage medium storing executable instructions that are executed by one or more processors, for example by the processor 701 in FIG. 7, causing the one or more processors to perform the face skin color recognition method of any of the above method embodiments, for example method steps 110 to 140 in FIG. 1, method steps 131 to 132 in FIG. 2, method steps 141 to 142 in FIG. 4, and method steps 210 to 250 in FIG. 5, implementing the functions of units 61-66 in FIG. 6.
  • The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • From the description of the above implementations, the various embodiments can be implemented by means of software plus a general-purpose hardware platform, or, of course, by hardware.
  • A person skilled in the art can understand that all or part of the processes of the above method embodiments can be completed by a computer program instructing the related hardware; the program can be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide a face skin color recognition method and device and a smart terminal. The face skin color recognition method includes: acquiring a face image; cropping a region image to be detected from the face image; obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space; and determining the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the skin color template includes multiple skin color patches and the face skin color is one of the multiple skin color patches. Through this technical solution, the embodiments of the present application can accurately identify the color of facial skin and provide an effective reference for personal image design.

Description

Face skin color recognition method and device, and smart terminal
Technical Field
The present application relates to the technical field of face recognition, and in particular to a face skin color recognition method and device and a smart terminal.
Background Art
Face recognition is a technology that performs identity authentication by analyzing and comparing visual feature information of human faces; its research areas include identity recognition, expression recognition, gender recognition, nationality recognition, beauty and skin care, and so on.
In recent years, with the continuous improvement of people's material living standards, the demand for personal image design has grown rapidly. Providing a user with personal image design usually requires first determining the user's face skin color, and then selecting a suitable foundation shade, makeup, accessories, and so on according to that skin color.
At present, mainstream skin color detection methods include fixed skin color distribution detection and joint detection combining skin color probability distribution with Bayesian decision-making. However, these methods can only determine which regions of an image belong to skin; they cannot accurately identify the specific color of facial skin and can therefore hardly provide an effective reference for personal image design.
Therefore, how to accurately identify the specific color of facial skin is a problem to be solved urgently.
Summary of the Invention
Embodiments of the present application provide a face skin color recognition method and device and a smart terminal, which can solve the problem of how to accurately identify the specific color of facial skin.
To solve the above technical problem, in a first aspect, an embodiment of the present application provides a face skin color recognition method, including:
acquiring a face image;
cropping a region image to be detected from the face image;
obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, where avgY is the mean of the region image in the Y color channel, avgCr is the mean of the region image in the Cr color channel, and avgCb is the mean of the region image in the Cb color channel;
determining the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the skin color template includes multiple skin color patches and the face skin color is one of the multiple skin color patches.
Optionally, before the step of obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, the method further includes:
converting the color space of the region image to the YCrCb color space.
Optionally, before the step of converting the color space of the region image to the YCrCb color space, the method further includes:
eliminating the color shift of the region image.
Optionally, there are n region images, where n is a positive integer, and obtaining the mean vector [avgY, avgCr, avgCb] of the region images in the YCrCb color space includes:
calculating, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i;
obtaining the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
where 1 ≤ i ≤ n, sumY_i is the sum of pixel values of the i-th region image in the Y color channel, sumCr_i is its sum in the Cr color channel, sumCb_i is its sum in the Cb color channel, and S_i is the area of the i-th region image; sumY_1 + … + sumY_n is the total pixel-value sum of the n region images in the Y color channel; sumCr_1 + … + sumCr_n is the total in the Cr color channel; sumCb_1 + … + sumCb_n is the total in the Cb color channel; and S_1 + … + S_n is the total area of the n region images.
Optionally, determining the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template includes:
obtaining the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template;
selecting, in the skin color template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image;
where avgY_j is the mean of the j-th skin color patch of the template in the Y color channel, avgCr_j is its mean in the Cr color channel, and avgCb_j is its mean in the Cb color channel.
Optionally, the region image includes any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image.
To solve the above technical problem, in a second aspect, an embodiment of the present application provides a face skin color recognition device, including:
a face image acquisition unit, configured to acquire a face image;
a cropping unit, configured to crop a region image to be detected from the face image;
a data processing unit, configured to obtain the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, where avgY is the mean of the region image in the Y color channel, avgCr is the mean in the Cr color channel, and avgCb is the mean in the Cb color channel;
an analysis unit, configured to determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the skin color template includes multiple skin color patches and the face skin color is one of the multiple skin color patches.
Optionally, there are n region images, where n is a positive integer greater than 0, and the data processing unit is specifically configured to:
calculate, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i;
obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
where 1 ≤ i ≤ n, sumY_i is the sum of pixel values of the i-th region image in the Y color channel, sumCr_i is its sum in the Cr color channel, sumCb_i is its sum in the Cb color channel, and S_i is the area of the i-th region image; sumY_1 + … + sumY_n, sumCr_1 + … + sumCr_n, and sumCb_1 + … + sumCb_n are the total pixel-value sums of the n region images in the Y, Cr, and Cb color channels respectively, and S_1 + … + S_n is their total area.
Optionally, the analysis unit is specifically configured to:
obtain the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template;
select, in the skin color template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image;
where avgY_j is the mean of the j-th skin color patch of the template in the Y color channel, avgCr_j is its mean in the Cr color channel, and avgCb_j is its mean in the Cb color channel.
Optionally, the region image includes any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image.
To solve the above technical problem, in a third aspect, an embodiment of the present application provides a smart terminal, including:
at least one processor; and
a memory communicatively connected to the at least one processor; where
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the face skin color recognition method described above.
To solve the above technical problem, in a fourth aspect, an embodiment of the present application provides a storage medium storing executable instructions which, when executed by a smart terminal, cause the smart terminal to perform the face skin color recognition method described above.
To solve the above technical problem, in a fifth aspect, an embodiment of the present application further provides a program product, the program product including a program stored on a storage medium, the program including program instructions which, when executed by a smart terminal, cause the smart terminal to perform the face skin color recognition method described above.
The beneficial effects of the embodiments of the present application are as follows: with the face skin color recognition method and device and the smart terminal provided by the embodiments, when a face image is acquired, the region image to be detected is cropped from the face image; the color space of the region image is then converted to the YCrCb color space and its mean vector [avgY, avgCr, avgCb] is obtained; finally, a skin color patch matching the mean vector [avgY, avgCr, avgCb] is selected from a preset skin color template containing multiple skin color patches and taken as the face skin color of the face image. The color of the facial skin can thus be identified accurately, providing an effective reference for personal image design. At the same time, because the method determines the face skin color from the mean vector [avgY, avgCr, avgCb] and a preset skin color template, no large amount of training data is required, and the recognition process is simple and practical.
Brief Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings; these illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements and, unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a schematic flow chart of a face skin color recognition method provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, provided by an embodiment of the present application;
FIG. 3 is a grayscale example of a skin color template provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of a method for determining the face skin color of a face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, provided by an embodiment of the present application;
FIG. 5 is a schematic flow chart of another face skin color recognition method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a face skin color recognition device provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of the hardware structure of a smart terminal provided by an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
It should be noted that, if there is no conflict, the features of the embodiments of the present application may be combined with one another, all within the scope of protection of the present application. In addition, although functional modules are divided in the device diagrams and a logical order is shown in the flow charts, in some cases the steps shown or described may be performed with a module division different from that in the device, or in an order different from that in the flow charts.
Embodiments of the present application provide a face skin color recognition method and device, a smart terminal, and a storage medium. The face skin color recognition method is a recognition scheme that performs face skin color matching against a preset skin color template: when a face image is acquired, the region image to be detected is cropped from the face image; the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space is then obtained; finally, a skin color patch matching the mean vector [avgY, avgCr, avgCb] is selected from a preset skin color template containing multiple skin color patches and taken as the face skin color of the face image. The color of the facial skin can thus be identified accurately, providing an effective reference for personal image design; moreover, because the method determines the face skin color from the mean vector [avgY, avgCr, avgCb] and a preset template, no large amount of training data is required and the recognition process is simple and practical.
The face skin color recognition method, smart terminal, and storage medium provided by the embodiments of the present application can be applied to any technical field related to face recognition, such as portrait nationality recognition, and are especially suitable for beauty applications, personal image design, and similar fields. For example, a beauty application can be developed based on the inventive concept of the face skin color recognition method provided by the embodiments of the present application: when the user inputs a face image, the application automatically identifies the face skin color of that image and then recommends suitable foundation shades, makeup, accessories, skin care products, and so on based on that skin color.
The face skin color recognition method provided by the embodiments of the present application can be executed by any type of smart terminal with an image processing function. The smart terminal may include any suitable type of storage medium for storing data, such as a magnetic disk, a compact disc (CD-ROM), read-only memory, or random access memory. The smart terminal may also include one or more logical operation modules that execute any suitable type of function or operation, such as database lookup or image processing, in a single thread or in multiple parallel threads. The logical operation module may be any suitable type of electronic circuit or chip-type electronic device capable of performing logical operations, for example a single-core processor, a multi-core processor, or a graphics processing unit (GPU). By way of example, the smart terminal may include, but is not limited to, a beauty appraisal instrument, a personal computer, a tablet computer, a smartphone, a server, and so on.
Specifically, the embodiments of the present application are further described below with reference to the drawings.
Embodiment One
FIG. 1 is a schematic flow chart of a face skin color recognition method provided by an embodiment of the present application. Referring to FIG. 1, the method includes, but is not limited to, the following steps:
Step 110: Acquire a face image.
In this embodiment, the "face image" is an image that includes the face of the person to be detected, from which the facial features of that person can be obtained.
In this embodiment, the face image may be acquired by capturing the face of the person to be detected in real time, or by directly retrieving an existing image that includes the person's face from the smart terminal locally or from the cloud. Different acquisition methods may be chosen for different application scenarios or persons to be detected. For example, suppose a cosmetics store is equipped with a smart terminal for recommending suitable cosmetics to users: to promptly recommend suitable shades of foundation, concealer, lipstick, and so on based on the user's face skin color, the terminal may capture the person's face image in real time through a camera. As another example, a user may wish to design suitable makeup through a personal smart terminal such as a smartphone; since personal face images are usually stored on such a terminal, in this scenario the terminal may also retrieve an existing image including the person's face directly from local storage or the cloud. Of course, in practical applications, the way of acquiring the face image is not limited to the ways described above, and the embodiments of the present application do not specifically limit it.
Step 120: Crop the region image to be detected from the face image.
In this embodiment, the "region image" is an image of the region used as the reference for detecting the face skin color; thus, in this embodiment, the skin color presented by the region image represents the skin color of the whole face. The region image may be an image of any one or more regions located within the facial contour of the face image, for example a forehead region image, a nose region image, a left-cheek region image, a right-cheek region image, or a chin region image. Considering that some regions of the face may contain noise, for example a fringe on a woman's forehead or a beard on a man's chin, in some embodiments the images of the three relatively "clean" regions, namely the left cheek, the right cheek, and the nose, may be used as the region images to be detected; that is, in such embodiments, the region image includes any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image. When multiple region images are used, the reliability of the recognition result is enhanced.
Specifically, in this embodiment, the region image to be detected may be cropped from the face image as follows: when a face image is acquired, face landmark localization is first performed on it, for example by applying a third-party toolkit such as dlib or face++; then the region image to be detected is cropped based on the positions of the located face landmarks, for example cropping the left-cheek region image, and/or the nose-tip region image, and/or the right-cheek region image based on the coordinates of the located landmarks.
Step 130: Obtain the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space.
In image processing, the most commonly used and most basic color space is the RGB color space, and existing image acquisition devices ultimately capture RGB values, so in general the acquired face image and region image are also in the RGB color space. However, the RGB color space is unintuitive and perceptually very non-uniform: changes in the illumination environment easily change RGB values, which introduces a large error into the face skin color recognition result. Therefore, in this embodiment, to reduce the influence of the illumination environment on the recognition result, the color presented by the cropped region image is characterized by its mean vector in the YCrCb color space. In the YCrCb color space, the Y channel represents the luminance of a pixel, that is, its gray level, which can be obtained by superimposing specific portions of the pixel's RGB signal; the Cr and Cb channels represent the chrominance of the pixel, describing its hue and saturation and specifying its color: Cr reflects the difference between the red component of the RGB input signal and the luminance value Y and characterizes the hue, while Cb reflects the difference between the blue component of the RGB input signal and the luminance value Y and characterizes the saturation.
In this embodiment, the "mean vector" consists of the means avgY, avgCr, and avgCb of the region image in the Y, Cr, and Cb color channels respectively, written [avgY, avgCr, avgCb], and characterizes the skin color presented in the region image, where avgY is the mean of the region image in the Y color channel, avgCr is the mean in the Cr color channel, and avgCb is the mean in the Cb color channel.
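As an illustration of such a conversion, the following is a minimal sketch of one common RGB-to-YCrCb mapping (the full-range BT.601 variant, the same convention OpenCV's `cv2.cvtColor` uses). The patent does not fix a particular conversion formula, so the coefficients below are an assumption, not the prescribed implementation:

```python
import numpy as np

def rgb_to_ycrcb(img):
    """Convert an RGB image to YCrCb using full-range BT.601 coefficients.

    img: (H, W, 3) array with R, G, B values in [0, 255].
    Returns a float array of the same shape holding [Y, Cr, Cb] per pixel.
    """
    img = np.asarray(img, dtype=np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (gray level)
    cr = (r - y) * 0.713 + 128.0            # red-difference chrominance
    cb = (b - y) * 0.564 + 128.0            # blue-difference chrominance
    return np.stack([y, cr, cb], axis=-1)
```

A neutral gray pixel maps to Y equal to its gray level with Cr = Cb = 128, which is one reason skin tones occupy a compact Cr/Cb range largely independent of brightness.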
In this embodiment, there may be n region images, where n is any positive integer greater than 0. After the n region images to be detected are cropped, their mean vector [avgY, avgCr, avgCb] in the YCrCb color space can be obtained by the method shown in FIG. 2.
Specifically, referring to FIG. 2, the method may include, but is not limited to, the following steps:
Step 131: Calculate, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i.
In this embodiment, after the color spaces of the n region images are converted to the YCrCb color space, each region image is split into its Y, Cr, and Cb color channels; that is, the Y, Cr, and Cb channels are separated for every pixel of every region image. At the same time, for each region image, the sum of its pixel values in the Y color channel sumY_i, in the Cr color channel sumCr_i, and in the Cb color channel sumCb_i is calculated, together with its area S_i, where 1 ≤ i ≤ n, sumY_i is the sum of pixel values of the i-th region image in the Y color channel, sumCr_i is the sum in the Cr color channel, sumCb_i is the sum in the Cb color channel, and S_i is the area of the i-th region image. It should be noted that the "area" in this embodiment is the area in image space, namely the total number of pixels of a single color channel of the region image.
For example, suppose that in one application instance the region images cropped from the acquired face image are the left-cheek region image, the nose region image, and the right-cheek region image. After their color spaces are converted to the YCrCb color space, the pixel-value sums of the left-cheek region image in the Y, Cr, and Cb color channels are calculated as sumY_1, sumCr_1, and sumCb_1 and its area as S_1; those of the nose region image as sumY_2, sumCr_2, and sumCb_2 with area S_2; and those of the right-cheek region image as sumY_3, sumCr_3, and sumCb_3 with area S_3, yielding the parameters sumY_1, sumY_2, sumY_3, sumCr_1, sumCr_2, sumCr_3, sumCb_1, sumCb_2, sumCb_3, S_1, S_2, and S_3.
Step 132: Obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space.
In this embodiment, based on the parameters obtained in step 131, the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space can be obtained by the following formulas:
avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
where sumY_1 + … + sumY_n is the total pixel-value sum of the n region images in the Y color channel, sumCr_1 + … + sumCr_n is the total in the Cr color channel, sumCb_1 + … + sumCb_n is the total in the Cb color channel, and S_1 + … + S_n is the total area of the n region images.
For example, if the parameters obtained in step 131 are sumY_1, sumY_2, sumY_3, sumCr_1, sumCr_2, sumCr_3, sumCb_1, sumCb_2, sumCb_3, S_1, S_2, and S_3, then the mean vector [avgY, avgCr, avgCb] of the three region images follows from the formulas above:
avgY = (sumY_1 + sumY_2 + sumY_3) / (S_1 + S_2 + S_3)
avgCr = (sumCr_1 + sumCr_2 + sumCr_3) / (S_1 + S_2 + S_3)
avgCb = (sumCb_1 + sumCb_2 + sumCb_3) / (S_1 + S_2 + S_3)
Of course, in practical applications, when n ≥ 2, the mean vector [avgY, avgCr, avgCb] of the n region images may also be obtained in other ways. For example, after the color spaces of the n region images are converted to the YCrCb color space, the mean vector [avgY_i, avgCr_i, avgCb_i] of each region image in the YCrCb color space may first be obtained, where avgY_i, avgCr_i, and avgCb_i are the means of the i-th region image in the Y, Cr, and Cb color channels respectively; the n mean vectors [avgY_i, avgCr_i, avgCb_i] are then averaged (or weighted-averaged) to obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space.
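The pooled average described above can be sketched in a few lines. This is an illustrative fragment (the function and variable names are ours, not the patent's), assuming the region images are already in the YCrCb color space:

```python
import numpy as np

def mean_vector(regions):
    """Pooled [avgY, avgCr, avgCb] over n region images.

    regions: list of (h_i, w_i, 3) arrays in the YCrCb color space.
    Implements avg = (sumC_1 + ... + sumC_n) / (S_1 + ... + S_n) for
    each channel C, so every pixel counts equally regardless of which
    region it came from.
    """
    totals = np.zeros(3)
    total_area = 0
    for region in regions:
        totals += region.reshape(-1, 3).sum(axis=0)      # sumY_i, sumCr_i, sumCb_i
        total_area += region.shape[0] * region.shape[1]  # S_i
    return totals / total_area
```

Note the design difference from the alternative mentioned above: pooling sums before dividing weights each region by its area, whereas averaging the per-region mean vectors would weight all regions equally.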
In addition, it can be understood that, in some embodiments, if the color space of the cropped region image is not the YCrCb color space, then before the step of obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, the method further includes: converting the color space of the region image to the YCrCb color space.
For example, if the acquired face image is in the RGB color space, the region image cropped from it is also in the RGB color space, so the conversion algorithm between the RGB and YCrCb color spaces can be used to convert the region image to the YCrCb color space; this embodiment does not specifically limit which RGB-to-YCrCb conversion algorithm is used. Furthermore, if in practical applications the acquired face image is in another color space, such as the HSV or CMY color space, the region image can likewise be converted to the YCrCb color space by the corresponding conversion algorithm.
Step 140: Determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template.
In this embodiment, the "preset skin color template" may be any skin color sample chart in common daily use; the template includes multiple skin color patches, each representing one face skin color. FIG. 3 is a grayscale example of a skin color template provided by an embodiment of the present application; this template includes 66 skin color patches.
In practical scenarios, for example in brick-and-mortar cosmetics stores, a skin color template corresponding to foundation shades is usually provided (each skin color patch on the template has a corresponding foundation shade) so that a shopping guide can determine a suitable foundation shade for a customer by comparing the customer's skin color with the sample chart. Therefore, in this embodiment, the face skin color of the face image can be determined from the actually obtained mean vector [avgY, avgCr, avgCb] characterizing the skin color of the face image and a preset skin color template; that is, the skin color patch that best matches the mean vector [avgY, avgCr, avgCb] is selected from the multiple patches of the template as the face skin color of the face image. Thus, in this embodiment, face skin color recognition only requires choosing a suitable skin color template, without training a large amount of sample data, saving the time and cost of face skin color recognition.
Specifically, in this embodiment, the face skin color of the face image can be determined based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template by the method shown in FIG. 4.
Referring to FIG. 4, the method may include, but is not limited to, the following steps:
Step 141: Obtain the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template.
In this embodiment, the "standard vector" is the mean vector of a skin color patch in the skin color template and serves as the standard against which the face skin color of the actually acquired face image is matched; each skin color patch corresponds to one standard vector [avgY_j, avgCr_j, avgCb_j], where avgY_j is the mean of the j-th skin color patch of the template in the Y color channel, avgCr_j is its mean in the Cr color channel, and avgCb_j is its mean in the Cb color channel.
In this embodiment, the skin color patch in the template that is closest to the face skin color of the face image is determined by comparing the similarity between the mean vector [avgY, avgCr, avgCb] of the actually acquired face image and the standard vector [avgY_j, avgCr_j, avgCb_j] of each patch in the template, and that patch is then taken as the face skin color of the face image. Therefore, during matching, the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset template must first be obtained.
The standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset template may be obtained as follows: first convert the color space of each patch to the YCrCb color space, then split the Y, Cr, and Cb color channels and obtain the mean of each channel, thereby computing the standard vector [avgY_j, avgCr_j, avgCb_j] of each patch. Alternatively, the known standard vector [avgY_j, avgCr_j, avgCb_j] of each patch may be stored on the smart terminal so that, during face skin color recognition, the standard vectors of all patches in the template can be retrieved directly from local storage, saving time and data processing.
Step 142: Select, in the skin color template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image.
In this embodiment, the Euclidean distance between the mean vector [avgY, avgCr, avgCb] and each standard vector [avgY_j, avgCr_j, avgCb_j] characterizes the similarity between the face skin color of the actually acquired face image and a skin color patch of the template: the smaller the Euclidean distance, the greater the similarity. Therefore, in this embodiment, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] can be selected in the template as the face skin color of the face image.
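The nearest-patch selection of step 142 amounts to an argmin over Euclidean distances. The sketch below uses illustrative names and a made-up two-patch template; a real template such as the 66-patch chart of FIG. 3 would simply supply more rows:

```python
import numpy as np

def match_skin_patch(mean_vec, standard_vecs):
    """Index of the template patch closest to mean_vec.

    mean_vec: [avgY, avgCr, avgCb] computed from the face region images.
    standard_vecs: (m, 3) array whose rows are [avgY_j, avgCr_j, avgCb_j].
    Returns the index j minimizing the Euclidean distance in YCrCb space.
    """
    diffs = np.asarray(standard_vecs, dtype=np.float64) - np.asarray(mean_vec, dtype=np.float64)
    distances = np.linalg.norm(diffs, axis=1)  # one distance per patch
    return int(np.argmin(distances))
```

The returned index identifies the skin color patch taken as the face skin color (and, in the cosmetics scenario above, the foundation shade associated with that patch).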
Further, it can be understood that, in practical applications, the face skin color recognition method provided by the embodiments of the present application can be extended according to the actual application scenario; for example, after the face skin color of the face image is determined, cosmetics shades, accessories, and the like matching that skin color can be recommended to the user. These extensions are not enumerated here one by one.
From the above technical solution, the beneficial effects of the embodiments of the present application are as follows: with the face skin color recognition method provided, when a face image is acquired, the region image to be detected is cropped from it; the color space of the region image is then converted to the YCrCb color space and its mean vector [avgY, avgCr, avgCb] is obtained; finally, a skin color patch matching the mean vector is selected from a preset skin color template containing multiple skin color patches as the face skin color of the face image. The specific face skin color can thus be identified accurately, providing an effective reference for personal image design; moreover, because the method determines the face skin color from the mean vector [avgY, avgCr, avgCb] and a preset template, no large amount of training data is required and the recognition process is simple and practical.
Embodiment Two
In practical applications, the color of a face image is strongly affected by the illumination environment at capture time; that is, face images captured under different illumination environments, especially under light sources of different colors, exhibit varying degrees of color shift, and this color shift introduces a large error into the final skin color recognition result. On this basis, the second embodiment of the present application proposes, building on Embodiment One, another face skin color recognition method, which differs from Embodiment One in that the color shift of the cropped region images is eliminated before their color space is converted to the YCrCb color space.
Specifically, FIG. 5 is a schematic flow chart of another face skin color recognition method provided by an embodiment of the present application. Referring to FIG. 5, the method includes, but is not limited to, the following steps:
Step 210: Acquire a face image.
Step 220: Crop the region image to be detected from the face image.
Step 230: Eliminate the color shift of the region image.
In this embodiment, to reduce the error of face skin color recognition, the color shift of the region image to be detected is eliminated before its color space is converted. Any color equalization method, such as the Gray World algorithm or the White Patch Retinex algorithm, may be used to eliminate the color shift of the region image.
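As a concrete illustration of the Gray World idea, the sketch below rescales each channel so that its average matches the global average. This is a minimal sketch (the array shape and the [0, 255] value range are our assumptions), not the patent's prescribed implementation:

```python
import numpy as np

def gray_world(img):
    """Gray World color balance for an RGB image.

    Assumes the scene averages to gray, so each channel's mean is
    scaled toward the mean of all three channels, removing a global
    color cast introduced by the light source.
    """
    img = np.asarray(img, dtype=np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gray = channel_means.mean()                      # common target level
    gains = gray / channel_means                     # per-channel correction
    return np.clip(img * gains, 0.0, 255.0)
```

After correction, the three channel means coincide, so a uniform color cast from a tinted light source is largely neutralized before the YCrCb conversion of step 240.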
Step 240: Convert the color space of the color-corrected region image to the YCrCb color space, and obtain the mean vector [avgY', avgCr', avgCb'] of the corrected region image.
Step 250: Determine the face skin color of the face image based on the mean vector [avgY', avgCr', avgCb'] and a preset skin color template.
In this embodiment, steps 210, 220, 240, and 250 share the same or similar technical features as steps 110, 120, 130, and 140 of Embodiment One; for their specific implementations, refer to the corresponding descriptions of steps 110, 120, 130, and 140, which are not repeated in this embodiment.
In addition, it can be understood that, in this embodiment, cropping the region image to be detected from the face image before eliminating its color shift serves to reduce the amount of data the system must process. In some other embodiments, the color shift of the face image may be eliminated first when it is acquired, and the region image to be detected then cropped from the corrected face image.
From the above technical solution, the beneficial effects of the embodiments of the present application are as follows: by eliminating the color shift of the cropped region images before converting their color space to the YCrCb color space, the face skin color recognition method provided can reduce the influence of the illumination environment on the color of the face image and thereby further improve the accuracy of face skin color recognition.
Embodiment Three
FIG. 6 is a schematic structural diagram of a face skin color recognition device provided by an embodiment of the present application. Referring to FIG. 6, the face skin color recognition device 6 includes, but is not limited to, a face image acquisition unit 61, a cropping unit 62, a data processing unit 63, and an analysis unit 64.
The face image acquisition unit 61 is configured to acquire a face image.
The cropping unit 62 is configured to crop the region image to be detected from the face image; in some embodiments, the region image includes any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image.
The data processing unit 63 is configured to obtain the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, where avgY is the mean of the region image in the Y color channel, avgCr is the mean in the Cr color channel, and avgCb is the mean in the Cb color channel.
The analysis unit 64 is configured to determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the skin color template includes multiple skin color patches and the face skin color is one of the multiple skin color patches.
In the embodiments of the present application, when the face image acquisition unit 61 acquires a face image, the cropping unit 62 crops the region image to be detected from it; the data processing unit 63 obtains the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, where avgY, avgCr, and avgCb are the means of the region image in the Y, Cr, and Cb color channels respectively; finally, the analysis unit 64 determines the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, where the template includes multiple skin color patches and the face skin color is one of them.
In some embodiments, the face skin color recognition device further includes:
a conversion unit 65, configured to convert the color space of the region image to the YCrCb color space.
In some embodiments, there are n region images, where n is a positive integer greater than 0, and the data processing unit 63 is specifically configured to: calculate, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i; and obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
where 1 ≤ i ≤ n, sumY_i is the sum of pixel values of the i-th region image in the Y color channel, sumCr_i is its sum in the Cr color channel, sumCb_i is its sum in the Cb color channel, and S_i is the area of the i-th region image; sumY_1 + … + sumY_n, sumCr_1 + … + sumCr_n, and sumCb_1 + … + sumCb_n are the total pixel-value sums of the n region images in the Y, Cr, and Cb color channels respectively, and S_1 + … + S_n is their total area.
In some embodiments, the analysis unit 64 is specifically configured to: obtain the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template; and select, in the template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image, where avgY_j is the mean of the j-th skin color patch of the template in the Y color channel, avgCr_j is its mean in the Cr color channel, and avgCb_j is its mean in the Cb color channel.
In still other embodiments, the face skin color recognition device 6 further includes an image pre-processing unit 66. In these embodiments, the image pre-processing unit 66 eliminates the color shift of the region image, which reduces the influence of the illumination environment on the color of the face image and further improves the accuracy of face skin color recognition.
It should also be noted that, since the face skin color recognition device is based on the same inventive concept as the face skin color recognition methods of method embodiments one and two above, the corresponding content of those embodiments applies equally to this device embodiment and is not detailed again here.
From the above technical solution, the beneficial effects of the embodiments of the present application are as follows: with the face skin color recognition device provided, when the face image acquisition unit 61 acquires a face image, the cropping unit 62 crops the region image to be detected from it; the data processing unit 63 then obtains the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space; finally, the analysis unit 64 selects, from a preset skin color template containing multiple skin color patches, a patch matching the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image. The specific face skin color can thus be identified accurately, providing an effective reference for personal image design; moreover, because the analysis unit 64 determines the face skin color from the mean vector [avgY, avgCr, avgCb] and a preset template, no large amount of training data is required and the recognition process is simple and practical.
Embodiment Four
FIG. 7 is a schematic structural diagram of a smart terminal provided by an embodiment of the present application. The smart terminal 700 can be any type of smart terminal, such as a mobile phone, a tablet computer, or a beauty appraisal instrument, and can execute any face skin color recognition method provided by the embodiments of the present application.
Specifically, referring to FIG. 7, the smart terminal 700 includes:
one or more processors 701 and a memory 702; one processor 701 is taken as an example in FIG. 7.
The processor 701 and the memory 702 may be connected by a bus or in another manner; connection by a bus is taken as an example in FIG. 7.
The memory 702, as a non-transitory computer-readable storage medium, can store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the face skin color recognition method in the embodiments of the present application (for example, the face image acquisition unit 61, the cropping unit 62, the data processing unit 63, the analysis unit 64, the conversion unit 65, and the image pre-processing unit 66 shown in FIG. 6). The processor 701 runs the non-transitory software programs, instructions, and modules stored in the memory 702 to execute the various functional applications and data processing of the face skin color recognition device, that is, to implement the face skin color recognition method of any of the above method embodiments.
The memory 702 may include a program storage area and a data storage area: the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the smart terminal 700, and the like. In addition, the memory 702 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701; such remote memory may be connected to the smart terminal 700 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 702 and, when executed by the one or more processors 701, perform the face skin color recognition method of any of the above method embodiments, for example method steps 110 to 140 in FIG. 1, method steps 131 to 132 in FIG. 2, method steps 141 to 142 in FIG. 4, and method steps 210 to 250 in FIG. 5, implementing the functions of units 61-66 in FIG. 6.
An embodiment of the present application further provides a storage medium storing executable instructions that are executed by one or more processors, for example by the processor 701 in FIG. 7, causing the one or more processors to perform the face skin color recognition method of any of the above method embodiments, for example method steps 110 to 140 in FIG. 1, method steps 131 to 132 in FIG. 2, method steps 141 to 142 in FIG. 4, and method steps 210 to 250 in FIG. 5, implementing the functions of units 61-66 in FIG. 6.
The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above implementations, a person of ordinary skill in the art can clearly understand that each implementation can be realized by means of software plus a general-purpose hardware platform, or of course by hardware. A person of ordinary skill in the art can also understand that all or part of the processes of the above method embodiments can be completed by a computer program instructing the related hardware; the program can be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
The above products can execute the methods provided by the embodiments of the present application and possess the corresponding functional modules and beneficial effects for executing those methods. For technical details not described in detail in this embodiment, refer to the methods provided by the embodiments of the present application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Under the concept of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application as described above exist which, for brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

  1. A face skin color recognition method, characterized by comprising:
    acquiring a face image;
    cropping a region image to be detected from the face image;
    obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, wherein avgY represents the mean of the region image in the Y color channel, avgCr represents the mean of the region image in the Cr color channel, and avgCb represents the mean of the region image in the Cb color channel;
    determining the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template comprises multiple skin color patches, and the face skin color is one of the multiple skin color patches.
  2. The face skin color recognition method according to claim 1, characterized in that, before the step of obtaining the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, the method further comprises:
    converting the color space of the region image to the YCrCb color space.
  3. The face skin color recognition method according to claim 2, characterized in that, before the step of converting the color space of the region image to the YCrCb color space, the method further comprises:
    eliminating the color shift of the region image.
  4. The face skin color recognition method according to claim 1, characterized in that there are n region images, n being a positive integer, and obtaining the mean vector [avgY, avgCr, avgCb] of the region images in the YCrCb color space comprises:
    calculating, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i;
    obtaining the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
    avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
    avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
    avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
    wherein 1 ≤ i ≤ n, sumY_i represents the sum of pixel values of the i-th region image in the Y color channel, sumCr_i represents its sum in the Cr color channel, sumCb_i represents its sum in the Cb color channel, and S_i represents the area of the i-th region image; sumY_1 + … + sumY_n, sumCr_1 + … + sumCr_n, and sumCb_1 + … + sumCb_n represent the total pixel-value sums of the n region images in the Y, Cr, and Cb color channels respectively, and S_1 + … + S_n represents the total area of the n region images.
  5. The face skin color recognition method according to claim 1, characterized in that determining the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template comprises:
    obtaining the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template;
    selecting, in the skin color template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image;
    wherein avgY_j represents the mean of the j-th skin color patch of the skin color template in the Y color channel, avgCr_j represents its mean in the Cr color channel, and avgCb_j represents its mean in the Cb color channel.
  6. The face skin color recognition method according to any one of claims 1-5, characterized in that the region image comprises any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image.
  7. A face skin color recognition device, characterized by comprising:
    a face image acquisition unit, configured to acquire a face image;
    a cropping unit, configured to crop a region image to be detected from the face image;
    a data processing unit, configured to obtain the mean vector [avgY, avgCr, avgCb] of the region image in the YCrCb color space, wherein avgY represents the mean of the region image in the Y color channel, avgCr represents the mean in the Cr color channel, and avgCb represents the mean in the Cb color channel;
    an analysis unit, configured to determine the face skin color of the face image based on the mean vector [avgY, avgCr, avgCb] and a preset skin color template, wherein the skin color template comprises multiple skin color patches, and the face skin color is one of the multiple skin color patches.
  8. The face skin color recognition device according to claim 7, characterized in that there are n region images, n being a positive integer greater than 0, and the data processing unit is specifically configured to:
    calculate, for each region image, the sum of its pixel values in the Y color channel sumY_i, the sum in the Cr color channel sumCr_i, the sum in the Cb color channel sumCb_i, and its area S_i;
    obtain the mean vector [avgY, avgCr, avgCb] of the n region images in the YCrCb color space by the following formulas:
    avgY = (sumY_1 + sumY_2 + … + sumY_n) / (S_1 + S_2 + … + S_n)
    avgCr = (sumCr_1 + sumCr_2 + … + sumCr_n) / (S_1 + S_2 + … + S_n)
    avgCb = (sumCb_1 + sumCb_2 + … + sumCb_n) / (S_1 + S_2 + … + S_n)
    wherein 1 ≤ i ≤ n, sumY_i represents the sum of pixel values of the i-th region image in the Y color channel, sumCr_i represents its sum in the Cr color channel, sumCb_i represents its sum in the Cb color channel, and S_i represents the area of the i-th region image; sumY_1 + … + sumY_n, sumCr_1 + … + sumCr_n, and sumCb_1 + … + sumCb_n represent the total pixel-value sums of the n region images in the Y, Cr, and Cb color channels respectively, and S_1 + … + S_n represents the total area of the n region images.
  9. The face skin color recognition device according to claim 7, characterized in that the analysis unit is specifically configured to:
    obtain the standard vector [avgY_j, avgCr_j, avgCb_j] of each skin color patch in the preset skin color template;
    select, in the skin color template, the skin color patch whose standard vector [avgY_j, avgCr_j, avgCb_j] has the smallest Euclidean distance to the mean vector [avgY, avgCr, avgCb] as the face skin color of the face image;
    wherein avgY_j represents the mean of the j-th skin color patch of the skin color template in the Y color channel, avgCr_j represents its mean in the Cr color channel, and avgCb_j represents its mean in the Cb color channel.
  10. The face skin color recognition device according to any one of claims 7-9, characterized in that the region image comprises any one or more of a left-cheek region image, a nose-tip region image, and a right-cheek region image.
  11. A smart terminal, characterized by comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the face skin color recognition method according to any one of claims 1-6.
  12. A storage medium, characterized in that the storage medium stores executable instructions which, when executed by a smart terminal, cause the smart terminal to perform the face skin color recognition method according to any one of claims 1-6.
PCT/CN2017/112533 2017-11-23 2017-11-23 一种人脸肤色识别方法、装置和智能终端 WO2019100282A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780009028.3A CN108701217A (zh) 2017-11-23 2017-11-23 一种人脸肤色识别方法、装置和智能终端
PCT/CN2017/112533 WO2019100282A1 (zh) 2017-11-23 2017-11-23 一种人脸肤色识别方法、装置和智能终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112533 WO2019100282A1 (zh) 2017-11-23 2017-11-23 一种人脸肤色识别方法、装置和智能终端

Publications (1)

Publication Number Publication Date
WO2019100282A1 true WO2019100282A1 (zh) 2019-05-31

Family

ID=63844123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112533 WO2019100282A1 (zh) 2017-11-23 2017-11-23 Face skin color recognition method and apparatus, and smart terminal

Country Status (2)

Country Link
CN (1) CN108701217A (zh)
WO (1) WO2019100282A1 (zh)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109413508A (zh) * 2018-10-26 2019-03-01 广州虎牙信息科技有限公司 Image blending method, apparatus and device, stream pushing method, and live streaming ***
CN109712090A (zh) * 2018-12-18 2019-05-03 维沃移动通信有限公司 Image processing method and apparatus, and mobile terminal
CN109934092A (zh) * 2019-01-18 2019-06-25 深圳壹账通智能科技有限公司 Color shade identification method and apparatus, computer device, and storage medium
JP2022529230A (ja) * 2019-04-09 2022-06-20 株式会社 資生堂 System and method for creating topical agents with improved image capture
CN110245590B (zh) * 2019-05-29 2023-04-28 广东技术师范大学 Product recommendation method and *** based on skin image detection
CN111507944B (zh) * 2020-03-31 2023-07-04 北京百度网讯科技有限公司 Skin smoothness determination method and apparatus, and electronic device
CN113642358B (zh) * 2020-04-27 2023-10-10 华为技术有限公司 Skin color detection method and apparatus, terminal, and storage medium
CN112102349B (zh) * 2020-08-21 2023-12-08 深圳数联天下智能科技有限公司 Skin color recognition method and apparatus, and computer-readable storage medium
CN113115085A (zh) * 2021-04-16 2021-07-13 海信电子科技(武汉)有限公司 Video playback method and display device
CN113128416A (zh) * 2021-04-23 2021-07-16 领途智造科技(北京)有限公司 Face recognition method and apparatus capable of identifying skin color
CN113674366A (zh) * 2021-07-08 2021-11-19 北京旷视科技有限公司 Skin color recognition method and apparatus, and electronic device
CN113933293A (zh) * 2021-11-08 2022-01-14 中国联合网络通信集团有限公司 Concentration detection method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706874A (zh) * 2009-12-25 2010-05-12 青岛朗讯科技通讯设备有限公司 Face detection method based on skin color features
CN104050455A (zh) * 2014-06-24 2014-09-17 深圳先进技术研究院 Skin color detection method and ***
CN104732200A (zh) * 2015-01-28 2015-06-24 广州远信网络科技发展有限公司 Method for identifying skin type and skin problems
CN105496414A (zh) * 2014-10-13 2016-04-20 株式会社爱茉莉太平洋 Makeup color diagnosis method customized by skin color and makeup color diagnosis device customized by skin color

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7190829B2 (en) * 2003-06-30 2007-03-13 Microsoft Corporation Speedup of face detection in digital images
CN104156915A (zh) * 2014-07-23 2014-11-19 小米科技有限责任公司 Skin color adjustment method and device


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599554A (zh) * 2019-09-16 2019-12-20 腾讯科技(深圳)有限公司 Face skin color recognition method and apparatus, storage medium, and electronic apparatus
CN111815653B (zh) * 2020-07-08 2024-01-30 深圳市梦网视讯有限公司 Method, ***, and device for segmenting face and body skin color regions
CN111815653A (zh) * 2020-07-08 2020-10-23 深圳市梦网视讯有限公司 Method, ***, and device for segmenting face and body skin color regions
CN111815651A (zh) * 2020-07-08 2020-10-23 深圳市梦网视讯有限公司 Method, ***, and device for segmenting face and body skin color regions
CN111815651B (zh) * 2020-07-08 2024-01-30 深圳市梦网视讯有限公司 Method, ***, and device for segmenting face and body skin color regions
CN111950390A (zh) * 2020-07-22 2020-11-17 深圳数联天下智能科技有限公司 Skin sensitivity determination method and apparatus, storage medium, and device
CN111950390B (zh) * 2020-07-22 2024-04-26 深圳数联天下智能科技有限公司 Skin sensitivity determination method and apparatus, storage medium, and device
CN112102154A (zh) * 2020-08-20 2020-12-18 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112102154B (zh) * 2020-08-20 2024-04-26 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113762010A (zh) * 2020-11-18 2021-12-07 北京沃东天骏信息技术有限公司 Image processing method, apparatus, device, and storage medium
CN113505674B (zh) * 2021-06-30 2023-04-18 上海商汤临港智能科技有限公司 Face image processing method and apparatus, electronic device, and storage medium
CN113505674A (zh) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 Face image processing method and apparatus, electronic device, and storage medium
CN113749642A (zh) * 2021-07-07 2021-12-07 上海耐欣科技有限公司 Method, ***, medium, and terminal for quantifying the degree of skin flushing response
CN113938672A (zh) * 2021-09-16 2022-01-14 青岛信芯微电子科技股份有限公司 Signal identification method for a signal source and terminal device
CN113938672B (zh) * 2021-09-16 2024-05-10 青岛信芯微电子科技股份有限公司 Signal identification method for a signal source and terminal device

Also Published As

Publication number Publication date
CN108701217A (zh) 2018-10-23

Similar Documents

Publication Publication Date Title
WO2019100282A1 (zh) Face skin color recognition method and apparatus, and smart terminal
JP7413400B2 (ja) Skin quality measurement method, skin quality grading method, skin quality measurement device, electronic apparatus, and storage medium
CN108701216B (zh) Face shape recognition method and apparatus, and smart terminal
Kumar et al. Face detection in still images under occlusion and non-uniform illumination
Marciniak et al. Influence of low resolution of images on reliability of face detection and recognition
CN111881913A (zh) Image recognition method and apparatus, storage medium, and processor
CN107507144B (zh) Skin color enhancement processing method and apparatus, and image processing apparatus
US20130202159A1 (en) Apparatus for real-time face recognition
Aiping et al. Face detection technology based on skin color segmentation and template matching
US11010894B1 (en) Deriving a skin profile from an image
Emeršič et al. Pixel-wise ear detection with convolutional encoder-decoder networks
KR20190076288A (ko) 중요도 맵을 이용한 지능형 주관적 화질 평가 시스템, 방법, 및 상기 방법을 실행시키기 위한 컴퓨터 판독 가능한 프로그램을 기록한 기록 매체
Gritzman et al. Comparison of colour transforms used in lip segmentation algorithms
CN110598574A (zh) Intelligent face monitoring and recognition method and ***
Paul et al. PCA based geometric modeling for automatic face detection
WO2023273247A1 (zh) Face image processing method and apparatus, computer-readable storage medium, and terminal
CN112102348A (zh) Image processing device
Rahman et al. An automatic face detection and gender classification from color images using support vector machine
Gangopadhyay et al. FACE DETECTION AND RECOGNITION USING HAAR CLASSIFIER AND LBP HISTOGRAM.
US20190347469A1 (en) Method of improving image analysis
CN111814738A (zh) Artificial intelligence-based face recognition method and apparatus, computer device, and medium
Shih et al. Multiskin color segmentation through morphological model refinement
Jyothi et al. Computational color naming for human-machine interaction
Hsiao et al. An intelligent skin‐color capture method based on fuzzy C‐means with applications
Kryszczuk et al. Color correction for face detection based on human visual perception metaphor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933104

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17933104

Country of ref document: EP

Kind code of ref document: A1