EP2176830B1 - Face and skin sensitive image enhancement - Google Patents


Info

Publication number: EP2176830B1
Authority: EP (European Patent Office)
Application number: EP08754589.3A
Prior art keywords: face, skin, pixel, input image, map
Legal status: Not-in-force
Other languages: German (de), French (fr)
Other versions: EP2176830A1, EP2176830A4
Inventors: Hila Nachlieli, Darryl Greig, Doron Shaked, Ruth Bergman, Carl Staelin, Shlomo Harush, Mani Fischer
Original and current assignee: Hewlett Packard Development Co LP
History: application filed by Hewlett Packard Development Co LP; publication of EP2176830A1 and EP2176830A4; application granted; publication of EP2176830B1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
        • G06T 5/73: Deblurring; Sharpening
        • G06T 5/90: Dynamic range modification of images or parts thereof
        • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
        • G06T 2207/20: Special algorithmic details
            • G06T 2207/20004: Adaptive image processing
            • G06T 2207/20012: Locally adaptive
            • G06T 2207/20076: Probabilistic image processing
        • G06T 2207/30: Subject of image; Context of image processing
            • G06T 2207/30196: Human being; Person
            • G06T 2207/30201: Face

Definitions

  • Image enhancement involves one or more processes that are designed to improve the visual appearance of an image.
  • Typical image enhancement processes include sharpening, noise reduction, scene balancing, and tone-correction.
  • Automatic image enhancement approaches for avoiding the undesirable effects of global or unguided image enhancement also have been developed. Some of these approaches automatically create maps of local attributes such as noise or sharpness estimations, and gain local control over enhancement parameters based on those maps. Other approaches typically rely on automatic region-based image enhancement.
  • the amount of image enhancement is determined automatically for each region in an image based on the subject matter content in the region. In this process, a respective probability distribution is computed for each type of targeted subject matter content across the image. Each probability distribution indicates for each pixel the probability that the pixel belongs to the target subject matter. Each probability distribution is thresholded to identify candidate subject matter regions.
  • Each of the candidate subject matter regions is subjected to a unique characteristics analysis that determines the probability that the region belongs to the target subject matter.
  • the unique characteristics analysis process generates a belief map of detected target subject matter regions and assigns to each of the detected regions an associated probability indicating the probability that the region belongs to the target subject matter.
  • a given belief map only relates to one type of subject matter and the pixels in any given one of the detected subject matter regions in a belief map are assigned the same probability value. Since region detection, as good as it may be, is prone to some errors, the probability discontinuities between the detected subject matter regions and other regions in each belief map necessarily produce artifacts in the resulting enhanced image.
  • Document EP1453002 discloses a batch processing method of enhancing faces in digital images.
  • the invention features a method in accordance with which a face map is calculated.
  • the face map includes for each pixel of an input image a respective face probability value indicating a degree to which the pixel corresponds to a human face, where variations in the face probability values are continuous across the face map.
  • a skin map is ascertained.
  • the skin map includes for each pixel of the input image a respective skin probability value indicating a degree to which the pixel corresponds to human skin.
  • the process of ascertaining the skin map includes mapping all pixels of the input image having similar values to similar respective skin probability values in the skin map.
  • the input image is enhanced with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values.
  • the invention features a method in accordance with which a facial content measurement value indicating a degree to which an input image contains human face content is ascertained.
  • a tone-correction process is tuned in accordance with the facial content measurement value.
  • the input image is enhanced in accordance with the tuned tone-correction process.
  • the invention also features apparatus and machine readable media storing machine-readable instructions causing a machine to implement the methods described above.
  • At least some of the embodiments that are described in detail below enhance images in ways that avoid artifacts and other undesirable problems that arise as a result of enhancing an image based on segmentations of the image into discrete subject matter regions.
  • These embodiments leverage the inherent fuzziness of pixel-wise face and skin probability maps to implement face and skin sensitive enhancements to an image in a flexible way that more accurately coincides with a typical observer's expectations of the image enhancements that should be applied to different contents of the image.
  • a global contrast enhancement process is tuned in accordance with a determination of the degree to which an image contains human face content in order to achieve automatic face-sensitive enhancement of the image.
  • These embodiments are capable of providing image-dependent enhancements within the framework of existing enhancement processes and, therefore, can be incorporated readily into existing image enhancement systems and processes.
  • FIG. 1 shows an embodiment of an image enhancement system 10 that includes a set 12 of attribute extraction modules, including a face map module 14, a skin map module 16, and possibly one or more other attribute extraction modules 18.
  • the image enhancement system 10 also includes a control parameter module 20 and an image enhancement module 22.
  • the image enhancement system 10 performs one or more image enhancement operations on an input image 24 to produce an enhanced image 26.
  • the attribute extraction modules 14-18 determine respective measurement values 28 based on values of pixels of the input image 24 (e.g., directly from the values of the input image 24 or from pixel values derived from the input image 24).
  • the control parameter module 20 processes the measurement values 28 to produce control parameter values 30, which are used by the image enhancement module 22 to produce the enhanced image 26 from the input image 24.
  • the input image 24 may correspond to any type of digital image, including an original image (e.g., a video frame, a still image, or a scanned image) that was captured by an image sensor (e.g., a digital video camera, a digital still image camera, or an optical scanner) or a processed (e.g., sub-sampled, filtered, reformatted, scene-balanced or otherwise enhanced or modified) version of such an original image.
  • in some embodiments, the input image 24 is an original full-sized image, the attribute extraction modules 12 process a sub-sampled version of the original full-sized image, and the image enhancement module 22 processes the original full-sized image 24.
  • FIG. 2 shows an embodiment of a method that is implemented by the image processing system 10.
  • the face map module 14 calculates a face map that includes for each pixel of the input image 24 a respective face probability value indicating a degree to which the pixel corresponds to a human face ( FIG. 2 , block 34).
  • variations in the face probability values are continuous across the face map, so the values of neighboring face map pixels are similar. This approach avoids the artifacts and other discontinuities that otherwise might result from a segmentation of the input image 24 into discrete facial regions and non-facial regions.
  • the skin map module 16 ascertains a skin map that includes for each pixel of the input image a respective skin probability value indicating a degree to which the pixel corresponds to human skin ( FIG. 2 , block 36). In this process, the skin map module 16 maps all pixels of the input image 24 having similar values to similar respective skin probability values in the skin map. This approach avoids both (i) the artifacts and other discontinuities that otherwise might result from a segmentation of the input image 24 into discrete human-skin toned regions and non-human-skin toned regions, and (ii) artifacts and other discontinuities that otherwise might result from undetected faces or partly detected faces.
  • the order in which the face map module 14 and the skin map module 16 determine the face map and the skin map is immaterial: the face map may be determined first, the skin map may be determined first, or the two maps may be determined in parallel.
  • the control parameter module 20 and the image enhancement module 22 cooperatively enhance the input image 24 with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values ( FIG. 2 , block 38).
  • the measurement values 28 that are generated by the attribute extraction modules 14-18 are permitted to vary continuously from pixel-to-pixel across the image. This feature allows the control parameter module 20 to flexibly produce the control parameter values 30 in ways that more accurately coincide with a typical observer's expectations of the image enhancements that should be applied to different contents of the image. In this way, the image enhancement module 22 can enhance the input image 24 with an enhancement level that is both face and skin sensitive.
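  • As an illustration of this data flow, the following sketch wires hypothetical face-map, skin-map, control-parameter, and enhancement callables together in the manner of FIG. 1; the sub-sampling factor, the nearest-neighbour up-sampling, and all function names are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def upsample(map_small, factor, height, width):
    """Nearest-neighbour up-sampling of a per-pixel probability map."""
    up = np.repeat(np.repeat(map_small, factor, axis=0), factor, axis=1)
    return up[:height, :width]

def enhance_face_skin_sensitive(image, face_map_fn, skin_map_fn,
                                control_fn, enhance_fn, factor=4):
    """Sketch of the FIG. 1 data flow: attribute maps are computed on a
    sub-sampled copy, brought back to full resolution, converted into
    per-pixel control parameters, and used to enhance the full image."""
    h, w = image.shape[:2]
    small = image[::factor, ::factor]
    face_map = upsample(face_map_fn(small), factor, h, w)  # values in [0, 1]
    skin_map = upsample(skin_map_fn(small), factor, h, w)  # values in [0, 1]
    control = control_fn(face_map, skin_map)               # e.g. per-pixel sharpening factors
    return enhance_fn(image, control)
```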
  • FIG. 3 shows an example 32 of the input image 24 that contains two human faces.
  • the exemplary image 32 and the various image data derived therefrom are used for illustrative purposes only to explain one or more aspects of one or more embodiments of the invention.
  • the control parameter module 20 produces control parameter values 30 that selectively sharpen non-human-skin toned areas of the input image 32 more than human-skin toned areas and that selectively sharpen human-skin toned areas outside of human faces more than human-skin toned areas inside human faces.
  • thus, non-human-skin toned features (e.g., eyes, lips, hairline, trees, and grass) inside and outside human faces can be sharpened aggressively, human-skin toned features (e.g., autumn leaves and dry grass) outside of human faces can be sharpened less aggressively, and human-skin toned features within human faces can be sharpened still less aggressively or not at all.
  • the face map module 14 calculates a face map that includes for each pixel of the input image 24 a respective face probability value indicating a degree to which the pixel corresponds to a human face (FIG. 2, block 34).
  • a characteristic feature of the face map is that the variations in the face probability values are continuous across the face map.
  • continuous means having the property that the absolute value of the numerical difference between the value at a given point and the value at any point in a neighborhood of the given point can be made as close to zero as desired by choosing the neighborhood small enough.
  • the face map module 14 may calculate the face probability values indicating the degrees to which the input image pixels correspond to human face content in a wide variety of different ways, including but not limited to template-matching techniques, normalized correlation techniques, and eigenspace decomposition techniques.
  • the face map module 14 initially calculates the probabilities that patches (i.e., areas of pixels) of the input image 24 correspond to a human face and then calculates a pixel-wise face map from the patch probabilities (e.g., by assigning to each pixel the highest probability of any patch that contains the pixel).
  • the embodiments that are described in detail below include a face detection module that rejects patches partway through an image patch evaluation process in which the population of patches classified as "faces" are progressively more and more likely to correspond to facial areas of the input image as the evaluation continues.
  • the face probability generator 44 uses the exit point of the evaluation process as a measure of certainty that a patch is a face.
  • FIG. 4 shows an embodiment 40 of the face map module 14 that includes a cascade 42 of classification stages (C1, C2, ..., Cn, where n has an integer value greater than 1) (also referred to herein as "classifiers") and a face probability generator 44.
  • each of the classification stages performs a binary discrimination function that classifies a patch 46 that is derived from the input image 24 into a face class ("Yes") or a non-face class ("No") based on a discrimination measure that is computed from one or more attributes of the image patch 46.
  • the discrimination function of each classification stage typically is designed to detect faces in a single pose or facial view (e.g., frontal upright faces).
  • the face probability generator 44 assigns a respective face probability value to each pixel of the input image 24 and stores the assigned face probability value in a face map 48.
  • the value of the computed discrimination measure relative to the corresponding threshold determines the class into which the image patch 46 will be classified by each classification stage. For example, if the discrimination measure that is computed for the image patch 46 is above the threshold for a classification stage, the image patch 46 is classified into the face class (Yes) whereas, if the computed discrimination measure is below the threshold, the image patch 46 is classified into the non-face class (No).
  • FIG. 5 shows an exemplary embodiment of a single classification stage 50 in an embodiment of the classifier cascade 42.
  • the image patch 46 is projected into a feature space in accordance with a set of feature definitions 52.
  • the image patch 46 includes any information relating to an area of an input image, including color values of input image pixels and other information derived from the input image needed to compute feature weights.
  • Each feature is defined by a rule that describes how to compute or measure a respective weight (w0, w1, ..., wL) for an image patch that corresponds to the contribution of the feature to the representation of the image patch in the feature space spanned by the set of features 52.
  • the set of weights (w0, w1, ..., wL) that is computed for an image patch constitutes a feature vector 54.
  • the feature vector 54 is input into the classification stage 50.
  • the classification stage 50 classifies the image patch 46 into a set 56 of candidate face areas or a set 58 of non-faces. If the image patch is classified as a face, it is passed to the next classification stage, which implements a different discrimination function.
  • the variable pℓ has a value of +1 or -1 and the function w(u) is an evaluation function for computing the features of the feature vector 54.
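  • For concreteness, the sketch below shows a generic Viola-Jones-style boosted classification stage consistent with this description; the weak-classifier fields (feature evaluator, threshold t, polarity p of +1 or -1, vote weight alpha) and the voting rule are assumed names for a standard boosted cascade stage, not the patent's exact discrimination function.

```python
def classify_stage(patch, weak_classifiers, stage_threshold):
    """One cascade stage: a boosted vote over thresholded feature responses,
    compared against the stage threshold (illustrative sketch)."""
    score = 0.0
    for wc in weak_classifiers:
        response = wc["eval"](patch)                 # w(u): feature evaluation on the patch
        if wc["p"] * response > wc["p"] * wc["t"]:   # polarity p is +1 or -1
            score += wc["alpha"]                     # weight of this weak classifier's vote
    return score >= stage_threshold                  # True -> "Yes", pass to the next stage
```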
  • the classifier cascade 42 processes the patches 46 of the input image 24 through the classification stages (C1, C2, ..., Cn), where each image patch is processed through a respective number of the classification stages depending on a per-classifier evaluation of the likelihood that the patch corresponds to human face content.
  • the face probability generator 44 calculates the probabilities that the patches 46 of the input image 24 correspond to human face content (i.e., the face probability values) in accordance with the respective numbers of classifiers through which corresponding ones of the patches were processed. For example, in one exemplary embodiment, the face probability generator 44 maps the number of unevaluated stages to a respective face probability value, where large numbers are mapped to low probability values and small numbers are mapped to high probability values. The face probability generator 44 calculates the pixel-wise face map 48 from the patch probabilities (e.g., by assigning to each pixel the highest probability of any patch that contains the pixel).
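  • A minimal sketch of this exit-point idea follows; the linear mapping from the number of unevaluated stages to a probability value and the patch bookkeeping are illustrative assumptions.

```python
import numpy as np

def face_map_from_patches(image_shape, patches, cascade):
    """Sketch of face map construction: each patch's certainty reflects how far
    it advanced through the cascade, and each pixel receives the highest
    probability of any patch that contains it."""
    n_stages = len(cascade)
    face_map = np.zeros(image_shape[:2], dtype=float)
    for (x, y, w, h, patch) in patches:
        passed = 0
        for stage in cascade:              # each stage is a callable returning True/False
            if not stage(patch):
                break                      # early rejection: the exit point
            passed += 1
        unevaluated = n_stages - passed
        prob = 1.0 - unevaluated / float(n_stages)   # few unevaluated stages -> high probability
        face_map[y:y + h, x:x + w] = np.maximum(face_map[y:y + h, x:x + w], prob)
    return face_map
```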
  • the pixel-wise face probability values typically are processed to ensure that variations in the face probability values are continuous across the face map.
  • the face probability values in each detected face are smoothly reduced down to zero as the distance from the center of the detected face increases.
  • the face probability of any pixel is given by the original face probability value multiplied by a smooth monotonically decreasing function of the distance from the center of the face, where the function has a value of one at the face center and a value of zero at a specified distance from the face center.
  • a respective line segment through the center of each of the detected faces and oriented according to the in-plane rotation of the detected face is determined.
  • the probability attached to any pixel in a given one of the detected face regions surrounding the respective line segment is then given by the face probability value multiplied by a clipped Gaussian function of the distance from that pixel to the respective line segment.
  • the clipped Gaussian function has values of one on the respective line segment and on a small oval region around the respective line segment; in other regions of the detected face, the values of the clipped Gaussian function decay to zero as the distance from the respective line segment increases.
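  • The sketch below illustrates such a clipped-Gaussian attenuation; the inner radius, the Gaussian width, and the assumption that the per-pixel distance to the line segment is supplied by the caller are illustrative choices.

```python
import numpy as np

def attenuate_face_probability(face_prob, dist_to_segment, inner_radius, sigma):
    """Multiply a detected-face probability by a clipped Gaussian of the distance
    to the line segment through the face centre: 1 on and near the segment,
    decaying smoothly to 0 farther away (sketch)."""
    d = np.maximum(dist_to_segment - inner_radius, 0.0)  # 0 inside the small oval region
    falloff = np.exp(-0.5 * (d / sigma) ** 2)            # smooth decay outside it
    return face_prob * falloff
```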
  • each of the image patches 46 is passed through at least two parallel classifier cascades that are configured to evaluate different respective facial views.
  • Some of these embodiments are implemented in accordance with one or more of the multi-view face detection methods described in Jones and Viola, “Fast Multi-view Face Detection, " Mitsubishi Electric Research Laboratories, MERL-TR2003-96, July 2003 (also published in IEEE Conference on Computer Vision and Pattern Recognition, June 18, 2003 ).
  • the face probability generator 44 determines the face probability values from the respective numbers of classifiers of each cascade through which corresponding ones of the patches were evaluated. For example, in one exemplary embodiment, the face probability generator 44 maps the number of unevaluated classification stages in the most successful one of the parallel classifier cascades to a respective face probability value, where large numbers are mapped to low probability values and small numbers are mapped to high probability values. In some embodiments, the classifier cascade with the fewest number of unevaluated stages for the given patch is selected as the most successful classifier cascade. In other embodiments, the numbers of unevaluated stages in the parallel cascades are normalized for each view before comparing them.
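  • A small sketch of selecting the most successful view is given below; normalising the remainders by the number of stages in each view and converting the best remainder to a probability are assumptions consistent with, but not dictated by, the text.

```python
def best_view_probability(unevaluated_by_view, stages_by_view):
    """Pick the 'most successful' of several parallel cascades: the view whose
    normalised number of unevaluated stages is smallest (illustrative sketch)."""
    fractions = [u / float(n) for u, n in zip(unevaluated_by_view, stages_by_view)]
    return 1.0 - min(fractions)     # fewer unevaluated stages -> higher face probability
```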
  • the face probability generator 44 calculates the pixel-wise face map 48 from the patch probabilities (e.g., by assigning to each pixel the highest probability of any patch that contains the pixel).
  • the pixel-wise face probability values typically are processed in the manner described in the preceding section to ensure that variations in the face probability values are continuous across the face map.
  • face patches have a smooth (e.g. Gaussian) profile descending smoothly from the nominal patch value to the nominal background value across a large number of pixels.
  • FIG. 6 shows an example of a face map 60 that is generated by the face map module 14 ( FIG. 1 ) from the input image 32 ( FIG. 3 ) in accordance with the multi-view based face map generation process described above.
  • in this face map 60, darker values correspond to higher probabilities that the pixels correspond to human face content and lighter values correspond to lower probabilities that the pixels correspond to human face content.
  • the skin map module 16 ascertains a skin map that includes for each pixel of the input image a respective skin probability value indicating a degree to which the pixel corresponds to human skin ( FIG. 2 , block 36).
  • a characteristic feature of the skin map is that all pixels of the input image 24 having similar values are mapped to similar respective skin probability values in the skin map.
  • the term "similar" means that the pixel values are the same or nearly the same and appear visually indistinguishable from one another. This feature of the skin map is important, for example, for pixels of certain human-skin image patches that have colors outside of the standard human-skin tone range.
  • the skin map values vary continuously without artificial boundaries even in skin patches trailing far away from the standard human-skin tone range.
  • the skin map module 16 may ascertain the skin probability values indicating the degrees to which the input image pixels correspond to human skin in a wide variety of different ways.
  • the skin map module 16 ascertains the per-pixel human-skin probability values from human-skin tone probability distributions in respective channels of a color space.
  • in some embodiments, the skin map module 16 ascertains the per-pixel human-skin tone probability values from human-skin tone probability distributions in the CIE LCH color space (i.e., P(skin | L), P(skin | C), and P(skin | H)), each of which is modeled as a Gaussian normal distribution G(p, μ, σ).
  • Exemplary human-skin tone probability distributions for the L, C, and H color channels are shown in FIGS. 7A, 7B, and 7C , respectively.
  • the skin map module 16 ascertains a respective skin probability value for each pixel of the input image 24 by converting the input image 24 into the CIE LCH color space (if necessary), determining the respective skin-tone probability value for each of the L, C, and H color channels based on the corresponding human-skin tone probability distributions, and computing the product of the color channel probabilities, as shown in equation (3): P(skin | L, C, H) = P(skin | L) · P(skin | C) · P(skin | H).
  • the skin map values are computed by applying to the probability function P(skin | Li, Ci, Hi) the exponent 1/Δ, as shown in equation (4): M_SKIN(x, y) = [P(skin | Li, Ci, Hi)]^(1/Δ), where Δ > 0 and M_SKIN(x, y) are the skin map 62 values at location (x, y). In one exemplary embodiment, Δ = 32.
  • the skin map function defined in equation (4) attaches high probabilities to a large spectrum of skin tones, while non-skin features typically attain lower probabilities.
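  • The sketch below follows equations (3) and (4) as described above: per-channel Gaussian skin-tone likelihoods in CIE LCH are multiplied and the product is raised to the power 1/Δ. The channel means and widths are placeholders, since the fitted distributions of FIGS. 7A, 7B, and 7C are not reproduced here.

```python
import numpy as np

def gaussian(p, mu, sigma):
    """Unnormalised Gaussian likelihood G(p, mu, sigma), peaking at 1."""
    return np.exp(-0.5 * ((p - mu) / sigma) ** 2)

def skin_map(L, C, H, params, delta=32.0):
    """Sketch of equations (3)-(4): product of per-channel skin-tone likelihoods,
    range-expanded by the exponent 1/delta (placeholder channel parameters)."""
    p_l = gaussian(L, params["L_mu"], params["L_sigma"])
    p_c = gaussian(C, params["C_mu"], params["C_sigma"])
    p_h = gaussian(H, params["H_mu"], params["H_sigma"])
    p_skin = p_l * p_c * p_h            # equation (3): product over the L, C, H channels
    return p_skin ** (1.0 / delta)      # equation (4): M_SKIN(x, y), with delta = 32
```
  • Because the per-channel likelihoods decay smoothly and have no hard boundary, pixels with similar colors always receive similar skin map values, which is the continuity property required of the skin map.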
  • FIG. 8 shows an example of a skin map that is generated by the skin map module 16 ( FIG. 1 ) from the input image 32 ( FIG. 3 ) in accordance with the skin map generation process described in connection with equations (3) and (4).
  • in this skin map 62, darker values correspond to lower probabilities that the pixels correspond to human skin content and lighter values correspond to higher probabilities that the pixels correspond to human skin content.
  • the skin map 62 distinguishes the pixels of the input image 32 depicting skin from the pixels depicting the pearls and the lace; and within the faces appearing in the input image 32, the skin map 62 distinguishes skin from non-skin features (e.g., the eyes and lips).
  • control parameter module 20 and the image enhancement module 22 cooperatively enhance the input image 24 with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values ( FIG. 2 , block 38).
  • control parameter module 20 produces control parameter values 30 from the measurement values 28 that are output by the attribute extraction modules 14-18.
  • the image enhancement module 22 produces the enhanced image 26 by performing on the input image 24 one or more image processing operations that are controlled by the control parameter values 30.
  • control parameter module 20 and the image enhancement module 22 may be configured to perform a wide variety of face and skin sensitive enhancements on the input image 24.
  • the embodiments that are described in detail below, for example, are configured to perform face and skin sensitive sharpening of the input image 24 and face-sensitive contrast enhancement of the input image.
  • control parameter module 20 determines a global control parameter value based on global features of the input image 24.
  • the control parameter module 20 determines for each pixel of the input image 24 a respective local control parameter value that corresponds to a respective modification of the global parameter value in accordance with the respective face probability values and the respective skin probability values.
  • the image enhancement module 22 enhances the input image 24 in accordance with the local control parameter values.
  • control parameter module 20 and the image enhancement module 22 are configured to sharpen the input image 24 with a sharpening level that varies pixel-by-pixel in accordance with the respective face probability values in the face map and the respective skin probability values in the skin map.
  • control parameter module 20 computes for each pixel of the input image a respective sharpening parameter value that controls the amount of sharpening that is applied to the pixel.
  • the sharpening parameter values that are computed by the control parameter module 20 are configured to cause the image enhancement module 22 to sharpen the input image 24 with sharpening levels in a high sharpening level range at pixel locations associated with skin probability values in a low skin probability range.
  • This type of sharpening satisfies the typical desire to aggressively sharpen non-human-skin toned features (e.g., eyes, lips, hairline, trees, and grass) inside and outside human faces.
  • the sharpening parameter values that are computed by the control parameter module 20 also are configured to cause the image enhancement module 22 to sharpen the input image with sharpening levels in a low sharpening level range at pixel locations associated with skin probability values in a high skin probability range and face probability values in a high face probability range.
  • the low sharpening level range is below the high sharpening level range and the high skin probability range is above the low skin probability range.
  • This type of sharpening satisfies the typical desire to less aggressively sharpen (or not sharpen at all) human-skin toned features within human faces.
  • the sharpening parameter values that are computed by the control parameter module 20 also are configured to cause the image enhancement module 22 to sharpen the input image 24 with sharpening levels in an intermediate sharpening level range at pixel locations associated with skin probability values in the high skin probability range and face probability values in a low face probability range.
  • the intermediate sharpening level range is between the high and low sharpening level ranges, and the low face probability range is below the high face probability range. This type of sharpening gives flexibility in sharpening features outside of human faces.
  • FIG. 9 shows an embodiment of a method in accordance with which the control parameter module 20 computes a sharpening factor map that contains for each pixel of the input image 24 a respective sharpening factor that controls the sharpening levels applied by the image enhancement module 22 in the manner described in the preceding paragraphs.
  • the control parameter module 20 determines the sharpening factor map ( FIG. 9 , block 64), determines a global sharpening parameter ( FIG. 9 , block 66), and generates pixel sharpening parameters from a combination of the sharpening factor map and the global sharpening parameter value ( FIG. 9 , block 68).
  • the upper skin threshold is set to an empirically determined constant value. In one exemplary embodiment, the upper skin threshold is set to 0.69.
  • the lower skin threshold Lth(f(x,y)) is a decreasing function of the face probability value f(x,y), which adapts the lower skin threshold to more conservative sharpening of facial pixels while still allowing the sharpening of features in detected faces (e.g., eyes, lips, etc.).
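  • Since equations (5) and (6) are not reproduced in this text, the sketch below is only one plausible reconstruction of a sharpening factor map with the behaviour described above: factors near 1 for non-skin pixels, intermediate factors for skin outside faces, and factors near 0 for skin inside faces; the upper skin threshold is the 0.69 mentioned above, while the lower threshold values and the attenuation constant are assumptions.

```python
import numpy as np

def sharpening_factor_map(skin_map, face_map, upper=0.69,
                          lower_non_face=0.4, lower_face=0.1, face_attenuation=0.5):
    """One plausible reconstruction of the sharpening factor map (cf. FIG. 10):
    high factors for non-skin pixels, intermediate factors for skin outside
    faces, low factors for skin inside faces (assumed constants)."""
    # Lower skin threshold Lth(f(x, y)) decreases with the face probability f(x, y).
    lower = lower_non_face - (lower_non_face - lower_face) * face_map
    # Soft-thresholded skin confidence between the lower and upper thresholds.
    skin_conf = np.clip((skin_map - lower) / np.maximum(upper - lower, 1e-6), 0.0, 1.0)
    # Skin suppresses sharpening fully inside faces, only partially outside them.
    suppression = face_attenuation + (1.0 - face_attenuation) * face_map
    return 1.0 - skin_conf * suppression
```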
  • FIG. 10 shows an example of a sharpening factor map 70 that is generated by the control parameter module 20 from the face map 60 shown in FIG. 6 and the skin map 62 shown in FIG. 8 in accordance with the sharpening factor generation process described above in connection with equations (5) and (6) .
  • the sharpening factor values in the sharpening factor map are combined with a global sharpening parameter, which is used to control or tune the sharpening levels that are applied to the pixels of the input image 24 to produce the enhanced image 26.
  • the sharpening enhancement process to be tuned may be, for example, any type of spatial sharpening filter, including but not limited to an unsharp-masking filter, a high-pass filter, a high boost filter, a differences of Gaussians based filter, and a Laplacian of Gaussian filter.
  • the image enhancement process to be tuned is a non-linear smoothing and sharpening filter that combines bilateral filtering and generalized unsharp-masking.
  • the function φ(In(x-i,y-j) - In(x,y)) is usually a non-linear function.
  • equation (9) becomes equation (8).
  • Th is a noise threshold that determines the local deviation level that will be smoothed (should be high in case of strong noise).
  • the sharpness parameter determines the extent of sharpness that is introduced into significant edges (higher values result in sharper images).
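  • The sketch below shows one way the global sharpening parameter and the per-pixel sharpening factor map could be combined in an unsharp-masking step with a noise threshold Th; it omits the bilateral-filtering part of the non-linear filter described above, and the Gaussian blur radius and clipping rule are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, factor_map, global_strength=1.0, noise_threshold=4.0, radius=1.5):
    """Unsharp masking whose gain is the global sharpening parameter scaled per
    pixel by the sharpening factor map (illustrative sketch)."""
    img = image.astype(float)
    sigma = (radius, radius, 0) if img.ndim == 3 else radius
    detail = img - gaussian_filter(img, sigma=sigma)
    # Local deviations below the noise threshold Th are treated as noise and not boosted.
    detail = np.where(np.abs(detail) < noise_threshold, 0.0, detail)
    gain = global_strength * factor_map          # pixel-wise sharpening level
    if img.ndim == 3:
        gain = gain[..., None]
    return np.clip(img + gain * detail, 0, 255)
```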
  • the sharpness measure is determined in accordance with the method described in Application No. 10/835,910, which was filed on April 29, 2004 .
  • the sharpness measure amounts to averaging frequency band ratios over regions more likely to be sharp edges.
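  • A rough sketch of a frequency-band-ratio sharpness measure in that spirit is shown below; the band definitions, the edge mask, and the percentile threshold are assumptions, since the referenced application is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness_measure(gray):
    """Average a fine-to-coarse band ratio over regions that are likely to be
    sharp edges (illustrative sketch only)."""
    g = gray.astype(float)
    low = gaussian_filter(g, sigma=3.0)
    mid = gaussian_filter(g, sigma=1.0)
    fine_band = np.abs(g - mid)                    # fine detail
    coarse_band = np.abs(mid - low) + 1e-6         # coarser structure
    edges = coarse_band > np.percentile(coarse_band, 90)   # likely sharp-edge regions
    if not np.any(edges):
        return 0.0
    return float(np.mean(fine_band[edges] / coarse_band[edges]))
```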
  • the embodiments that are described in detail below are configured to perform face-sensitive contrast enhancement of the input image.
  • the contrast enhancement may be performed instead or in addition to the sharpening enhancement described in the preceding section.
  • FIG. 11 shows an embodiment 72 of the image enhancement system 10 in which one of the other attribute extraction modules 18 is a tone mask extraction module 74, and the image enhancement module 22 includes a tone mapping module 76 and an enhanced image generator 78.
  • FIG. 12 shows an embodiment of a method in accordance with which the control parameter module 20 and the image enhancement module 22 cooperatively enhance the input image 24 to produce the enhanced image 26.
  • the control parameter module 20 ascertains a facial content measurement value indicating a degree to which the input image 24 contains human face content ( FIG. 12 , block 80).
  • the control parameter module 20 may ascertain the facial content measurement value in any of a wide variety of different ways.
  • the control parameter module 20 ascertains the facial measurement value by calculating the average of the values in the face map that is generated by the face map module 14. The resulting average value represents an estimate of the proportion of the input image 24 occupied by facial content.
  • the control parameter module 20 then tunes a tone-correction process in accordance with the facial content measurement value ( FIG. 12 , block 82).
  • the control parameter module 20 determines a weight that is used by the image enhancement module 22 to control the amount of an image-dependent (though not necessarily face-sensitive) default contrast enhancement 86 that should be applied to the input image 24.
  • the amount of the applied default contrast enhancement decreases with the value of the weight determined by the control parameter module 20.
  • the weight may be given by any function that maps the facial content measurement value to a weight value.
  • the weight values may take on any positive or negative value.
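  • The sketch below shows one plausible realization of this tuning: the facial content measurement is the mean of the face map, a weight is derived from it, and the share of the applied default contrast enhancement shrinks as the weight grows; the clipping of the weight and the convex blending rule are assumptions.

```python
import numpy as np

def tune_contrast_enhancement(input_image, default_enhanced, face_map, weight_fn=None):
    """Face-sensitive tuning sketch (cf. FIG. 12): more face content -> larger
    weight -> a smaller share of the image-dependent default contrast enhancement."""
    mu = float(np.mean(face_map))        # estimated fraction of the image occupied by faces
    if weight_fn is None:
        weight_fn = lambda m: m          # placeholder weight function
    w = float(np.clip(weight_fn(mu), 0.0, 1.0))
    out = (1.0 - w) * default_enhanced.astype(float) + w * input_image.astype(float)
    return np.clip(out, 0, 255)
```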
  • the image enhancement module 22 enhances the input image 24 with an image-dependent default tone mapping that is performed by the tone mapping module 76 and that depends on a tone mask 77 extracted by the tone mask extraction module 74.
  • the enhanced image generator 78 then enhances the input image 24 based on the image-dependent weight received from the control parameter module 20 and the tone-corrected pixel values received from the tone mapping module 76.
  • the tone mask extraction module 74 generates a tone mask of the input image 24.
  • the tone mask extraction module 74 generates the tone mask by performing an iterative operation on the input image 24. For instance, some embodiments generate the tone mask by using an iterative process (such as a Retinex process) that makes assumptions about the human visual system. Other embodiments generate this mask by performing a non-iterative operation on the input image 24. For example, some embodiments perform a low-pass filtering operation on the input image 24 to generate the tone mask (see, e.g., the embodiment described in connection with FIG. 2 of U.S. Patent No. 6,813,041).
  • in some embodiments, the tone mapping module 76 modifies the pixel color values in the input image 24 through linear combinations of the tone mask values and the pixel color values in the input image 24. In other embodiments, the tone mapping module 76 modifies the pixel color values in the input image 24 non-linearly.
  • non-linear operations that may be used to modify the input image pixel color values are rotated scaled sinusoidal functions, sigmoidal functions, and exponential functions that have input image pixel color values as part of their base (see, e.g., the embodiment described in connection with FIG. 3 of U.S. Patent No. 6,813,041 ).
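  • The sketch below pairs a simple non-iterative tone mask (a low-pass filtered luminance) with a Retinex-flavoured tone mapping driven by that mask; the gain formula and constants are assumptions, not the operations of U.S. Patent No. 6,813,041.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_correct(image, strength=0.6, mask_sigma=25.0):
    """Tone mask + tone mapping sketch: brighten pixels where the low-pass mask
    is dark, leave well-lit regions mostly unchanged (illustrative only)."""
    img = image.astype(float)
    lum = img.mean(axis=-1) if img.ndim == 3 else img
    mask = gaussian_filter(lum, sigma=mask_sigma) + 1.0   # the tone mask
    gain = (128.0 / mask) ** strength                     # larger gain where the mask is dark
    if img.ndim == 3:
        gain = gain[..., None]
    return np.clip(img * gain, 0, 255)
```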
  • the enhanced image generator 78 enhances the input image 24 based on the face-sensitive weight received from the control parameter module 20 and the tone-corrected pixel values received from the tone mapping module 76.
  • Embodiments of the image enhancement system 10 may be implemented by one or more discrete modules (or data processing components) that are not limited to any particular hardware, firmware, or software configuration.
  • the modules of the image enhancement system 10 may be implemented in any computing or data processing environment, including in digital electronic circuitry (e.g., an application-specific integrated circuit, such as a digital signal processor (DSP)) or in computer hardware, firmware, device driver, or software.
  • the functionalities of the modules of the image enhancement system 10 are combined into a single data processing component.
  • the respective functionalities of each of one or more of the modules of the image enhancement system 10 are performed by a respective set of multiple data processing components.
  • process instructions for implementing the methods that are executed by the embodiments of the image enhancement system 10, as well as the data it generates, are stored in one or more machine-readable media.
  • Storage devices suitable for tangibly embodying these instructions and data include all forms of non-volatile computer-readable memory, including, for example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magnetooptical disks, DVD-ROM/RAM, and CD-ROM/RAM.
  • embodiments of the image enhancement system 10 may be implemented in any one of a wide variety of electronic devices, including desktop and workstation computers, television monitors, projection displays, or video cards connected to such large-screen displays, video recording devices (e.g., VCRs and DVRs), cable or satellite set-top boxes capable of decoding and playing paid video programming, and digital camera devices. Due to its efficient use of processing and memory resources, some embodiments of the image enhancement system 10 may be implemented with relatively small and inexpensive components that have modest processing power and modest memory capacity.
  • these embodiments are highly suitable for incorporation in application environments that have significant size, processing, and memory constraints, including but not limited to printers, handheld electronic devices (e.g., a mobile telephone, a cordless telephone, a portable memory device such as a smart card, a personal digital assistant (PDA), a solid state digital audio player, a CD player, an MCD player, a game controller, a pager, and a miniature still image or video camera), digital cameras, and other embedded environments.
  • FIG. 13 shows an embodiment of a computer system 140 that incorporates an embodiment of the image enhancement system 10.
  • the computer system 140 may be implemented by a standalone computer (e.g., a desktop computer, workstation computer, or a portable computer) or it may be incorporated in another electronic device (e.g., a printer or a digital camera).
  • the computer system 140 includes a processing unit 142 (CPU), a system memory 144, and a system bus 146 that couples processing unit 142 to the various components of the computer system 140.
  • the processing unit 142 typically includes one or more processors, each of which may be in the form of any one of various commercially available processors.
  • the system memory 144 typically includes a read only memory (ROM) that stores a basic input/output system (BIOS) that contains start-up routines for the computer system 140 and a random access memory (RAM).
  • the system bus 146 may be a memory bus, a peripheral bus or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, Microchannel, ISA, and EISA.
  • the computer system 140 also includes a persistent storage memory 148 (e.g., a hard drive, a floppy drive, a CD ROM drive, magnetic tape drives, flash memory devices, and digital video disks) that is connected to the system bus 146 and contains one or more computer-readable media disks that provide non-volatile or persistent storage for data, data structures and computer-executable instructions.
  • a user may interact (e.g., enter commands or data) with the computer system 140 using one or more input devices 150 (e.g., a keyboard, a computer mouse, a microphone, joystick, and touch pad). Information may be presented through a graphical user interface (GUI) that is displayed to the user on a display monitor 152, which is controlled by a display controller 154.
  • the computer system 140 also typically includes peripheral output devices, such as speakers and a printer.
  • One or more remote computers may be connected to the computer system 140 through a network interface card (NIC) 156.
  • the system memory 144 also stores an embodiment of the image enhancement system 10, a GUI driver 158, and a database 160 containing image files corresponding to the input and enhanced images 24, 26, intermediate processing data, and output data.
  • the image enhancement system 10 interfaces with the GUI driver 158 and the user input 150 to control the creation of the enhanced image 26.
  • the computer system 140 additionally includes a graphics application program that is configured to render image data on the display monitor 152 and to perform various image processing operations on the enhanced image 26.
  • the embodiments that are described in detail herein are capable of enhancing images in face and skin sensitive ways. At least some of these embodiments enhance images in ways that avoid artifacts and other undesirable problems that arise as a result of enhancing an image based on segmentations of the image into discrete subject matter regions. At least some of these embodiments are capable of adjusting default control parameters in image-dependent ways.
  • a sequence of images may be processed by the image processing system 10.
  • a respective face map and a respective skin map are calculated for each of the images in the sequence in accordance with the methods described above.


Description

    BACKGROUND
  • Image enhancement involves one or more processes that are designed to improve the visual appearance of an image. Typical image enhancement processes include sharpening, noise reduction, scene balancing, and tone-correction.
  • Global or otherwise unguided application of image enhancement processes typically does not produce satisfactory results because the enhancements that are desired by most observers of an image typically vary with image content. For example, it generally is desirable to sharpen image content, such as trees, grass, and other objects containing interesting texture, edge or boundary features, but it is undesirable to sharpen certain features of human faces (e.g., wrinkles and blemishes). Conversely, it typically is desirable to smooth wrinkles and blemishes on human faces, but it is undesirable to smooth other types of image content, such as trees, grass, and other objects containing interesting texture, edge or boundary features.
  • In an effort to avoid the undesirable effects of global or unguided image enhancement, various manual image enhancement approaches have been developed to enable selective application of image enhancement processes to various image content regions in an image. Some of these approaches rely on manual selection of the image content regions and the desired image enhancement process. Most manual image enhancement systems of this type, however, require a substantial investment of money, time, and effort before they can be used to manually enhance images. Even after a user has become proficient at using a manual image enhancement system, the process of enhancing images is typically time-consuming and labor-intensive.
  • Automatic image enhancement approaches for avoiding the undesirable effects of global or unguided image enhancement also have been developed. Some of these approaches automatically create maps of local attributes such as noise or sharpness estimations, and gain local control over enhancement parameters based on those maps. Other approaches typically rely on automatic region-based image enhancement. In accordance with one automatic region-based image enhancement approach, the amount of image enhancement is determined automatically for each region in an image based on the subject matter content in the region. In this process, a respective probability distribution is computed for each type of targeted subject matter content across the image. Each probability distribution indicates for each pixel the probability that the pixel belongs to the target subject matter. Each probability distribution is thresholded to identify candidate subject matter regions. Each of the candidate subject matter regions is subjected to a unique characteristics analysis that determines the probability that the region belongs to the target subject matter. The unique characteristics analysis process generates a belief map of detected target subject matter regions and assigns to each of the detected regions an associated probability indicating the probability that the region belongs to the target subject matter. A given belief map only relates to one type of subject matter and the pixels in any given one of the detected subject matter regions in a belief map are assigned the same probability value. Since region detection, as good as it may be, is prone to some errors, the probability discontinuities between the detected subject matter regions and other regions in each belief map necessarily produce artifacts in the resulting enhanced image. For example, if a face region is detected and enhanced using a parameter that is different from the parameter used for the rest of the body, artifacts will most probably be created in border regions. In addition, reality does not always support subject matter regions but rather a continuous change from region to region. This is most evident and most important in terms of avoiding artifacts for image sub-regions requiring drastically different enhancement types, for example, it typically is desirable to smooth facial skin areas and sharpen facial features, such as eyes and lips.
  • What are needed are image enhancement approaches that avoid artifacts and other undesirable problems that arise as a result of enhancing an image based on segmentations of the image into discrete subject matter regions. Image enhancement systems and methods that are capable of enhancing images in face and skin sensitive ways also are needed.
    Document EP1453002 discloses a batch processing method of enhancing faces in digital images.
  • SUMMARY
  • There is provided a method as set out in claim 1 and a system as set out in claim 9.
  • In one aspect, the invention features a method in accordance with which a face map is calculated. The face map includes for each pixel of an input image a respective face probability value indicating a degree to which the pixel corresponds to a human face, where variations in the face probability values are continuous across the face map. A skin map is ascertained. The skin map includes for each pixel of the input image a respective skin probability value indicating a degree to which the pixel corresponds to human skin. The process of ascertaining the skin map includes mapping all pixels of the input image having similar values to similar respective skin probability values in the skin map. The input image is enhanced with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values.
  • In another aspect, the invention features a method in accordance with which a facial content measurement value indicating a degree to which an input image contains human face content is ascertained. A tone-correction process is tuned in accordance with the facial content measurement value. The input image is enhanced in accordance with the tuned tone-correction process.
  • The invention also features apparatus and machine readable media storing machine-readable instructions causing a machine to implement the methods described above.
  • Other features and advantages of the invention will become apparent from the following description, including the drawings and the claims.
  • DESCRIPTION OF DRAWINGS
    • FIG. 1 is a block diagram of an embodiment of an image enhancement system.
    • FIG. 2 is a flow diagram of an embodiment of an image enhancement method.
    • FIG. 3 is an example of an input image.
    • FIG. 4 is a block diagram of an embodiment of the face map module shown in FIG. 1.
    • FIG. 5 is a block diagram of an embodiment of a single classification stage in an implementation of the face map module shown in FIG. 4 that is designed to evaluate candidate face patches in an image.
    • FIG. 6 is a graphical representation of an example of a face map generated by the face map module of FIG. 4 from the input image shown in FIG. 3.
    • FIGS. 7A, 7B, and 7C show exemplary graphs of human skin tone probabilities for respective channels of the CIE LCH color space.
    • FIG. 8 is a graphical representation of an example of a skin map generated by the skin map module shown in FIG. 1 from the input image shown in FIG. 3.
    • FIG. 9 is a flow diagram of an embodiment of a method by which the control parameter module shown in FIG. 1 generates control parameter values for enhancing the input image of FIG. 1.
    • FIG. 10 is a graphical representation of an example of a sharpening factor map generated by the control parameter module shown in FIG. 1 from the face map of FIG. 6 and the skin map of FIG. 8 in accordance with the method of FIG. 9.
    • FIG. 11 is a block diagram of an embodiment of the image enhancement system of FIG. 1.
    • FIG. 12 is a flow diagram of an embodiment of an image enhancement method.
    • FIG. 13 is a block diagram of an embodiment of a computer system that is programmable to implement an embodiment of the image enhancement system shown in FIG. 1.
    DETAILED DESCRIPTION
  • In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale. Elements shown with dashed lines are optional elements in the illustrated embodiments incorporating such elements.
  • I. INTRODUCTION
  • The embodiments that are described in detail below are capable of enhancing images in face and skin sensitive ways.
  • At least some of the embodiments that are described in detail below enhance images in ways that avoid artifacts and other undesirable problems that arise as a result of enhancing an image based on segmentations of the image into discrete subject matter regions. These embodiments leverage the inherent fuzziness of pixel-wise face and skin probability maps to implement face and skin sensitive enhancements to an image in a flexible way that more accurately coincides with a typical observer's expectations of the image enhancements that should be applied to different contents of the image.
  • At least some of the embodiments that are described in detail below are capable of adjusting default control parameters in image-dependent ways. In some of these embodiments, for example, a global contrast enhancement process is tuned in accordance with a determination of the degree to which an image contains human face content in order to achieve automatic face-sensitive enhancement of the image. These embodiments are capable of providing image-dependent enhancements within the framework of existing enhancement processes and, therefore, can be incorporated readily into existing image enhancement systems and processes.
  • II. A FIRST EXEMPLARY IMAGE ENHANCEMENT SYSTEM EMBODIMENT
  • A. OVERVIEW
  • FIG. 1 shows an embodiment of an image enhancement system 10 that includes a set 12 of attribute extraction modules, including a face map module 14, a skin map module 16, and possibly one or more other attribute extraction modules 18. The image enhancement system 10 also includes a control parameter module 20 and an image enhancement module 22. In operation, the image enhancement system 10 performs one or more image enhancement operations on an input image 24 to produce an enhanced image 26. In the process of enhancing the input image 24, the attribute extraction modules 14-18 determine respective measurement values 28 based on values of pixels of the input image 24 (e.g., directly from the values of the input image 24 or from pixel values derived from the input image 24). The control parameter module 20 processes the measurement values 28 to produce control parameter values 30, which are used by the image enhancement module 22 to produce the enhanced image 26 from the input image 24.
  • The input image 24 may correspond to any type of digital image, including an original image (e.g., a video frame, a still image, or a scanned image) that was captured by an image sensor (e.g., a digital video camera, a digital still image camera, or an optical scanner) or a processed (e.g., sub-sampled, filtered, reformatted, scene-balanced or otherwise enhanced or modified) version of such an original image. In some embodiments, the input image 24 is an original full-sized image, the attribute extraction modules 12 process a sub sampled version of the original full-sized image, and the image enhancement module 22 processes the original full image 24.
  • FIG. 2 shows an embodiment of a method that is implemented by the image processing system 10.
  • In accordance with the method of FIG. 2, the face map module 14 calculates a face map that includes for each pixel of the input image 24 a respective face probability value indicating a degree to which the pixel corresponds to a human face (FIG. 2, block 34). As explained in detail below, variations in the face probability values are continuous across the face map. Thus, the values of neighboring face map pixels are similar. This approach avoids the artifacts and other discontinuities that otherwise might result from a segmentation of the input image 24 into discrete facial regions and non-facial regions.
  • The skin map module 16 ascertains a skin map that includes for each pixel of the input image a respective skin probability value indicating a degree to which the pixel corresponds to human skin (FIG. 2, block 36). In this process, the skin map module 16 maps all pixels of the input image 24 having similar values to similar respective skin probability values in the skin map. This approach avoids both (i) the artifacts and other discontinuities that otherwise might result from a segmentation of the input image 24 into discrete human-skin toned regions and non-human-skin toned regions, and (ii) artifacts and other discontinuities that otherwise might result from undetected faces or partly detected faces.
  • It is noted that the order in which the face map module 14 and the skin map module 16 determine the face map and the skin map is immaterial. In some embodiments, the face map module 14 determines the face map before the skin map module 16 determines the skin map. In other embodiments, the skin map module 16 determines the skin map before the face map module 14 determines the face map. In still other embodiments, the face map module 14 and the skin map module 16 determine the face map and the skin map in parallel.
  • The control parameter module 20 and the image enhancement module 22 cooperatively enhance the input image 24 with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values (FIG. 2, block 38). The measurement values 28 that are generated by the attribute extraction modules 14-18 are permitted to vary continuously from pixel-to-pixel across the image. This feature allows the control parameter module 20 to flexibly produce the control parameter values 30 in ways that more accurately coincide with a typical observer's expectations of the image enhancements that should be applied to different contents of the image. In this way, the image enhancement module 22 can enhance the input image 24 with an enhancement level that is both face and skin sensitive.
  • FIG. 3 shows an example 32 of the input image 24 that contains two human faces. In the following detailed description, the exemplary image 32 and the various image data derived therefrom are used for illustrative purposes only to explain one or more aspects of one or more embodiments of the invention. In some embodiments, the control parameter module 20 produces control parameter values 30 that selectively sharpen non-human-skin toned areas of the input image 32 more than human-skin toned areas and that selectively sharpen human-skin toned areas outside of human faces more than human-skin toned areas inside human faces. Thus, in accordance with a typical observer's desired results, non-human-skin toned features (e.g., eyes, lips, hairline, trees, and grass) inside and outside human faces can be sharpened aggressively, human-skin toned features (e.g., autumn leaves and dry grass) outside of human faces can be sharpened less aggressively, and human-skin toned features within human faces can be sharpened still less aggressively or not at all.
  • B. CALCULATING FACE MAPS
  • 1. OVERVIEW
  • As explained above, the face map module 14 calculates a face map that includes for each pixel of the input image 24 a respective face probability value indicating a degree to which the pixel corresponds to a human face (FIG. 2, block 34). A characteristic feature of the face map is that the variations in the face probability values are continuous across the face map. As used herein, the term "continuous" means having the property that the absolute value of the numerical difference between the value at a given point and the value at any point in a neighborhood of the given point can be made as close to zero as desired by choosing the neighborhood small enough.
  • In general, the face map module 14 may calculate the face probability values indicating the degrees to which the input image pixels correspond to human face content in a wide variety of different ways, including but not limited to template-matching techniques, normalized correlation techniques, and eigenspace decomposition techniques. In some embodiments, the face map module 14 initially calculates the probabilities that patches (i.e., areas of pixels) of the input image 24 correspond to a human face and then calculates a pixel-wise face map from the patch probabilities (e.g., by assigning to each pixel the highest probability of any patch that contains the pixel).
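  • A minimal sketch of this patch-to-pixel step is given below (an illustration only, not the patent's implementation; the function name, the patch tuple layout, and the use of NumPy are assumptions made for the example). Each pixel receives the highest probability of any patch that contains it.
    import numpy as np

    def patches_to_pixel_face_map(image_shape, patches):
        """Build a pixel-wise face map from patch probabilities.

        image_shape -- (height, width) of the input image
        patches     -- iterable of (x, y, width, height, probability) tuples,
                       where (x, y) is the top-left corner of the patch
        Each pixel keeps the highest probability of any patch containing it.
        """
        face_map = np.zeros(image_shape, dtype=np.float64)
        for x, y, w, h, prob in patches:
            region = face_map[y:y + h, x:x + w]
            np.maximum(region, prob, out=region)  # per-pixel maximum over patches
        return face_map

    # Example: two overlapping candidate face patches on a 100x100 image
    example = patches_to_pixel_face_map((100, 100),
                                        [(10, 10, 40, 40, 0.8),
                                         (30, 30, 40, 40, 0.6)])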
  • The embodiments that are described in detail below include a face detection module that rejects patches partway through an image patch evaluation process; as the evaluation continues, the population of patches still classified as "faces" becomes progressively more likely to correspond to facial areas of the input image. The face probability generator 44 uses the exit point of the evaluation process as a measure of certainty that a patch is a face.
  • 2. SINGLE-VIEW BASED FACE MAP GENERATION
  • FIG. 4 shows an embodiment 40 of the face map module 14 that includes a cascade 42 of classification stages (C1, C2, ..., Cn, where n has an integer value greater than 1) (also referred to herein as "classifiers") and a face probability generator 44. In operation, each of the classification stages performs a binary discrimination function that classifies a patch 46 that is derived from the input image 24 into a face class ("Yes") or a non-face class ("No") based on a discrimination measure that is computed from one or more attributes of the image patch 46. The discrimination function of each classification stage typically is designed to detect faces in a single pose or facial view (e.g., frontal upright faces). Depending on the evaluation results produced by the cascade 42, the face probability generator 44 assigns a respective face probability value to each pixel of the input image 24 and stores the assigned face probability value in a face map 48.
  • Each classification stage Ci of the cascade 42 has a respective classification boundary that is controlled by a respective threshold ti, where i=1, ..., n. The value of the computed discrimination measure relative to the corresponding threshold determines the class into which the image patch 46 will be classified by each classification stage. For example, if the discrimination measure that is computed for the image patch 46 is above the threshold for a classification stage, the image patch 46 is classified into the face class (Yes) whereas, if the computed discrimination measure is below the threshold, the image patch 46 is classified into the non-face class (No).
  • FIG. 5 shows an exemplary embodiment of a single classification stage 50 in an embodiment of the classifier cascade 42. In this embodiment, the image patch 46 is projected into a feature space in accordance with a set of feature definitions 52. The image patch 46 includes any information relating to an area of an input image, including color values of input image pixels and other information derived from the input image needed to compute feature weights. Each feature is defined by a rule that describes how to compute or measure a respective weight (w0, w1, ..., wL) for an image patch that corresponds to the contribution of the feature to the representation of the image patch in the feature space spanned by the set of features 52. The set of weights (w0, w1, ..., wL) that is computed for an image patch constitutes a feature vector 54. The feature vector 54 is input into the classification stage 50. The classification stage 50 classifies the image patch 46 into a set 56 of candidate face areas or a set 58 of non-faces. If the image patch is classified as a face, it is passed to the next classification stage, which implements a different discrimination function.
  • In some implementations, the classification stage 50 implements a discrimination function that is defined in equation (1):
    \sum_{\lambda=1}^{L} g_\lambda h_\lambda(u) > 0        (1)
    where u contains values corresponding to the image patch 46 and the g_\lambda are weights that the classification stage 50 applies to the corresponding threshold function h_\lambda(u), which is defined by:
    h_\lambda(u) = \begin{cases} 1, & \text{if } p_\lambda w_\lambda(u) > p_\lambda t_\lambda \\ 0, & \text{otherwise} \end{cases}        (2)
    The variable p_\lambda has a value of +1 or -1 and the function w_\lambda(u) is an evaluation function for computing the features of the feature vector 54.
  • Additional details regarding the construction and operation of the classifier cascade 42 can be obtained from U.S. Patent No. 7,099,510.
  • In summary, the classifier cascade 42 processes the patches 46 of the input image 24 through the classification stages (C1, C2, ..., Cn), where each image patch is processed through a respective number of the classification stages depending on a per-classifier evaluation of the likelihood that the patch corresponds to human face content.
  • The face probability generator 44 calculates the probabilities that the patches 46 of the input image 24 correspond to human face content (i.e., the face probability values) in accordance with the respective numbers of classifiers through which corresponding ones of the patches were processed. For example, in one exemplary embodiment, the face probability generator 44 maps the number of unevaluated stages to a respective face probability value, where large numbers are mapped to low probability values and small numbers are mapped to high probability values. The face probability generator 44 calculates the pixel-wise face map 48 from the patch probabilities (e.g., by assigning to each pixel the highest probability of any patch that contains the pixel).
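  • The exit-point idea described above can be sketched as follows (a simplified, hypothetical example: the stage interface, the linear score, and the linear mapping from unevaluated stages to a probability are all assumptions, not the patented classifier).
    def evaluate_cascade(patch_features, stages):
        """Return the number of stages that remained unevaluated for this patch.

        stages -- list of (weights, threshold) pairs; a patch is rejected by the
                  first stage whose weighted score falls below its threshold.
        """
        for index, (weights, threshold) in enumerate(stages):
            score = sum(w * f for w, f in zip(weights, patch_features))
            if score <= threshold:      # rejected: remaining stages stay unevaluated
                return len(stages) - (index + 1)
        return 0                        # passed every stage

    def face_probability(unevaluated, total_stages):
        """Map the exit point to a probability: fewer unevaluated stages -> higher value."""
        return 1.0 - unevaluated / float(total_stages)

    stages = [([0.5, 0.5], 0.2), ([1.0, 0.3], 0.6), ([0.2, 0.9], 0.7)]
    prob = face_probability(evaluate_cascade([0.9, 0.8], stages), len(stages))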
  • The pixel-wise face probability values typically are processed to ensure that variations in the face probability values are continuous across the face map. In some embodiments, the face probability values in each detected face are smoothly reduced down to zero as the distance from the center of the detected face increases. In some of these embodiments, the face probability of any pixel is given by the original face probability value multiplied by a smooth monotonically decreasing function of the distance from the center of the face, where the function has a value of one at the face center and a value of zero at a specified distance from the face center. In one exemplary embodiment, a respective line segment through the center of each of the detected faces and oriented according to the in-plane rotation of the detected face is determined. The probability attached to any pixel in a given one of the detected face regions surrounding the respective line segment is then given by the face probability value multiplied by a clipped Gaussian function of the distance from that pixel to the respective line segment. The clipped Gaussian function has values of one on the respective line segment and on a small oval region around the respective line segment; in other regions of the detected face, the values of the clipped Gaussian function decay to zero as the distance from the respective line segment increases.
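  • A sketch of this falloff computation follows (illustrative only: the point-to-segment distance, the clipping radius, and the sigma value are assumed parameters, not values taken from the patent).
    import math

    def clipped_gaussian_falloff(px, py, ax, ay, bx, by, core_radius, sigma):
        """Multiplier in [0, 1]: one on and near the face line segment (a-b),
        decaying smoothly to zero as the distance from the segment grows."""
        # Distance from pixel (px, py) to segment (ax, ay)-(bx, by)
        seg_dx, seg_dy = bx - ax, by - ay
        seg_len_sq = seg_dx ** 2 + seg_dy ** 2
        if seg_len_sq == 0:
            dist = math.hypot(px - ax, py - ay)
        else:
            t = max(0.0, min(1.0, ((px - ax) * seg_dx + (py - ay) * seg_dy) / seg_len_sq))
            dist = math.hypot(px - (ax + t * seg_dx), py - (ay + t * seg_dy))
        if dist <= core_radius:
            return 1.0                   # clipped to one close to the segment
        return math.exp(-((dist - core_radius) ** 2) / (2.0 * sigma ** 2))

    # Pixel probability = detected face probability times the falloff multiplier
    pixel_prob = 0.9 * clipped_gaussian_falloff(55, 40, 40, 40, 60, 40, 8.0, 10.0)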
  • 3. MULTI-VIEW BASED FACE MAP GENERATION
  • In some embodiments, each of the image patches 46 is passed through at least two parallel classifier cascades that are configured to evaluate different respective facial views. Some of these embodiments are implemented in accordance with one or more of the multi-view face detection methods described in Jones and Viola, "Fast Multi-view Face Detection, " Mitsubishi Electric Research Laboratories, MERL-TR2003-96, July 2003 (also published in IEEE Conference on Computer Vision and Pattern Recognition, June 18, 2003).
  • In these embodiments, the face probability generator 44 determines the face probability values from the respective numbers of classifiers of each cascade through which corresponding ones of the patches were evaluated. For example, in one exemplary embodiment, the face probability generator 44 maps the number of unevaluated classification stages in the most successful one of the parallel classifier cascades to a respective face probability value, where large numbers are mapped to low probability values and small numbers are mapped to high probability values. In some embodiments, the classifier cascade with the fewest number of unevaluated stages for the given patch is selected as the most successful classifier cascade. In other embodiments, the numbers of unevaluated stages in the parallel cascades are normalized for each view before being compared. The face probability generator 44 calculates the pixel-wise face map 48 from the patch probabilities (e.g., by assigning to each pixel the highest probability of any patch that contains the pixel). The pixel-wise face probability values typically are processed in the manner described in the preceding section to ensure that variations in the face probability values are continuous across the face map. In the resulting face map, face patches have a smooth (e.g., Gaussian) profile that descends smoothly from the nominal patch value to the nominal background value across a large number of pixels.
  • FIG. 6 shows an example of a face map 60 that is generated by the face map module 14 (FIG. 1) from the input image 32 (FIG. 3) in accordance with the multi-view based face map generation process described above. In this face map 60, darker values correspond to higher probabilities that the pixels correspond to human face content and lighter values correspond to lower probabilities that the pixels correspond to human face content.
  • C. ASCERTAINING SKIN MAPS
  • As explained above, the skin map module 16 ascertains a skin map that includes for each pixel of the input image a respective skin probability value indicating a degree to which the pixel corresponds to human skin (FIG. 2, block 36). A characteristic feature of the skin map is that all pixels of the input image 24 having similar values are mapped to similar respective skin probability values in the skin map. As used herein with respect to pixel values, the term "similar" means that the pixel values are the same or nearly the same and appear visually indistinguishable from one another. This feature of the skin map is important, for example, for pixels of human-skin image patches whose colors fall outside of the standard human-skin tone range. This may happen, for example, in shaded face-patches or alternatively in face highlights, where skin segments may sometimes have a false boundary between skin and non-skin regions. The skin map values vary continuously without artificial boundaries even in skin patches trailing far away from the standard human-skin tone range.
  • In general, the skin map module 16 may ascertain the skin probability values indicating the degrees to which the input image pixels correspond to human skin in a wide variety of different ways. In a typical approach, the skin map module 16 ascertains the per-pixel human-skin probability values from human-skin tone probability distributions in respective channels of a color space. For example, in some embodiments, the skin map module 16 ascertains the per-pixel human-skin tone probability values from human-skin tone probability distributions in the CIE LCH color space (i.e., P(skin|L), P(skin|C), and P(skin|H)). These human-skin tone probability distributions are approximated by Gaussian normal distributions G(p; μ, σ) that are obtained from mean (μ) and standard deviation (σ) values for each of the p = L, C, and H color channels. In some embodiments, the mean (μ) and standard deviation (σ) values for each of the p = L, C, and H color channels are obtained from O. Martinez Bailac, "Semantic retrieval of memory color content", PhD Thesis, Universitat Autonoma de Barcelona, 2004. Exemplary human-skin tone probability distributions for the L, C, and H color channels are shown in FIGS. 7A, 7B, and 7C, respectively.
  • The skin map module 16 ascertains a respective skin probability value for each pixel of the input image 24 by converting the input image 24 into the CIE LCH color space (if necessary), determining the respective skin-tone probability value for each of the L, C, and H color channels based on the corresponding human-skin tone probability distributions, and computing the product of the color channel probabilities, as shown in equation (3):
    P(\mathrm{skin} \mid L_i, C_i, H_i) = P(\mathrm{skin} \mid L_i) \cdot P(\mathrm{skin} \mid C_i) \cdot P(\mathrm{skin} \mid H_i)        (3)
  • In some embodiments, the skin map values are computed by applying to the probability function P(skin | L,C,H) a range adaptation function that provides a clearer distinction between skin and non-skin pixels. In some of these embodiments, the range adaptation function is a power function of the type defined in equation (4):
    M_{SKIN}(x, y) = P(\mathrm{skin} \mid L_i, C_i, H_i)^{1/\gamma}        (4)
    where γ > 0 and M_SKIN(x,y) is the skin map 62 value at location (x,y). In one exemplary embodiment, γ = 32. The skin map function defined in equation (4) attaches high probabilities to a large spectrum of skin tones, while non-skin features typically attain lower probabilities.
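  • The skin map computation of equations (3) and (4) can be sketched as follows (for illustration only: the mean and standard deviation values are placeholders rather than the statistics from the cited thesis, the Gaussian is unnormalized, and the conversion of the input image to CIE LCH is assumed to have been done elsewhere).
    import math

    # Placeholder (mu, sigma) pairs for the L, C, and H channels -- stand-in
    # values for illustration, not the published statistics.
    SKIN_STATS = {"L": (70.0, 15.0), "C": (25.0, 10.0), "H": (40.0, 12.0)}

    def gaussian(value, mu, sigma):
        return math.exp(-((value - mu) ** 2) / (2.0 * sigma ** 2))

    def skin_probability(l, c, h, gamma=32.0):
        """Equation (3): product of per-channel likelihoods;
        equation (4): range adaptation by the 1/gamma power."""
        p = (gaussian(l, *SKIN_STATS["L"]) *
             gaussian(c, *SKIN_STATS["C"]) *
             gaussian(h, *SKIN_STATS["H"]))
        return p ** (1.0 / gamma)

    m_skin = skin_probability(68.0, 27.0, 42.0)   # skin map value at one pixel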
  • FIG. 8 shows an example of a skin map that is generated by the skin map module 16 (FIG. 1) from the input image 32 (FIG. 3) in accordance with the skin map generation process described in connection with equations (3) and (4). In this skin map 62, darker values correspond to lower probabilities that the pixels correspond to human skin content and lighter values correspond to higher probabilities that the pixels correspond to human skin content. As shown by a comparison of FIGS. 3 and 8, the skin map 62 distinguishes the pixels of the input image 32 depicting skin from the pixels depicting the pearls and the lace; and within the faces appearing in the input image 32, the skin map 62 distinguishes skin from non-skin features (e.g., the eyes and lips).
  • D. ENHANCING THE INPUT IMAGE
  • As explained above, the control parameter module 20 and the image enhancement module 22 cooperatively enhance the input image 24 with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values (FIG. 2, block 38). In this process, the control parameter module 20 produces control parameter values 30 from the measurement values 28 that are output by the attribute extraction modules 14-18. The image enhancement module 22 produces the enhanced image 26 by performing on the input image 24 one or more image processing operations that are controlled by the control parameter values 30.
  • In general, the control parameter module 20 and the image enhancement module 22 may be configured to perform a wide variety of face and skin sensitive enhancements on the input image 24. The embodiments that are described in detail below, for example, are configured to perform face and skin sensitive sharpening of the input image 24 and face-sensitive contrast enhancement of the input image.
  • In some embodiments, the control parameter module 20 determines a global control parameter value based on global features of the input image 24. The control parameter module 20 then determines for each pixel of the input image 24 a respective local control parameter value that corresponds to a respective modification of the global parameter value in accordance with the respective face probability values and the respective skin probability values. The image enhancement module 22 enhances the input image 24 in accordance with the local control parameter values.
  • 1. SHARPENING THE INPUT IMAGE
  • The embodiments of the control parameter module 20 and the image enhancement module 22 that are described in detail below are configured to sharpen the input image 24 with a sharpening level that varies pixel-by-pixel in accordance with the respective face probability values in the face map and the respective skin probability values in the skin map. In some embodiments, the control parameter module 20 computes for each pixel of the input image a respective sharpening parameter value that controls the amount of sharpening that is applied to the pixel.
  • The sharpening parameter values that are computed by the control parameter module 20 are configured to cause the image enhancement module 22 to sharpen the input image 24 with sharpening levels in a high sharpening level range at pixel locations associated with skin probability values in a low skin probability range. This type of sharpening satisfies the typical desire to aggressively sharpen non-human-skin toned features (e.g., eyes, lips, hairline, trees, and grass) inside and outside human faces.
  • The sharpening parameter values that are computed by the control parameter module 20 also are configured to cause the image enhancement module 22 to sharpen the input image with sharpening levels in a low sharpening level range at pixel locations associated with skin probability values in a high skin probability range and face probability values in a high face probability range. In this case, the low sharpening level range is below the high sharpening level range and the high skin probability range is above the low skin probability range. This type of sharpening satisfies the typical desire to less aggressively sharpen (or not sharpen at all) human-skin toned features within human faces.
  • The sharpening parameter values that are computed by the control parameter module 20 also are configured to cause the image enhancement module 22 to sharpen the input image 24 with sharpening levels in an intermediate sharpening level range at pixel locations associated with skin probability values in the high skin probability range and face probability values in a low face probability range. In this case, the intermediate sharpening level range is between the high and low sharpening level ranges, and the low face probability range is below the high face probability range. This type of sharpening gives flexibility in sharpening features outside of human faces. For these pixels, the decision of whether or not to sharpen is unclear: the human-skin toned features may be features, such as autumn leaves and dry grass, that observers typically want to appear sharp and crisp, or they may be features, such as a missed human face or other human skin content, that observers typically do not want sharpened.
  • FIG. 9 shows an embodiment of a method in accordance with which the control parameter module 20 computes a sharpening factor map that contains for each pixel of the input image 24 a respective sharpening factor that controls the sharpening levels applied by the image enhancement module 22 in the manner described in the preceding paragraphs. In this embodiment, the control parameter module 20 determines the sharpening factor map (FIG. 9, block 64), determines a global sharpening parameter (FIG. 9, block 66), and generates pixel sharpening parameters from a combination of the sharpening factor map and the global sharpening parameter value (FIG. 9, block 68).
  • In some of the embodiments in accordance with the method of FIG. 9, the sharpening factors g(f(x,y),s(x,y)) are computed in accordance with equation (5):
    g(f(x,y), s(x,y)) = 1 - \frac{1 + f(x,y)}{2} \cdot \begin{cases} 0, & \text{if } s(x,y) < Lth(f(x,y)) \\ 1, & \text{if } s(x,y) > Hth(f(x,y)) \\ \dfrac{s(x,y) - Lth(f(x,y))}{Hth(f(x,y)) - Lth(f(x,y))}, & \text{otherwise} \end{cases}        (5)
    where f(x,y) is the face map probability for pixel (x,y), s(x,y) is the skin map probability for pixel (x,y), Lth(f(x,y)) is a lower skin probability threshold, and Hth(f(x,y)) is an upper skin probability threshold. In general, both Lth(f(x,y)) and Hth(f(x,y)) may vary depending on face probability values.
  • In some embodiments, the upper skin threshold is set to an empirically determined constant value. In one exemplary embodiment, the upper skin threshold is set to 0.69. In some embodiments, the sharpening of features in detected faces (e.g., eyes, lips, etc.) is lower than the sharpening of details outside of the detected faces (e.g., trees and buildings). In some of these embodiments, the adaptive sharpening is implemented by Hth(f(x,y)) = aH + bH · f(x,y). In one embodiment, aH = 0.6836 and bH = 0. In another embodiment, aH = 0.6836 and bH = -0.03.
  • In some embodiments, the lower skin threshold Lth(f(x,y)) is a decreasing function of the face probability value f(x,y), which adapts the lower skin threshold to provide more conservative sharpening of facial pixels. In some of these embodiments, the lower skin threshold is given by equation (6):
    Lth(f(x,y)) = a + b \cdot f(x,y)        (6)
    where a and b are empirically determined constants. In one exemplary embodiment, a = 0.61 and b = -0.008.
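  • A compact sketch of the sharpening factor of equations (5) and (6) follows (the equation forms are the reconstructions given above and the constants are the exemplary values quoted in the text; both should be treated as illustrative rather than normative).
    def sharpening_factor(f, s, a_h=0.6836, b_h=-0.03, a=0.61, b=-0.008):
        """Per-pixel sharpening factor g(f, s) from face probability f and skin
        probability s, following equations (5) and (6) as reconstructed above."""
        hth = a_h + b_h * f            # upper skin threshold from the text
        lth = a + b * f                # lower skin threshold, equation (6) as reconstructed
        if s < lth:
            soft = 0.0                 # clearly non-skin: full sharpening
        elif s > hth:
            soft = 1.0                 # clearly skin: sharpening reduced
        else:
            soft = (s - lth) / (hth - lth)
        return 1.0 - 0.5 * (1.0 + f) * soft

    g_face_skin = sharpening_factor(f=0.9, s=0.8)   # low factor: skin inside a face
    g_background = sharpening_factor(f=0.0, s=0.1)  # factor 1.0: non-skin background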
  • FIG. 10 shows an example of a sharpening factor map 70 that is generated by the control parameter module 20 from the face map 60 shown in FIG. 6 and the skin map 62 shown in FIG. 8 in accordance with the sharpening factor generation process described above in connection with equations (5) and (6).
  • In some embodiments, the sharpening factor values in the sharpening factor map are combined with a global sharpening parameter, which is used to control or tune the sharpening levels that are applied to the pixels of the input image 24 to produce the enhanced image 26. The sharpening enhancement process to be tuned may be, for example, any type of spatial sharpening filter, including but not limited to an unsharp-masking filter, a high-pass filter, a high-boost filter, a difference-of-Gaussians based filter, and a Laplacian-of-Gaussian filter.
  • In some embodiments, the image enhancement process to be tuned is a non-linear smoothing and sharpening filter that combines bilateral filtering and generalized unsharp-masking. This filter may be derived from a low-pass filter of the type defined in equation (7):
    Out(x,y) = \sum_{ij} K_{ij} \, In(x-i, y-j)        (7)
    where (x,y) specifies the coordinate locations of the pixels and the kernel is designed such that \sum_{ij} K_{ij} = 1. In particular, the low-pass filter defined in equation (7) may be re-written as:
    Out(x,y) = In(x,y) + \sum_{ij} K_{ij} \, [In(x-i, y-j) - In(x,y)]        (8)
    The corresponding bilateral filter is given by equation (9):
    Out(x,y) = In(x,y) + \sum_{ij} K_{ij} \, \psi(In(x-i, y-j) - In(x,y))        (9)
    where ψ(z), with z = In(x-i,y-j) - In(x,y), is usually a non-linear function. Note that if ψ(z) = z, equation (9) becomes equation (8). Similarly, if ψ(z) = -λ·z, equation (9) becomes:
    Out(x,y) = (1 + \lambda) \, In(x,y) - \lambda \sum_{ij} K_{ij} \, In(x-i, y-j)        (10)
    which is a linear unsharp mask convolution filter in which λ is a sharpness parameter. In the illustrated embodiments, the image enhancement module 22 sharpens the input image in accordance with equation (9) and equation (11), which defines ψ(z) as follows:
    \psi(z) = R \cdot \begin{cases} z, & |z| < Th \\ \operatorname{sign}(z) \, [Th + \lambda \, (Th - |z|)], & \text{otherwise} \end{cases}        (11)
    In equation (11), Th is a noise threshold that determines the local deviation level that will be smoothed (it should be high in the case of strong noise). The sharpness parameter λ determines the extent of sharpness that is introduced into significant edges (higher values result in sharper images). The parameter R is a radicality parameter that determines how radical the enhancement should be: when R = 1, the other parameters operate at their nominal values; when R = 0, the output equals the input; and when 0 < R < 1, the output is a weighted average between these two extremes.
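  • To make the structure of equations (9) and (11) concrete, a small sketch follows (assumptions: a normalized 3x3 box kernel, a grayscale floating-point image, and the reconstructed form of ψ(z) shown above; it illustrates the shape of the filter, not the patented implementation).
    import numpy as np

    def psi(z, th, lam, r):
        """Reconstructed equation (11): smooth small deviations, sharpen large ones."""
        small = np.abs(z) < th
        large_branch = np.sign(z) * (th + lam * (th - np.abs(z)))
        return r * np.where(small, z, large_branch)

    def smooth_sharpen(image, th=8.0, lam=1.5, r=1.0):
        """Equation (9) with a normalized 3x3 kernel (edges handled by clamping)."""
        out = image.astype(np.float64).copy()
        padded = np.pad(image.astype(np.float64), 1, mode="edge")
        kernel_weight = 1.0 / 9.0
        h, w = image.shape
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                neighbor = padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                out += kernel_weight * psi(neighbor - image, th, lam, r)
        return out

    enhanced = smooth_sharpen(np.random.default_rng(0).uniform(0, 255, (32, 32)))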
  • In some embodiments, the noise threshold Th_NOMINAL is given by equation (12):
    Th_{NOMINAL} = \begin{cases} 4 N_S, & N_S < 3 \\ 12, & 3 \le N_S \end{cases}        (12)
    where the noise attribute N_S is estimated as the mean absolute high-pass content (HP(x)) over S, as defined in equation (13):
    N_S = \int_{z \in S} |HP(z)| \, dz        (13)
    In some embodiments, the smooth scene regions (S) are defined as regions with consistently low LP(z) activity (i.e., the values of LP(z) in the smooth region and its V-vicinity are lower than a threshold T_sm). Highlights and shadows are excluded from S due to possible tone saturation, which might introduce textures into S. That is:
    S = \{\, t \mid G_H > In(t) > G_L \ \text{and}\ \forall x \in [t - V, t + V],\ LP(x) < T_{sm} \,\}        (14)
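  • A rough sketch of the noise-threshold estimate of equations (12)-(14) is given below (illustrative assumptions: simple 3x3 box low-pass and high-pass residual filters, arbitrary GL, GH and Tsm values, and a discrete mean over the smooth region in place of the integral).
    import numpy as np

    def estimate_noise_threshold(image, g_low=20.0, g_high=235.0, t_sm=4.0):
        """Equations (12)-(14), roughly: find smooth, non-saturated regions S,
        measure the mean absolute high-pass content there, and map it to Th."""
        img = image.astype(np.float64)
        padded = np.pad(img, 1, mode="edge")
        # 3x3 box low-pass and its high-pass residual
        low = sum(padded[1 + di:1 + di + img.shape[0], 1 + dj:1 + dj + img.shape[1]]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
        high = img - low
        # Rough proxy for equation (14): low local activity, away from highlights/shadows
        smooth = (np.abs(low - img) < t_sm) & (img > g_low) & (img < g_high)
        n_s = np.abs(high[smooth]).mean() if smooth.any() else 0.0   # equation (13)
        return 4.0 * n_s if n_s < 3.0 else 12.0                      # equation (12)

    th_nominal = estimate_noise_threshold(np.random.default_rng(1).uniform(0, 255, (64, 64)))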
  • Additional details regarding the estimation of the noise attribute N_S can be found in Application No. 11/388,152, which was filed on March 22, 2006. In some embodiments, the sharpness parameter λ is calculated in accordance with equation (15):
    \lambda = \lambda_{GLOBAL} \cdot g(f(x,y), s(x,y))        (15)
    where g(f(x,y),s(x,y)) is defined in equation (5) and λ_GLOBAL is a global sharpening parameter that is given by equation (16):
    \lambda_{GLOBAL} = \begin{cases} 2.5, & ShF < 0.1 \\ 2.5 - 7.5 \cdot (ShF - 0.1), & 0.1 \le ShF < 0.3 \\ 1, & 0.3 \le ShF \end{cases}        (16)
    where ShF is a sharpness measure. In some embodiments, the sharpness measure is determined in accordance with the method described in Application No. 10/835,910, which was filed on April 29, 2004. In other embodiments, the sharpness measure amounts to averaging frequency band ratios over regions more likely to be sharp edges. In some of these embodiments, the sharpness measure is estimated by equation (17):
    ShF = \int_{z \in F} \left( \frac{HP(z)}{LP(z)} \right)^{2} dz        (17)
    where F is a feature region characterized by a threshold T_sh in the local low-pass feature, as defined in equation (18):
    F = \{\, z \mid LP(z) > T_{sh} \,\}        (18)
    The global sharpening parameter (λ_GLOBAL), which is computed for each input image, takes into account the different levels of quality in each image. Thus, with respect to two different images that describe the same scene but differ in quality (e.g., one noisy and one clean, or one sharp and one blurry), the sharpness parameter λ will be different, even if the skin and face values are the same.
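  • The per-pixel sharpness parameter of equations (15) and (16) can be sketched as follows (the piecewise form of λGLOBAL is the reconstruction shown above, and the sharpness measure ShF is assumed to have been computed separately, for example per equations (17) and (18)).
    def global_sharpening_parameter(shf):
        """Reconstructed equation (16): less sharpening gain for already-sharp images."""
        if shf < 0.1:
            return 2.5
        if shf < 0.3:
            return 2.5 - 7.5 * (shf - 0.1)   # linear ramp from 2.5 down to 1.0
        return 1.0

    def pixel_sharpness_parameter(shf, g_face_skin):
        """Equation (15): global gain modulated by the per-pixel factor g(f, s)."""
        return global_sharpening_parameter(shf) * g_face_skin

    lam = pixel_sharpness_parameter(shf=0.2, g_face_skin=0.5)   # 1.75 * 0.5 = 0.875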
  • 2. CONTRAST-ENHANCING THE INPUT IMAGE
  • The embodiments that are described in detail below are configured to perform face-sensitive contrast enhancement of the input image. In general, the contrast enhancement may be performed instead of, or in addition to, the sharpening enhancement described in the preceding section.
  • FIG. 11 shows an embodiment 72 of the image enhancement system 10 in which one of the other attribute extraction modules 18 is a tone mask extraction module 74, and the image enhancement module 22 includes a tone mapping module 76 and an enhanced image generator 78.
  • FIG. 12 shows an embodiment of a method in accordance with which the control parameter module 20 and the image enhancement module 22 cooperatively enhance the input image 24 to produce the enhanced image 26.
  • The control parameter module 20 ascertains a facial content measurement value indicating a degree to which the input image 24 contains human face content (FIG. 12, block 80). In general, the control parameter module 20 may ascertain the facial content measurement value in any of a wide variety of different ways. In some embodiments, the control parameter module 20 ascertains the facial measurement value by calculating the average of the values in the face map that is generated by the face map module 14. The resulting average value represents an estimate of the proportion of the input image 24 occupied by facial content.
  • The control parameter module 20 then tunes a tone-correction process in accordance with the facial content measurement value (FIG. 12, block 82). In some embodiments, the control parameter module 20 determines a weight that is used by the image enhancement module 22 to control the amount of an image-dependent (though not necessarily face-sensitive) default contrast enhancement 86 that should be applied to the input image 24. In the illustrated embodiments, the amount of the applied default contrast enhancement decreases with the value of the weight determined by the control parameter value. In general, the weight may be any function that maps facial content measurement values Γ to weight values υ(Γ). In general, the weight values may take on any positive or negative value. In one exemplary embodiment, the weight function υ(Γ) is given by equation (19):
    \upsilon(\Gamma) = \begin{cases} 1, & \text{if } \Gamma > \theta_F \\ 0, & \text{if } \Gamma \le \theta_F \end{cases}        (19)
    where θ_F is an empirically determined threshold. In some exemplary embodiments, θ_F = 0.15.
  • The image enhancement module 22 enhances the input image 24 using an image-dependent default tone mapping that is performed by the tone mapping module 76 and that depends on a tone mask 77 extracted by the tone mask extraction module 74. The enhanced image generator 78 then enhances the input image 24 based on the image-dependent weight received from the control parameter module 20 and the tone-corrected pixel values received from the tone mapping module 76.
  • The tone mask extraction module 74 generates a tone mask of the input image 24. In some embodiments, the tone mask extraction module 74 generates the tone mask by performing an iterative operation on the input image 24. For instance, some embodiments generate the tone mask by using an iterative process (such as a Retinex process) that makes assumptions about the human visual system. Other embodiments generate this mask by performing a non-iterative operation on the input image 24. For example, some embodiments perform a low-pass filtering operation on the input image 24 to generate the tone mask (see, e.g., the embodiment described in connection with FIG. 2 of U.S. Patent No. 6,813,041).
  • In some embodiments, the tone mapping module 76 modifies the pixel color values in the input image 24 through linear combinations of the tone mask values and the pixel color values in the input image 24. In other embodiments, the tone mapping module 76 modifies the pixel color values in the input image 24 non-linearly. Among the types of non-linear operations that may be used to modify the input image pixel color values are rotated scaled sinusoidal functions, sigmoidal functions, and exponential functions that have input image pixel color values as part of their base (see, e.g., the embodiment described in connection with FIG. 3 of U.S. Patent No. 6,813,041).
  • The enhanced image generator 78 enhances the input image 24 based on the face-sensitive weight received from the control parameter module 20 and the tone-corrected pixel values received from the tone mapping module 76. In some exemplary embodiments, the enhanced image generator 78 calculates the values of the enhanced image pixels from a weighted combination of the values of the input image pixels and the values (T(x,y)) of the tone-corrected pixels 86 in accordance with equation (20):
    Out(x,y) = \upsilon(\Gamma) \cdot T(x,y) + (1 - \upsilon(\Gamma)) \cdot In(x,y)        (20)
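  • Putting the face-sensitive contrast enhancement together, the sketch below combines the face map average, the step weight of equation (19), and the blend of equation (20) (a minimal illustration: the tone-corrected image T is assumed to come from whatever tone-mapping step is in use, and the example arrays are invented).
    import numpy as np

    def face_sensitive_contrast(input_image, tone_corrected, face_map, theta_f=0.15):
        """Blend the default tone-corrected image with the input according to the
        proportion of facial content (equations (19) and (20))."""
        gamma_measure = float(face_map.mean())            # facial content measurement
        weight = 1.0 if gamma_measure > theta_f else 0.0  # equation (19)
        return weight * tone_corrected + (1.0 - weight) * input_image  # equation (20)

    rng = np.random.default_rng(2)
    img = rng.uniform(0, 255, (16, 16))
    toned = np.clip(img * 1.2, 0, 255)                    # stand-in tone-corrected image
    faces = np.zeros((16, 16))
    faces[4:12, 4:12] = 0.7                               # stand-in face map
    result = face_sensitive_contrast(img, toned, faces)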
  • III. EXEMPLARY ARCHITECTURES OF THE IMAGE PROCESSING SYSTEMS
  • Embodiments of the image enhancement system 10 (including the embodiment 72 shown in FIG. 11) may be implemented by one or more discrete modules (or data processing components) that are not limited to any particular hardware, firmware, or software configuration. In the illustrated embodiments, the modules of the image enhancement system 10 may be implemented in any computing or data processing environment, including in digital electronic circuitry (e.g., an application-specific integrated circuit, such as a digital signal processor (DSP)) or in computer hardware, firmware, device driver, or software. In some embodiments, the functionalities of the modules of the image enhancement system 10 are combined into a single data processing component. In some embodiments, the respective functionalities of each of one or more of the modules of the image enhancement system 10 are performed by a respective set of multiple data processing components.
  • In some implementations, process instructions (e.g., machine-readable code, such as computer software) for implementing the methods that are executed by the embodiments of the image enhancement system 10, as well as the data it generates, are stored in one or more machine-readable media. Storage devices suitable for tangibly embodying these instructions and data include all forms of non-volatile computer-readable memory, including, for example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magnetooptical disks, DVD-ROM/RAM, and CD-ROM/RAM.
  • In general, embodiments of the image enhancement system 10 may be implemented in any one of a wide variety of electronic devices, including desktop and workstation computers, television monitors, projection displays, or video cards connected to such large-screen displays, video recording devices (e.g., VCRs and DVRs), cable or satellite set-top boxes capable of decoding and playing paid video programming, and digital camera devices. Due to its efficient use of processing and memory resources, some embodiments of the image enhancement system 10 may be implemented with relatively small and inexpensive components that have modest processing power and modest memory capacity. As a result, these embodiments are highly suitable for incorporation in application environments that have significant size, processing, and memory constraints, including but not limited to printers, handheld electronic devices (e.g., a mobile telephone, a cordless telephone, a portable memory device such as a smart card, a personal digital assistant (PDA), a solid state digital audio player, a CD player, an MCD player, a game controller, a pager, and a miniature still image or video camera), digital cameras, and other embedded environments.
  • FIG. 13 shows an embodiment of a computer system 140 that incorporates an embodiment of the image enhancement system 10. The computer system 140 may be implemented by a standalone computer (e.g., a desktop computer, workstation computer, or a portable computer) or it may be incorporated in another electronic device (e.g., a printer or a digital camera). The computer system 140 includes a processing unit 142 (CPU), a system memory 144, and a system bus 146 that couples processing unit 142 to the various components of the computer system 140. The processing unit 142 typically includes one or more processors, each of which may be in the form of any one of various commercially available processors. The system memory 144 typically includes a read only memory (ROM) that stores a basic input/output system (BIOS) that contains start-up routines for the computer system 140 and a random access memory (RAM). The system bus 146 may be a memory bus, a peripheral bus or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, Microchannel, ISA, and EISA. The computer system 140 also includes a persistent storage memory 148 (e.g., a hard drive, a floppy drive, a CD ROM drive, magnetic tape drives, flash memory devices, and digital video disks) that is connected to the system bus 146 and contains one or more computer-readable media disks that provide non-volatile or persistent storage for data, data structures and computer-executable instructions.
  • A user may interact (e.g., enter commands or data) with the computer system 140 using one or more input devices 150 (e.g., a keyboard, a computer mouse, a microphone, joystick, and touch pad). Information may be presented through a graphical user interface (GUI) that is displayed to the user on a display monitor 152, which is controlled by a display controller 154. The computer system 140 also typically includes peripheral output devices, such as speakers and a printer. One or more remote computers may be connected to the computer system 140 through a network interface card (NIC) 156.
  • As shown in FIG. 13, the system memory 144 also stores an embodiment of the image enhancement system 10, a GUI driver 158, and a database 160 containing image files corresponding to the input and enhancement images 24, 26, intermediate processing data, and output data. In some embodiments, the image enhancement system 10 interfaces with the GUI driver 158 and the user input 150 to control the creation of the enhanced image 26. In some embodiments, the computer system 140 additionally includes a graphics application program that is configured to render image data on the display monitor 152 and to perform various image processing operations on the enhanced image 26.
  • V. CONCLUSION
  • The embodiments that are described in detail herein are capable of enhancing images in face and skin sensitive ways. At least some of these embodiments enhance images in ways that avoid artifacts and other undesirable problems that arise as a result of enhancing an image based on segmentations of the image into discrete subject matter regions. At least some of these embodiments are capable of adjusting default control parameters in image-dependent ways.
  • Other embodiments are within the scope of the claims. For example, in some embodiments, a sequence of images (e.g., a sequence of video frames) may be processed by the image processing system 10. In some of these embodiments, a respective face map and a respective skin map are calculated for each of the images in the sequence in accordance with the methods described above.

Claims (9)

  1. A method, comprising:
    calculating a face map (60) comprising for each pixel of an input image (24) a respective face probability value indicating a degree to which the pixel corresponds to a human face, wherein variations in the face probability values are continuous across the face map (60);
    ascertaining a skin map (62) comprising for each pixel of the input image (24) a respective skin probability value indicating a degree to which the pixel corresponds to human skin, wherein the ascertaining comprises mapping all pixels of the input image (24) having similar values to similar respective skin probability values in the skin map (62); and
    enhancing the input image (24) with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values.
  2. The method of claim 1, wherein the calculating comprises:
    processing patches (46) of the input image through a cascade (42) of classifiers, wherein each patch (46) is processed through a respective number of the classifiers depending on a per-classifier evaluation of the likelihood that the patch (46) corresponds to a human face; and
    calculating the face probability values for the pixels of the input image (24) in accordance with the respective numbers of classifiers through which corresponding ones of the patches (46) were processed.
  3. The method of claim 1, wherein the enhancing comprises sharpening the input image (24) with a sharpening level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values.
  4. The method of claim 1, further comprising, for each pixel of the input image (24), determining a respective control parameter value from the respective face probability value and the respective skin probability value, wherein the determining is performed without regard to any segmentation of the face map (60) into one or more discrete facial and non-facial regions and without regard to any segmentation of the skin map (62) into one or more discrete facial and non-facial regions.
  5. The method of claim 1, further comprising, for each pixel of the input image (24), determining a respective control parameter value from the respective face probability value and the respective skin probability value, wherein the determining comprises inputting the face probability values and the skin probability values into a continuous function that maps each pair of corresponding face and skin probability values to a respective sharpening parameter.
  6. The method of claim 1, wherein the enhancing comprises determining one or more control parameter values in accordance with one or more image enhancement processes, and modifying the one or more control parameter values in accordance with the respective face probability values and the respective skin probability values.
  7. The method of claim 1, further comprising determining a global control parameter value based on global features of the input image (24) and determining for each pixel of the input image (24) a respective local control parameter value that corresponds to a respective modification of the global parameter value in accordance with the respective face probability values and the respective skin probability values, wherein the enhancing comprises enhancing the input image (24) in accordance with the local control parameter values.
  8. A system, comprising:
    a face map module (14) operable to calculate a face map (60) comprising for each pixel of an input image (24) a respective face probability value indicating a degree to which the pixel corresponds to a human face, wherein variations in the face probability values are continuous across the face map (60);
    a skin map module (16) operable to ascertain a skin map (62) comprising for each pixel of the input image (24) a respective skin probability value indicating a degree to which the pixel corresponds to human skin, wherein the skin map module (16) ascertains the skin map (62) by executing a process that includes mapping all pixels of the input image having similar values to similar respective skin probability values in the skin map (62); and
    an image enhancement module (22) operable to enhance the input image (24) with an enhancement level that varies pixel-by-pixel in accordance with the respective face probability values and the respective skin probability values.
  9. The system of claim 8, further comprising a control parameter module (20) that is operable to determine, for each pixel of the input image (24), a respective control parameter value from the respective face probability value and the respective skin probability value, wherein the control parameter module (20) determines the respective control parameter values without regard to any classification of the face map (60) into one or more regions of human face content and without regard to any classification of the skin map (62) into one or more regions of human skin content.
EP08754589.3A 2007-05-29 2008-05-20 Face and skin sensitive image enhancement Not-in-force EP2176830B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/754,711 US8031961B2 (en) 2007-05-29 2007-05-29 Face and skin sensitive image enhancement
PCT/US2008/006474 WO2008153702A1 (en) 2007-05-29 2008-05-20 Face and skin sensitive image enhancement

Publications (3)

Publication Number Publication Date
EP2176830A1 EP2176830A1 (en) 2010-04-21
EP2176830A4 EP2176830A4 (en) 2013-09-04
EP2176830B1 true EP2176830B1 (en) 2018-03-28

Family

ID=40088285

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08754589.3A Not-in-force EP2176830B1 (en) 2007-05-29 2008-05-20 Face and skin sensitive image enhancement

Country Status (3)

Country Link
US (1) US8031961B2 (en)
EP (1) EP2176830B1 (en)
WO (1) WO2008153702A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220245770A1 (en) * 2020-05-13 2022-08-04 Ambu A/S Method for adaptive denoising and sharpening and visualization systems implementing the method

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100735549B1 (en) * 2005-08-08 2007-07-04 삼성전자주식회사 Method and apparatus for conversion of skin color of image
CN101983507A (en) * 2008-02-01 2011-03-02 惠普开发有限公司 Automatic redeye detection
US8520089B2 (en) * 2008-07-30 2013-08-27 DigitalOptics Corporation Europe Limited Eye beautification
US9053524B2 (en) 2008-07-30 2015-06-09 Fotonation Limited Eye beautification under inaccurate localization
CN106919911A (en) * 2008-07-30 2017-07-04 快图有限公司 Modified using the automatic face and skin of face detection
JP2010147808A (en) * 2008-12-18 2010-07-01 Olympus Imaging Corp Imaging apparatus and image processing method in same
JP4727720B2 (en) * 2008-12-31 2011-07-20 株式会社モルフォ Image processing method and image processing apparatus
US8885967B2 (en) * 2009-01-19 2014-11-11 Csr Technology Inc. Method and apparatus for content adaptive sharpness enhancement
US8503814B2 (en) * 2009-01-19 2013-08-06 Csr Technology Inc. Method and apparatus for spectrum estimation
CN101853389A (en) * 2009-04-01 2010-10-06 索尼株式会社 Detection device and method for multi-class targets
US8559672B2 (en) * 2009-06-01 2013-10-15 Hewlett-Packard Development Company, L.P. Determining detection certainty in a cascade classifier
US10080006B2 (en) 2009-12-11 2018-09-18 Fotonation Limited Stereoscopic (3D) panorama creation on handheld device
US20110141226A1 (en) * 2009-12-11 2011-06-16 Fotonation Ireland Limited Panorama imaging based on a lo-res map
JP5844962B2 (en) * 2010-03-10 2016-01-20 任天堂株式会社 Image processing program, image processing apparatus, image processing method, and image processing system
US8666148B2 (en) 2010-06-03 2014-03-04 Adobe Systems Incorporated Image adjustment
US20120019727A1 (en) * 2010-07-21 2012-01-26 Fan Zhai Efficient Motion-Adaptive Noise Reduction Scheme for Video Signals
EP3675029B8 (en) * 2011-04-08 2022-05-04 Dolby Laboratories Licensing Corporation Local definition of global image transformations
US9008415B2 (en) 2011-09-02 2015-04-14 Adobe Systems Incorporated Automatic image adjustment parameter correction
US8903169B1 (en) 2011-09-02 2014-12-02 Adobe Systems Incorporated Automatic adaptation to image processing pipeline
JP6029286B2 (en) * 2012-02-17 2016-11-24 キヤノン株式会社 Photoelectric conversion device and imaging system
US8705853B2 (en) * 2012-04-20 2014-04-22 Apple Inc. Detecting skin tone
US9256927B2 (en) * 2012-07-06 2016-02-09 Yissum Research Development Companyof The Hebrew University of Jerusalem Ltd. Method and apparatus for enhancing a digital photographic image
US20140079319A1 (en) * 2012-09-20 2014-03-20 Htc Corporation Methods for enhancing images and apparatuses using the same
US9374478B1 (en) * 2013-12-03 2016-06-21 Marvell International Ltd. Adaptive image sharpening
US10192134B2 (en) 2014-06-30 2019-01-29 Microsoft Technology Licensing, Llc Color identification using infrared imaging
US9471966B2 (en) * 2014-11-21 2016-10-18 Adobe Systems Incorporated Area-dependent image enhancement
CN106611429B (en) * 2015-10-26 2019-02-05 腾讯科技(深圳)有限公司 Detect the method for skin area and the device of detection skin area
US11176641B2 (en) 2016-03-24 2021-11-16 Intel Corporation Skin map-aided skin smoothing of images using a bilateral filter
US10417745B2 (en) * 2016-06-28 2019-09-17 Raytheon Company Continuous motion scene based non-uniformity correction
KR102488563B1 (en) * 2016-07-29 2023-01-17 삼성전자주식회사 Apparatus and Method for Processing Differential Beauty Effect
US20180121729A1 (en) * 2016-11-02 2018-05-03 Umbo Cv Inc. Segmentation-based display highlighting subject of interest
EP3460747B1 (en) * 2017-09-25 2023-09-06 Vestel Elektronik Sanayi ve Ticaret A.S. Method, processing system and computer program for processing an image
CN108230331A (en) * 2017-09-30 2018-06-29 深圳市商汤科技有限公司 Image processing method and device, electronic equipment, computer storage media
CN110827204B (en) * 2018-08-14 2022-10-04 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment
CN111199176B (en) * 2018-11-20 2023-06-20 浙江宇视科技有限公司 Face identity detection method and device
US11275972B2 (en) * 2019-11-08 2022-03-15 International Business Machines Corporation Image classification masking
US11488285B2 (en) * 2020-04-13 2022-11-01 Apple Inc. Content based image processing
CN113808027B (en) * 2020-06-16 2023-10-17 北京达佳互联信息技术有限公司 Human body image processing method and device, electronic equipment and storage medium
WO2022213349A1 (en) * 2021-04-09 2022-10-13 鸿富锦精密工业(武汉)有限公司 Method and apparatus for recognizing face with mask, and computer storage medium
TW202243455A (en) * 2021-04-20 2022-11-01 南韓商三星電子股份有限公司 Image processing circuit, system-on-chip for generating enhanced image and correcting first image
US11995743B2 (en) * 2021-09-21 2024-05-28 Samsung Electronics Co., Ltd. Skin tone protection using a dual-core geometric skin tone model built in device-independent space

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5710833A (en) 1995-04-20 1998-01-20 Massachusetts Institute Of Technology Detection, recognition and coding of complex objects using probabilistic eigenspace analysis
US6249315B1 (en) 1997-03-24 2001-06-19 Jack M. Holm Strategy for pictorial digital image processing
US6813041B1 (en) 2000-03-31 2004-11-02 Hewlett-Packard Development Company, L.P. Method and apparatus for performing local color correction
US6665448B1 (en) 2000-09-29 2003-12-16 Hewlett-Packard Development Company, L.P. Selective smoothing and sharpening of images by generalized unsharp masking
US7099510B2 (en) 2000-11-29 2006-08-29 Hewlett-Packard Development Company, L.P. Method and system for object detection in digital images
EP1231565A1 (en) 2001-02-09 2002-08-14 GRETAG IMAGING Trading AG Image colour correction based on image pattern recognition, the image pattern including a reference colour
US20020172419A1 (en) 2001-05-15 2002-11-21 Qian Lin Image enhancement using face detection
US6845181B2 (en) 2001-07-12 2005-01-18 Eastman Kodak Company Method for processing a digital image to adjust brightness
US7050636B2 (en) 2001-12-07 2006-05-23 Eastman Kodak Company Method and system for improving an image characteristic based on image content
US7092573B2 (en) 2001-12-10 2006-08-15 Eastman Kodak Company Method and system for selectively applying enhancement to an image
US7356190B2 (en) 2002-07-02 2008-04-08 Canon Kabushiki Kaisha Image area extraction method, image reconstruction method using the extraction result and apparatus thereof
US7039222B2 (en) 2003-02-28 2006-05-02 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US7609908B2 (en) * 2003-04-30 2009-10-27 Eastman Kodak Company Method for adjusting the brightness of a digital image utilizing belief values
US7471846B2 (en) * 2003-06-26 2008-12-30 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7412105B2 (en) * 2003-10-03 2008-08-12 Adobe Systems Incorporated Tone selective adjustment of images
US7466868B2 (en) * 2003-10-03 2008-12-16 Adobe Systems Incorporated Determining parameters for adjusting images
US7379071B2 (en) * 2003-10-14 2008-05-27 Microsoft Corporation Geometry-driven feature point-based image synthesis
KR100571826B1 (en) * 2003-12-02 2006-04-17 삼성전자주식회사 Large volume face recognition appratus and method thereof
US7356195B2 (en) 2004-04-29 2008-04-08 Hewlett-Packard Development Company, L.P. System and method for estimating image sharpness
JP2006065688A (en) 2004-08-27 2006-03-09 Kyocera Mita Corp Image processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220245770A1 (en) * 2020-05-13 2022-08-04 Ambu A/S Method for adaptive denoising and sharpening and visualization systems implementing the method

Also Published As

Publication number Publication date
WO2008153702A1 (en) 2008-12-18
EP2176830A4 (en) 2013-09-04
US8031961B2 (en) 2011-10-04
US20080298704A1 (en) 2008-12-04
EP2176830A1 (en) 2010-04-21

Similar Documents

Publication Publication Date Title
EP2176830B1 (en) Face and skin sensitive image enhancement
US10339643B2 (en) Algorithm and device for image processing
US8594439B2 (en) Image processing
US9311901B2 (en) Variable blend width compositing
US20050249429A1 (en) Method, apparatus, and program for image processing
US8417033B2 (en) Gradient based background segmentation and enhancement of images
US7092573B2 (en) Method and system for selectively applying enhancement to an image
US7162081B2 (en) Method and apparatus for determining regions of interest in images and for image transmission
US7933454B2 (en) Class-based image enhancement system
US8194992B2 (en) System and method for automatic enhancement of seascape images
US20110235905A1 (en) Image processing apparatus and method, and program
US20030053692A1 (en) Method of and apparatus for segmenting a pixellated image
US20140079319A1 (en) Methods for enhancing images and apparatuses using the same
JP2005527880A (en) User definable image reference points
US20110205227A1 (en) Method Of Using A Storage Switch
US9466007B2 (en) Method and device for image processing
US11200708B1 (en) Real-time color vector preview generation
JP2007018025A (en) Complexity measuring method, processing selection method, image processing method, its program and record medium, image processing device, and image processing system
Wang et al. Automatic hazy image enhancement via haze distribution estimation
Kojima et al. Fast and Effective Scene Segmentation Method for Luminance Adjustment of Multi-Exposure Image
Akai et al. Low-Artifact and Fast Backlit Image Enhancement Method Based on Suppression of Lightness Order Error
JP2007257469A (en) Similarity discrimination device, method and program
Ozturk et al. Efficient and natural image fusion method for low-light images based on active contour model and adaptive gamma correction
CN117716400A (en) Face region detection and local reshaping enhancement
Zamperoni Model-based selective image enhancement by means of adaptive rank order filtering

Legal Events

Code   Title and description

PUAI   Public reference made under article 153(3) EPC to a published international application that has entered the European phase. Original code: 0009012
17P    Request for examination filed. Effective date: 20091216
AK     Designated contracting states. Kind code of ref document: A1. Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR
AX     Request for extension of the European patent. Extension state: AL BA MK RS
DAX    Request for extension of the European patent (deleted)
A4     Supplementary search report drawn up and despatched. Effective date: 20130806
RIC1   Information provided on IPC code assigned before grant. IPC: G06T 5/00 20060101AFI20130731BHEP; G06T 7/40 20060101ALI20130731BHEP
GRAP   Despatch of communication of intention to grant a patent. Original code: EPIDOSNIGR1
INTG   Intention to grant announced. Effective date: 20171211
GRAS   Grant fee paid. Original code: EPIDOSNIGR3
GRAA   (Expected) grant. Original code: 0009210
AK     Designated contracting states. Kind code of ref document: B1. Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR
REG    Reference to a national code. Country: GB. Legal event code: FG4D
REG    Reference to a national code. Country: CH. Legal event code: EP
REG    Reference to a national code. Country: AT. Legal event code: REF. Ref document number: 984054. Kind code of ref document: T. Effective date: 20180415
REG    Reference to a national code. Country: IE. Legal event code: FG4D
REG    Reference to a national code. Country: FR. Legal event code: PLFP. Year of fee payment: 11
REG    Reference to a national code. Country: DE. Legal event code: R096. Ref document number: 602008054586
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: FI (20180328), NO (20180628), LT (20180328), HR (20180328)
PGFP   Annual fee paid to national office [announced via postgrant information from national office to EPO]. Country: DE. Payment date: 20180419. Year of fee payment: 11
REG    Reference to a national code. Country: NL. Legal event code: MP. Effective date: 20180328
REG    Reference to a national code. Country: LT. Legal event code: MG4D
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: GR (20180629), LV (20180328), SE (20180328), BG (20180628)
PGFP   Annual fee paid to national office [announced via postgrant information from national office to EPO]. Country: FR. Payment date: 20180423. Year of fee payment: 11
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: PL, EE, IT, ES, RO, NL (all effective 20180328)
PGFP   Annual fee paid to national office [announced via postgrant information from national office to EPO]. Country: GB. Payment date: 20180419. Year of fee payment: 11
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: CZ (20180328), SK (20180328)
REG    Reference to a national code. Country: CH. Legal event code: PL
REG    Reference to a national code. Country: AT. Legal event code: MK05. Ref document number: 984054. Kind code of ref document: T. Effective date: 20180328
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: PT (20180730)
REG    Reference to a national code. Country: DE. Legal event code: R097. Ref document number: 602008054586
REG    Reference to a national code. Country: BE. Legal event code: MM. Effective date: 20180531
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: MC, DK, AT (all effective 20180328)
PLBE   No opposition filed within time limit. Original code: 0009261
STAA   Information on the status of an EP patent application or granted EP patent. Status: no opposition filed within time limit
REG    Reference to a national code. Country: IE. Legal event code: MM4A
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of non-payment of due fees: CH (20180531), LI (20180531)
26N    No opposition filed. Effective date: 20190103
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of non-payment of due fees: LU (20180520)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of non-payment of due fees: IE (20180520)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. SI: lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit (20180328). BE: lapse because of non-payment of due fees (20180531)
REG    Reference to a national code. Country: DE. Legal event code: R119. Ref document number: 602008054586
GBPC   GB: European patent ceased through non-payment of renewal fee. Effective date: 20190520
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of non-payment of due fees: MT (20180520)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: TR (20180328)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of non-payment of due fees: DE (20191203), GB (20190520)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit; invalid ab initio: HU (20080520)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. FR: lapse because of non-payment of due fees (20190531). CY: lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit (20180328)
PG25   Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: IS (20180728)