WO2004084142A1 - Image processing device - Google Patents
Image processing device
- Publication number
- WO2004084142A1 (PCT/JP2004/003726)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- color component
- pixel
- image processing
- predetermined area
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/62—Retouching, i.e. modification of isolated colours only or in isolated picture areas only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/62—Retouching, i.e. modification of isolated colours only or in isolated picture areas only
- H04N1/628—Memory colours, e.g. skin or sky
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20012—Locally adaptive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Definitions
- the present invention relates to a technique effectively applied to image processing of a captured image, particularly an image of a person as a subject.
- the signal level difference between each of the pixels forming the face-based image and each of the pixels surrounding each pixel is detected by the difference detection unit.
- the signal level difference is compared with a reference value by a threshold value determination unit.
- the arithmetic unit multiplies the signal level difference by a predetermined coefficient according to the result of the comparison, and adds the result to each pixel value. The reference value used in the comparison and the coefficient used in the multiplication are selected according to the position of the pixel in the image, whereby an image from which the undesired components included in the face image are removed is obtained.
- a person image processing apparatus capable of adding decorations to a body part of a person image, particularly to a face part and a head part, so that decorations do not overlap with each other.
- This person image processing apparatus includes means for setting the position and range of a body part region based on at least one element constituting a body part of the person image, and means for adding a decoration only to the background region excluding the body part region.
- In some cases, the person image processing apparatus further includes means for setting the position and range of the body part region based on at least one element constituting the body part of the person image, and means for adding a decoration along the outer periphery of the body part region.
- In conventional blur processing, when blurring is applied to the skin of the subject, the blurring is also applied to any object in the image whose color component is close to the skin color. As a result, objects that are not skin but have a color component close to the skin color were blurred unintentionally.
- Moreover, the skin color component to be subjected to the blur processing is fixedly held in advance. For this reason, the conventional technology may not be able to cope with differences in skin color due to race or individual variation, and in such cases the skin area may not be blurred correctly.
- the above-mentioned problem in the blurring process is not limited to the blurring process, but is also a problem in other image processing.
- An object of the present invention is to solve this problem and to prevent image processing from being applied to areas that should not be subjected to image processing such as blur processing. For example, by performing image processing (for example, blurring) only on a specific region (for example, the skin portion of a face) of the person who is the subject, the invention aims to prevent a background whose color component is close to the skin color from being left in an unnatural state (for example, blurred) by the image processing.
- an image processing apparatus comprising a predetermined area specifying unit and an image generating unit.
- the predetermined area specifying means specifies a predetermined area determined based on a body part of a person who is a subject in the image.
- the predetermined area specifying means may be configured so that the predetermined area is manually specified by the user. That is, the predetermined area specifying means may be configured to specify the body part based on an area in the image specified by the user, and then specify the predetermined area based on the specified body part.
- the predetermined area specifying means may be configured to treat the area specified by the user as the body part, or to specify the body part based on a point, area, color, shape, or the like specified by the user. It may then specify the predetermined area based on the body part specified in this way.
- the predetermined area specifying means may be configured to specify the predetermined area independently of the input by the user.
- the predetermined area specifying means may be configured to include detection means for detecting the body part of the person who is the subject in the image, and specifying means for specifying the predetermined area based on the body part detected by the detection means.
- the detecting means detects the position and range (size) of the body part of the subject.
- Body part means a part or the whole of a person's body, such as the head, face, hands, feet, and torso.
- the detecting means may be configured using any existing means.
- the image generating means generates an image in which image processing has been performed on the predetermined area specified by the predetermined area specifying means.
- image processing is processing that operates on an image. Examples of image processing include image correction and texture mapping.
- image correction here is a process of operating an image without changing the essence of a subject in the image.
- image correction include blurring, edge enhancement, brightness correction, and color correction.
- the blurring process is a process for blurring an image portion such as wrinkles or spots on the skin to make the skin of the person who is the subject look smooth.
- the blurring process is performed using, for example, a technique called smoothing, and works by removing high-frequency components from the skin image. Examples of such filters include moving average filters, weighted average filters (including Gaussian filters), and ε-filters.
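The smoothing filters named above can be illustrated with a minimal moving-average blur. This is an illustrative sketch in Python, not code from the patent; it treats the image as a 2-D list of gray values and averages each pixel with its neighbourhood, removing the high-frequency components that show wrinkles or spots:

```python
def moving_average_blur(img, radius=1):
    """Moving-average filter: each output pixel is the mean of the
    pixels in its (2*radius+1)-square neighbourhood, clipped at the
    image border.  radius=1 gives the classic 3x3 smoothing kernel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

A weighted-average (Gaussian) filter differs only in giving the centre pixels larger weights; the ε-filter additionally excludes neighbours whose value differs from the centre by more than a threshold, which preserves strong edges.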
- the first aspect of the present invention it is possible to acquire an image that has not been subjected to image processing for a portion other than the predetermined area specified by the predetermined area specifying means. That is, it is possible to acquire an image that has been subjected to image processing limited to only a predetermined area based on the body part of a person as a subject. Therefore, it is possible to prevent a portion (for example, a background) different from the subject to be subjected to the image processing from becoming unnatural due to the image processing. In other words, image processing is prevented from being performed on a part that the user does not intend.
- For example, when the image processing performed on the image is a blurring process, an image can be obtained in which the blurring is not applied to portions other than the predetermined area specified by the predetermined area specifying means.
- the image generating means may be configured to generate an image subjected to image processing based on the skin color component of the person who is the subject, extracted from the body part serving as the reference of the predetermined area specified by the predetermined area specifying means.
- with the image generating means configured as described above, image processing corresponding to the skin color of each person who is the subject is executed. Therefore, image processing can be performed accurately even on subjects with different skin colors, coping with skin color differences due to race or individual variation.
- the image generating means may be configured to generate an image in which image processing has been performed on a region that is within the predetermined area specified by the predetermined area specifying means and that has a color component equal or close to the color component mainly occupying the body part serving as the reference of the predetermined area.
- the image generation means in the first aspect is configured to include an intensity value calculation means, a mask means, an image processing means, and a color component calculation means.
- the intensity value calculating means calculates, for each pixel of the image to be processed, an intensity value indicating how close the color component of each pixel is to a predetermined skin color component.
- the color component may be a value based on any color space, such as a Lab value, an RGB value, or an xy value.
- the skin color component is a predetermined value and is stored, for example, in a RAM (Random Access Memory) of the image capturing device.
- the intensity value is represented by a value of 256 gradations from 0 to 255. For example, if the intensity value is 0, it indicates that it is farthest from the skin color component, and if it is 255, it indicates that it is closest to the skin color component (it is the skin color component itself).
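As a sketch of how such a 0-to-255 intensity could be computed, the hypothetical helper below maps the Euclidean RGB distance between a pixel and the reference skin color linearly onto 0..255; the linear mapping and the use of RGB distance are assumptions for illustration, not the patent's formula:

```python
def skin_intensity(pixel_rgb, skin_rgb, max_dist=441.7):
    """Intensity value for one pixel: 255 means the pixel has the skin
    color component itself, 0 means it is farthest from it.
    max_dist is the largest possible RGB distance, sqrt(3)*255."""
    d = sum((a - b) ** 2 for a, b in zip(pixel_rgb, skin_rgb)) ** 0.5
    v = 255 * (1 - d / max_dist)
    return max(0, min(255, round(v)))
```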
- the masking unit sets the intensity value of the pixel other than the predetermined region specified by the predetermined region specifying unit to a value indicating that the pixel is far from the skin color component.
- the mask means generates a mask image for masking areas other than the predetermined area, and multiplies the generated mask image by the image indicating the intensity value of each pixel, thereby obtaining the masked intensity value of each pixel.
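The multiplication of the mask image by the intensity image can be sketched as follows; a binary 0/1 mask is assumed for simplicity, so pixels outside the predetermined area are forced to intensity 0 ("far from the skin color component"):

```python
def apply_mask(intensity, mask):
    """Per-pixel product of an intensity image and a 0/1 mask image.
    Outside the predetermined area (mask 0) the intensity becomes 0,
    so later stages leave those pixels unchanged."""
    return [[i * m for i, m in zip(irow, mrow)]
            for irow, mrow in zip(intensity, mask)]
```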
- the image processing means performs image processing on the image to be processed.
- the definition of the image processing performed by the image processing means is the same as the image processing defined in the description of the image generation means.
- the color component calculation means calculates, as the new color component of each pixel, a color component closer to that of the original image the farther the pixel's intensity value is from the skin color component, and closer to that of the image generated by the image processing means the nearer the intensity value is to the skin color component.
- the color component calculation means calculates a new color component of each pixel (that is, a color component of each pixel of an output image) based on the intensity values calculated by the intensity value calculation means and the mask means.
- For example, when the image processing is a blurring process, the color component calculation means is configured so that the larger the intensity value of a pixel, the more strongly the color components of the blurred image (the image obtained by performing the blurring process on the original image) are reflected, and the smaller the intensity value, the more strongly the color components of the original image are reflected.
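The color-component calculation described above amounts to a per-pixel linear blend between the original and the processed (e.g. blurred) image, weighted by the intensity value. A hypothetical single-channel sketch; the linear interpolation rule is an assumption for illustration:

```python
def blend_by_intensity(orig, processed, intensity):
    """New color component per pixel: intensity 255 takes the processed
    (blurred) value entirely, intensity 0 keeps the original value,
    and intermediate intensities interpolate linearly."""
    h, w = len(orig), len(orig[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = intensity[y][x] / 255.0
            out[y][x] = (1 - a) * orig[y][x] + a * processed[y][x]
    return out
```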
- the image generating means in the first aspect of the present invention is configured to include an intensity value calculating means, a masking means, and an image processing means.
- the image generation means does not include the color component calculation means, and the color components of the output image are calculated by the image processing means.
- the intensity value calculation means and the mask means in the third aspect are the same as those in the second aspect. The image processing means in the third aspect, however, weakens the effect of the image processing on a pixel the farther that pixel's intensity value is from the skin color component, and strengthens the effect of the image processing the closer the intensity value is to the skin color component.
- the image processing means performs image processing based on the intensity values of each pixel of the image obtained by the intensity value calculating means and the masking means.
- it is not necessary to provide the color component calculating means, and thus a reduction in the size of the apparatus, a higher processing speed, a reduction in cost, and the like can be realized.
- the image generating means includes an intensity value calculating means, an image processing means, and a color component calculating means.
- the definition of the intensity value calculated by the intensity value calculating means is different from that in the second aspect.
- In the fourth aspect, the intensity value indicates how close the color component of each pixel is to the color component that mainly occupies the body part serving as the reference of the predetermined area. The intensity value calculating means according to the fourth aspect therefore calculates, for each pixel of the image to be processed, an intensity value indicating how close that pixel's color component is to this dominant color component.
- the fourth embodiment is different from the second embodiment in that the mask means may or may not be provided.
- When the mask means is not provided, the color component calculation means naturally calculates the new color component of each pixel without using intensity values from a mask image.
- the image processing means is configured to perform image processing on a predetermined area of the image to be processed.
- In other respects, the fourth aspect has the same configuration as the second aspect.
- the intensity value changes according to the processing result of the predetermined area specifying means. That is, image processing corresponding to the skin color of the body part of each person as the subject is executed. Therefore, it is possible to cope with the difference in skin color due to race or individual difference, and to accurately perform image processing on subjects having different skin colors.
- the image generating means is configured to include an intensity value calculating means and an image processing means. Further, in the fifth aspect of the present invention, similarly to the third aspect of the present invention, it is not necessary to provide a color component calculating means, and the color components of the output image are calculated by the image processing means. Therefore, according to the fifth aspect of the present invention, similarly to the third aspect of the present invention, it is possible to realize a reduction in the size of the device, an increase in the speed of processing, a reduction in cost, and the like.
- In the fifth aspect, the intensity value is not obtained based on a predetermined skin color component but is determined based on the color component that mainly occupies the body part serving as the reference of the predetermined area. For this reason, according to the fifth aspect of the present invention, it is possible to perform image processing accurately on subjects having different skin colors, coping with skin color differences due to race or individual variation. Further, in the fifth aspect of the present invention, the mask means may or may not be provided, as in the fourth aspect.
- the image processing means may be configured not to execute the image processing on pixels having a predetermined range of intensity values.
- the “predetermined range of intensity values” means intensity values indicating an area that should not be subjected to image processing. A specific example is the intensity value indicating that a pixel is farthest from the skin color component, or from the color component mainly occupying the body part.
- the color component calculation means in the second and fourth aspects, and the image processing means in the third and fifth aspects, may be set so as to generate an output in which the influence of the image processing does not appear for pixels having intensity values within the predetermined range. With such a setting, there is no need to perform image processing on those pixels.
- In the second and fourth aspects, the image processing means performs image processing irrespective of the intensity value of each pixel, and processing corresponding to the intensity value is performed by the color component calculation means; as a result, there may be pixels in which the image processing performed by the image processing means is not reflected at all. The configuration described above is therefore particularly effective in the second and fourth aspects.
- the image generating means determines the content of the image processing to be performed based on the size of the body part serving as the reference of the predetermined area specified by the predetermined area specifying means. For example, the image generating means determines a parameter of the image processing based on that size. Examples of such parameters include the degree of blurring (more specifically, the radius used when performing the blurring with a Gaussian filter), the degree of edge enhancement, and the degree of brightness correction. Examples of determining the type of image processing include deciding whether to perform edge enhancement, whether to perform brightness correction, and whether to perform blurring.
- Taking the blurring process as an example, if blurring is applied too strongly to a small area, the entire area becomes blurred and a desired image (for example, an image in which the skin has been appropriately smoothed) cannot be obtained. Conversely, if blurring is applied too weakly to a large area, the portions to be blurred (for example, undesired components contained in a face image, such as wrinkles, spots, rough skin, and acne) remain visible, and the desired image is again not obtained.
- The same applies to other image processing parameters, such as the degree of edge enhancement.
- In this way, appropriate image processing content is determined and executed in accordance with the size of the body part serving as the reference of the predetermined area, and the image desired by the user can be obtained.
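As an illustration of determining a parameter from the body-part size, the hypothetical helper below scales a Gaussian-blur radius in proportion to the detected face width, so a small face is not over-blurred; the proportional rule and the constants are arbitrary assumptions, not values from the patent:

```python
def blur_radius_for_face(face_width_px, base_radius=2.0, base_width=200):
    """Blur radius proportional to the detected face width, with a
    floor of 1 pixel so very small faces still get a minimal blur."""
    return max(1.0, base_radius * face_width_px / base_width)
```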
- the image generating means may be configured to determine the content of the image processing to be performed based on the size of the body part detected by the detecting means.
- an element extracting means is further provided.
- the element extracting means extracts at least one element included in a predetermined area, which is an element constituting a body part of a person who is a subject in an image to be processed.
- elements refer to the parts that make up the body part. Examples of such elements are facial parts (specifically, eyes, eyelids, lips, nose, nostrils, eyebrows, eyelashes, etc.). In this case, the face is the body part, and the face part is the element. Any existing technology may be applied to the element extraction means.
- the image generating means restricts and executes image processing on an element region determined based on the element extracted by the element extracting means.
- For example, the image generation means may be configured not to perform image processing on the element region, or may be configured to process the element region with different parameters (a suppressed degree of image processing) compared with the other areas where image processing is performed.
- the image generating means performs limited image processing on an element region determined based on the elements extracted by the element extracting means. Therefore, the influence of the image processing on the element area is suppressed.
- In the aspects described above, the content of the image processing is determined based on the color component and on the body part serving as the reference of the predetermined area. In that case, a pixel within the predetermined area whose color component is close to the skin color component, or to the color component mainly occupying the reference body part, is subjected to image processing even if it corresponds to one of the above-mentioned elements.
- In such a case the seventh aspect of the present invention is effective: it makes it possible to reliably suppress image processing for elements that cannot be sufficiently distinguished by the intensity value calculation means or the mask means.
- the image generating means may be configured to further include an edge mask means.
- the edge masking means obtains the edge intensity for each pixel of the image to be processed, and assigns each pixel an intensity value indicating that it is farther from the skin color component, or from the color component mainly occupying the body part serving as the reference of the predetermined area, the stronger the extracted edge is.
- Thereby, the edges of the elements constituting the body part of the subject are acquired as pixels having intensity values indicating that they are far from the color component.
- the edge mask means may be configured to give the intensity of the edge acquired for a certain pixel to surrounding pixels located within a predetermined range from the pixel.
- In this case, the color component calculation means and the image processing means determine the new color component of each pixel based also on the intensity value obtained by the edge mask means.
- the edge mask means may be configured to assign the intensity values after reducing the image to be processed, and then to enlarge the result back to the original image size.
- If edges were also extracted for unnecessary skin components such as wrinkles and spots, the image processing would be restricted there as described above, and the blur processing for those components would no longer function effectively. It is therefore necessary to control the edge mask means so that the edges of these unwanted components are not extracted.
- By assigning intensity values on a reduced image, the edge mask means avoids acquiring edges of such unnecessary skin components; with the edge mask means configured as described above, the desired blurring process can be performed. To obtain the same effect, it is also effective to perform edge extraction after a smoothing process such as a median filter, or to set a large radius for the filter used in the edge extraction. In addition, since edges are extracted from a reduced image, the time required for the edge extraction processing can be shortened.
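The edge-mask pipeline described above (Sobel edge extraction on a reduced image, followed by enlargement back to the original size) can be sketched as follows. This is an illustrative reconstruction, not the patent's exact procedure; the Sobel kernels themselves are the standard ones referenced in FIG. 13:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, y, x):
    """Edge strength at an interior pixel: apply the horizontal and
    vertical Sobel kernels and combine the two gradients."""
    gx = gy = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            v = img[y + dy][x + dx]
            gx += SOBEL_X[dy + 1][dx + 1] * v
            gy += SOBEL_Y[dy + 1][dx + 1] * v
    return (gx * gx + gy * gy) ** 0.5

def downscale2(img):
    """Halve each dimension by averaging 2x2 blocks.  Fine skin detail
    such as wrinkles or spots is averaged away, so edges extracted on
    the reduced image keep only the strong outlines of facial parts."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1]
              + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4
             for x in range(w)] for y in range(h)]

def upscale2(img):
    """Nearest-neighbour enlargement back toward the original size."""
    return [[v for v in row for _ in (0, 1)] for row in img for _ in (0, 1)]
```

In use, one would downscale the image, compute edge magnitudes on the reduced image, threshold them into "far from skin color" intensity values, and upscale that intensity map before blending.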
- An eighth aspect of the present invention is an image processing device, comprising: an image specifying unit and an image processing unit.
- the image specifying unit specifies a position and a range of a region including an arbitrary image in the image.
- the arbitrary image is an image to be subjected to image processing by the image processing means, and may be any image. For example, a part or the whole of a human body such as a face or a hand, an object such as food or a car, or a background such as a sky or a mountain.
- the image processing means generates an image obtained by performing image processing on a region that is within the area specified by the image specifying means and that has a color component equal or close to the color component mainly occupying that area.
- Examples of image processing executed by the image processing means include processing using a low-pass filter or a high-pass filter, as well as various other processes such as the image processing defined in the first aspect of the present invention, color inversion, and image rotation.
- the eighth aspect of the present invention it is possible to prevent image processing from being performed on a part different from the main part even in the specified area. For example, when it is desired to change the color of only the body (main part) of a car, it is possible to prevent the color of a window glass, a bumper, etc. (a part different from the main part) from being changed.
- the first to eighth aspects of the present invention may be realized by a program executed by an information processing device. That is, the present invention can be specified as a program that causes an information processing apparatus to execute the processing performed by each means in the first to eighth aspects, or as a recording medium on which such a program is recorded.
- the present invention it is possible to perform image correction limited to only a specific region of a person as a subject. Therefore, it is possible to prevent a portion different from the subject (for example, the background) from becoming unnatural due to the image correction. In addition, it is possible to perform image processing according to a difference in skin color due to race or individual difference.
- FIG. 1 is a diagram showing functional blocks of the first embodiment of the image correction device
- FIG. 2 is a diagram showing a processing flow of the first embodiment of the image correction device
- FIG. 4 is a diagram showing an outline of a skin color region extraction process in the first embodiment
- FIG. 5 is a diagram showing an outline of a skin color intensity extraction process
- FIG. 6 is a diagram showing an example of a histogram of skin color components
- FIG. 7 is a diagram showing an outline of the skin beautification process in the first embodiment
- FIG. 8 is a diagram showing an example of an n ⁇ n operator
- FIG. 9 is a diagram showing functional blocks of a second embodiment of the image correction device
- FIG. 10 is a diagram showing a processing flow of the second embodiment of the image correction device
- FIG. 11 is a diagram showing functional blocks of a third embodiment of the image correction device
- FIG. 12 is a diagram showing a processing flow of a third embodiment of the image correction device
- FIG. 13 is a diagram showing an example of a Sobel filter; and
- FIG. 14 is a diagram illustrating an example of a difference between a skin color intensity image and an edge mask image.
- an image correction device 1a which is a first embodiment of an image correction device for performing image correction on a skin region in a human image will be described. Specifically, a case of blur processing will be described as an example of image processing to be performed.
- the image correction device 1a may also be applied to images other than a person image, such as a car image or a landscape image. In this case, various types of image processing are conceivable, such as image correction using a high-pass filter or color conversion.
- a person image is an image that includes at least part or all of the face of a person. Therefore, the person image may include an image of the entire person, or may include an image of only the face or upper body of the person. Also, images of a plurality of persons may be included.
- the background refers to anything other than the person, such as scenery (including objects that were not intended as subjects) and patterns.
- the image correction device 1a includes, in terms of hardware, a CPU (central processing unit), a main memory (RAM), and an auxiliary storage device connected via a bus.
- the auxiliary storage device is configured using a nonvolatile storage device.
- the nonvolatile storage device referred to here is a so-called ROM (Read-Only Memory, including an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), a mask ROM, and the like), an FRAM (Ferroelectric RAM), a hard disk, or the like.
- the storage unit St is configured using a so-called RAM.
- the storage unit St is used when each process is executed by the face detection unit 2, the mask processing unit 3, the skin color region extraction unit 4a, and the beautiful skin processing unit 5a.
- in the storage unit St, the data of the original image 6 to be processed, intermediate data such as the mask image 7, the skin color intensity image 8, the masked skin color intensity image 9, the skin color region image 10a, and the blurred image 11, and the data of the beautiful skin image 12 as output data are read and written.
- FIG. 2 is a diagram showing a process executed by each functional unit shown in FIG. 1 and an overall process flow of the image correction device 1a.
- each functional unit will be described with reference to FIGS.
- the face detection unit 2 performs a face detection process.
- in the face detection processing, the data of the original image 6 is input, and the face position detection processing S01 is executed to output face rectangle coordinates. That is, in the face detection processing, the face is detected as the body part of the subject. The position of the face of the person who is the subject in the original image 6 is specified by the face rectangle coordinates.
- the data of the original image 6 is data of a human image input to the image correction device 1a.
- the face rectangle is a rectangle that is recognized as a rectangle including the face of a person included in the original image 6 (hereinafter, referred to as a face rectangle: see 17 in FIG. 3 (a)).
- the face rectangle coordinates are data indicating the position and size of the face rectangle in the original image 6.
- the face position detection processing S01 may be realized by any existing method (for example, see Patent Document 3).
- face rectangular coordinates may be obtained by template matching using a reference template corresponding to the outline of the entire face.
- face rectangular coordinates may be obtained by template matching based on face components (eyes, nose, ears, etc.).
- the vertices of the hair may be detected by chroma processing, and the face rectangle coordinates may be obtained based on the vertices.
- the face rectangle and the face rectangle coordinates may be manually specified by the user.
- a face rectangle or face rectangle coordinates may be specified based on information input by the user, that is, semi-automatically.
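As one concrete reading of the template-matching options mentioned above, a brute-force search minimizing the sum of squared differences can be sketched as follows. Real face detectors additionally scan multiple scales and use more robust similarity scores, so this is only illustrative; all names and sizes here are hypothetical.

```python
import numpy as np

def match_template(img, tmpl):
    """Slide the template over a grayscale image and return the
    top-left corner with the smallest sum of squared differences."""
    ih, iw = img.shape
    th, tw = tmpl.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            d = ((img[y:y + th, x:x + tw] - tmpl) ** 2).sum()
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos
```

The returned corner, together with the template size, would give face rectangle coordinates (position and size) in the sense used above.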
- the mask processing unit 3 performs a mask process.
- in the masking process, the face rectangle coordinates are input, and the mask image creating process S02 is executed to output the data of the mask image 7.
- in the mask image creation processing S02, based on the position of the face of the person who is the subject, that is, in the present device 1a, based on the input face rectangle coordinates, the region covering the face of the subject and the area below the face is estimated, and a mask image 7 for masking regions other than the estimated region is generated.
- that is, a predetermined area (here, the face and the area under the face) is specified, and a mask image 7 for masking regions other than this area is generated.
- the face detection unit 2 and the mask processing unit 3 are applied as an example of the predetermined area specifying unit.
- the mask processing unit 3 is applied as an example of the specifying unit and the masking unit.
- FIG. 3 is a diagram showing an outline of the mask image creation processing S02.
- the coordinates of the two ellipses 13 and 14 corresponding to the input face rectangle coordinates are calculated using Equation 1, based on the width and height of the face rectangle.
- the ellipse 13 is a figure showing the area of the face of the person who became the subject
- the ellipse 14 is a figure showing the area below the face of the person who was the subject (neck, chest, shoulders, etc.).
- the ellipse 13 is set to be in contact with the four points of the face rectangle 17.
- the ellipse 14 is set so as to circumscribe the lowermost part of the ellipse 13 with its major axis being horizontal.
- the obtained two ellipses 13 and 14 are enlarged to obtain ellipses 15 and 16, respectively.
- the ellipse 13 and the ellipse 15, and the ellipse 14 and the ellipse 16, share the same center (the intersection of the major axis and the minor axis). Then, a mask image 7 is obtained using the obtained ellipses 13 to 16.
- the inside of the ellipse 13 and the inside of the ellipse 14 are set as transmission regions (regions not masked).
- in the region between the ellipse 15 and the ellipse 13 and the region between the ellipse 16 and the ellipse 14, a gradation of transmittance is generated such that the transmission ratio increases from the outside (the ellipses 15 and 16) toward the inside (the ellipses 13 and 14). This gradation may be linear or non-linear.
- An area outside the ellipse 15 and outside the ellipse 16 is set as an opaque area (area to be masked).
- the mask image 7 may be generated using any figure other than the ellipse. For example, it may be generated by using a special figure having the shape of a person's upper body.
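The mask construction described above (opaque outside the enlarged ellipses, fully transparent inside the inner ellipses, with a gradation in between) can be sketched as follows. The linear falloff and the concrete ellipse parameters are illustrative assumptions, not values given in the text.

```python
import numpy as np

def ellipse_mask(h, w, cx, cy, rx, ry, scale=1.3):
    """Float mask: 1 inside the inner ellipse, 0 outside the enlarged
    ellipse (scaled by `scale`), with a linear gradation in between."""
    y, x = np.mgrid[0:h, 0:w]
    # Normalized radial distance: 1.0 on the inner ellipse boundary,
    # `scale` on the enlarged ellipse boundary.
    r = np.sqrt(((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2)
    mask = (scale - r) / (scale - 1.0)  # linear transmittance falloff
    return np.clip(mask, 0.0, 1.0)

# Face ellipse (13/15) and lower-body ellipse (14/16), combined by
# taking the pixel-wise maximum transmittance.
face = ellipse_mask(240, 320, cx=160, cy=90, rx=50, ry=65)
body = ellipse_mask(240, 320, cx=160, cy=190, rx=90, ry=45)
mask7 = np.maximum(face, body)
```

A non-linear gradation could be obtained by applying, e.g., a smoothstep curve to the same normalized distance.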
- the skin color region extraction unit 4a executes a skin color region extraction process.
- the skin color region extraction processing will be described.
- the data of the original image 6, the face rectangle coordinates, and the data of the mask image 7 are input, and the skin color intensity extraction process S03, the synthesis process S04a, and the skin color area correction processing S05 are executed, whereby the data of the skin color area image 10a is output.
- the skin color region extraction unit 4a is applied as an example of the intensity value calculation unit.
- FIG. 4 is a diagram showing an outline of the skin color region extraction processing.
- in the skin color region extraction processing, first, the data of the original image 6 and the face rectangle coordinates are input, and the skin color intensity extraction processing S03 is executed.
- FIG. 5 is a diagram showing an outline of the skin color intensity extraction processing S03.
- FIG. 6 is a diagram showing a histogram of skin color components used in the skin color intensity extraction processing S03.
- the skin color intensity extraction processing S03 will be described below with reference to FIGS. 5 and 6.
- the sampling area 18 is specified inside the face rectangle 17 using the input face rectangle coordinates.
- the sampling area 18 is specified by, for example, the center coordinates of the face rectangle 17 and values obtained by multiplying the width w and height h of the face rectangle 17 by a constant.
- the sampling area 18 may be specified by other methods. It is desirable that the sampling area 18 be set so as not to include an area having a color distinct from the skin color, such as an eye or a nostril.
- pixel values (color component values) in the sampling area 18 are sampled (skin color sampling).
- skin color sampling the skin color of the subject's face is mainly sampled.
- a histogram shown in FIG. 6 is formed based on the sampled color component values.
- FIG. 6 shows an example of a histogram formed based on the Lab color space.
- the upper and lower 10% of the components on the horizontal axis (the values of L, a, and b) are cut off (shaded area in FIG. 6).
- the value of 10% may be appropriately changed by a designer.
- the standard deviation and the center of gravity in the sampling area 18 are calculated using the Lab values of the uncut portion of the skin color component histogram. Then, the degree of skin color at each pixel of the original image 6 (hereinafter referred to as skin color intensity: equivalent to the intensity value) is calculated by Equation 2 using these six calculated values (extraction of skin color intensity), and a skin color intensity image 8 is generated.
- the noise component referred to here is information about pixels mainly having color components other than the skin color, such as the nostrils or eyes in the sampling area 18. Through such processing, even when color components other than the skin color, such as the nostrils or eyes, are included in the sampling area 18, the information about them can be removed.
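The sampling steps above (trim the histogram tails, take the per-channel center of gravity and standard deviation, evaluate Equation 2 per pixel) can be sketched as follows. Equation 2 itself is not reproduced in the text, so the Gaussian-style falloff used here is an assumption; the trimming ratio of 10% follows the description, while the sample values are illustrative.

```python
import numpy as np

def skin_color_intensity(lab_img, samples, trim=0.10):
    """Per-pixel skin color intensity from Lab values sampled in the
    face region. `lab_img`: HxWx3 float array; `samples`: Nx3 array."""
    # Cut off the upper and lower `trim` fraction per channel.
    lo = np.quantile(samples, trim, axis=0)
    hi = np.quantile(samples, 1.0 - trim, axis=0)
    kept = samples[np.all((samples >= lo) & (samples <= hi), axis=1)]
    mean = kept.mean(axis=0)       # center of gravity per channel
    std = kept.std(axis=0) + 1e-6  # standard deviation per channel
    # Assumed form of Equation 2: Gaussian-style falloff over L, a, b.
    d2 = (((lab_img - mean) / std) ** 2).sum(axis=-1)
    return np.exp(-d2 / 2.0)       # 1 near the sampled skin color, -> 0 far away
```

Because the statistics are re-estimated from each input image, the same code adapts to light or dark skin without a fixed skin-color definition, as the text emphasizes.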
- in the skin color region extraction processing, the data of the skin color intensity image 8 and the data of the mask image 7 are then input, and the synthesis processing S04a is executed.
- the input skin color intensity image 8 and mask image 7 are combined. That is, the synthesis process S04a performs a multiplication using the skin color intensity image 8 generated by the skin color intensity extraction process S03 and the mask image 7 generated by the mask process. As a result, a masked skin color intensity image 9 is generated.
- in the skin color region extraction processing, the data of the masked skin color intensity image 9 is then input, and the skin color region correction processing S05 is executed.
- a degeneration (erosion) process is performed on the masked skin color intensity image 9 generated by the synthesis processing S04a.
- by performing the degeneration process, the skin color intensity around the eyes and mouth is reduced. In other words, black areas (areas whose skin color intensity is low or zero) that are not to be subjected to the blur processing are expanded outward.
- This degeneration processing prevents the blur processing from being performed around the eyes and the mouth. In other words, it is possible to prevent the area around the eyes and mouth that should obtain a clear image from being blurred.
- a skin color region image 10a is generated. In the skin color region image 10a, pixels having high skin color intensity are represented by large pixel values, and pixels having low skin color intensity are represented by small pixel values.
- the beautiful skin processing section 5a performs a beautiful skin treatment.
- the beautiful skin processing performed by the beautiful skin processing unit 5a will be described.
- the data of the original image 6 and the data of the skin color area image 10a are input, and the blurring filter processing S06a and the beautiful skin synthesis processing S07 are executed, whereby the data of the beautiful skin image 12 is output.
- the mask processing unit 3, the skin color region extraction unit 4a, and the beautiful skin processing unit 5a are applied as an example of the image generation unit.
- the beautiful skin processing unit 5a is applied as an example of a color component calculation unit.
- the data of the beautiful skin image 12 is also data output by the image correction device 1a.
- the blur filter processing S06a is executed.
- in the blur filter process S06a, a blur process is performed on the original image 6.
- the blur processing may be any existing blur processing. For example, there are methods using a moving average filter, a weighted average filter (including a Gaussian filter), and an ε-filter.
- in the blur filter processing S06a, among the pixels of the original image 6, only the pixels whose skin color intensity value in the skin color area image 10a is greater than 0 are subjected to the blur processing. For this reason, the blurring process is not performed on pixels having a skin color intensity of 0, that is, pixels that are obviously not skin-colored or pixels masked by the combining process S04a.
- the execution of the blurring filter processing S06a generates a blurred image 11.
- in the beautiful skin processing, the data of the original image 6, the data of the skin color area image 10a, and the data of the blurred image 11 are then input, and the beautiful skin synthesis processing S07 is executed.
- in the beautiful skin synthesis process S07, translucent synthesis is performed on the original image 6 and the blurred image 11 using the skin color intensity in the skin color region image 10a.
- Equation 3 is an equation for translucent synthesis performed in the beautiful skin synthesis process S07.
- R = R_org × (1 − V) + R_blur × V
- G = G_org × (1 − V) + G_blur × V
- B = B_org × (1 − V) + B_blur × V
- R_org, G_org, B_org: RGB components of the original image; R_blur, G_blur, B_blur: RGB components of the blurred image
- V: the skin color intensity of the skin color region image (0 to 1)
- a synthesis process according to the skin color intensity is executed. Specifically, the pixel values (RGB components) of the blurred image 11 are strongly reflected for pixels with high skin color intensity, and the pixel values (RGB components) of the original image 6 are strongly reflected for pixels with low skin color intensity.
- as a result, the degree of blurring is increased in regions having high skin color intensity (i.e., skin color regions), and the degree of blurring is weaker in regions having low skin color intensity (i.e., regions that are not skin-colored).
- through the beautiful skin synthesis process S07, a beautiful skin image 12 is generated.
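Equation 3 amounts to a per-pixel translucent (alpha) blend of the original and blurred images, with the skin color intensity V as the blend ratio. A minimal sketch:

```python
import numpy as np

def translucent_blend(original, blurred, v):
    """Equation 3: out = original * (1 - V) + blurred * V per channel.
    `original`, `blurred`: HxWx3 float images; `v`: HxW intensity in [0, 1]."""
    v3 = v[..., np.newaxis]  # broadcast the intensity over the RGB channels
    return original * (1.0 - v3) + blurred * v3
```

Pixels with V = 0 are returned unchanged, and pixels with V = 1 take the blurred value, matching the behaviour described above.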
- the face of the subject is detected from the image to be processed by the face position detection process S01, and the face rectangular coordinates are obtained.
- a mask image 7 for masking other than the upper body of the subject is generated based on the face rectangle coordinates.
- then, a blur process in which the mask process by the mask image 7 is reflected is executed. For this reason, when the blurring process is performed on an area having a skin color component, such as the subject's face, no blurring is applied to regions other than the subject (for example, the background) that have skin color components in the same image.
- the color components of the subject's skin are extracted from inside the sampling area 18, that is, inside the face area of the subject detected by the face position detection processing S01. Then, based on the extracted skin color components, the region to be subjected to the blurring processing is determined. That is, the skin color components that are recognized as skin color when the skin color intensity image 8 is created are determined based on the extracted skin color components. For this reason, for example, if a person with white skin is the subject, a skin color intensity image 8 is generated based on the extracted color components of the white skin, and if a person with black skin is the subject, a skin color intensity image 8 is generated based on the extracted color components of the black skin.
- the skin color is sampled from the face position of the original image 6 without fixedly determining the skin color. Therefore, it is possible to cope with a difference in skin color due to race or individual difference, and a stable correction effect can be obtained.
- the image correction device 1a of the present invention may be mounted on various existing devices. For example, it may be mounted on a printer, a display, a digital camera, an MPEG (Moving Picture Experts Group) player, or the like. In such a case, the image data input to each device is input to the storage unit St as the original image 6 data.
- the data of the beautiful skin image 12 output from the image correction device 1a is used in accordance with the characteristics of each device. For example, when the image correction device 1a is mounted on a printer, the beautiful skin image 12 is printed by the printer.
- the image correction device 1a of the present invention may be virtually realized on an information processing device including a CPU by executing each of the processes S01 to S07 in FIG. 2 by the CPU.
- a program that causes the information processing device to execute each of the processes S01 to S07 is the invention of the present application.
- this program may be recorded on a recording medium such as a CD-ROM and directly executed by a personal computer or a server (for example, a server installed at an ASP (Application Service Provider)), or may be stored in a nonvolatile storage device such as a hard disk or a ROM and executed by these devices.
- the data of the original image 6 may be input from a scanner or a digital camera connected to the information processing device. Further, the data of the original image 6 may be input by being uploaded or downloaded from another device via a network such as the Internet.
- the face detection unit 2, the mask processing unit 3, the skin color region extraction unit 4a, and the beautiful skin processing unit 5a may be configured using chips mounted as hardware, respectively.
- the storage unit St may be configured using the RAM of another device to which the image correction device 1a is attached. That is, the storage unit St does not necessarily need to be provided inside the image correction device 1a, and may be provided outside the image correction device 1a as long as it is configured to be accessible from the face detection unit 2, the mask processing unit 3, the skin color region extraction unit 4a, and the beautiful skin processing unit 5a. In this case, the storage unit St may be configured to be shared by another device (for example, the CPU of the device to which the image correction device 1a is attached) and the processing units 2 to 5a of the image correction device 1a.
- the degree of blur may be determined based on the size of the face rectangle detected in the face position detection processing S01. Specifically, the larger the face rectangle, the stronger (larger) the degree of blurring; conversely, the smaller the face rectangle, the weaker (smaller) the degree of blurring. This can be realized, for example, by adjusting parameters such as the radius of the moving average filter or the weighted average filter. In the case of a Gaussian filter, the degree of blur can be changed by changing the standard deviation σ of the Gaussian function.
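For the Gaussian-filter case, the kernel construction and a face-size-dependent σ can be sketched as follows. The proportionality constant mapping face-rectangle width to σ, and the ±3σ kernel radius, are hypothetical choices, not values given in the text.

```python
import numpy as np

def gaussian_kernel(sigma):
    """n x n Gaussian operator; the blur gets stronger as sigma grows."""
    r = max(1, int(3 * sigma))  # cover roughly +/- 3 sigma
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()  # normalize so the kernel preserves brightness

def sigma_for_face(face_width, base=0.01):
    """Hypothetical mapping: sigma proportional to the detected
    face-rectangle width, so larger faces receive stronger blur."""
    return max(0.5, base * face_width)
```

Convolving the original image with `gaussian_kernel(sigma_for_face(w))` then realizes the face-size-dependent blur degree described above.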
- FIG. 8 is a diagram showing a specific example of an n × n operator.
- in the above description, a blur process is performed, but image processing other than the blur process (e.g., edge enhancement, brightness correction, color correction, texture mapping) may be performed.
- the mask processing unit 3 does not necessarily need to be provided. However, when the mask processing unit 3 is not provided, the processing based on the mask image 7 is not performed. Therefore, the time required to obtain the beautiful skin image 12 may increase.
- FIG. 9 is a diagram showing a function block of the image correction device 1b.
- the image correction device 1b differs from the image correction device 1a in that it includes a beautiful skin processing unit 5b instead of the beautiful skin processing unit 5a.
- FIG. 10 is a diagram showing a process executed by each functional unit shown in FIG. 9 and an overall process flow of the image correction device 1b.
- each functional unit of the image correction device 1b will be described with reference to FIGS.
- the beautiful skin processing unit 5b is different from the beautiful skin processing unit 5a in that the beautiful skin processing unit 5b does not perform the beautiful skin synthesis process S07 and performs the blur filter process S06b instead of the blur filter process S06a.
- the beautiful skin processing performed by the beautiful skin processing unit 5b will be described.
- the data of the original image 6 and the data of the skin color area image 10a are input, and the blur filter processing S06b is executed.
- in the blur filter process S06b, a blur process according to the skin color intensity included in the skin color region image 10a is performed on each pixel of the original image 6.
- that is, the degree of blurring is set high for pixels with high skin color intensity, and set low for pixels with low skin color intensity.
- alternatively, the blur filter processing S06b may be configured as follows.
- in the image correction device 1a, a blurred image 11 is generated by the blurring filter process S06a, and the beautiful skin synthesis process S07 is performed using the blurred image 11, the original image 6, and the skin color region image 10a,
- thereby generating the beautiful skin image 12.
- on the other hand, the image correction device 1b may generate the beautiful skin image 12 without generating the blurred image 11. More specifically, when calculating the value of each pixel of the beautiful skin image 12 based on Equation 3, a blur process is executed for the pixel to be processed each time. That is, each value of the RGB components of the blurred image used in Equation 3 is calculated only for the necessary pixels, as needed. With this configuration, there is no need to buffer the blurred image 11, and memory space can be saved.
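A sketch of this buffer-free variant, computing a moving-average blur only over the neighbourhood of the pixel currently being processed; the radius and the choice of moving-average filter are illustrative.

```python
import numpy as np

def beautify_pixel(orig, y, x, v, radius=1):
    """One output pixel of the beautiful skin image, computed on the fly:
    blur only the neighbourhood of (y, x), then blend by intensity v,
    so the full blurred image 11 never needs to be buffered."""
    y0, y1 = max(0, y - radius), min(orig.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(orig.shape[1], x + radius + 1)
    blurred = orig[y0:y1, x0:x1].mean(axis=(0, 1))  # local moving-average blur
    return orig[y, x] * (1.0 - v) + blurred * v     # Equation 3 for one pixel
```

Iterating this over all pixels yields the same result as blending a precomputed blurred image, at the cost of recomputing overlapping neighbourhoods.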
- in this manner, the beautiful skin image 12 as the output image is generated directly, without generating the blurred image 11, in the beautiful skin processing. Therefore, it is possible to reduce the time required for the blur filter processing S06a for generating the blurred image 11 and for the beautiful skin synthesis processing S07.
- FIG. 11 is a diagram showing the functional blocks of an image correction device 1c, which is a third embodiment of the image correction device.
- the image correction device 1c differs from the image correction device 1b in that it includes a skin color region extraction unit 4c instead of the skin color region extraction unit 4a, and further includes an edge mask processing unit 19.
- FIG. 12 is a diagram illustrating a process executed by each functional unit illustrated in FIG. 11 and an overall process flow of the image correction device 1c.
- the respective functional units of the image correction device 1c will be described with reference to FIGS.
- the edge mask processing section 19 executes an edge mask process.
- in the edge mask processing, the original image 6 is input, and the edge mask image creation processing S08 is executed, whereby the data of the edge mask image 20 is output.
- the input original image 6 is reduced, and a reduced image is obtained.
- the reduction ratio may be determined based on the size of the face rectangle.
- for example, the image may be reduced so that the width of the largest of the input face rectangles becomes about a specified number of pixels (several tens to one hundred pixels).
- edge extraction processing is performed by any existing technology.
- edge extraction using a Sobel filter is performed.
- FIG. 13 is a diagram showing an example of the Sobel filters.
- FIG. 13 (a) shows a downward Sobel filter
- FIG. 13 (b) shows an upward Sobel filter.
- An edge extraction process using each Sobel filter is performed, and an edge image of each Sobel filter is obtained. In this case, two edge images are acquired.
- the obtained edge images are grayed and combined to obtain a combined edge image.
- the edges extracted by the downward Sobel filter and the edges extracted by the upward Sobel filter are both represented in the combined edge image.
- the obtained synthesized edge image is inverted, and an inverted edge image is obtained.
- next, a degeneration (erosion) process is performed on the inverted edge image.
- an image in which the extracted edges are spread around is obtained.
- the inverted edge image subjected to the degeneration processing is enlarged to the size of the original image 6, and an edge mask image 20 is obtained.
- the pixel values in the edge mask image 20 are treated as skin color intensity. That is, since the edge portions have low or zero pixel values as a result of the inversion process, they are treated as pixels having low skin color intensity.
- the influence of the extracted edge extends to its surroundings due to the degeneration processing. That is, an edge mask image 20 is created as an image indicating that the extracted edge and its surroundings have low skin color intensity.
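The core of the edge mask creation S08 (Sobel responses in both directions, combination, inversion) can be sketched as follows. The reduction, degeneration, and enlargement steps are omitted here, and the max-based normalization is an illustrative choice.

```python
import numpy as np

SOBEL_DOWN = np.array([[-1, -2, -1],
                       [ 0,  0,  0],
                       [ 1,  2,  1]], dtype=float)
SOBEL_UP = -SOBEL_DOWN  # the upward filter is the sign-flipped downward one

def filter2d(img, k):
    """Minimal 'same'-size 3x3 correlation with zero padding."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def edge_mask(gray):
    """Combine both Sobel responses, normalize, and invert so that
    edge pixels receive a low 'skin color intensity' (near 0)."""
    e = np.abs(filter2d(gray, SOBEL_DOWN)) + np.abs(filter2d(gray, SOBEL_UP))
    e = e / (e.max() + 1e-6)
    return 1.0 - np.clip(e, 0.0, 1.0)
```

Multiplying this mask into the synthesis S04c then suppresses blurring along and around the detected edges, as described above.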
- the skin color region extraction unit 4c differs from the skin color region extraction unit 4a in that the synthesis process S04c is performed instead of the synthesis process S04a.
- the skin color area extraction processing performed by the skin color area extraction unit 4c will be described, particularly the synthesis processing S04c.
- after the skin color intensity extraction process S03 is performed, the skin color intensity image 8, the mask image 7, and the edge mask image 20 are input, and the combining process S04c is executed.
- the input skin color intensity image 8, mask image 7, and edge mask image 20 are combined. That is, a multiplication process is performed using the skin color intensity image 8 generated by the skin color intensity extraction processing S03, the mask image 7 generated by the mask process, and the edge mask image 20 generated by the edge mask process.
- as a result, a masked skin color intensity image 9c is generated.
- the edge mask image 20 is used in the combining process S04c.
- in the edge mask image 20, the skin color intensity at the extracted edges and their surroundings is set low or to zero. Therefore, the synthesis process S04c yields a masked skin color intensity image 9c in which the skin color intensity of the edges and their surroundings is low or 0.
- therefore, when the beautiful skin processing is performed using the masked skin color intensity image 9c, it is possible to perform the blurring process on the other skin color portions while maintaining the sharpness of the edges and their surroundings, that is, the eyes, eyebrows, and mouth. In particular, this is effective for the beautiful skin processing of a face image having lipstick close to the skin color or eyebrows close to the skin color (for example, thin eyebrows).
- FIG. 14 is a diagram for illustrating a difference between the skin color intensity image 8 and the edge mask image 20.
- FIG. 14 (a) is an example of the skin color intensity image 8
- FIG. 14 (b) is an example of the edge mask image 20.
- in the skin color intensity image 8, since the color of the eyebrows of the person on the left in the original image 6 is close to the skin color, the eyebrow portion has a value indicating that it is close to the skin color.
- likewise, in the skin color intensity image 8, since the color of the lips of the person on the right in the original image 6 is close to the skin color, the lip portion has a value indicating that it is close to the skin color.
- for this reason, if the skin color intensity image 8 is used as it is, the blurring process is performed on the eyebrows of the left person and the lips of the right person, and a beautiful skin image 12 with blurred eyebrows and lips is obtained.
- on the other hand, in the edge mask image 20, since the edges of the eyebrows of the left person and the edges of the lips of the right person are extracted, the eyebrows of the left person and the lips of the right person have values indicating that they are far from the skin color. For this reason, by using the edge mask image 20, blurring is not performed on the eyebrows and lips, and the sharpness of these portions can be maintained.
- the order in which the enlargement processing, the inversion processing, and the degeneration processing are performed may be changed as necessary.
- for example, when this processing is performed before the inversion processing, instead of the areas with low pixel values (skin color intensity), the areas with high pixel values (skin color intensity of "1") are expanded outward, that is, an expansion (dilation) process is performed.
- further, instead of the edge mask processing section, a face element mask processing section (corresponding to the face element extraction means) may be provided, and a face element mask image creation process may be executed so that a face element mask image is created instead of the edge mask image 20.
- in this case, face elements included in the subject's face are extracted instead of edges. Such face elements are extracted, for example, by performing template matching. Then, the face element mask image is configured so that the skin color intensity of the extracted face elements and of pixels within a predetermined range from these face elements is set low or to zero.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/549,653 US7587083B2 (en) | 2003-03-20 | 2004-03-19 | Image processing device |
EP04722000.9A EP1612730B1 (en) | 2003-03-20 | 2004-03-19 | Image processing device |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-078467 | 2003-03-20 | ||
JP2003078467 | 2003-03-20 | ||
JP2003-409264 | 2003-12-03 | ||
JP2003409264A JP4461789B2 (ja) | 2003-03-20 | 2003-12-08 | 画像処理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004084142A1 true WO2004084142A1 (ja) | 2004-09-30 |
Family
ID=33032366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/003726 WO2004084142A1 (ja) | 2003-03-20 | 2004-03-19 | 画像処理装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US7587083B2 (ja) |
EP (1) | EP1612730B1 (ja) |
JP (1) | JP4461789B2 (ja) |
WO (1) | WO2004084142A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080267443A1 (en) * | 2006-05-05 | 2008-10-30 | Parham Aarabi | Method, System and Computer Program Product for Automatic and Semi-Automatic Modification of Digital Images of Faces |
US8379958B2 (en) | 2007-03-20 | 2013-02-19 | Fujifilm Corporation | Image processing apparatus and image processing method |
CN108519400A (zh) * | 2018-02-06 | 2018-09-11 | 温州市交通工程试验检测有限公司 | Intelligent detection method and *** for grouting compactness of prestressed beams |
Families Citing this family (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7440593B1 (en) * | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
KR100640427B1 (ko) * | 2004-01-05 | 2006-10-30 | Samsung Electronics Co., Ltd. | Method for changing data and background colors in a portable terminal |
JP4534750B2 (ja) * | 2004-12-21 | 2010-09-01 | Nikon Corporation | Image processing apparatus and image processing program |
JP2006215685A (ja) * | 2005-02-02 | 2006-08-17 | Funai Electric Co Ltd | Photo printer |
JP2006338377A (ja) * | 2005-06-02 | 2006-12-14 | Fujifilm Holdings Corp | Image correction method, apparatus, and program |
JP2007011926A (ja) * | 2005-07-04 | 2007-01-18 | Fujifilm Holdings Corp | Image processing method, apparatus, and program |
KR101303877B1 (ko) * | 2005-08-05 | 2013-09-04 | Samsung Electronics Co., Ltd. | Method and apparatus for performing preferred skin color conversion using face detection and skin region detection |
KR100735549B1 (ko) * | 2005-08-08 | 2007-07-04 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for converting skin color in an image |
JP4977996B2 (ja) * | 2005-10-28 | 2012-07-18 | Nikon Corporation | Imaging device |
US8620038B2 (en) * | 2006-05-05 | 2013-12-31 | Parham Aarabi | Method, system and computer program product for automatic and semi-automatic modification of digital images of faces |
JP5354842B2 (ja) | 2006-05-26 | 2013-11-27 | Canon Inc. | Image processing method and image processing apparatus |
JP4747970B2 (ja) * | 2006-07-04 | 2011-08-17 | Omron Corporation | Image processing device |
JP4712659B2 (ja) * | 2006-09-22 | 2011-06-29 | FujiFilm Corporation | Image evaluation apparatus and program therefor |
US8131011B2 (en) * | 2006-09-25 | 2012-03-06 | University Of Southern California | Human detection and tracking system |
JP4315215B2 (ja) * | 2007-05-18 | 2009-08-19 | Casio Computer Co., Ltd. | Imaging device, face detection method, and face detection control program |
JP4885079B2 (ja) * | 2007-07-04 | 2012-02-29 | FujiFilm Corporation | Digital camera, shooting method, and shooting program |
JP4666179B2 (ja) | 2007-07-13 | 2011-04-06 | FujiFilm Corporation | Image processing method and image processing apparatus |
CA2635068A1 (en) * | 2007-09-18 | 2009-03-18 | Parham Aarabi | Emulating cosmetic facial treatments with digital images |
JP2009145991A (ja) * | 2007-12-11 | 2009-07-02 | Ricoh Co Ltd | Image processing apparatus, image processing method, program, and storage medium |
JP4946913B2 (ja) * | 2008-02-26 | 2012-06-06 | Nikon Corporation | Imaging device and image processing program |
JP2010003118A (ja) * | 2008-06-20 | 2010-01-07 | Seiko Epson Corp | Image processing apparatus, image processing method, and image processing program |
JP5127592B2 (ja) | 2008-06-25 | 2013-01-23 | Canon Inc. | Image processing apparatus and image processing method, program, and computer-readable recording medium |
KR101445613B1 (ko) | 2008-07-23 | 2014-09-29 | Samsung Electronics Co., Ltd. | Image processing method and apparatus, and digital photographing apparatus using the same |
JP5547730B2 (ja) * | 2008-07-30 | 2014-07-16 | DigitalOptics Corporation Europe Limited | Automatic face and skin beautification using face detection |
JP5181913B2 (ja) * | 2008-08-07 | 2013-04-10 | Ricoh Company, Ltd. | Image processing apparatus, imaging apparatus, image processing method, and program |
US8311360B2 (en) * | 2008-11-13 | 2012-11-13 | Seiko Epson Corporation | Shadow remover |
US7916905B2 (en) * | 2009-02-02 | 2011-03-29 | Kabushiki Kaisha Toshiba | System and method for image facial area detection employing skin tones |
US8306283B2 (en) * | 2009-04-21 | 2012-11-06 | Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. | Focus enhancing method for portrait in digital image |
US8339506B2 (en) * | 2009-04-24 | 2012-12-25 | Qualcomm Incorporated | Image capture parameter adjustment using face brightness information |
JP5300595B2 (ja) * | 2009-05-28 | 2013-09-25 | Canon Inc. | Image processing apparatus and method, and computer program |
KR101613616B1 (ko) | 2009-07-17 | 2016-04-19 | Samsung Electronics Co., Ltd. | Digital camera that adaptively sets an optimal whitening effect, and control method therefor |
US8558331B2 (en) | 2009-12-08 | 2013-10-15 | Qualcomm Incorporated | Magnetic tunnel junction device |
JP2010073222A (ja) * | 2010-01-07 | 2010-04-02 | Kao Corp | Makeup simulation method |
US8849029B2 (en) | 2010-02-26 | 2014-09-30 | Nec Corporation | Image processing method, image processing device and program |
JP4760999B1 (ja) * | 2010-10-29 | 2011-08-31 | Omron Corporation | Image processing apparatus, image processing method, and control program |
JP4862955B1 (ja) * | 2010-10-29 | 2012-01-25 | Omron Corporation | Image processing apparatus, image processing method, and control program |
CN102696054B (zh) * | 2010-11-10 | 2016-08-03 | Panasonic Intellectual Property Management Co., Ltd. | Depth information generating device, depth information generating method, and stereoscopic image conversion device |
US20120288168A1 (en) * | 2011-05-09 | 2012-11-15 | Telibrahma Convergent Communications Pvt. Ltd. | System and a method for enhancing appeareance of a face |
US20130044939A1 (en) * | 2011-08-18 | 2013-02-21 | Ucl Business Plc | Method and system for modifying binocular images |
JP5753055B2 (ja) * | 2011-10-05 | 2015-07-22 | Kao Corporation | Skin image analysis apparatus and skin image analysis method |
JP5896204B2 (ja) * | 2011-11-04 | 2016-03-30 | Casio Computer Co., Ltd. | Image processing apparatus and program |
US20140003662A1 (en) * | 2011-12-16 | 2014-01-02 | Peng Wang | Reduced image quality for video data background regions |
JP5959923B2 (ja) * | 2012-04-26 | 2016-08-02 | Canon Inc. | Detection apparatus, control method and control program therefor, and imaging apparatus and display apparatus |
US20140015854A1 (en) * | 2012-07-13 | 2014-01-16 | Research In Motion Limited | Application of Filters Requiring Face Detection in Picture Editor |
JP5420049B1 (ja) | 2012-10-31 | 2014-02-19 | Eizo Corporation | Magnification estimation apparatus and method |
CN103905707A (zh) * | 2012-12-24 | 2014-07-02 | Tencent Technology (Shenzhen) Co., Ltd. | Control method and apparatus for an imaging device |
US8983152B2 (en) | 2013-05-14 | 2015-03-17 | Google Inc. | Image masks for face-related selection and processing in images |
JP5632937B2 (ja) * | 2013-06-14 | 2014-11-26 | Canon Inc. | Image processing apparatus and method, and computer program |
US9639956B2 (en) * | 2013-12-19 | 2017-05-02 | Google Inc. | Image adjustment using texture mask |
JP2014096836A (ja) * | 2014-01-08 | 2014-05-22 | Nikon Corp | Digital camera |
JP6234234B2 (ja) * | 2014-01-14 | 2017-11-22 | Canon Inc. | Information processing apparatus, information processing method, and program |
KR101426785B1 (ko) * | 2014-03-05 | 2014-08-06 | Sungkyul University Industry-Academic Cooperation Foundation | User-friendly image distortion method for sensitive regions on smart devices, reflecting artistic sensibility |
US9256950B1 (en) | 2014-03-06 | 2016-02-09 | Google Inc. | Detecting and modifying facial features of persons in images |
US9679194B2 (en) | 2014-07-17 | 2017-06-13 | At&T Intellectual Property I, L.P. | Automated obscurity for pervasive imaging |
CN104574306A (zh) * | 2014-12-24 | 2015-04-29 | 掌赢信息科技(上海)有限公司 | Face beautification method for instant video, and electronic device |
JP5930245B1 (ja) | 2015-01-23 | 2016-06-08 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
JP6458570B2 (ja) * | 2015-03-12 | 2019-01-30 | Omron Corporation | Image processing apparatus and image processing method |
JP6458569B2 (ja) * | 2015-03-12 | 2019-01-30 | Omron Corporation | Image processing apparatus and image processing method |
JP6432399B2 (ja) * | 2015-03-12 | 2018-12-05 | Omron Corporation | Image processing apparatus and image processing method |
WO2016160638A1 (en) * | 2015-03-27 | 2016-10-06 | Google Inc. | User sliders for simplified adjustment of images |
JP6179569B2 (ja) * | 2015-08-20 | 2017-08-16 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
US9864901B2 (en) | 2015-09-15 | 2018-01-09 | Google Llc | Feature detection and masking in images based on color distributions |
US9547908B1 (en) | 2015-09-28 | 2017-01-17 | Google Inc. | Feature mask determination for images |
JP2017102642A (ja) * | 2015-12-01 | 2017-06-08 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
US10055821B2 (en) * | 2016-01-30 | 2018-08-21 | John W. Glotzbach | Device for and method of enhancing quality of an image |
JP6859611B2 (ja) * | 2016-06-09 | 2021-04-14 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
KR102500715B1 (ко) | 2016-07-28 | 2023-02-17 | Samsung Electronics Co., Ltd. | Electronic device and electronic device control method |
US11602277B2 (en) | 2017-01-16 | 2023-03-14 | Hoya Corporation | Endoscope system and image display device |
JP2019070870A (ja) * | 2017-10-05 | 2019-05-09 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
JP2019070872A (ja) * | 2017-10-05 | 2019-05-09 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
JP7087331B2 (ja) * | 2017-10-05 | 2022-06-21 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
JP7003558B2 (ja) * | 2017-10-12 | 2022-01-20 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
JP6972043B2 (ja) * | 2019-01-11 | 2021-11-24 | Toshiba Corporation | Information processing apparatus, information processing method, and program |
JP7383891B2 (ja) * | 2019-03-25 | 2023-11-21 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
JP7400198B2 (ja) * | 2019-03-25 | 2023-12-19 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
CN113808027B (zh) | 2020-06-16 | 2023-10-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Human body image processing method, apparatus, electronic device, and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07113612A (ja) * | 1993-10-18 | 1995-05-02 | Sanyo Electric Co Ltd | Component position recognition device |
JPH10208038A (ja) * | 1997-01-21 | 1998-08-07 | Sharp Corp | Image processing method and image processing apparatus |
JPH10229505A (ja) * | 1996-12-18 | 1998-08-25 | Lucent Technol Inc | Filtering apparatus and method for low-bit-rate video systems |
JP2000105815A (ja) * | 1998-09-28 | 2000-04-11 | Yasuhiko Arakawa | Face image processing method and face image processing apparatus |
JP2000187722A (ja) * | 1998-12-24 | 2000-07-04 | Dainippon Screen Mfg Co Ltd | Method for determining image processing parameters, and recording medium storing a program for executing the processing |
JP2000196901A (ja) * | 1998-12-24 | 2000-07-14 | Dainippon Screen Mfg Co Ltd | Image sharpness enhancement method, and recording medium storing a program for executing the processing |
WO2000076398A1 (en) | 1999-06-14 | 2000-12-21 | The Procter & Gamble Company | Skin imaging and analysis systems and methods |
JP2001307069A (ja) * | 2000-04-24 | 2001-11-02 | Anritsu Corp | Method and apparatus for detecting foreign matter by image processing |
JP2002199179A (ja) * | 2000-12-27 | 2002-07-12 | Oki Electric Ind Co Ltd | Tilt detection device |
JP2003016445A (ja) * | 2001-06-29 | 2003-01-17 | Minolta Co Ltd | Apparatus and method for image processing |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000048184A (ja) * | 1998-05-29 | 2000-02-18 | Canon Inc | Image processing method, face region extraction method, and apparatus therefor |
JP2985879B1 (ja) | 1998-06-30 | 1999-12-06 | Omron Corporation | Person image processing device |
US6466685B1 (en) * | 1998-07-14 | 2002-10-15 | Kabushiki Kaisha Toshiba | Pattern recognition apparatus and method |
KR100636910B1 (ko) * | 1998-07-28 | 2007-01-31 | LG Electronics Inc. | Video search system |
US6829384B2 (en) * | 2001-02-28 | 2004-12-07 | Carnegie Mellon University | Object finder for photographic images |
US7340079B2 (en) * | 2002-09-13 | 2008-03-04 | Sony Corporation | Image recognition apparatus, image recognition processing method, and image recognition program |
US7269292B2 (en) * | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
- 2003
  - 2003-12-08 JP JP2003409264A patent/JP4461789B2/ja not_active Expired - Fee Related
- 2004
  - 2004-03-19 EP EP04722000.9A patent/EP1612730B1/en not_active Expired - Lifetime
  - 2004-03-19 US US10/549,653 patent/US7587083B2/en active Active
  - 2004-03-19 WO PCT/JP2004/003726 patent/WO2004084142A1/ja active Application Filing
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080267443A1 (en) * | 2006-05-05 | 2008-10-30 | Parham Aarabi | Method, System and Computer Program Product for Automatic and Semi-Automatic Modification of Digital Images of Faces |
US8265351B2 (en) * | 2006-05-05 | 2012-09-11 | Parham Aarabi | Method, system and computer program product for automatic and semi-automatic modification of digital images of faces |
US8379958B2 (en) | 2007-03-20 | 2013-02-19 | Fujifilm Corporation | Image processing apparatus and image processing method |
CN108519400A (zh) * | 2018-02-06 | 2018-09-11 | 温州市交通工程试验检测有限公司 | Intelligent detection method and *** for grouting compactness of prestressed beams |
CN108519400B (zh) * | 2018-02-06 | 2022-01-18 | 温州市交通工程试验检测有限公司 | Intelligent detection method and *** for grouting compactness of prestressed beams |
Also Published As
Publication number | Publication date |
---|---|
EP1612730B1 (en) | 2020-07-01 |
EP1612730A4 (en) | 2008-05-14 |
JP4461789B2 (ja) | 2010-05-12 |
US20070041640A1 (en) | 2007-02-22 |
JP2004303193A (ja) | 2004-10-28 |
EP1612730A1 (en) | 2006-01-04 |
US7587083B2 (en) | 2009-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4461789B2 (ja) | Image processing device | |
EP1596573B1 (en) | Image correction apparatus | |
CN108229278B (zh) | Face image processing method and apparatus, and electronic device | |
CN108229279B (zh) | Face image processing method and apparatus, and electronic device | |
JP5547730B2 (ja) | Automatic face and skin beautification using face detection | |
US9142054B2 (en) | System and method for changing hair color in digital images | |
US8520089B2 (en) | Eye beautification | |
CN112884637B (zh) | Special-effect generation method, apparatus, device, and storage medium | |
WO2022161009A1 (zh) | Image processing method and apparatus, storage medium, and terminal | |
EP1372109A2 (en) | Method and system for enhancing portrait images | |
EP1453002A2 (en) | Enhancing portrait images that are processed in a batch mode | |
JP2008234342A (ja) | Image processing apparatus and image processing method | |
JP2005293539A (ja) | Facial expression recognition device | |
JP2006164133A (ja) | Image processing method, apparatus, and program | |
JP2005242535A (ja) | Image correction apparatus | |
CN100458848C (zh) | Image processing device | |
KR20010102873A (ко) | Method and system for generating a photorealistic avatar using a standard image template | |
CN113379623A (zh) | Image processing method and apparatus, electronic device, and storage medium | |
JP2004242068A (ja) | Image processing method, image processing apparatus, and image processing program | |
JP4349019B2 (ja) | Image correction apparatus | |
JP6376673B2 (ja) | Image processing device | |
CN114627003A (zh) | Eye-fat removal method for face images, ***, device, and storage medium | |
WO2017034012A1 (ja) | Image processing apparatus and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004722000 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048107412 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 2004722000 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007041640 Country of ref document: US Ref document number: 10549653 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10549653 Country of ref document: US |