CN108961189B - Image processing method, image processing device, computer equipment and storage medium

Image processing method, image processing device, computer equipment and storage medium

Info

Publication number
CN108961189B
Authority
CN
China
Prior art keywords
image
processed
data
average skin
color space
Prior art date
Legal status
Active
Application number
CN201810758859.7A
Other languages
Chinese (zh)
Other versions
CN108961189A (en)
Inventor
何茜
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201810758859.7A
Publication of CN108961189A
Application granted
Publication of CN108961189B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image processing method, an image processing device, a computer device and a storage medium. The method comprises the following steps: acquiring an image to be processed; determining current average skin color data of the image to be processed in a first color space; determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjustment parameter; and adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image. With the method and the device, the whitened image has a better effect and can better meet the requirements of users.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, a computer device, and a storage medium.
Background
With the popularization of intelligent terminals, they have become common shooting tools. For example, people often use smart terminals to take selfies, and the resulting photos are used in many scenarios such as avatar creation and social network sharing. To make the shared image look better, people usually perform whitening processing on the shot image through the smart terminal.
Conventionally, an image is usually whitened by adjusting the brightness of the image or by using an image filter.
However, the image processed by the conventional technology has a poor effect, and cannot meet the requirements of users.
Disclosure of Invention
Therefore, it is necessary to provide an image processing method, an image processing apparatus, a computer device, and a storage medium for solving the problem that an image processed by the conventional technology is poor in effect and cannot meet the requirements of users.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an image to be processed;
determining current average skin color data of the image to be processed in a first color space;
determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter;
and adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
In the image processing method provided by this embodiment, the computer device determines the current average skin color data of the acquired image to be processed in the first color space, determines the target average skin color data of the image to be processed in the first color space according to that current average skin color data, the preset transformation matrix and the adjustment parameter, and then adjusts the image to be processed according to the current average skin color data and the target average skin color data in the first color space to obtain the target image. Because the subsequent whitening processing is based on the current average skin color data of the image to be processed, and this data differs between different images, the determined target average skin color data also differs; that is, the degree of whitening applied by the computer device to different images differs. Processing each image through its own current average skin color data therefore weakens the influence of the user's skin color and of changes in environmental factors (such as lighting) on the processing effect, so that the whitened image has a better effect and can better meet the requirements of the user.
In one embodiment, the determining the current average skin color data of the image to be processed in the first color space includes:
performing skin detection on the image to be processed to obtain a gray value of each pixel point on the image to be processed;
and determining the current average skin color data of the image to be processed in a first color space according to the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point.
In one embodiment, the determining, according to the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point, the current average skin color data of the image to be processed in the first color space includes:
according to the formula: AvgColor = Sum(Src · Mask) ÷ Sum(Mask), determining current average skin color data AvgColor of the image to be processed in a first color space; wherein Src is the RGB value of a pixel point, Mask is the gray value corresponding to that pixel point, and · denotes pixel-wise (dot) multiplication.
In the image processing method provided in this embodiment, after the computer device performs skin detection on the image to be processed and obtains the gray value of each pixel point on the image to be processed from the skin detection result, it may determine the current average skin color data of the image to be processed in the first color space through formula (1), according to the RGB value and the gray value of each pixel point. Because the RGB value and the gray value of each pixel point are combined when determining this data, the accuracy of the determined current average skin color data in the first color space is higher. Meanwhile, because the gray value of each pixel point is obtained from the skin detection result, the current average skin color data determined by the computer device better reflects the real skin color of the user, so the subsequent whitening processing based on this data better meets the requirements of the user and further improves the effect of the whitened image.
In one embodiment, the determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjustment parameter includes:
determining the current average skin color data of the image to be processed in a second color space according to a preset transformation matrix and the current average skin color data in the first color space;
wherein the current average skin color data in the second color space comprises 3 component data, one of which represents image brightness and the other two of which represent image color;
determining target average skin color data of the image to be processed in a second color space according to the component data and the adjustment parameters corresponding to the component data;
and determining target average skin color data of the image to be processed in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space.
In one embodiment, the determining target average skin color data of the image to be processed in the second color space according to each of the component data and the adjustment parameter corresponding to the component data includes:
according to the formula:
x' = x + dx
y' = y · ry
z' = z · rz
determining target average skin color data of the image to be processed in a second color space; wherein x', y' and z' are three component data of the target average skin color data in the second color space, x, y, and z are three component data of the current average skin color data in the second color space, x represents image brightness, y and z represent image color, dx is an adjustment parameter corresponding to component data x, ry is an adjustment parameter corresponding to component data y, and rz is an adjustment parameter corresponding to component data z.
In the image processing method provided by this embodiment, the computer device determines the current average skin color data of the image to be processed in the second color space according to the preset transformation matrix and the current average skin color data in the first color space, then determines the target average skin color data in the second color space according to each component data of the current average skin color data in the second color space and the adjustment parameter corresponding to each component data, and finally determines the target average skin color data in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space. Because the preset transformation matrix converts the current average skin color data of the image to be processed from the first color space to the second color space, and the component data of the current average skin color data in the second color space are well separated, that is, the correlation among the component data is low, the target average skin color data in the second color space can easily be determined from each component data and its corresponding adjustment parameter, which facilitates the whitening processing (including brightness processing and color cast correction) of the image to be processed.
In one embodiment, before the acquiring the image to be processed, the method further includes:
acquiring N training images; wherein N represents the number of training images, N is more than or equal to 1, and N is a positive integer;
respectively determining the current average skin color data of each training image in the N training images in a first color space to obtain N current average skin color data;
performing principal component analysis on an array M formed by the N current average skin color data to obtain the preset transformation matrix and the array m;
the array m includes N current average skin color data in the second color space, and the current average skin color data in each second color space includes component data x, component data y, and component data z.
In one embodiment, the method further comprises:
visually displaying the array m;
receiving an input visual display result, and determining that component data x in the array m is used for representing image brightness, and component data y and component data z are used for representing image color according to the visual display result; wherein the visual display result is used for indicating the meaning of each component data representation in the array m.
In the image processing method provided by this embodiment, the computer device performs principal component analysis on an array M formed by the current average skin color data of the N training images in the first color space to obtain the preset transformation matrix and the array m. Because the preset transformation matrix is obtained by processing the N training images, that is, through big data analysis, its accuracy is higher, the average skin color data of the image to be processed converted with this matrix is more accurate, and the whitening effect on the image to be processed is further improved.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the first acquisition module is used for acquiring an image to be processed;
the first determination module is used for determining the current average skin color data of the image to be processed in a first color space;
a second determining module, configured to determine, according to the current average skin color data in the first color space, a preset transformation matrix and an adjustment parameter, target average skin color data of the image to be processed in the first color space;
and the adjusting module is used for adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring an image to be processed;
determining current average skin color data of the image to be processed in a first color space;
determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter;
and adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring an image to be processed;
determining current average skin color data of the image to be processed in a first color space;
determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter;
and adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
With the apparatus, the computer device and the storage medium provided by the embodiments of the invention, the computer device determines the current average skin color data of the acquired image to be processed in the first color space, determines the target average skin color data of the image to be processed in the first color space according to that current average skin color data, the preset transformation matrix and the adjustment parameters, and then adjusts the image to be processed according to the current average skin color data and the target average skin color data in the first color space to obtain the target image. Because the subsequent whitening processing is based on the current average skin color data of the image to be processed, and this data differs between different images, the determined target average skin color data also differs; that is, the degree of whitening applied by the computer device to different images differs. Processing each image through its own current average skin color data therefore weakens the influence of the user's skin color and of changes in environmental factors (such as lighting) on the processing effect, so that the whitened image has a better effect and can better meet the requirements of the user.
Drawings
FIG. 1a is a schematic diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 1 is a flowchart illustrating an image processing method according to an embodiment;
FIG. 2 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 3 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 4 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to another embodiment;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to another embodiment;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to another embodiment;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to another embodiment.
Detailed Description
The image processing method provided by the embodiment of the invention can be applied to the computer device shown in FIG. 1a. The computer device comprises a processor and a memory connected by a system bus, wherein a computer program is stored in the memory, and the steps of the method embodiments described below can be executed when the processor executes the computer program. Optionally, the computer device may further comprise a network interface, a display screen and an input device. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium storing an operating system and a computer program, and an internal memory. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. Optionally, the computer device may be an electronic device that has an image processing function and can interact with an external device or a user, such as a personal computer (PC), a mobile terminal, a portable device, or a personal digital assistant; the specific form of the computer device is not limited in the embodiments of the present invention.
In a conventional image processing technology, a method of adjusting brightness of an image or using an image filter is generally adopted to perform whitening processing on the image. However, the image processed by the conventional technology has a poor effect, and cannot meet the requirements of users. To this end, embodiments of the present invention provide a method, an apparatus, a computer device, and a storage medium for image processing, which are intended to solve the technical problems in the conventional technologies described above.
It should be noted that the execution subject of the method embodiments described below may be an image processing apparatus, and the apparatus may be implemented as part of or all of the computer device described above by software, hardware, or a combination of software and hardware. The method embodiments described below are described by way of example with the execution subject being a computer device.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment. The embodiment relates to a specific process of how the computer device obtains a target image after whitening processing according to skin color data of an image to be processed. As shown in fig. 1, the method may include:
and S101, acquiring an image to be processed.
Specifically, the image to be processed is an image that needs to be subjected to whitening treatment. The whitening treatment mainly comprises brightness treatment of an image and color cast treatment of the image. The to-be-processed image may be an image acquired in real time by an image acquisition device such as a camera, an image imported from other devices, or an image downloaded from a cloud, which is not limited in this embodiment.
Optionally, the process of acquiring the image to be processed by the computer device may be: the computer equipment receives a processing instruction input by a user and acquires an image to be processed according to the processing instruction.
Wherein, the processing instruction comprises an image identifier. The image identification refers to a unique identification for distinguishing different images to be processed. For example, the image identifier may be one or more of an image name, an image code, an image storage address, and the like. Specifically, the computer device obtains the image to be processed according to the image identifier included in the received processing instruction.
S102, determining the current average skin color data of the image to be processed in a first color space.
Specifically, the current average skin color data of the to-be-processed image in the first color space may represent the average skin color of the user on the to-be-processed image. A method of encoding a color is called a "color space" or "color gamut", and the first color space is a space defined according to colors recognized by human eyes, and represents colors of three channels of red, green, and blue. The image to be processed is a Red Green Blue (RGB) three-channel image, and the pixel value of each channel is between 0 and 255.
Optionally, the computer device may perform corresponding calculation on RGB values of all pixel points in the skin region of the image to be processed, so as to obtain current average skin color data of the image to be processed in the first color space. Of course, the computer device may also determine the current average skin color data of the to-be-processed image in the first color space according to the process provided in the following embodiment, and this embodiment does not limit the specific manner in which the computer device determines the current average skin color data of the to-be-processed image in the first color space.
S103, determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter.
Specifically, the preset transformation matrix is used for converting the relevant skin color data of the image to be processed between different color spaces. That is, the computer device can convert the skin color data of the image to be processed from the first color space into a second color space through the preset transformation matrix, and can also convert the skin color data of the image to be processed from the second color space back into the first color space through the inverse matrix of the preset transformation matrix. Optionally, the second color space may be a YUV color space, a YIQ color space, a YCrCb color space, etc. The transformation matrix may be configured in the computer device in advance, or may be obtained from another external device, which is not limited in this embodiment.
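For intuition only, if the second color space were the YUV color space, the preset transformation matrix would resemble the classic BT.601 RGB-to-YUV matrix sketched below. This matrix is merely an illustration of the idea; the patent derives its own 3 × 3 matrix from training images via principal component analysis, as described in a later embodiment.

```python
import numpy as np

# Illustrative only: a standard BT.601 RGB -> YUV matrix, NOT the patent's
# preset transformation matrix (which is learned from training images by PCA).
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],   # Y: brightness component
    [-0.147, -0.289,  0.436],   # U: color component
    [ 0.615, -0.515, -0.100],   # V: color component
])

avg_rgb = np.array([200.0, 160.0, 140.0])        # assumed average skin color (R, G, B)
avg_yuv = RGB_TO_YUV @ avg_rgb                   # skin color data in the second color space
back    = np.linalg.inv(RGB_TO_YUV) @ avg_yuv    # the inverse matrix returns it to RGB
```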
Optionally, in the second color space, the current average skin color data of the image to be processed may be corrected according to the adjustment parameter, and then the data obtained by correcting the current average skin color data of the image to be processed (i.e., the target average skin color data) may be converted into the first color space through an inverse matrix of a preset transformation matrix. The adjusting parameters include related parameters for performing whitening processing on the image to be processed, and may include an image brightness adjusting parameter and an image color adjusting parameter. The adjusting parameter can be a preset set of values or a set of values input in real time according to the requirements of a user. The number of parameters included in the adjustment parameters is the same as the number of color channels of the image to be processed. Usually, the image to be processed is an RGB three-channel image, that is, the number of color channels of the image to be processed is 3, and correspondingly, the number of channels of the image to be processed in the second color space is also 3, so that the number of parameters of the adjustment parameter is also 3. The target average skin color data in the first color space is a target adjustment value of the current average skin color data of the image to be processed in the first color space.
After the computer device obtains the current average skin color data of the image to be processed in the first color space, the computer device can correspondingly calculate the current average skin color data of the image to be processed in the first color space through a preset transformation matrix and an adjusting parameter, so as to obtain the target average skin color data of the image to be processed in the first color space.
S104, adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
Specifically, the target image is the image after the whitening processing. After the computer device obtains the target average skin color data of the image to be processed in the first color space, it performs a curve-function adjustment on the image to be processed according to the current average skin color data and the target average skin color data in the first color space, and the target image is then obtained. The process of the curve-function adjustment is briefly described below. The computer device can adjust the RGB channels together or adjust the red, green and blue channels individually. When the channels are adjusted individually: pulling a point on the red channel curve upwards increases red and decreases cyan at that point; pulling the green channel curve up increases green and decreases magenta; pulling the blue channel curve up increases blue and decreases yellow. Both the brightness and the color cast of the image to be processed can be adjusted through such curve adjustment.
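The patent does not spell out the exact curve function, so the following is only a minimal sketch, assuming a per-channel gamma curve that keeps 0 and 255 fixed and moves the current average skin color to the target average; the function name and the gamma form are illustrative assumptions, not the patent's method.

```python
import numpy as np

def adjust_with_curve(image, current_avg, target_avg):
    """Per-channel curve adjustment (assumed gamma form, not the patent's exact curve).

    image:       H x W x 3 RGB image to be processed, uint8
    current_avg: current average skin color (R, G, B) in the first color space
    target_avg:  target average skin color (R, G, B) in the first color space
    """
    img = image.astype(np.float64) / 255.0
    cur = np.clip(np.asarray(current_avg, dtype=np.float64) / 255.0, 1e-6, 1.0 - 1e-6)
    tgt = np.clip(np.asarray(target_avg, dtype=np.float64) / 255.0, 1e-6, 1.0 - 1e-6)
    # Choose gamma per channel so that cur ** gamma == tgt; 0 and 255 stay fixed,
    # which realizes both brightness and color-cast adjustment.
    gamma = np.log(tgt) / np.log(cur)
    out = img ** gamma          # gamma (shape 3) broadcasts over the RGB channels
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)
```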
In the image processing method provided by this embodiment, the computer device determines the current average skin color data of the acquired image to be processed in the first color space, determines the target average skin color data of the image to be processed in the first color space according to that current average skin color data, the preset transformation matrix and the adjustment parameter, and then adjusts the image to be processed according to the current average skin color data and the target average skin color data in the first color space to obtain the target image. Because the subsequent whitening processing is based on the current average skin color data of the image to be processed, and this data differs between different images, the determined target average skin color data also differs; that is, the degree of whitening applied by the computer device to different images differs. Processing each image through its own current average skin color data therefore weakens the influence of the user's skin color and of changes in environmental factors (such as lighting) on the processing effect, so that the whitened image has a better effect and can better meet the requirements of the user.
Fig. 2 is a schematic flowchart of an image processing method according to another embodiment. The present embodiment relates to a specific process of how a computer device determines the current average skin tone data of an image to be processed in a first color space. On the basis of the foregoing embodiment, optionally, as shown in fig. 2, the foregoing S102 may include:
s201, performing skin detection on the image to be processed to obtain a gray value of each pixel point on the image to be processed.
Specifically, the computer device may perform skin detection on the image to be processed by using a skin detection method, so as to obtain a gray value of each pixel point on the image to be processed. Wherein, the gray value of the pixel point on the image to be processed ranges from 0 to 255. The closer the gray value of the pixel point is to 255, the higher the probability that the pixel point belongs to the pixel point in the skin area in the image to be processed is; if the gray value of the pixel point is equal to 255, the pixel point belongs to the pixel point in the skin area in the image to be processed; the closer the gray value of the pixel point is to 0, the higher the probability that the pixel point belongs to the pixel point in the non-skin area in the image to be processed is; and if the gray value of the pixel point is equal to 0, the pixel point belongs to the pixel point in the non-skin area in the image to be processed. Optionally, the skin detection method may include: a skin color prior statistics-based method, a human face key point-based detection method, a skin detection method based on the distance between a human face characteristic point and a color space, and the like. For example, when skin is detected by using a skin color prior statistical method, a certain skin color sample is used to calculate the skin color distribution probability of the skin color sample in a certain color space, and the probability that the current color is a skin region is calculated by using the obtained probability distribution map.
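As an illustration of the skin-color-prior idea only, the sketch below builds a coarse 0/255 skin mask from fixed Cr/Cb chroma ranges; the thresholds and the hard (rather than probabilistic) output are assumptions made for brevity, not details given in the patent.

```python
import numpy as np

def skin_mask(image_rgb):
    """Coarse skin mask: 255 for likely-skin pixels, 0 otherwise (assumed thresholds).

    image_rgb: H x W x 3 RGB image, uint8. A real detector would return a soft
    gray value in 0..255 per pixel, as described above.
    """
    img = image_rgb.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # BT.601 (JPEG) chroma components
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    is_skin = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    return np.where(is_skin, 255, 0).astype(np.uint8)
```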
S202, determining the current average skin color data of the image to be processed in a first color space according to the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point.
Specifically, after the computer device obtains the gray value of each pixel point on the image to be processed, the computer device may determine, according to the RGB value of each pixel point and the gray value corresponding to the pixel point, the current average skin color data of the image to be processed in the first color space through a corresponding calculation algorithm. As long as the current average skin color data of the image to be processed in the first color space can be determined through the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point, this embodiment does not limit this.
Optionally, the computer device may determine the current average skin color data AvgColor of the image to be processed in the first color space according to formula (1): AvgColor = Sum(Src · Mask) ÷ Sum(Mask); wherein Src is the RGB value of a pixel point, Mask is the gray value corresponding to that pixel point, and · denotes pixel-wise (dot) multiplication. The current average skin color data AvgColor is three-channel data; specifically, it can be represented by a 1 × 3 array.
Specifically, after obtaining the gray value of each pixel point on the image to be processed, the computer device determines the current average skin color data AvgColor of the image to be processed in the first color space according to the RGB value and the gray value of each pixel point, through the formula AvgColor = Sum(Src · Mask) ÷ Sum(Mask) or another relational expression containing Sum(Src · Mask) ÷ Sum(Mask). For example, if there are n pixel points on the image to be processed, the computer device determines the current average skin color data AvgColor of the image in the first color space according to the RGB values and the gray values of the n pixel points, through the formula AvgColor = (Src1 · Mask1 + Src2 · Mask2 + ... + Srcn · Maskn) ÷ (Mask1 + Mask2 + ... + Maskn) or another relational expression containing (Src1 · Mask1 + Src2 · Mask2 + ... + Srcn · Maskn) ÷ (Mask1 + Mask2 + ... + Maskn).
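A minimal NumPy sketch of formula (1); the function name and the array shapes are assumptions.

```python
import numpy as np

def average_skin_color(src_rgb, mask):
    """AvgColor = Sum(Src . Mask) / Sum(Mask), computed per RGB channel.

    src_rgb: H x W x 3 RGB image (Src), values 0..255
    mask:    H x W gray values from skin detection (Mask), values 0..255
    Returns a length-3 array: the current average skin color data AvgColor.
    """
    src = src_rgb.astype(np.float64)
    m = mask.astype(np.float64)[..., None]          # H x W x 1, broadcast over channels
    return (src * m).sum(axis=(0, 1)) / max(float(m.sum()), 1e-6)
```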
In the image processing method provided in this embodiment, after the computer device performs skin detection on the image to be processed and obtains the gray value of each pixel point on the image to be processed from the skin detection result, it may determine the current average skin color data of the image to be processed in the first color space through formula (1), according to the RGB value and the gray value of each pixel point. Because the RGB value and the gray value of each pixel point are combined when determining this data, the accuracy of the determined current average skin color data in the first color space is higher. Meanwhile, because the gray value of each pixel point is obtained from the skin detection result, the current average skin color data determined by the computer device better reflects the real skin color of the user, so the subsequent whitening processing based on this data better meets the requirements of the user and further improves the effect of the whitened image.
Fig. 3 is a schematic flowchart of an image processing method according to another embodiment. The present embodiment relates to a specific process of how a computer device determines target average skin color data of an image to be processed in a first color space. On the basis of the foregoing embodiment, optionally, as shown in fig. 3, the foregoing S103 may include:
s301, determining the current average skin color data of the image to be processed in the second color space according to a preset transformation matrix and the current average skin color data in the first color space.
Wherein the current average skin color data in the second color space comprises 3 component data, one of which represents image brightness and the other two of which represent image color.
Specifically, the computer device multiplies the current average skin color data of the image to be processed in the first color space by a preset transformation matrix, and the multiplied result is the current average skin color data of the image to be processed in the second color space. Namely, the computer equipment transforms the current average skin color data of the image to be processed in the first color space to the current average skin color data in the second color space through a preset transformation matrix. The current average skin color data of the image to be processed in the second color space comprises 3 component data, and the 3 component data are higher in separation degree, so that the image to be processed is convenient to whiten. Alternatively, the transformation matrix may be a 3 x 3 matrix.
S302, determining target average skin color data of the image to be processed in a second color space according to the component data and the adjustment parameters corresponding to the component data.
Specifically, the number of parameters included in the adjustment parameter is the same as the number of component data included in the current average skin color data of the image to be processed in the second color space, and the type of the adjustment parameter corresponds to the type of the component data, that is, the type of the adjustment parameter corresponding to the component data used for representing the image brightness is the image brightness adjustment parameter, and the type of the adjustment parameter corresponding to the component data used for representing the image color is the image color adjustment parameter. After the computer device obtains 3 component data included in the current average skin color data of the image to be processed in the second color space, the computer device determines a target brightness value of the image to be processed in the second color space according to the image brightness adjusting parameter and the component data used for representing the image brightness, and determines a target color value of the image to be processed in the second color space according to the image color adjusting parameter and the component data used for representing the image color, wherein the target brightness value and the target color value are the target average skin color data of the image to be processed in the second color space.
As an optional implementation manner, the process of determining, by the computer device, the target average skin color data of the image to be processed in the second color space according to each of the component data and the adjustment parameter corresponding to the component data may be: according to the formula:
x' = x + dx
y' = y · ry
z' = z · rz
determining target average skin color data of the image to be processed in a second color space; wherein x', y' and z' are three component data of the target average skin color data in the second color space, x, y, and z are three component data of the current average skin color data in the second color space, x represents image brightness, y and z represent image color, dx is an adjustment parameter corresponding to component data x, ry is an adjustment parameter corresponding to component data y, and rz is an adjustment parameter corresponding to component data z.
Specifically, the adjustment parameter may be a set of preset fixed values, or a set of values input in real time according to the actual adjustment requirement of the user. And dx in the adjustment parameters is an image brightness adjustment parameter, when the value of dx is greater than 0, the image to be processed is brightened, when the value of dx is less than 0, the image to be processed is dimmed, and when the value of dx is equal to 0, the image to be processed is not brightened. In order to make the processed image more effective and natural, the value of dx is optionally 10 or 20. And ry and rz in the adjustment parameters are image color adjustment parameters, the value ranges of ry and rz are 0-1, when ry and rz are equal to 0, the processed image (namely the target image) is free from color cast, and when ry and rz are equal to 1, the color of the image to be processed is not adjusted. In order to make the processed image more effective and natural, optionally, the value of ry is 0.5, and the value of rz is 0.5.
Specifically, the computer device determines a target brightness value of the image to be processed in the second color space through x' = x + dx or another relation containing x + dx; determines a target color value through y' = y · ry or another relational expression containing y · ry; and determines a further target color value through z' = z · rz or another relation containing z · rz. The determined target brightness value and the two target color values are the target average skin color data of the image to be processed in the second color space. Because x, y and z are well separated, brightness adjustment and color cast adjustment of the image to be processed are convenient to carry out in this color space, so x', y' and z' are easy to obtain.
S303, obtaining target average skin color data of the image to be processed in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space.
Specifically, the computer device performs inverse calculation on a preset transformation matrix to obtain an inverse matrix of the transformation matrix, and then multiplies the inverse matrix of the transformation matrix with target average skin color data of the image to be processed in a second color space, so as to convert the target average skin color data of the image to be processed from the second color space to a first color space, thereby obtaining target average skin color data of the image to be processed in the first color space.
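A minimal sketch of S301-S303 under the assumption that the skin color data is treated as a column vector multiplied on the left by the 3 × 3 matrix; the example values dx = 10, ry = 0.5 and rz = 0.5 are the ones suggested in the description above. If the implementation instead treats the data as a row vector, the multiplication order would be reversed; the patent does not fix this convention.

```python
import numpy as np

def target_average_skin_color(avg_first, transform, dx=10.0, ry=0.5, rz=0.5):
    """Compute the target average skin color in the first color space.

    avg_first: current average skin color in the first color space, shape (3,)
    transform: preset 3 x 3 transformation matrix (first -> second color space)
    dx, ry, rz: adjustment parameters for the brightness and two color components
    """
    avg_first = np.asarray(avg_first, dtype=np.float64)
    t = np.asarray(transform, dtype=np.float64)

    x, y, z = t @ avg_first                           # S301: current data in the second color space
    x2, y2, z2 = x + dx, y * ry, z * rz               # S302: x' = x + dx, y' = y*ry, z' = z*rz
    return np.linalg.inv(t) @ np.array([x2, y2, z2])  # S303: back to the first color space
```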
In the image processing method provided by this embodiment, the computer device determines the current average skin color data of the image to be processed in the second color space according to the preset transformation matrix and the current average skin color data in the first color space, then determines the target average skin color data in the second color space according to each component data of the current average skin color data in the second color space and the adjustment parameter corresponding to each component data, and finally determines the target average skin color data in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space. Because the preset transformation matrix converts the current average skin color data of the image to be processed from the first color space to the second color space, and the component data of the current average skin color data in the second color space are well separated, that is, the correlation among the component data is low, the target average skin color data in the second color space can easily be determined from each component data and its corresponding adjustment parameter, which facilitates the whitening processing (including brightness processing and color cast correction) of the image to be processed.
Fig. 4 is a flowchart illustrating an image processing method according to another embodiment. The embodiment relates to a specific process how the computer device obtains the preset transformation matrix, and a specific process for obtaining the meaning of each component data in the second color space according to the transformation matrix and the big data statistical result. On the basis of the foregoing embodiment, optionally, as shown in fig. 4, before the foregoing S101, the method further includes:
s401, obtaining N training images.
Wherein N represents the number of training images, N is more than or equal to 1, and N is a positive integer.
Specifically, the training images are also images that need whitening processing. The computer device may obtain the N training images from an open image database, or from a preset image database; the specific source of the N training images is not limited in this embodiment. It can be understood that the closer the N training images are to the actual application scene, the better the effect of the target image obtained by the method provided by the invention. For example, when whitening is performed on images of Asian people, N training images of Asian people are selected as far as possible, and when whitening is performed on images of African people, N training images of African people are selected as far as possible.
S402, respectively determining the current average skin color data of each training image in the N training images in the first color space to obtain N current average skin color data.
It should be noted that, for the process of determining the current average skin color data of each training image in the first color space by the computer device, reference may be made to the process of determining the current average skin color data of the image to be processed by the computer device, which is not described herein again in this embodiment.
And S403, performing principal component analysis on an array M formed by the N current average skin color data to obtain the preset transformation matrix and the array m.
The array m includes N current average skin color data in the second color space, and the current average skin color data in each second color space includes component data x, component data y, and component data z.
Specifically, the computer device may form an array M from the N current average skin color data; since each training image is a three-channel image, the resulting array M is an N × 3 array. After obtaining the array M formed by the N current average skin color data, the computer device performs Principal Component Analysis (PCA) on the array M to obtain the preset transformation matrix, and multiplies the array M by the preset transformation matrix to obtain the array m. The preset transformation matrix obtained after the principal component analysis is a 3 × 3 matrix, so the array m obtained from the preset transformation matrix and the array M is also an N × 3 array; the array m is used for determining the meaning of each component data in the second color space.
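A minimal sketch of S403, assuming the standard covariance/eigenvector formulation of PCA with the principal directions used as the rows of the preset transformation matrix; the exact PCA variant and matrix orientation used by the patent are not specified, so these are assumptions.

```python
import numpy as np

def fit_transform_matrix(avg_colors):
    """PCA over the N x 3 array M of training-image average skin colors.

    avg_colors: array M, one average (R, G, B) skin color per training image.
    Returns the assumed 3 x 3 transformation matrix and the N x 3 array m.
    """
    M = np.asarray(avg_colors, dtype=np.float64)
    cov = np.cov(M, rowvar=False)                # 3 x 3 covariance across channels
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]            # largest-variance direction first
    transform = eigvecs[:, order].T              # rows = principal directions
    m = M @ transform.T                          # project M into the second color space
    return transform, m
```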
Optionally, after the step S403, the method further includes steps S404-S405:
s404, visually displaying the array m. Specifically, the data m may be visually displayed through a display screen of the computer device, so that a relevant person may recognize the meaning that the current average skin color data in each second color space in the data m includes the component data x, the component data y, and the component data z.
S405, receiving an input visual display result, and determining that component data x in the array m is used for representing image brightness, and component data y and component data z are used for representing image color according to the visual display result. Wherein the visual display result is used for indicating the meaning of each component data representation in the array m.
Specifically, the computer device may receive the input visual display result in a text manner, or may receive the input visual display result in a voice manner, which is not limited in this embodiment. Since the visualization display result input by the user is used to indicate the meaning of each component data representation in the array m, the computer device can determine, from the received visualization display result, that component data x in array m is used to represent image brightness, and component data y and component data z are used to represent image color. The meaning of the component data x, the component data y, and the component data z in the array m may be stored in the computer device in advance, so that when the computer device performs whitening processing on an image to be processed, it may be known that the component data x in the array m is used for representing image brightness, the component data y and the component data z are used for representing image color, and target average skin color data of the image to be processed in the second color space is determined according to each component data in the array m and an adjustment parameter corresponding to each component data, and a specific determination process may refer to the step S302 described above, which is not described herein again.
Because the array M is obtained based on big data statistics and the computer device performs principal component analysis on it, the component data included in the resulting array m are well separated, that is, the correlation among the component data is low, which greatly facilitates the whitening processing of the training images.
It can be understood that the greater the number N of the selected training images, and the closer each training image is to the actual application scene, the more accurate the preset transformation matrix obtained after the computer device performs principal component analysis on the array M is. The preset transformation matrix obtained by training may be used in step S103 of the image processing method to obtain the target average skin color data of the image to be processed in the first color space, so as to improve the accuracy of calculating the target average skin color data in the first color space and further improve the image processing effect.
In the image processing method provided by this embodiment, the computer device performs principal component analysis on an array M formed by the current average skin color data of the N training images in the first color space to obtain the preset transformation matrix and the array m. Because the preset transformation matrix is obtained by processing the N training images, that is, through big data analysis, its accuracy is higher, the average skin color data of the image to be processed converted with this matrix is more accurate, and the whitening effect on the image to be processed is further improved.
It should be understood that although the various steps in the flowcharts of FIGS. 1-4 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-4 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment. As shown in fig. 5, the apparatus includes: a first obtaining module 11, a first determining module 12, a second determining module 13 and an adjusting module 14.
Specifically, the first obtaining module 11 is configured to obtain an image to be processed.
The first determination module 12 is configured to determine current average skin color data of the image to be processed in a first color space.
The second determining module 13 is configured to determine target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix, and an adjustment parameter;
the adjusting module 14 is configured to adjust the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space, so as to obtain a target image.
The image processing apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to another embodiment. On the basis of the embodiment shown in fig. 5, as shown in fig. 6, the first determining module 12 includes a detecting unit 121 and a first determining unit 122.
Specifically, the detection unit 121 is configured to perform skin detection on the image to be processed to obtain a gray value of each pixel point on the image to be processed.
The first determining unit 122 is configured to determine current average skin color data of the image to be processed in a first color space according to the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point.
In one embodiment, the first determining unit 122 is specifically configured to determine, according to the formula AvgColor = Sum(Src · Mask) ÷ Sum(Mask), the current average skin color data AvgColor of the image to be processed in the first color space; wherein Src is the RGB value of a pixel point, Mask is the gray value corresponding to that pixel point, and · denotes pixel-wise (dot) multiplication.
The image processing apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to another embodiment. On the basis of the embodiment shown in fig. 6, as shown in fig. 7, the second determining module 13 includes a second determining unit 131, a third determining unit 132 and a fourth determining unit 133.
Specifically, the second determining unit 131 is configured to determine current average skin color data of the image to be processed in a second color space according to a preset transformation matrix and the current average skin color data in the first color space; wherein the current average skin color data in the second color space comprises 3 component data, one of which represents image brightness and the other two of which represent image color.
The third determining unit 132 is configured to determine, according to each of the component data and the adjustment parameter corresponding to the component data, target average skin color data of the image to be processed in the second color space.
The fourth determining unit 133 is configured to determine the target average skin color data of the to-be-processed image in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space.
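Assuming the preset transformation is an ordinary 3×3 matrix product, as the reference to its inverse suggests, the round trip between the two color spaces could be sketched as follows (function names are illustrative):

```python
import numpy as np

def to_second_space(avg_rgb, T):
    # current average skin color (first space) -> components (x, y, z) in the second space
    return T @ np.asarray(avg_rgb, dtype=np.float64)

def to_first_space(target_xyz, T):
    # target components (x', y', z') -> target average skin color in the first space
    return np.linalg.inv(T) @ np.asarray(target_xyz, dtype=np.float64)
```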
In one embodiment, the third determining unit 132 is specifically configured to:
[Formula image BDA0001727403320000181: the target components (x', y', z') are computed from the current components (x, y, z) and the adjustment parameters dx, ry, rz]
determining target average skin color data of the image to be processed in a second color space; wherein x', y', and z' are three component data of the target average skin color data in the second color space, x, y, and z are three component data of the current average skin color data in the second color space, x represents image brightness, y and z represent image color, dx is an adjustment parameter corresponding to component data x, ry is an adjustment parameter corresponding to component data y, and rz is an adjustment parameter corresponding to component data z.
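The exact expression lives in the original drawing and is not recoverable here; judging only from the parameter names, dx reads like an additive brightness offset while ry and rz read like multiplicative color ratios, so one possible (unverified) form is:

```python
def target_components(x, y, z, dx, ry, rz):
    # Assumed form only: shift the brightness component, scale the two color components
    return x + dx, y * ry, z * rz
```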
The image processing apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to another embodiment. On the basis of the embodiment shown in fig. 7, as shown in fig. 8, the image processing apparatus further includes a second acquiring module 15, a third determining module 16 and a processing module 17.
Specifically, the second obtaining module 15 is configured to obtain N training images; wherein N represents the number of training images, and N is a positive integer greater than or equal to 1.
The third determining module 16 is configured to determine current average skin color data of each of the N training images in the first color space, respectively, to obtain N current average skin color data.
The processing module 17 is configured to perform principal component analysis on an array M formed by the N current average skin color data to obtain the preset transformation matrix and an array m. The array m includes the N current average skin color data expressed in the second color space, and each of these includes component data x, component data y, and component data z.
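A minimal sketch of this training step, assuming the principal component analysis is the usual eigen-decomposition of the covariance of the N×3 array of average skin colors (mean-centering is an assumption the patent does not state):

```python
import numpy as np

def fit_transformation_matrix(avg_colors):
    # avg_colors: N x 3 array of average skin colors; assumes N > 1 for a valid covariance
    M = np.asarray(avg_colors, dtype=np.float64)
    centered = M - M.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1]                  # sort axes by decreasing variance
    T = eigvecs[:, order].T                            # preset transformation matrix (3 x 3)
    m = centered @ T.T                                 # array m: the data in the second space
    return T, m
```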
The image processing apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to another embodiment. On the basis of the embodiment shown in fig. 8, as shown in fig. 9, the image processing apparatus further includes a display module 18, a receiving module 19, and a fourth determining module 20.
Specifically, the display module 18 is configured to visually display the array m.
The receiving module 19 is used for receiving the input visual display result.
The fourth determining module 20 is configured to determine, according to the visual display result received by the receiving module 19, that the component data x in the array m is used for representing image brightness, and the component data y and the component data z are used for representing image color; wherein the visual display result is used for indicating the meaning of each component data representation in the array m.
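The form of the visual display is left open by the patent; one illustrative way to let a user judge which component tracks brightness and which two track color is simply to plot the three columns of m (matplotlib is an assumption here, not part of the patent):

```python
import matplotlib.pyplot as plt

def display_components(m):
    # Plot each column of m so a user can tell the brightness-like component
    # from the two color-like components and feed that choice back as input.
    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    for ax, i, name in zip(axes, range(3), ("x", "y", "z")):
        ax.hist(m[:, i], bins=30)
        ax.set_title("component " + name)
    plt.show()
```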
The image processing apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor of the computer device, or may be stored in software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure may be as shown in fig. 1a. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement an image processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 1a is only a block diagram of part of the structure associated with the inventive arrangements and does not limit the computer devices to which the inventive arrangements may be applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image to be processed;
determining current average skin color data of the image to be processed in a first color space;
determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter;
and adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing skin detection on the image to be processed to obtain a gray value of each pixel point on the image to be processed; and determining the current average skin color data of the image to be processed in a first color space according to the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to the formula: AvgColor = Sum(Src · Mask) ÷ Sum(Mask), determining current average skin color data AvgColor of the image to be processed in a first color space; wherein Src is the RGB value of the pixel point, Mask is the gray value corresponding to the pixel point, and · denotes element-wise (dot) multiplication.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the current average skin color data of the image to be processed in a second color space according to a preset transformation matrix and the current average skin color data in the first color space; determining target average skin color data of the image to be processed in a second color space according to the component data and the adjustment parameters corresponding to the component data; and determining target average skin color data of the image to be processed in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space. Wherein the current average skin color data in the second color space comprises 3 component data, one of which represents image brightness and the other two of which represent image color.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to the formula:
[Formula image BDA0001727403320000201: the target components (x', y', z') are computed from the current components (x, y, z) and the adjustment parameters dx, ry, rz]
determining target average skin color data of the image to be processed in a second color space; wherein x', y', and z' are three component data of the target average skin color data in the second color space, x, y, and z are three component data of the current average skin color data in the second color space, x represents image brightness, y and z represent image color, dx is an adjustment parameter corresponding to component data x, ry is an adjustment parameter corresponding to component data y, and rz is an adjustment parameter corresponding to component data z.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring N training images; respectively determining the current average skin color data of each training image in the N training images in a first color space to obtain N current average skin color data; performing principal component analysis on an array M formed by the N current average skin color data to obtain the preset transformation matrix and an array m; wherein N represents the number of training images, and N is a positive integer greater than or equal to 1; the array m includes the N current average skin color data expressed in the second color space, and each of these includes component data x, component data y, and component data z.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
visually displaying the array m; receiving an input visual display result, and determining that component data x in the array m is used for representing image brightness, and component data y and component data z are used for representing image color according to the visual display result; wherein the visual display result is used for indicating the meaning of each component data representation in the array m.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be processed;
determining current average skin color data of the image to be processed in a first color space;
determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter;
and adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing skin detection on the image to be processed to obtain a gray value of each pixel point on the image to be processed; and determining the current average skin color data of the image to be processed in a first color space according to the RGB value of each pixel point on the image to be processed and the gray value corresponding to the pixel point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the formula: AvgColor = Sum(Src · Mask) ÷ Sum(Mask), determining current average skin color data AvgColor of the image to be processed in a first color space; wherein Src is the RGB value of the pixel point, Mask is the gray value corresponding to the pixel point, and · denotes element-wise (dot) multiplication.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the current average skin color data of the image to be processed in a second color space according to a preset transformation matrix and the current average skin color data in the first color space; determining target average skin color data of the image to be processed in a second color space according to the component data and the adjustment parameters corresponding to the component data; and determining target average skin color data of the image to be processed in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space. Wherein the current average skin color data in the second color space comprises 3 component data, one of which represents image brightness and the other two of which represent image color.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the formula:
[Formula image BDA0001727403320000221: the target components (x', y', z') are computed from the current components (x, y, z) and the adjustment parameters dx, ry, rz]
determining target average skin color data of the image to be processed in a second color space; wherein x', y', and z' are three component data of the target average skin color data in the second color space, x, y, and z are three component data of the current average skin color data in the second color space, x represents image brightness, y and z represent image color, dx is an adjustment parameter corresponding to component data x, ry is an adjustment parameter corresponding to component data y, and rz is an adjustment parameter corresponding to component data z.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring N training images; respectively determining the current average skin color data of each training image in the N training images in a first color space to obtain N current average skin color data; performing principal component analysis on an array M formed by the N current average skin color data to obtain the preset transformation matrix and an array m; wherein N represents the number of training images, and N is a positive integer greater than or equal to 1; the array m includes the N current average skin color data expressed in the second color space, and each of these includes component data x, component data y, and component data z.
In one embodiment, the computer program when executed by the processor further performs the steps of:
visually displaying the array m;
receiving an input visual display result, and determining that component data x in the array m is used for representing image brightness, and component data y and component data z are used for representing image color according to the visual display result; wherein the visual display result is used for indicating the meaning of each component data representation in the array m.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, several variations and modifications can be made without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. An image processing method, comprising:
acquiring an image to be processed;
determining the current average skin color data of the image to be processed in a first color space;
determining target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjusting parameter;
adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image;
the determining the current average skin color data of the image to be processed in the first color space comprises:
performing skin detection on the image to be processed to obtain a gray value of each pixel point on the image to be processed;
according to the formula: AvgColor = Sum(Src · Mask) ÷ Sum(Mask), determining current average skin color data AvgColor of the image to be processed in a first color space; wherein Src is the RGB value of the pixel point, Mask is the gray value corresponding to the pixel point, and · denotes element-wise (dot) multiplication.
2. The method according to claim 1, wherein determining the target average skin color data of the image to be processed in the first color space according to the current average skin color data in the first color space, a preset transformation matrix and an adjustment parameter comprises:
determining the current average skin color data of the image to be processed in a second color space according to a preset transformation matrix and the current average skin color data in the first color space;
wherein the current average skin color data in the second color space comprises 3 component data, one of which represents image brightness and the other two of which represent image color;
determining target average skin color data of the image to be processed in a second color space according to the component data and the adjustment parameters corresponding to the component data;
and determining target average skin color data of the image to be processed in the first color space according to the inverse matrix of the preset transformation matrix and the target average skin color data in the second color space.
3. The method according to claim 2, wherein the determining the target average skin color data of the image to be processed in the second color space according to each of the component data and the adjustment parameter corresponding to the component data comprises:
according to the formula:
[Formula image FDA0002681119170000021: the target components (x', y', z') are computed from the current components (x, y, z) and the adjustment parameters dx, ry, rz]
determining target average skin color data of the image to be processed in a second color space;
wherein x', y', and z' are three component data of the target average skin color data in the second color space, x, y, and z are three component data of the current average skin color data in the second color space, x represents image brightness, y and z represent image color, dx is an adjustment parameter corresponding to component data x, ry is an adjustment parameter corresponding to component data y, and rz is an adjustment parameter corresponding to component data z.
4. The method according to any one of claims 1-3, wherein prior to said acquiring an image to be processed, the method further comprises:
acquiring N training images; wherein N represents the number of training images, and N is a positive integer greater than or equal to 1;
respectively determining the current average skin color data of each training image in the N training images in a first color space to obtain N current average skin color data;
performing principal component analysis on an array M formed by the N current average skin color data to obtain the preset transformation matrix and an array m;
the array m includes N current average skin color data in the second color space, and the current average skin color data in each second color space includes component data x, component data y, and component data z.
5. The method of claim 4, further comprising:
visually displaying the array m;
receiving an input visual display result, and determining that component data x in the array m is used for representing image brightness, and component data y and component data z are used for representing image color according to the visual display result; wherein the visual display result is used for indicating the meaning of each component data representation in the array m.
6. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an image to be processed;
the first determination module is used for determining the current average skin color data of the image to be processed in a first color space;
a second determining module, configured to determine, according to the current average skin color data in the first color space, a preset transformation matrix and an adjustment parameter, target average skin color data of the image to be processed in the first color space;
the adjusting module is used for adjusting the image to be processed according to the current average skin color data in the first color space and the target average skin color data in the first color space to obtain a target image;
the first determining module comprises a detecting unit and a first determining unit; the detecting unit is used for carrying out skin detection on the image to be processed to obtain the gray value of each pixel point of the image to be processed; the first determining unit is configured to: according to the formula AvgColor = Sum(Src · Mask) ÷ Sum(Mask), determine current average skin color data AvgColor of the image to be processed in a first color space; wherein Src is the RGB value of the pixel point, Mask is the gray value corresponding to the pixel point, and · denotes element-wise (dot) multiplication.
7. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN201810758859.7A 2018-07-11 2018-07-11 Image processing method, image processing device, computer equipment and storage medium Active CN108961189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810758859.7A CN108961189B (en) 2018-07-11 2018-07-11 Image processing method, image processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108961189A CN108961189A (en) 2018-12-07
CN108961189B true CN108961189B (en) 2020-10-30

Family

ID=64482851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810758859.7A Active CN108961189B (en) 2018-07-11 2018-07-11 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108961189B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741281B (en) * 2019-01-04 2020-09-29 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392420A (en) * 2014-12-10 2015-03-04 厦门美图之家科技有限公司 Method for optimizing skin color quickly
CN104599297A (en) * 2013-10-31 2015-05-06 厦门美图网科技有限公司 Image processing method for automatically blushing human face
CN106407909A (en) * 2016-08-31 2017-02-15 北京云图微动科技有限公司 Face recognition method, device and system
CN107730446A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, computer equipment and computer-readable recording medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100850460B1 (en) * 2002-10-10 2008-08-07 삼성테크윈 주식회사 Method for improving face image within digital camera


Also Published As

Publication number Publication date
CN108961189A (en) 2018-12-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant