CN117372615A - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117372615A
Authority
CN
China
Prior art keywords
image
value
whitening
target
skin area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311338029.6A
Other languages
Chinese (zh)
Inventor
吕烨鑫
王文博
田晨光
胡晓文
杨熙
王丹阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311338029.6A
Publication of CN117372615A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2021: Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, an electronic device and a storage medium, and relates to the technical field of image processing, in particular to the fields of large models, artificial intelligence and deep learning. The specific implementation scheme is as follows: acquiring a skin area of a person in an original first image; performing reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image; and adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image. According to the embodiments of the disclosure, the skin area can be whitened while the original skin color state is maintained and the proportion of the highlight and shadow parts of the skin is preserved, so that a better whitening effect is achieved.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to the fields of large models, artificial intelligence, and deep learning, and more particularly, to an image processing method, apparatus, electronic device, and storage medium.
Background
Digital image beautification and person whitening have become hot topics in digital photography and image processing. Traditional whitening algorithms destroy the natural base tone of the skin color in pursuit of whiter skin, so that the processed image looks unrealistic.
Disclosure of Invention
The disclosure provides an image processing method, an image processing device, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided an image processing method, the method comprising: acquiring a skin area of a person in an original first image; performing reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image; and adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: a first acquisition module configured to acquire the skin area of a person in an original first image; a second acquisition module configured to perform reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image; and a third acquisition module configured to adjust the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of an image processing method according to an embodiment of the disclosure;
FIG. 2 is a flow chart of another image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of another image processing method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of another image processing method provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of another image processing method provided by an embodiment of the present disclosure;
FIG. 6 is a flow chart of another image processing method provided by an embodiment of the present disclosure;
FIG. 7 is a flow chart of a training method for an image processing model according to an embodiment of the present disclosure;
FIG. 8 is a model block diagram of an image processing model provided by an embodiment of the present disclosure;
FIG. 9 is a flow chart of another image processing method provided by an embodiment of the present disclosure;
fig. 10 is a schematic structural view of an image processing apparatus provided in an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a training device for an image processing model according to an embodiment of the present disclosure;
fig. 12 is a schematic structural view of another image processing apparatus provided by an embodiment of the present disclosure;
fig. 13 is a block diagram of an electronic device for implementing an image processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Image Processing (Image Processing) is the technique of analyzing an image with a computer to achieve a desired result, and generally refers to digital image processing. A digital image is a large two-dimensional array obtained by shooting with equipment such as an industrial camera, a video camera or a scanner; the elements of the array are called pixels, and their values are called gray values. Image processing techniques generally include image compression, enhancement and restoration, matching, description and recognition, and so on.
Large Models (Large Models) refer to deep neural network models with millions or billions of parameters. Through specialized training procedures, they can perform complex processing on large-scale data for tasks such as natural language processing (including machine translation, text generation, language modeling, etc.), computer vision (including image classification, object detection, image generation, etc.) and speech recognition (including speech-to-text, etc.).
Artificial intelligence (Artificial Intelligence) is a technical science grounded in computer science and spanning multiple disciplines such as computing, psychology and philosophy. It researches and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems.
Deep Learning (DL) is a research direction in the field of machine learning. Deep learning learns the internal rules and representation hierarchy of sample data, with the final goal of enabling a machine to have human-like analytical learning ability and to recognize data such as text, images and sounds.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the disclosure. As shown in fig. 1, the method comprises the steps of:
s101, acquiring a skin area of a person in an original first image.
In some implementations, the first image is an image to be processed, and the first image may be taken in real time or may be obtained directly on a gallery, database, or network.
It will be appreciated that the first image includes a skin region of the person, and the skin region of the person in the original first image is acquired for subsequent processing.
Alternatively, the person region in the original first image may be identified based on the person detection model, and the skin region of the person region may be identified based on the skin detection model.
In some implementations, feature points in the original first image may be extracted by a person detection model, and the person region in the first image may be determined from all of the feature points; the person detection model may be a pre-trained deep neural network model, and the feature points are extracted based on this model, which then outputs the person region in the first image.
In some implementations, skin feature points in the person region may be extracted by a skin detection model, the skin region in the person region may be determined from all of the skin feature points, the skin detection model may be a pre-trained deep neural network model, and the skin feature points are extracted based on the pre-trained deep neural network model, thereby obtaining the skin region in the person region.
S102, performing reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image.
Directly whitening the skin area can cause excessive whitening, making the image unnatural and the whitening effect poor. The skin area in the first image can therefore be given a reddish whitening, so that the original natural base tone and reddish characteristics of the skin are preserved while the skin is whitened, and the processed target whitening image is more natural and realistic.
Optionally, the target whitening image may be obtained by continuously adjusting the values of each channel of the skin area in the first image, that is, the Red channel, the Green channel and the Blue channel of the Red Green Blue (RGB) channel, so as to change the effect presented by the skin area in the first image until the skin area in the first image achieves the reddish whitening effect.
Optionally, a target value of reddish whitening can be configured in advance, the values of all channels of the skin area in the first image are adjusted to corresponding target values, the effect of reddish whitening is directly achieved, and the target whitening image is obtained.
Further, after the reddish whitening of the skin, the red channel value of the skin area of the first image can be adjusted again, so as to achieve the magenta adjustment of the first image and make the skin in the target whitening image appear more natural and vivid.
And S103, adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
It is understood that contrast is a measure of the difference in brightness between the brightest white and darkest black in an image: the larger the difference, the higher the contrast, and the clearer and more striking the image, with more vivid colors; higher contrast thus helps the definition, detail and tonal range of the image. Saturation refers to the vividness, or purity, of a color; in colorimetry, as saturation decreases a color becomes dull until it is achromatic. Appropriate contrast and saturation therefore make the image more effective.
In some implementations, after the target whitening image is obtained, the contrast and saturation of the skin area in the target whitening image are adjusted to improve the effect presented by the skin area in the target whitening image, so that the target whitening image is clearer, the color is richer, and a second image with better presentation effect is obtained.
In some implementations, the target whitening image may be adjusted with a preset contrast, so that the proportion of the highlight and shadow portions of the skin is preserved, yielding a contrast-adjusted whitening contrast image. In some implementations, a contrast adjustment control can also be provided; adjustment images with different effects are obtained by varying the contrast value in the control, and the one in which the skin looks best is selected as the whitening contrast image.
Further, saturation adjustment can be performed on the whitening contrast image with a preset saturation, ensuring that the skin area presents a better skin color state and yielding the final second image. In some implementations, a saturation adjustment control may also be provided; saturation-adjusted images with different effects are obtained by varying the saturation value in the control, and the one in which the skin looks best is selected as the second image.
In the embodiments of the disclosure, a target whitening image that retains the reddish state of the skin is obtained by performing reddish whitening and magenta adjustment on the skin area of the person in the first image, so that the original skin color state is preserved and the realism of the skin area in the first image is ensured. Further, to ensure the presentation effect, the contrast and saturation of the target whitening image are adjusted, preserving the proportion of the highlight and shadow parts of the skin; a better whitening effect can thus be achieved while keeping the original skin state, making the processed second image more realistic.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the disclosure. As shown in fig. 2, the method comprises the steps of:
s201, acquiring a skin area of a person in an original first image.
In the embodiment of the present disclosure, the implementation method of step S201 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S202, performing reddish whitening on the skin area in the first image based on a color lookup table to obtain a primary whitening image.
The color look-up table may be understood as a discrete function from which the corresponding output value may be derived after the input value has been determined.
In some implementations, the color look-up table may be pre-configured with pixel values for different color effects, i.e., the color look-up table includes pre-configured pixel values for reddish.
In some implementations, first RGB values for pixels within a skin region in a first image may be determined, and second RGB values for redness are determined in a color lookup table based on the first RGB values; and replacing the first RGB value of the pixel in the skin area with the second RGB value to obtain the primary whitening image. That is, the first RGB value is input into the color lookup table, the second RGB value of the redness is output, the first RGB value of the pixel point in the skin area is reset to the second RGB value, the redness whitening treatment of the skin area is realized, and the primary whitening image is obtained.
It is understood that the first RGB value and the second RGB value each comprise three channel values: the first RGB value includes a first red (R), first green (G) and first blue (B) channel value, and the second RGB value includes a second red (R), second green (G) and second blue (B) channel value. When the first RGB value of a pixel in the skin area is replaced with the second RGB value, the first R, G and B channel values of the pixel are each replaced, yielding the primary whitening image and improving both the efficiency and the effect of whitening the skin area.
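The per-channel lookup-table replacement described above can be sketched as follows. The 256-entry "reddish" curve here is a hypothetical illustration, not the table used by the patent, which only states that such a table is configured in advance:

```python
import numpy as np

def apply_color_lut(image, lut):
    """Replace the first RGB value of each pixel with the second
    (reddish) RGB value stored in a per-channel lookup table.

    image: uint8 array of shape (H, W, 3)
    lut:   uint8 array of shape (256, 3); lut[v, c] is the output
           value for input value v on channel c (R=0, G=1, B=2).
    """
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = lut[image[..., c], c]
    return out

# Hypothetical reddish-whitening curve: lift all channels toward
# white, lifting the red channel slightly more than green and blue.
v = np.arange(256, dtype=np.float64)
lut = np.stack([
    np.clip(v * 0.85 + 45.0, 0, 255),   # red
    np.clip(v * 0.90 + 25.0, 0, 255),   # green
    np.clip(v * 0.90 + 22.0, 0, 255),   # blue
], axis=1).astype(np.uint8)

skin = np.full((2, 2, 3), 128, dtype=np.uint8)  # mid-gray skin patch
whitened = apply_color_lut(skin, lut)
```

In practice the table would be built once from the preconfigured reddish-whitening pixel values and applied to all pixels of the skin area.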
S203, performing magenta adjustment on the skin area in the primary whitening image based on color mode conversion to obtain a target whitening image.
A color mode is a scheme for representing colors; common color modes include the RGB mode, the CMYK (cyan, magenta, yellow, black) print color separation mode, the grayscale mode, and so on.
To keep the original details of the skin area and prevent the skin area in the primary whitening image from being excessively whitened, a magenta adjustment is applied so that the skin area is closer to its real state. To facilitate this adjustment, the primary whitening image is converted from the RGB color mode to another mode, e.g. the CMYK mode. Compared with RGB, the CMYK mode can reproduce the colors in the primary whitening image more accurately, so the magenta adjustment of the skin area is achieved by adjusting the channel values in the CMYK mode, and the magenta-adjusted image is taken as the target whitening image.
S204, adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
In the embodiment of the present disclosure, the implementation method of step S204 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
According to the embodiments of the disclosure, reddish whitening of the skin area based on a color lookup table yields the primary whitening image and improves the efficiency and effect of the reddish whitening. Further, magenta adjustment of the skin area in the primary whitening image based on color mode conversion allows the color details of the skin area to be changed accurately, achieving a whitening closer to the natural skin color, so that the skin area in the target whitening image is more natural and realistic and the resulting second image has a better effect.
Fig. 3 is a flowchart illustrating another image processing method according to an embodiment of the disclosure. As shown in fig. 3, the method comprises the steps of:
s301, acquiring a skin area of a person in an original first image.
In the embodiment of the present disclosure, the implementation method of step S301 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S302, performing reddish whitening on the skin area in the first image based on the color lookup table to obtain a primary whitening image.
In the embodiment of the present disclosure, the implementation method of step S302 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S303, converting pixels in the skin area in the primary whitening image from the RGB color mode to the CMYK (print four-color separation) color mode to obtain a candidate whitening image.
To facilitate the magenta adjustment, the primary whitening image is converted from the RGB color mode, which improves the effect and accuracy of the magenta adjustment of the skin area in the primary whitening image.
Alternatively, a first red R value, a first green G value, and a first blue B value of a pixel within a skin region in the primary whitening image may be acquired, and a black K value of the pixel may be determined based on a maximum value of the first R value, the first G value, and the first B value.
It can be appreciated that, since values in the CMYK color mode lie in the range 0-1, the pixel values in the skin area of the primary whitening image can be normalized first, and the black K value of a pixel determined based on the normalized first red R value, first green G value and first blue B value. Illustratively, the normalized values are R₁ = R/255, G₁ = G/255 and B₁ = B/255.
Alternatively, the black K value of a pixel may be: K = 1 − max(R₁, G₁, B₁).
Further, the cyan C value of the pixel may be determined based on the first R value and the K value of the pixel; alternatively, the cyan C value of a pixel may be: C = (1 − R₁ − K)/(1 − K).
Further, an initial magenta M value of the pixel may be determined based on the first G value and the K value of the pixel, and the initial M value may be expanded to obtain the M value of the pixel. Alternatively, the initial magenta M value of a pixel may be: M = (1 − G₁ − K)/(1 − K), and expanding it gives the M value of the pixel: M′ = γM, i.e., the initial M value is expanded by a factor of γ to obtain the M value M′ of the pixel.
In some implementations, γ is a preset value used to increase the magenta component of the skin area and implement the skin-reddening treatment, so as to achieve a whitening effect closer to the natural skin color.
Further, the yellow Y value of the pixel is determined based on the first B value and the K value of the pixel; alternatively, the yellow Y value of a pixel may be: Y = (1 − B₁ − K)/(1 − K).
After determining the K value, the C value, the M value and the Y value of the pixel, obtaining a candidate whitening image based on the K value, the C value, the M value and the Y value of the pixel, namely obtaining CMYK values corresponding to each pixel of the skin area, and converting all the pixels of the skin area from RGB values to CMYK values to obtain the candidate whitening image.
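The formulas of step S303 can be sketched for a single pixel as follows. The value γ = 1.2 is an illustrative assumption (the patent only says γ is a preset value), and clamping the expanded magenta to 1 is an added safeguard not stated in the text:

```python
def rgb_to_cmyk_expanded(r, g, b, gamma=1.2):
    """Convert one 8-bit RGB pixel to CMYK, expanding the magenta
    channel by the preset factor gamma as in step S303.
    """
    # Normalize the first R, G, B values to [0, 1].
    r1, g1, b1 = r / 255.0, g / 255.0, b / 255.0
    # Black component: K = 1 - max(R1, G1, B1).
    k = 1.0 - max(r1, g1, b1)
    if k == 1.0:                      # pure black pixel: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r1 - k) / (1.0 - k)
    m = (1.0 - g1 - k) / (1.0 - k)    # initial magenta M value
    y = (1.0 - b1 - k) / (1.0 - k)
    # M' = gamma * M, clamped so it remains a valid CMYK component
    # (the clamp is an implementation choice, not from the patent).
    m_expanded = min(gamma * m, 1.0)
    return c, m_expanded, y, k

# Example skin-tone pixel (200, 150, 120).
c, m, y, k = rgb_to_cmyk_expanded(200, 150, 120)
```

For this pixel the initial magenta value is 0.25, which γ = 1.2 expands to 0.3, strengthening the reddish cast of the skin.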
S304, performing inverse conversion from the CMYK color mode to the RGB color mode on pixels in the skin area in the candidate whitening image to obtain a target whitening image.
Considering that the CMYK mode is a color mode depending on reflection, in order to facilitate the subsequent processing of the image, after the magenta adjustment is performed on the skin area to obtain a candidate whitened image, the candidate whitened image is inversely converted from the CMYK color mode to the RGB color mode to obtain a target whitened image.
Alternatively, the second R value of a pixel may be determined based on the C value and the K value of the pixel within the skin region in the candidate whitening image; the second R value may be: R₂ = (1 − C)·(1 − K).
Further, the second G value of the pixel may be determined based on the M value and the K value of the pixel in the candidate whitening image; the second G value may be: G₂ = (1 − M′)·(1 − K).
Further, the second B value of the pixel may be determined based on the Y value and the K value of the pixel in the candidate whitening image; the second B value may be: B₂ = (1 − Y)·(1 − K).
After the second R, G and B values of a pixel are determined, the target whitening image is obtained from them. It can be understood that the second R, G and B values obtained here are normalized, so inverse normalization can be performed, i.e., the second R, G and B values are each multiplied by 255 to obtain the second RGB values, and the CMYK values of the pixels in the skin area are updated to the corresponding second RGB values to obtain the target whitening image. The target whitening image is the image obtained after the magenta treatment of the skin area; the skin color state of the skin area is closer to the real state, and the whitening effect is better.
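The inverse conversion of step S304 can be sketched per pixel as follows; rounding to 8-bit integers after multiplying by 255 is an implementation choice not spelled out in the patent:

```python
def cmyk_to_rgb(c, m, y, k):
    """Convert a CMYK pixel back to 8-bit RGB, i.e. the second
    R, G, B values of step S304, de-normalizing by a factor of 255.
    """
    r2 = (1.0 - c) * (1.0 - k)   # R2 = (1 - C)(1 - K)
    g2 = (1.0 - m) * (1.0 - k)   # G2 = (1 - M')(1 - K), m is the expanded M'
    b2 = (1.0 - y) * (1.0 - k)   # B2 = (1 - Y)(1 - K)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# Example: CMYK values (0, 0.3, 0.4, 55/255) from a magenta-expanded
# skin pixel convert back to RGB with a reduced green channel.
r, g, b = cmyk_to_rgb(0.0, 0.3, 0.4, 55.0 / 255.0)
```

Note that because the magenta channel was expanded before this inverse conversion, the recovered green value is lower than in the original pixel, which is exactly the reddish shift the patent aims for.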
And S305, adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
In the embodiment of the present disclosure, the implementation method of step S305 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
In the embodiments of the disclosure, the primary whitening image obtained after the reddish adjustment undergoes color mode conversion, that is, the RGB mode is converted to the CMYK mode, so that a more effective magenta adjustment can be applied; the CMYK mode is then converted back to the RGB mode to obtain the magenta-adjusted target whitening image. The target whitening image achieves a whitening closer to the natural skin color and implements the skin-reddening treatment, so the whitening effect on the skin area is better.
Fig. 4 is a flowchart illustrating another image processing method according to an embodiment of the disclosure. As shown in fig. 4, the method comprises the steps of:
s401, acquiring a skin area of a person in an original first image.
In the embodiment of the present disclosure, the implementation method of step S401 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S402, performing reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image.
In the embodiment of the present disclosure, the implementation method of step S402 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S403, determining target contrast, and carrying out contrast adjustment on pixels in a skin area in the target whitening image based on the target contrast to obtain a whitening contrast image.
In some implementations, determining a third RGB value for pixels within a skin area in the target whitened image, and obtaining a difference between the third RGB value and a set value as a fourth RGB value; determining a fifth RGB value based on the fourth RGB value and the target contrast, and determining a sum of the fifth RGB value and a set value as a sixth RGB value; and replacing the pixels in the skin area with the sixth RGB value from the third RGB value to obtain the whitening contrast image.
It can be understood that the third RGB value is a pixel value of a pixel point in the skin area in the target whitening image, and the set value is a manually preset fixed value, for example, the set value may be 0.5.
In some implementations, the target contrast may be a contrast value configured in advance, the target contrast may better keep the proportion of the highlight and shadow portions of the skin, and in general, the contrast of the image may be set to be a median value of a contrast range, for example, the contrast range is 0-100, and then the contrast of the image may be adjusted to be 50; optionally, in this embodiment, the target contrast may be set as a median of the contrast range, so as to facilitate adjustment based on the target contrast, and obtain a whitened contrast image with a better effect.
After determining the third RGB values of the pixels in the skin area, a difference between the third RGB values and the set value is determined, for example, when the set value is 0.5, that is, the difference between the third RGB values and 0.5 is calculated, so as to obtain a fourth RGB value.
Further, a fifth RGB value may be determined according to the fourth RGB value and the target contrast, for example, a product of the fourth RGB value and the target contrast is calculated to obtain the fifth RGB value, so as to adjust the fourth RGB value according to the target contrast; after the fifth RGB value is determined, a sum value of the fifth RGB value and the set value is calculated as a sixth RGB value.
Alternatively, the calculation of the sixth RGB value may be expressed as:
g₆ = α(g₃ − t) + t
wherein g₆ represents the sixth RGB value; α represents the target contrast; g₃ represents the third RGB value; t represents the set value, which may optionally take the value 0.5; (g₃ − t) represents the fourth RGB value; and α(g₃ − t) represents the fifth RGB value.
After the sixth RGB value of each pixel in the skin area is determined, the pixels in the skin area are replaced by the sixth RGB value from the third RGB value, so that the whitening contrast image is obtained, and the adjusting effect and the adjusting efficiency of the whitening contrast image are improved.
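The contrast step can be sketched as a single vectorized formula. Channel values are assumed to be normalized to [0, 1] with the set value fixed at 0.5, as the text suggests; clamping the result back into [0, 1] is an added safeguard:

```python
import numpy as np

def adjust_contrast(rgb, alpha, pivot=0.5):
    """Apply g6 = alpha * (g3 - pivot) + pivot per channel.

    rgb:   float array of third RGB values in [0, 1]
    alpha: target contrast (1.0 leaves the pixel unchanged)
    pivot: the preset set value (0.5 in the text)
    """
    fourth = rgb - pivot           # fourth RGB value: difference from the set value
    fifth = alpha * fourth         # fifth RGB value: scaled by target contrast
    sixth = fifth + pivot          # sixth RGB value: sum with the set value
    return np.clip(sixth, 0.0, 1.0)

pixel = np.array([0.8, 0.5, 0.2])
adjusted = adjust_contrast(pixel, alpha=1.5)
```

Values above the pivot move toward white and values below it move toward black, which is what preserves the relative proportion of the highlight and shadow portions of the skin.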
S404, determining target saturation and a preset weight vector, and adjusting the saturation of pixels in the skin area in the whitening contrast image based on the weight vector and the target saturation to obtain a second image.
Optionally, the target saturation may be a saturation value configured in advance. An appropriate saturation value can achieve a desirable skin color state. The saturation range is from 0% (completely gray) to 100% (fully saturated); in general, the saturation value of an image is between 20% and 80%, so the target saturation may be a value in the range of 20% to 80%. To achieve a better skin color state, the target saturation may further be a value in the range of 40% to 70%.
Optionally, the preset weight vector may be a vector with a dimension of 3, so as to adjust the values of the three channels of RGB based on the preset weight vector.
In some implementations, a dot product is taken between the sixth RGB value of each pixel in the skin region of the whitening contrast image and the weight vector to obtain the image brightness value of the pixel. It will be appreciated that the sixth RGB value includes the values of three channels, namely the R channel, the G channel and the B channel, so the sixth RGB value can be treated as a 3-dimensional vector, and its dot product with the weight vector yields the image brightness value of the pixel in the skin area.
Further, according to the image brightness value of the pixels in the skin area, obtaining the image gray scale value of the pixels in the skin area; and fusing the sixth RGB value of the pixel and the gray scale value of the image based on the target saturation to obtain a second image.
In some implementations, the image gray scale value may be understood as a grayscale pixel value, in which the three channels R, G and B share the same value; therefore, the values of all 3 channels may be set to the image brightness value to obtain the corresponding image gray scale value, that is, L = (I, I, I), where I represents the image brightness value.
In some implementations, determining a fusion duty cycle of the sixth RGB values of the pixels within the skin region and the image gray scale values at the time of fusion based on the target saturation; and fusing the sixth RGB value of the pixel in the skin area with the gray scale value of the image according to the fusion duty ratio to obtain a second image so as to ensure that the second image can show a better whitening treatment effect, and the whitened image is more similar to a real image.
That is, the fusion duty ratio may be determined according to the target saturation, so that the sixth RGB value of the pixel and the image gray scale value are fused. For example, when the target saturation is β, the sixth RGB value and the image gray scale value may be fused at a ratio of β : (1 − β), that is, the pixel values of the pixels in the skin region are updated to β × g6 + (1 − β) × L to obtain the second image.
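The saturation step above (brightness via a dot product with the weight vector, a gray image L = (I, I, I), then a β : (1 − β) blend) can be sketched as follows. The Rec. 601 luma weights are only one common choice for the preset weight vector, and all names here are illustrative:

```python
import numpy as np

def adjust_saturation(skin_rgb, beta, weights=(0.299, 0.587, 0.114)):
    """Blend each sixth RGB value with its gray version: out = beta*g6 + (1-beta)*L."""
    g6 = np.asarray(skin_rgb, dtype=np.float64)   # sixth RGB values, shape (..., 3)
    w = np.asarray(weights, dtype=np.float64)     # preset 3-dimensional weight vector
    brightness = np.asarray(g6 @ w)               # image brightness value I per pixel
    gray = np.stack([brightness] * 3, axis=-1)    # image gray scale value L = (I, I, I)
    return beta * g6 + (1.0 - beta) * gray
```

At β = 1 the pixel keeps its full color; β = 0 collapses it to gray, so intermediate values trade color intensity against the grayscale version.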
In the embodiment of the disclosure, the skin area in the target whitening image is preliminarily adjusted through the target contrast and the set value to obtain the whitening contrast image, and the contrast adjustment is performed by taking the target contrast as a reference, so that the proportion of the highlight and shadow parts of the skin is better kept; further, an image brightness value is determined according to the target saturation and the weight vector, an image gray scale value is determined based on the image brightness value, and the image gray scale value and a sixth RGB value are fused based on the target saturation, so that the skin state of the person in the second image is better adjusted and whitened, the skin area of the person in the second image is closer to the real skin color, and the authenticity of whitening the skin area is ensured.
Fig. 5 is a flowchart illustrating another image processing method according to an embodiment of the disclosure. As shown in fig. 5, the method comprises the steps of:
S501, performing human body detection on the first image to obtain a human body detection frame.
It will be appreciated that the human detection frame includes the region of the first image where the complete person is located.
Alternatively, the human body detection algorithm may be a commonly used target detection algorithm, such as convolutional neural networks (Convolutional Neural Networks, CNN), the Single Shot MultiBox Detector (SSD) algorithm, and the You Only Look Once (YOLO) series of algorithms. That is, the human body detection frame in the first image is obtained through a pre-trained human body detection algorithm, so that the detection range of the skin area is narrowed and the efficiency of obtaining the skin area is improved.
S502, body part detection is carried out in the human body detection frame, and a body part detection area is obtained.
Since the human body detection frame may include a background area other than the human body, that is, there may be other areas in the human body detection frame other than the body part of the human body, in order to improve the efficiency of detecting the skin area, the body part detection is performed again on the human body detection frame, so as to obtain the body part detection area of the human body.
Alternatively, the body part detection algorithm may be a common target detection algorithm, such as CNN, SSD, and YOLO series algorithms, for example, the joint part of the human body is identified by the target detection algorithm, and the body part is determined according to the joint part of the human body, so as to obtain a body part detection frame only including the body part, and reduce the detection range when detecting the skin region.
S503, performing skin detection in the body part detection area, and extracting the skin area of the person.
The purpose of the embodiments of the present disclosure is to whiten the skin area, so only the skin area of the human body needs to be extracted and detected; that is, skin detection is performed within the smaller body part detection area, so as to reduce the detection range of the skin area and increase the detection speed.
In some implementations, because the pixel values of the skin area differ to a certain extent from those of other areas of the body, and the pixel values within the skin area are relatively uniform, the skin area can be detected based on pixel value characteristics. For example, a common target detection algorithm is adopted to detect and extract the pixels meeting the skin area characteristics, and the skin area of the person is finally obtained. Whitening only this skin area avoids the influence of the whitening operation on other areas of the body and improves the accuracy and effect of whitening human skin.
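As one concrete instance of such a pixel-value-based skin test (the disclosure leaves the exact rule open), a classic approach thresholds the Cr/Cb chroma channels. The conversion constants below are the standard ITU-R BT.601 ones, but the threshold ranges are illustrative, not taken from the disclosure:

```python
import numpy as np

def skin_mask(rgb):
    """Keep pixels whose Cr/Cb chroma falls in a typical skin range.
    rgb holds 0-255 values; thresholds are illustrative, not from the disclosure."""
    arr = np.asarray(rgb, dtype=np.float64)
    r, g, b = arr[..., 0], arr[..., 1], arr[..., 2]
    # ITU-R BT.601 chroma conversion
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
```

Applied inside the body part detection area, the resulting boolean mask selects the pixels that subsequent whitening steps operate on.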
S504, performing reddish whitening and fuchsin adjustment on the skin area in the first image to obtain a target whitening image.
In the embodiment of the present disclosure, the implementation method of step S504 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
And S505, adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
In the embodiment of the present disclosure, the implementation method of step S505 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
In the embodiment of the disclosure, the human body detection frame and the body part detection area in the first image are sequentially acquired, the skin area is extracted and detected in the body part detection area, and the accuracy and the efficiency of skin area detection are improved by adopting the existing target detection algorithm, so that a better processing basis is provided for the subsequent whitening of the skin area.
Fig. 6 is a flowchart illustrating another image processing method according to an embodiment of the disclosure. As shown in fig. 6, the method comprises the steps of:
S601, performing human body detection on the first image to obtain a human body detection frame.
In the embodiment of the present disclosure, the implementation method of step S601 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S602, detecting the body part in the human body detection frame to obtain a body part detection area.
In the embodiment of the present disclosure, the implementation method of step S602 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S603, performing skin detection in the body part detection area, and extracting the skin area of the person.
In the embodiment of the present disclosure, the implementation method of step S603 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described in detail.
S604, performing reddening whitening on the skin area in the first image based on the color lookup table to obtain a primary whitening image.
In the embodiment of the present disclosure, the implementation method of step S604 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S605, converting pixels in a skin area in the primary whitening image from an RGB color mode to a printing four-color separation mode CMYK color mode to obtain a candidate whitening image.
In the embodiment of the present disclosure, the implementation method of step S605 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described in detail.
S606, performing inverse conversion from the CMYK color mode to the RGB color mode on pixels in the skin area in the candidate whitening image to obtain a target whitening image.
In the embodiment of the present disclosure, the implementation method of step S606 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S607, determining a target contrast, and performing contrast adjustment on pixels in a skin area in the target whitening image based on the target contrast to obtain a whitening contrast image.
In the embodiment of the present disclosure, the implementation method of step S607 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
And S608, determining target saturation and a preset weight vector, and adjusting the saturation of pixels in the skin area in the whitening contrast image based on the weight vector and the target saturation to obtain a second image.
In the embodiment of the present disclosure, the implementation method of step S608 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
In the embodiment of the disclosure, the skin area is subjected to redness whitening based on the color lookup table to obtain a primary whitening image, the efficiency and the effect of redness whitening on the skin area are improved, the primary whitening image subjected to redness adjustment is subjected to color mode conversion, namely the RGB mode is converted into the CMYK mode, so that better-effect fuchsin adjustment is realized, a target whitening image in a skin reddening state is obtained, an original skin color state is reserved in the target whitening image, and the authenticity of the skin area in the first image is ensured; the contrast and saturation adjustment is carried out on the target whitening image, the proportion of the highlight and shadow parts of the skin is reserved, and the state of the skin in the whitening contrast image is better adjusted, so that the skin area of the person in the second image is closer to the actual skin color, and the authenticity of whitening the skin area is ensured.
Fig. 7 is a flowchart of a training method of an image processing model according to an embodiment of the disclosure. As shown in fig. 7, the method includes the steps of:
S701, acquiring a sample image and a whitening reference image of the sample image.
In some implementations, the sample image may be any type of image including a person, and the reference image of the sample image is a better image after the sample image is subjected to the whitening process.
In some implementations, the reference image for whitening the sample image may be used as a standard image, that is, the sample image should be whitened as effectively as possible, that is, as the reference image is whitened.
S702, inputting the sample image into an image processing model, extracting a skin area of a person in the sample image, performing reddish whitening and fuchsin adjustment on the skin area in the sample image to obtain a sample whitening image, and performing contrast and saturation adjustment on the skin area in the sample whitening image to obtain a target whitening image.
In some implementations, the image processing model may be a large model capable of handling complex problems, or a model that incorporates one or more neural network models, and may implement at least the functions of target detection, image modification and image adjustment.
Alternatively, the image processing model may extract the skin region of the person from the sample image, that is, detect the skin region of the person in the sample image, so as to facilitate the subsequent whitening treatment on the skin region.
Optionally, the image processing model can also perform redness whitening and fuchsin adjustment on the extracted skin area to obtain a sample whitening image, so that the processing of the skin area in the sample whitening image can keep a reddish state, and the processing of the skin area in the sample image is more close to the real state of the skin.
Furthermore, the image processing model can also adjust the contrast and saturation of the skin area in the sample whitening image, the proportion of the highlight and shadow parts in the skin area is reserved, the color processing effect on the sample image is better, and the target whitening image with better whitening effect is obtained.
S703, adjusting the image processing model based on the difference between the whitening reference image and the target whitening image.
In some implementations, the difference between the whitening reference image and the target whitening image may be a difference in the pixel values of all pixels in the two images, or a difference between the overall presentation effect of the whitening reference image and the presentation effect of the target whitening image; this difference is used to evaluate whether the target whitening image is close to the whitening reference image.
It can be understood that the reference image is a standard image with better sample image processing effect, so that the smaller the difference between the target whitening image and the reference image is, the better the effect of the target whitening image generated by the image processing model is. The image processing model is adjusted based on the difference between the whitening reference image and the target whitening image.
In some implementations, a loss function of the image processing model may be determined based on a difference between the whitened reference image and the target whitened image, the image processing model being adjusted based on the loss function.
Optionally, the loss function may be a difference function of the pixel values of all pixels in the whitening reference image and the target whitening image, or may be a difference function between the overall presentation effect of the whitening reference image and the presentation effect of the target whitening image. The greater the loss function, the greater the difference between the whitening reference image and the target whitening image; the smaller the loss function, the closer the target whitening image is to the whitening reference image.
Optionally, the parameters of the image processing model may be adjusted, for example, the parameters of the reddish whitening and magenta adjustment in the image processing model, or the saturation and contrast parameters, so that the image processing model outputs a target whitening image with a smaller difference from the whitening reference image, that is, a smaller corresponding loss function.
And S704, training the adjusted image processing model by using the next sample image until the training is finished, and obtaining the target image processing model.
After the image processing model is initially adjusted, an adjusted image processing model can be obtained, and the adjusted image processing model is continuously trained by adopting the next sample image.
In some implementations, the next sample image also has a corresponding whitening reference image, which is used to analyze the target whitening image output by the adjusted image processing model and determine whether the image processing effect of the currently adjusted image processing model is better. By analogy, the image processing model is trained through a large number of sample images until the training of the image processing model is finished.
It can be understood that when the difference between the target whitening image and the whitening reference image is minimum, that is, when the loss function of the image processing model is minimum, training of the image processing model is finished, and a final target image processing model is obtained.
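The pixel-difference loss described above might be realized, for example, as a mean absolute (L1) difference; the disclosure does not fix the exact form, so this is only one plausible choice, with illustrative names:

```python
import numpy as np

def whitening_loss(target_whitened, whitening_reference):
    """Mean absolute per-pixel difference between the model output and the
    whitening reference image (one plausible form of the loss; an assumption)."""
    t = np.asarray(target_whitened, dtype=np.float64)
    r = np.asarray(whitening_reference, dtype=np.float64)
    return float(np.mean(np.abs(t - r)))
```

Training would stop once this value falls below a tolerance or stops decreasing across sample images, yielding the target image processing model.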
In the embodiment of the disclosure, the target image processing model with the minimum loss function, that is, the target image processing model with the best image processing effect is obtained by training the image processing model, so that the image can be subjected to whitening treatment according to the target image processing model, better effects of redness and whitening, fuchsin adjustment, contrast adjustment and saturation adjustment are achieved, the effect of processing the image output by the target image processing model is better, and the real state of skin can be kept.
On the basis of the above embodiments, fig. 8 is a model structure diagram of an image processing model according to an embodiment of the present disclosure. As shown in fig. 8, the image processing model includes at least a skin extraction layer 801, a first adjustment layer 802, a second adjustment layer 803, a third adjustment layer 804, and a fourth adjustment layer 805.
As a possible implementation manner, the skin extraction layer 801 is used for performing skin detection on a sample image and extracting the skin region of the person in the sample image, so as to improve the efficiency and accuracy of skin region detection.
In some implementations, human body detection can be performed on the sample image to obtain a sample human body detection frame; detecting the body part in the sample human body detection frame to obtain a sample detection area of the body part; and skin detection is carried out in the sample detection area, the skin area of the person in the sample image is extracted, and the efficiency of skin area detection is improved.
In the embodiment of the disclosure, the implementation methods of performing human body detection on the sample image, performing body part detection in the sample human body detection frame, and performing skin detection in the sample detection area may be implemented in any one of the embodiments of the disclosure, which are not limited herein or are not repeated herein.
As a possible implementation, the first adjustment layer 802 is configured to perform reddish whitening on the skin area in the sample image based on the color lookup table, so as to obtain a primary sample whitened image. Namely, the skin area after reddening and whitening is obtained according to the color lookup table, so that the effect of reddening skin can be reserved in the primary sample whitening image.
In the embodiment of the present disclosure, the implementation method for performing reddening whitening on the skin area in the sample image based on the color lookup table may be implemented in any one of the embodiments of the present disclosure, which is not limited herein, and is not repeated herein.
As a possible implementation, the second adjustment layer 803 is configured to perform magenta adjustment on the skin area in the primary sample whitened image based on the color mode conversion, so as to obtain the sample whitened image. The skin area in the sample whitening image is closer to the real state of the skin, the real effect of the image is ensured while the whitening is performed, and the whitening effect on the skin area is better.
In the embodiment of the present disclosure, based on color mode conversion, the method for performing magenta adjustment on the skin area in the primary sample whitening image may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
As a possible implementation manner, the third adjusting layer 804 is configured to perform contrast adjustment on pixels in a skin area in the sample whitening image based on the target contrast, so as to obtain the sample whitening contrast image. The target contrast is used as an adjusting reference, the efficiency of adjusting the contrast of the skin area in the sample whitening image is improved, the detail proportion of highlight and shadow is reserved, and the skin area whitening effect in the sample whitening contrast image is ensured.
In the embodiment of the present disclosure, the implementation method for performing contrast adjustment on the pixels in the skin area in the sample whitening image may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
As a possible implementation manner, the fourth adjustment layer 805 is configured to adjust the saturation of the pixels in the skin area in the sample whitening contrast image based on the target saturation and a preset weight vector, so as to obtain the target whitening image. The sample whitening comparison image is subjected to saturation adjustment based on the target saturation and the preset weight vector, so that the color characteristics of the image can be better displayed in the target whitening image, the real skin state of the person is reserved, the situation that the whitening effect is not real due to excessive whitening is avoided, and the image processing efficiency and the image processing effect are improved.
In the embodiment of the present disclosure, the method for implementing saturation adjustment on the pixels in the skin area in the sample whitening comparison image may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
In the embodiment of the disclosure, the image processing is performed based on the skin extraction layer, the first adjusting layer, the second adjusting layer, the third adjusting layer and the fourth adjusting layer in the image processing model, so that a target whitening image with a better whitening effect is obtained, the image processing efficiency can be improved through the image processing model, the accuracy of the target whitening image processing is ensured, and the target image processing model with a better and comprehensive final effect is obtained through training and adjusting each layer in the image processing model, so that the effect of image processing based on the target image processing model is ensured.
Fig. 9 is a flowchart of another image processing method according to an embodiment of the disclosure, based on the above embodiments. As shown in fig. 9, the method includes the steps of:
S901, acquiring an original first image, and inputting the first image into a target image processing model.
In some implementations, the first image is an image to be processed, and the first image may be taken in real time or may be obtained directly on a gallery, database, or network.
It will be appreciated that the first image includes an area of skin of the person.
The first image is input into a target image processing model for image processing. In some implementations, the target image processing model is a pre-trained model, and at least the functions of target extraction, image modification, image adjustment, and the like can be achieved.
In the embodiments of the present disclosure, the training method of the image processing model may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described in detail.
S902, extracting a skin area of a person in the first image by the target image processing model, performing redness whitening and fuchsin adjustment on the skin area in the first image to obtain a target whitening image, and performing contrast and saturation adjustment on the skin area in the target whitening image to obtain a second image.
In the embodiment of the present disclosure, the method for extracting the skin area of the person in the first image, performing the reddish whitening and fuchsin adjustment on the skin area in the first image, and performing the contrast and saturation adjustment on the skin area in the target whitened image may be implemented by any one of the modes in each embodiment of the present disclosure, which is not limited herein, and is not repeated herein.
In the embodiment of the disclosure, after the original first image is acquired, the first image is input into the trained image processing model, and the second image is output, so that the efficiency of acquiring the second image and the effect of processing the skin area in the second image are ensured, the experience of image processing is improved, the real skin state of the person in the image is ensured, and the authenticity of the processed skin area is stronger.
Fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 10, the image processing apparatus 1000 includes:
a first obtaining module 1001, configured to obtain a skin area of a person in an original first image;
a second obtaining module 1002, configured to perform redness whitening and fuchsin adjustment on a skin area in the first image, to obtain a target whitening image;
and a third obtaining module 1003, configured to perform contrast and saturation adjustment on a skin area in the target whitening image, so as to obtain a second image.
In some implementations, the second acquisition module 1002 includes:
performing reddening whitening on a skin area in the first image based on the color lookup table to obtain a primary whitening image;
and performing fuchsin adjustment on the skin area in the primary whitening image based on the color mode conversion to obtain a target whitening image.
In some implementations, the second acquisition module 1002 includes:
determining a first RGB value of a pixel in a skin area in a first image, and determining a second RGB value of redness in a color lookup table based on the first RGB value;
and replacing the first RGB value of the pixel in the skin area with the second RGB value to obtain the primary whitening image.
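The lookup-and-replace step above can be sketched as follows, assuming the color lookup table is materialized as a full N×N×N×3 array indexed by quantized (R, G, B); practical systems typically use a much smaller 3D LUT with trilinear interpolation, and all names here are illustrative:

```python
import numpy as np

def apply_lut(skin_rgb, lut):
    """Replace each first RGB value with the reddish second RGB value stored in
    the color lookup table. `lut` is assumed to be an N x N x N x 3 array whose
    entry lut[r, g, b] is the second RGB value for the first RGB value (r, g, b)."""
    p = np.asarray(skin_rgb)
    # fancy indexing looks up every pixel's replacement value at once
    return lut[p[..., 0], p[..., 1], p[..., 2]]
```

Because the replacement is a pure table lookup, it runs in one vectorized pass over the skin pixels, which matches the efficiency motivation stated for the LUT-based reddening.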
In some implementations, the second acquisition module 1002 includes:
converting pixels in a skin area in the primary whitening image from an RGB color mode to a printing four-color separation mode CMYK color mode to obtain a candidate whitening image;
and performing inverse conversion from the CMYK color mode to the RGB color mode on pixels in the skin area in the candidate whitening image to obtain a target whitening image.
In some implementations, the second acquisition module 1002 includes:
acquiring a first red R value, a first green G value and a first blue B value of a pixel in a skin area in a primary whitening image, and determining the maximum value from the first R value, the first G value and the first B value as a black K value of the pixel;
determining a cyan C value for the pixel based on the first R value and the K value for the pixel;
determining an initial magenta M value of the pixel based on the first G value and the K value of the pixel, and expanding the initial M value to obtain the M value of the pixel;
Determining a yellow Y value for the pixel based on the first B value and the K value for the pixel;
and obtaining candidate whitening images based on the K value, the C value, the M value and the Y value of the pixels.
In some implementations, the second acquisition module 1002 includes:
determining a second R value of the pixel based on the C value and the K value of the pixel in the skin region in the candidate whitened image;
determining a second G value of the pixel based on the M value and the K value of the pixel in the candidate whitening image;
determining a second B value of the pixel based on the Y value and the K value of the pixel in the candidate whitening image;
and obtaining a target whitening image based on the second R value, the second G value and the second B value of the pixel.
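A minimal sketch of this CMYK round trip with magenta expansion is given below. The disclosure states only that K is the channel maximum, that C, M and Y derive from the first R, G and B values together with K, and that the initial M value is expanded; the subtractive forms (C = K − R, etc.), the gain value, and the function name are assumptions made here for illustration:

```python
import numpy as np

def redden_via_cmyk(rgb, m_gain=1.15):
    """Round trip RGB -> CMYK -> RGB with the initial magenta value expanded.
    rgb holds float values in [0, 1]; m_gain is an illustrative expansion factor."""
    arr = np.asarray(rgb, dtype=np.float64)
    r, g, b = np.moveaxis(arr, -1, 0)
    k = np.maximum(np.maximum(r, g), b)        # black K value: the channel maximum
    c = k - r                                  # cyan C from the first R value and K
    m = np.clip((k - g) * m_gain, 0.0, 1.0)    # initial magenta expanded into the M value
    y = k - b                                  # yellow Y from the first B value and K
    # inverse conversion: a larger M lowers G, shifting the skin tone toward red
    return np.stack([k - c, np.clip(k - m, 0.0, 1.0), k - y], axis=-1)
```

With m_gain = 1 the round trip is the identity; a gain above 1 enlarges M, so the inverse conversion lowers the G channel and shifts the skin tone toward red, which is the reddening effect the magenta adjustment aims for.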
In some implementations, the third acquisition module 1003 includes:
determining a target contrast, and performing contrast adjustment on pixels in a skin area in a target whitening image based on the target contrast to obtain a whitening contrast image;
and determining target saturation and a preset weight vector, and adjusting the saturation of pixels in the skin area in the whitening contrast image based on the weight vector and the target saturation to obtain a second image.
In some implementations, the third acquisition module 1003 includes:
determining a third RGB value of pixels in a skin area in the target contrast image, and acquiring a difference value between the third RGB value and a set value as a fourth RGB value;
Determining a fifth RGB value based on the fourth RGB value and the target contrast, and determining a sum of the fifth RGB value and a set value as a sixth RGB value;
and replacing the pixels in the skin area with the sixth RGB value from the third RGB value to obtain the whitening contrast image.
In some implementations, the third acquisition module 1003 includes:
performing dot product on the sixth RGB value of the pixels in the skin region of the skin-whitening contrast image and the weight vector to obtain the image brightness value of the pixels in the skin region;
obtaining the gray scale value of the pixel in the skin area according to the brightness value of the pixel in the skin area;
and fusing the sixth RGB value of the pixel and the gray scale value of the image based on the target saturation to obtain a second image.
In some implementations, the third acquisition module 1003 includes:
determining a fusion duty ratio of a sixth RGB value of pixels in the skin region and an image gray scale value during fusion based on the target saturation;
and fusing the sixth RGB value of the pixel in the skin area with the gray scale value of the image according to the fusion duty ratio to obtain a second image.
In some implementations, the first acquisition module 1001 includes:
and performing skin detection on the first image, and extracting the skin region of the person in the first image.
In some implementations, the first acquisition module 1001 includes:
performing human body detection on the first image to obtain a human body detection frame;
detecting the body part in the human body detection frame to obtain a body part detection area;
skin detection is performed in the body part detection area, and a skin area of the person is extracted.
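The disclosure does not specify the detection models used in the coarse-to-fine pipeline above. As an illustrative stand-in for the final skin-detection step, a classic Cr/Cb range test can flag skin-colored pixels inside the body part detection area:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask for an H x W x 3 uint8 RGB array using the classic
    Cr/Cb threshold heuristic (an illustrative example, not the patented
    detector, which relies on trained detection stages)."""
    f = rgb.astype(np.float32)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    # ITU-R BT.601 full-range RGB -> Cr/Cb, offset by 128
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
```

In practice such a mask would be computed only within the body part detection area, then used to restrict the whitening, contrast, and saturation adjustments to skin pixels.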
In the embodiments of the disclosure, reddish whitening and magenta adjustment are performed on the skin area of the person in the first image to obtain a target whitening image that preserves the ruddy state of the skin, so the original skin color is retained and the authenticity of the skin area in the first image is ensured. To further improve the presentation, the contrast and saturation of the target whitening image are adjusted, preserving the proportion of highlight and shadow on the skin; a good whitening effect is thus achieved while the original skin state is kept, making the processed second image look more natural.
Fig. 11 is a schematic structural diagram of a training device for an image processing model according to an embodiment of the present disclosure. As shown in fig. 11, the training apparatus 1100 for an image processing model includes:
an acquisition module 1101, configured to acquire a sample image and a whitening reference image of the sample image;
The processing module 1102 is configured to input a sample image into the image processing model, extract a skin area of a person in the sample image, perform reddish whitening and magenta adjustment on the skin area in the sample image to obtain a sample whitening image, and perform contrast and saturation adjustment on the skin area in the sample whitening image to obtain a target whitening image;
an adjustment module 1103, configured to adjust the image processing model based on a difference between the whitening reference image and the target whitening image;
the training module 1104 is configured to continue training the adjusted image processing model using the next sample image until the training is completed, so as to obtain the target image processing model.
In some implementations, the adjustment module 1103 includes:
and determining a loss function of the image processing model based on the difference between the whitening reference image and the target whitening image, and adjusting the image processing model based on the loss function.
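The disclosure only states that the loss function is determined from the difference between the whitening reference image and the target whitening image; a mean-absolute-error (L1) pixel loss is one plausible minimal sketch of that difference:

```python
import numpy as np

def whitening_loss(target_whitened, reference):
    """Mean absolute per-pixel difference between the model output and the
    whitening reference image (an assumed L1 loss; other per-pixel losses
    such as L2 would fit the disclosure's description equally well)."""
    t = np.asarray(target_whitened, dtype=np.float32)
    r = np.asarray(reference, dtype=np.float32)
    return float(np.mean(np.abs(t - r)))
```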
In some implementations, the processing module 1102 includes:
performing reddish whitening on a skin area in the sample image by the first adjustment layer based on a color lookup table to obtain a primary sample whitening image;
and performing magenta adjustment on the skin area in the primary sample whitening image by the second adjustment layer based on color mode conversion to obtain the sample whitening image.
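As a minimal sketch of the first adjustment layer, a per-channel lookup table can brighten skin while lifting the red channel slightly more, keeping the reddish tint. The curve shape and the extra red lift below are hypothetical; a production system would typically use a calibrated 3-D color LUT rather than independent 1-D curves.

```python
import numpy as np

def build_whitening_lut(lift=30.0, red_extra=5.0):
    """256-entry per-channel LUT of shape (3, 256): brighten more in the
    shadows, with a small extra lift on red so the whitening stays reddish."""
    x = np.arange(256, dtype=np.float32)
    base = np.clip(x + lift * (1.0 - x / 255.0), 0, 255)
    red = np.clip(base + red_extra, 0, 255)
    return np.stack([red, base, base]).astype(np.uint8)

def apply_lut(rgb, lut):
    """Replace each pixel's first RGB value with the LUT's second RGB value
    by integer array indexing per channel."""
    out = np.empty_like(rgb)
    for ch in range(3):
        out[..., ch] = lut[ch][rgb[..., ch]]
    return out
```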
In some implementations, the processing module 1102 includes:
the third adjustment layer adjusts the contrast of pixels in a skin area in the sample whitening image based on the target contrast to obtain a sample whitening contrast image;
and the fourth adjustment layer adjusts the saturation of pixels in the skin area in the sample whitening contrast image based on the target saturation and a preset weight vector to obtain a target whitening image.
In some implementations, the processing module 1102 includes:
and performing skin detection on the sample image by the skin extraction layer, and extracting the skin region of the person in the sample image.
In some implementations, the processing module 1102 includes:
human body detection is carried out on the sample image, and a sample human body detection frame is obtained;
detecting the body part in the sample human body detection frame to obtain a sample detection area of the body part;
and performing skin detection in the sample detection area, and extracting the skin area of the person in the sample image.
In the embodiments of the disclosure, the image processing model is trained until its loss function is minimized, yielding the target image processing model with the best processing effect. Images can then be whitened with this model, achieving better reddish whitening, magenta adjustment, contrast adjustment, and saturation adjustment, so that the images output by the target image processing model are better processed while the real state of the skin is preserved.
Fig. 12 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 12, the image processing apparatus 1200 includes:
an image acquisition module 1201, configured to acquire an original first image, and input the first image into a target image processing model;
the image processing module 1202 is configured to extract a skin area of a person in the first image by using the target image processing model, perform reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image, and perform contrast and saturation adjustment on the skin area in the target whitening image to obtain a second image;
wherein the target image processing model is a model obtained by training with the training device for an image processing model described above.
In the embodiments of the disclosure, after the original first image is acquired, it is input into the trained image processing model and the second image is output. This ensures both the efficiency of obtaining the second image and the quality of the processed skin area, improves the image processing experience, and preserves the real skin state of the person in the image, making the processed skin area more authentic.
In the technical solution of the disclosure, the acquisition, storage, and application of the personal information involved comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 13 illustrates a schematic block diagram of an example electronic device 1300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 13, the apparatus 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data required for the operation of the device 1300 can also be stored. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304. An input/output (I/O) interface 1305 is also connected to bus 1304.
Various components in device 1300 are connected to I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, etc.; and a communication unit 1309 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1309 allows the device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1301 performs the respective methods and processes described above, for example, an image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into the RAM 1303 and executed by the computing unit 1301, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1301 may be configured to perform the image processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (41)

1. An image processing method, wherein the method comprises:
acquiring a skin area of a person in an original first image;
performing reddish whitening and magenta adjustment on a skin area in the first image to obtain a target whitening image;
and adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
2. The method of claim 1, wherein the performing the reddish whitening and magenta adjustment on the skin region in the first image to obtain the target whitened image comprises:
performing reddish whitening on a skin area in the first image based on a color lookup table to obtain a primary whitening image;
and performing magenta adjustment on the skin area in the primary whitening image based on color mode conversion to obtain the target whitening image.
3. The method of claim 2, wherein the performing the reddish whitening of the skin region in the first image based on the color lookup table to obtain a primary whitened image comprises:
determining a first RGB value of a pixel in a skin area in the first image, and determining a second RGB value of redness in a color lookup table based on the first RGB value;
and replacing the first RGB value of the pixel in the skin area with the second RGB value to obtain the primary whitening image.
4. The method of claim 2, wherein the performing magenta adjustment of the skin area in the primary whitened image based on color mode conversion to obtain the target whitened image comprises:
converting pixels in a skin area in the primary whitening image from an RGB color mode to a CMYK (four-color printing separation) color mode to obtain a candidate whitening image;
and inversely converting pixels in a skin area in the candidate whitening image from the CMYK color mode to the RGB color mode to obtain the target whitening image.
5. The method according to claim 4, wherein said converting pixels in the skin area of the primary whitened image from RGB color mode to CMYK color mode to obtain a candidate whitened image comprises:
acquiring a first red R value, a first green G value and a first blue B value of a pixel in a skin area in the primary whitening image, and determining a black K value of the pixel based on the maximum value of the first R value, the first G value and the first B value;
determining a cyan C value for the pixel based on the first R value and the K value for the pixel;
determining an initial magenta M value of the pixel based on the first G value and the K value of the pixel, and expanding the initial M value to obtain the M value of the pixel;
determining a yellow Y value for the pixel based on the first B value and the K value for the pixel;
and obtaining the candidate whitening image based on the K value, the C value, the M value and the Y value of the pixel.
6. The method according to claim 4, wherein said inversely converting said candidate whitened image from said CMYK color mode to said RGB color mode to obtain said target whitened image comprises:
determining a second R value of a pixel in the candidate whitened image based on a C value and a K value of the pixel in the skin area;
Determining a second G value of a pixel in the candidate whitening image based on the M value and the K value of the pixel;
determining a second B value of the pixel based on the Y value and the K value of the pixel in the candidate whitening image;
and obtaining the target whitening image based on the second R value, the second G value and the second B value of the pixel.
7. The method of any of claims 1-6, wherein said adjusting the contrast and saturation of the skin area in the target whitened image results in a second image, comprising:
determining a target contrast, and performing contrast adjustment on pixels in a skin area in the target whitening image based on the target contrast to obtain a whitening contrast image;
and determining target saturation and a preset weight vector, and adjusting the saturation of pixels in a skin area in the whitening contrast image based on the weight vector and the target saturation to obtain the second image.
8. The method of claim 7, wherein the performing contrast adjustment on pixels in a skin region in the target whitened image based on the target contrast to obtain a whitened contrast image comprises:
determining a third RGB value of pixels in a skin area in the target whitening image, and acquiring a difference value between the third RGB value and a set value as a fourth RGB value;
determining a fifth RGB value based on the fourth RGB value and the target contrast, and determining a sum of the fifth RGB value and the set value as a sixth RGB value;
and replacing the third RGB value of the pixels in the skin area with the sixth RGB value to obtain the whitening contrast image.
9. The method of claim 7, wherein the saturation adjusting pixels in the skin region of the whitening contrast image based on the weight vector and the target saturation to obtain the second image comprises:
performing a dot product on the sixth RGB value of the pixels in the skin region of the whitening contrast image and the weight vector to obtain the image brightness value of the pixels in the skin region;
obtaining an image gray scale value of the pixel in the skin area according to the image brightness value of the pixel in the skin area;
and fusing the sixth RGB value of the pixel and the gray scale value of the image based on the target saturation to obtain the second image.
10. The method of claim 9, wherein the fusing the sixth RGB values of the pixels and the image gray scale values based on the target saturation to obtain the second image comprises:
determining a fusion ratio of the sixth RGB value of pixels in the skin area and the image gray scale value during fusion based on the target saturation;
and according to the fusion ratio, fusing the sixth RGB value of the pixels in the skin area with the image gray scale value to obtain the second image.
11. The method of any of claims 1-6, wherein the acquiring the skin region of the person in the original first image comprises:
and performing skin detection on the first image, and extracting the skin area of the person in the first image.
12. The method of claim 11, wherein the performing skin detection on the first image to extract the skin region of the person in the first image comprises:
performing human body detection on the first image to obtain a human body detection frame;
detecting the body part in the human body detection frame to obtain a body part detection area;
and performing skin detection in the body part detection area, and extracting the skin area of the person.
13. A method of training an image processing model, wherein the method comprises:
acquiring a sample image and a whitening reference image of the sample image;
inputting the sample image into an image processing model, extracting a skin area of a person in the sample image, performing reddish whitening and magenta adjustment on the skin area in the sample image to obtain a sample whitening image, and performing contrast and saturation adjustment on the skin area in the sample whitening image to obtain a target whitening image;
adjusting the image processing model based on the difference between the whitening reference image and the target whitening image;
and continuing to train the adjusted image processing model by using the next sample image until the training is finished, and obtaining the target image processing model.
14. The method of claim 13, wherein the adjusting the image processing model based on the difference between the whitened reference image and the target whitened image comprises:
and determining a loss function of the image processing model based on the difference between the whitening reference image and the target whitening image, and adjusting the image processing model based on the loss function.
15. The method of claim 13, wherein the image processing model includes a first adjustment layer and a second adjustment layer, and wherein the performing the reddish whitening and magenta adjustment on the skin region in the sample image to obtain a sample whitened image comprises:
performing reddish whitening on a skin area in the sample image by the first adjustment layer based on a color lookup table to obtain a primary sample whitening image;
and performing magenta adjustment on the skin area in the primary sample whitening image by the second adjustment layer based on color mode conversion to obtain the sample whitening image.
16. The method of any of claims 13-15, wherein the image processing model includes a third adjustment layer and a fourth adjustment layer, the performing contrast and saturation adjustment on the skin region in the sample whitened image to obtain a target whitened image, comprising:
the third adjustment layer performs contrast adjustment on pixels in a skin area in the sample whitening image based on a target contrast to obtain a sample whitening contrast image;
and the fourth adjustment layer adjusts the saturation of pixels in the skin area in the sample whitening contrast image based on the target saturation and a preset weight vector to obtain the target whitening image.
17. The method of any of claims 13-15, wherein the image processing model includes a skin extraction layer, the acquiring a skin region of a person in the sample image comprising:
And performing skin detection on the sample image by the skin extraction layer, and extracting the skin region of the person in the sample image.
18. The method of claim 17, wherein the skin detection of the sample image by the skin extraction layer extracts a skin region of a person in the sample image, comprising:
performing human body detection on the sample image to obtain a sample human body detection frame;
detecting the body part in the sample human body detection frame to obtain a sample detection area of the body part;
and performing skin detection in the sample detection area, and extracting the skin area of the person in the sample image.
19. An image processing method, wherein the method comprises:
acquiring an original first image, and inputting the first image into a target image processing model;
performing reddish whitening and magenta adjustment on the skin area of the person in the first image by the target image processing model to obtain a target whitening image, and performing contrast and saturation adjustment on the skin area in the target whitening image to obtain a second image;
wherein the target image processing model is a model trained by the training method according to any one of claims 13 to 18.
20. An image processing apparatus comprising:
the first acquisition module is used for acquiring the skin area of the person in the original first image;
the second acquisition module is used for performing reddish whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image;
and the third acquisition module is used for adjusting the contrast and the saturation of the skin area in the target whitening image to obtain a second image.
21. The apparatus of claim 20, wherein the second acquisition module comprises:
performing reddish whitening on a skin area in the first image based on a color lookup table to obtain a primary whitening image;
and performing magenta adjustment on the skin area in the primary whitening image based on color mode conversion to obtain the target whitening image.
22. The apparatus of claim 21, wherein the second acquisition module comprises:
determining a first RGB value of a pixel in a skin area in the first image, and determining a second RGB value of redness in a color lookup table based on the first RGB value;
and replacing the first RGB value of the pixel in the skin area with the second RGB value to obtain the primary whitening image.
23. The apparatus of claim 21, wherein the second acquisition module comprises:
converting pixels in a skin area in the primary whitening image from an RGB color mode to a CMYK (four-color printing separation) color mode to obtain a candidate whitening image;
and inversely converting pixels in a skin area in the candidate whitening image from the CMYK color mode to the RGB color mode to obtain the target whitening image.
24. The apparatus of claim 23, wherein the second acquisition module comprises:
acquiring a first red R value, a first green G value and a first blue B value of a pixel in a skin area in the primary whitening image, and determining a black K value of the pixel based on the maximum value of the first R value, the first G value and the first B value;
determining a cyan C value for the pixel based on the first R value and the K value for the pixel;
determining an initial magenta M value of the pixel based on the first G value and the K value of the pixel, and expanding the initial M value to obtain the M value of the pixel;
determining a yellow Y value for the pixel based on the first B value and the K value for the pixel;
and obtaining the candidate whitening image based on the K value, the C value, the M value and the Y value of the pixel.
25. The apparatus of claim 23, wherein the second acquisition module comprises:
determining a second R value of a pixel in the candidate whitened image based on a C value and a K value of the pixel in the skin area;
determining a second G value of a pixel in the candidate whitening image based on the M value and the K value of the pixel;
determining a second B value of the pixel based on the Y value and the K value of the pixel in the candidate whitening image;
and obtaining the target whitening image based on the second R value, the second G value and the second B value of the pixel.
26. The apparatus of any of claims 20-25, wherein the third acquisition module comprises:
determining a target contrast, and performing contrast adjustment on pixels in a skin area in the target whitening image based on the target contrast to obtain a whitening contrast image;
and determining target saturation and a preset weight vector, and adjusting the saturation of pixels in a skin area in the whitening contrast image based on the weight vector and the target saturation to obtain the second image.
27. The apparatus of claim 26, wherein the third acquisition module comprises:
Determining a third RGB value of pixels in a skin area in the target whitening image, and acquiring a difference value between the third RGB value and a set value as a fourth RGB value;
determining a fifth RGB value based on the fourth RGB value and the target contrast, and determining a sum of the fifth RGB value and the set value as a sixth RGB value;
and replacing the third RGB value of the pixels in the skin area with the sixth RGB value to obtain the whitening contrast image.
28. The apparatus of claim 26, wherein the third acquisition module comprises:
performing a dot product on the sixth RGB value of the pixels in the skin region of the whitening contrast image and the weight vector to obtain the image brightness value of the pixels in the skin region;
obtaining an image gray scale value of the pixel in the skin area according to the image brightness value of the pixel in the skin area;
and fusing the sixth RGB value of the pixel and the gray scale value of the image based on the target saturation to obtain the second image.
29. The apparatus of claim 28, wherein the third acquisition module comprises:
determining a fusion ratio of the sixth RGB value of pixels in the skin area and the image gray scale value during fusion based on the target saturation;
and according to the fusion ratio, fusing the sixth RGB value of the pixels in the skin area with the image gray scale value to obtain the second image.
30. The apparatus of any of claims 20-25, wherein the first acquisition module comprises:
and performing skin detection on the first image, and extracting the skin area of the person in the first image.
31. The apparatus of claim 30, wherein the first acquisition module comprises:
performing human body detection on the first image to obtain a human body detection frame;
detecting the body part in the human body detection frame to obtain a body part detection area;
and performing skin detection in the body part detection area, and extracting the skin area of the person.
32. A training apparatus for an image processing model, comprising:
the acquisition module is used for acquiring a sample image and a whitening reference image of the sample image;
the processing module is used for inputting the sample image into an image processing model, extracting a skin area of a person in the sample image, performing reddish whitening and magenta adjustment on the skin area in the sample image to obtain a sample whitening image, and performing contrast and saturation adjustment on the skin area in the sample whitening image to obtain a target whitening image;
The adjusting module is used for adjusting the image processing model based on the difference between the whitening reference image and the target whitening image;
and the training module is used for continuing to train the adjusted image processing model by using the next sample image until the training is finished, so as to obtain the target image processing model.
33. The apparatus of claim 32, wherein the adjustment module comprises:
and determining a loss function of the image processing model based on the difference between the whitening reference image and the target whitening image, and adjusting the image processing model based on the loss function.
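The adjustment in claims 32–33 (a loss from the difference between the whitening reference image and the target whitening image, then a model update) can be sketched as one descent step. The loss form and optimiser are not given in the claims, so a mean-absolute-error loss, a single scalar parameter, and a numerical gradient are all assumptions for illustration.

```python
import numpy as np

def l1_loss(reference: np.ndarray, output: np.ndarray) -> float:
    """Mean absolute pixel difference between reference and model output."""
    return float(np.mean(np.abs(reference - output)))

def adjust_parameter(theta, sample, reference, model, lr=0.1, eps=1e-4):
    """One gradient-descent step on a scalar model parameter.

    `model(sample, theta)` stands in for the whole whitening pipeline; the
    central-difference gradient is used purely to keep the sketch
    framework-free.
    """
    grad = (l1_loss(reference, model(sample, theta + eps))
            - l1_loss(reference, model(sample, theta - eps))) / (2 * eps)
    return theta - lr * grad
```

Training then loops this step over successive sample images, as the training module in claim 32 describes.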
34. The apparatus of claim 32, wherein the processing module comprises:
the first adjustment layer performs rosy whitening on the skin area in the sample image based on a color lookup table to obtain a primary sample whitening image;
and the second adjustment layer performs magenta adjustment on the skin area in the primary sample whitening image based on color mode conversion to obtain the sample whitening image.
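Claim 34's two layers can be sketched as follows. Both details are assumptions: the patent's lookup table may be 3D (a shared 256-entry per-channel curve is used here for brevity), and the "color mode conversion" for the magenta adjustment is not specified, so an RGB-to-CMY round trip with a scaled M channel is one plausible reading.

```python
import numpy as np

def apply_lut(rgb_u8: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map every channel value through a 256-entry lookup curve.

    lut: uint8 array of shape (256,), e.g. a brightening curve.
    """
    return lut[rgb_u8]

def adjust_magenta(rgb_u8: np.ndarray, amount: float) -> np.ndarray:
    """Strengthen magenta by converting to CMY and scaling the M channel.

    amount > 0 boosts magenta (lowers green after conversion back).
    """
    cmy = 255.0 - rgb_u8.astype(np.float64)                      # RGB -> CMY
    cmy[..., 1] = np.clip(cmy[..., 1] * (1.0 + amount), 0, 255)  # M channel
    return (255.0 - cmy).astype(np.uint8)                        # CMY -> RGB
```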
35. The apparatus of any of claims 32-34, wherein the processing module comprises:
the third adjustment layer performs contrast adjustment on pixels in the skin area in the sample whitening image based on a target contrast to obtain a sample whitening contrast image;
and the fourth adjustment layer adjusts the saturation of pixels in the skin area in the sample whitening contrast image based on a target saturation and a preset weight vector to obtain the target whitening image.
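The third layer's contrast step can be sketched as scaling each skin pixel's deviation from the skin-region mean. The mean-anchored formula is an assumption; the claims name only a "target contrast" without defining the operator.

```python
import numpy as np

def adjust_contrast(skin_rgb: np.ndarray, target_contrast: float) -> np.ndarray:
    """Scale each pixel's deviation from the region mean.

    skin_rgb: (..., 3) float array in [0, 1].
    target_contrast: 1.0 leaves the region unchanged, > 1.0 increases
    contrast, < 1.0 flattens it.
    """
    # Per-channel mean over all spatial axes (keepdims for broadcasting).
    mean = skin_rgb.mean(axis=tuple(range(skin_rgb.ndim - 1)), keepdims=True)
    return np.clip(mean + target_contrast * (skin_rgb - mean), 0.0, 1.0)
```

The fourth layer's saturation adjustment is the same weight-vector fusion sketched earlier for claims 28–29.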
36. The apparatus of any of claims 32-34, wherein the processing module comprises:
and performing skin detection on the sample image by the skin extraction layer, and extracting the skin region of the person in the sample image.
37. The apparatus of claim 36, wherein the processing module comprises:
performing human body detection on the sample image to obtain a sample human body detection frame;
detecting the body part in the sample human body detection frame to obtain a sample detection area of the body part;
and performing skin detection in the sample detection area, and extracting the skin area of the person in the sample image.
38. An image processing apparatus comprising:
the image acquisition module is used for acquiring an original first image and inputting the first image into the target image processing model;
the image processing module is used for extracting the skin area of the person in the first image with the target image processing model, performing rosy whitening and magenta adjustment on the skin area in the first image to obtain a target whitening image, and performing contrast and saturation adjustment on the skin area in the target whitening image to obtain a second image;
Wherein the target image processing model is a model trained by the training method according to any one of claims 13 to 18.
39. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-12 or 13-18 or 19.
40. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-12 or 13-18 or 19.
41. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1-12 or 13-18 or 19.
CN202311338029.6A 2023-10-16 2023-10-16 Image processing method, device, electronic equipment and storage medium Pending CN117372615A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311338029.6A CN117372615A (en) 2023-10-16 2023-10-16 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117372615A true CN117372615A (en) 2024-01-09

Family

ID=89388648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311338029.6A Pending CN117372615A (en) 2023-10-16 2023-10-16 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117372615A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006121416A (en) * 2004-10-21 2006-05-11 Fuji Photo Film Co Ltd Method and apparatus of processing image, program and printer
CN105608677A (en) * 2015-12-28 2016-05-25 成都品果科技有限公司 Image skin color beautifying method and system under any lighting conditions
CN108171648A (en) * 2017-11-27 2018-06-15 北京美摄网络科技有限公司 A kind of method and apparatus of U.S.'s face skin color transition
CN110910309A (en) * 2019-12-05 2020-03-24 广州酷狗计算机科技有限公司 Image processing method, image processing apparatus, electronic device, storage medium, and program product
CN111784611A (en) * 2020-07-03 2020-10-16 厦门美图之家科技有限公司 Portrait whitening method, portrait whitening device, electronic equipment and readable storage medium
CN112634155A (en) * 2020-12-22 2021-04-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113610723A (en) * 2021-08-03 2021-11-05 展讯通信(上海)有限公司 Image processing method and related device
WO2022161009A1 (en) * 2021-01-27 2022-08-04 展讯通信(上海)有限公司 Image processing method and apparatus, and storage medium and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
罗文坚: "PhotoShop通道技术的探讨", 《 电子技术与软件工程》, 8 June 2018 (2018-06-08) *

Similar Documents

Publication Publication Date Title
WO2018161775A1 (en) Neural network model training method, device and storage medium for image processing
CN107507144B (en) Skin color enhancement processing method and device and image processing device
CA3153067C (en) Picture-detecting method and apparatus
KR20170109898A (en) Apparatus and method for extracting object
Cepeda-Negrete et al. Dark image enhancement using perceptual color transfer
CN113436284A (en) Image processing method and device, computer equipment and storage medium
JP2012174273A (en) Image processing apparatus and image processing method
CN109064431B (en) Picture brightness adjusting method, equipment and storage medium thereof
CN113052783A (en) Face image fusion method based on face key points
Zhang et al. A novel color space based on RGB color barycenter
CN109255756B (en) Low-illumination image enhancement method and device
KR102272975B1 (en) Method for simulating the realistic rendering of a makeup product
Kumar et al. Colorization of gray image in Lαβ color space using texture mapping and luminance mapping
JP5824423B2 (en) Illumination light color estimation device, illumination light color estimation method, and illumination light color estimation program
CN117372615A (en) Image processing method, device, electronic equipment and storage medium
Menotti et al. A fast hue-preserving histogram equalization method for color image enhancement using a Bayesian framework
Menotti et al. Fast hue-preserving histogram equalization methods for color image contrast enhancement
KR102334030B1 (en) Method for dyeing hair by using computer device
KR20160001897A (en) Image Processing Method and Apparatus for Integrated Multi-scale Retinex Based on CIELAB Color Space for Preserving Color
KR102215607B1 (en) Electronic device capable of correction to improve the brightness of dark images and operating method thereof
US11615609B2 (en) Learning apparatus, inferring apparatus, learning method, program, and inferring method
CN113724356A (en) Color contrast adjusting method, device and equipment
CN111062862A (en) Color-based data enhancement method and system, computer device and storage medium
JP2003304554A (en) Color signal processing device capable of storing color gamut efficiently and method using the same
Zhou et al. Saliency preserving decolorization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination