CN111556303A - Face image processing method and device, electronic equipment and computer readable medium

Info

Publication number
CN111556303A
Authority
CN
China
Prior art keywords
lip
saturation
face image
adjusted
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010407588.8A
Other languages
Chinese (zh)
Other versions
CN111556303B (en)
Inventor
袁知洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010407588.8A
Publication of CN111556303A
Application granted
Publication of CN111556303B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/64: Circuits for processing colour signals
    • H04N 9/646: Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • H04N 9/74: Circuits for processing colour signals for obtaining special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the disclosure provide a face image processing method and apparatus, an electronic device, and a computer-readable medium. The method includes the following steps: acquiring a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction; adjusting, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region of the face image to be processed to obtain an adjusted saturation; and fusing the original saturation of each pixel point with the adjusted saturation to obtain an adjusted face image. In the embodiments of the disclosure, after the two rounds of saturation adjustment, the lip makeup effect in the resulting face image combines the original lip makeup effect of the face image to be processed with the preliminarily adjusted lip makeup effect, so the lip makeup effect in the adjusted face image is more natural; the whole adjustment process requires no manual adjustment by the user, which improves the user experience.

Description

Face image processing method and device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing a face image, an electronic device, and a computer-readable medium.
Background
In the prior art, users often apply careful makeup, take pictures of themselves with an image capture device, and then post the pictures to a social platform to increase their popularity.
However, even when the user wears makeup, the lip makeup in a picture is often "eaten" under the influence of the shooting environment, the shooting device, and so on; that is, the makeup effect on the lips is weakened, which is unacceptable to the user. When the lip makeup has been "eaten", the user can manually adjust the lip makeup in the captured picture with picture beautification tools, but most users are not professionals: they can neither accurately select the lip makeup region in the picture nor accurately adjust it, so the adjusted picture fails to meet the user's needs and the user experience is reduced.
Disclosure of Invention
The purpose of this disclosure is to solve at least one of the above technical drawbacks and to improve the user experience. The technical scheme adopted by the disclosure is as follows:
in a first aspect, the present disclosure provides a method for processing a face image, including:
acquiring a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction;
correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction to obtain the adjusted saturation;
and fusing the original saturation of each pixel point and the adjusted saturation to obtain an adjusted face image.
In a second aspect, the present disclosure provides a face image processing apparatus, comprising:
the instruction acquisition module is used for acquiring a lip makeup adjustment instruction of a user on a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction;
the preliminary adjustment module is used for correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction to obtain the adjusted saturation;
and the fusion module is used for fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image.
In a third aspect, the present disclosure provides an electronic device comprising:
a processor and a memory;
a memory for storing operating instructions;
a processor for executing the method as shown in any embodiment of the first aspect of the present disclosure by calling an operation instruction.
In a fourth aspect, the present disclosure provides a computer readable storage medium having stored thereon computer program instructions for causing a computer to execute to implement a method as shown in any embodiment of the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
the human face image processing method, the human face image processing device, the electronic equipment and the computer readable medium of the embodiment of the disclosure, when a lip makeup adjusting instruction of a user in a face image to be processed is received, the original saturation of each pixel point in the lip region in the face image to be processed is adjusted correspondingly based on the instruction to obtain the adjusted saturation, then, the original saturation of each pixel point and the adjusted saturation are fused to obtain an adjusted face image, so that, after the saturation adjustment for two times, the lip makeup effect in the obtained face image is combined with the original lip makeup effect in the face image to be processed and the preliminarily adjusted lip makeup effect, so that the lip makeup effect in the adjusted face image is more natural, and the whole adjusting process does not need to be adjusted manually by a user, so that the user experience of the user is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings that are required to be used in the description of the embodiments of the present disclosure will be briefly described below.
Fig. 1 is a schematic flow chart of a face image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a lip region template according to an embodiment of the present disclosure;
FIG. 3a is a LUT diagram for saturation enhancement according to an embodiment of the present disclosure;
FIG. 3b is a LUT diagram for saturation reduction according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a face image to be processed provided in an embodiment of the present disclosure;
FIG. 5 is a schematic illustration of the saturation enhancement effect of lip makeup provided in embodiments of the present disclosure;
FIG. 6 is a schematic illustration of the saturation reduction effect of lip makeup provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that terms such as "first" and "second" in the present disclosure are only used to distinguish different devices, modules, or units; they are not used to limit these devices, modules, or units to being different devices, modules, or units, nor to limit the order or interdependence of the functions they perform.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
To solve the above technical problems in the prior art, the disclosed embodiments provide a face image processing method. When a user's lip makeup adjustment instruction for a face image to be processed is received, the original saturation of each pixel point in the lip region of the face image to be processed can be adjusted accordingly based on the instruction to obtain an adjusted saturation, and the original saturation of each pixel point is then fused with the adjusted saturation to obtain an adjusted face image. After the two rounds of saturation adjustment, the lip makeup effect in the resulting face image fuses the original lip makeup effect of the face image to be processed with the preliminarily adjusted lip makeup effect, so the lip makeup in the adjusted image looks more natural. The user needs neither to manually select the lip region nor to make manual adjustments during the whole process, which improves the user experience. Meanwhile, in a face image adjusted by the disclosed scheme, highlight areas are preserved while the saturation of each pixel point in the lip region is enhanced or weakened, which also makes the lip makeup effect of the adjusted face image more natural.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The execution subject of the present disclosure may be any electronic device, such as a server or a user terminal. For example, an application program capable of lip makeup adjustment on face images may provide the user with a function for adjusting lip saturation; before publishing a captured face image, the user can run this method to adjust the saturation of the lips in the image, i.e., beautify the lip effect in the face image, so that the lip makeup looks more natural and the "eaten makeup" situation is avoided.
Fig. 1 shows a schematic flowchart of a face image processing method provided in an embodiment of the present disclosure. As shown in the figure, the description below takes a user terminal as the execution subject. The method may include steps S110 to S130:
Step S110: obtain the user's lip makeup adjustment instruction for the face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction.
The face image to be processed contains the corresponding face parts, such as eyes, a nose, and a mouth; in particular, it must contain the lip region. The face image to be processed can be captured by a terminal device with a shooting function, where the terminal device refers to an electronic product with an image capture function, such as a beauty camera, a smartphone, or a tablet computer. The user can input a camera start instruction through an input device of the terminal device, such as the touch screen or a physical key, set the camera of the terminal device to photographing mode, and acquire the face image to be processed captured by the camera.
The camera may be a built-in camera of the terminal device, such as a front camera and a rear camera, or an external camera of the terminal device, such as a rotary camera, and optionally a front camera.
The lip makeup adjustment instruction indicates that the user wants to adjust the lip makeup effect in the face image: a lip makeup enhancement adjustment instruction indicates that the user wants to enhance the lip makeup effect, and a lip makeup weakening adjustment instruction indicates that the user wants to weaken it. The lip makeup adjustment instruction may be generated based on a lip adjustment operation performed by the user on the terminal interface, where the lip adjustment operation is the operation by which the user chooses to adjust the lip region of the face image to be processed, i.e., the user's lip-adjustment action on the user interface of the terminal device. The specific form of the operation is configured as needed; for example, it may be a trigger action at a specific operation position on the interface of a client application.
In practical applications, the operation may be triggered by an associated trigger element of the client, such as a designated trigger button or an input box on the client interface, or by a voice instruction from the user. For example, for a virtual button displayed on the client's display interface, the user's click on that button is the user's lip adjustment operation.
Step S120: adjust, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region of the face image to be processed to obtain an adjusted saturation.
In this scheme, the lip makeup effect is adjusted by adjusting the original saturation of each pixel point in the lip region of the face image to be processed. If the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction, the lip makeup effect corresponding to the adjusted saturation is the original lip makeup effect (the lip makeup effect of the face image to be processed) after enhancement. If the lip makeup adjustment instruction is a lip makeup weakening adjustment instruction, the lip makeup effect corresponding to the adjusted saturation is the original lip makeup effect after weakening.
In practical application, the original saturation of each pixel point can be adjusted based on a pre-configured adjustment strategy, for example, the original saturation of each pixel point is adjusted to a set saturation.
In the present scheme, the original lip makeup effect may be the lip makeup under a bare-face condition, i.e., lips without any makeup treatment, or lip makeup after a makeup treatment, e.g., lips to which lipstick or lip gloss has been applied.
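The disclosure does not fix a colour space or library for reading per-pixel saturation. As a point of reference only, the sketch below assumes OpenCV's HSV representation (saturation stored as 0-255) and shows how the saturation channel could be extracted and, after adjustment, written back:

```python
import cv2
import numpy as np

def get_saturation(image_bgr: np.ndarray) -> np.ndarray:
    """Return the per-pixel saturation channel of a BGR image as float32 (0-255)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 1].astype(np.float32)

def set_saturation(image_bgr: np.ndarray, saturation: np.ndarray) -> np.ndarray:
    """Write a (possibly adjusted) saturation channel back and convert to BGR."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hsv[:, :, 1] = np.clip(saturation, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```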
Step S130: fuse the original saturation of each pixel point with the adjusted saturation to obtain an adjusted face image.
In the scheme of the disclosure, the original saturation of each pixel point is fused with its adjusted saturation to obtain the fused saturation of that pixel point; in the adjusted face image, the saturation of each pixel point is this fused saturation.
According to the scheme in the embodiment of the disclosure, when a user's lip makeup adjustment instruction for a face image to be processed is received, the original saturation of each pixel point in the lip region of the face image to be processed can be adjusted accordingly based on the instruction to obtain an adjusted saturation, and the original saturation of each pixel point is then fused with the adjusted saturation to obtain the adjusted face image.
In an embodiment of the present disclosure, the method further includes:
acquiring a face image to be processed;
and determining the lip region in the face image to be processed based on the face image to be processed and the pre-configured lip region template.
To reduce the amount of data processing, before the original saturation of each pixel point in the lip region of the face image to be processed is adjusted based on the lip makeup adjustment instruction, the lip region in the face image to be processed is determined first; the adjustment is then performed only on the original saturation of the pixel points within the lip region.
In the scheme of the present disclosure, the manner of determining the lip region in the face image to be processed is not limited; for example, the lip region may be determined based on a preconfigured lip region template.
As an example, with a lip region template such as the one shown in fig. 2, the lip region in the face image to be processed can be determined from the corresponding lip region in the template, as in the sketch below.
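As a hedged illustration only (the patent does not describe the template's format or how it is aligned to the face), a binary white-on-black mask scaled to the image size could give the set of lip pixels:

```python
import cv2
import numpy as np

def lip_mask_from_template(face_bgr: np.ndarray,
                           template_gray: np.ndarray) -> np.ndarray:
    """Derive a boolean lip mask by scaling a preconfigured lip-region
    template (assumed white on black) to the size of the face image.
    A real pipeline would first align the template to detected lip
    landmarks; that alignment step is omitted here."""
    h, w = face_bgr.shape[:2]
    resized = cv2.resize(template_gray, (w, h), interpolation=cv2.INTER_NEAREST)
    return resized > 127
```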
In the embodiment of the present disclosure, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region in the face image to be processed is correspondingly adjusted, so as to obtain the adjusted saturation, including:
determining the original saturation of each pixel point in the lip region in the face image to be processed;
determining an adjusting factor corresponding to each pixel point based on the original saturation of each pixel point and a pre-configured corresponding relationship, wherein the corresponding relationship comprises the corresponding relationship between each saturation and the adjusting factor corresponding to each saturation;
and correspondingly adjusting the original saturation of each pixel point based on the lip makeup adjusting instruction and the adjusting factor corresponding to each pixel point to obtain the adjusted saturation.
The corresponding relationship is configured in advance; for example, a look-up table (LUT) serves as the corresponding relationship. It contains the correspondence between each saturation and the adjustment factor for that saturation. Based on it, the adjustment factor for each pixel point can be determined from the pixel point's original saturation; the adjustment factor represents the strength with which that saturation is to be adjusted. The original saturation of each pixel point is then adjusted based on its adjustment factor, yielding the adjusted saturation of each pixel point.
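The text does not spell out how the adjustment factor is combined with the saturation. The sketch below assumes a 256-entry LUT of multiplicative factors (values above 1 enhance, values below 1 weaken), which is one common LUT design, not necessarily the one used by the patent:

```python
import numpy as np

def adjust_saturation_with_lut(sat: np.ndarray,
                               lut_factors: np.ndarray,
                               lip_mask: np.ndarray) -> np.ndarray:
    """Look up an adjustment factor for every lip pixel from its original
    saturation and apply it multiplicatively; non-lip pixels are unchanged."""
    assert lut_factors.shape == (256,)
    idx = np.clip(sat, 0, 255).astype(np.uint8)   # original saturation -> LUT index
    factor = lut_factors[idx]                      # per-pixel adjustment factor
    adjusted = sat.astype(np.float32).copy()
    adjusted[lip_mask] = np.clip(adjusted[lip_mask] * factor[lip_mask], 0, 255)
    return adjusted
```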
In the scheme of the disclosure, corresponding to the two adjustment modes, enhancing the saturation and weakening it, the correspondence may include a first correspondence and a second correspondence. The first correspondence is used when a lip makeup enhancement adjustment instruction is received: the original saturation of each pixel point is adjusted based on the mapping between each saturation and its adjustment factor in the first correspondence, so that the adjusted saturation corresponds to an enhanced lip effect. The second correspondence is used when a lip makeup weakening adjustment instruction is received: the original saturation of each pixel point is adjusted based on the mapping in the second correspondence, so that the adjusted saturation corresponds to a weakened lip effect. That is, no matter whether the lips in the face image to be processed are dark or light, i.e., whatever the original saturation of each pixel point is, the first correspondence always produces a lip enhancement effect and the second correspondence always produces a lip weakening effect.
As an example, the LUT diagram shown in fig. 3a represents the first correspondence; based on it, the original saturation of each pixel point can be adjusted so that the adjusted saturation corresponds to an enhanced lip effect. Similarly, the LUT diagram shown in fig. 3b represents the second correspondence; based on it, the original saturation of each pixel point can be adjusted so that the adjusted saturation corresponds to a weakened lip effect.
Specifically, fig. 3a shows the first correspondence, used when lip enhancement is required. It contains the correspondence between each saturation and the adjustment factor for that saturation; each cell represents one adjustment factor, and different cells hold factors of different magnitudes. Based on the LUT diagram in fig. 3a, the adjustment factor corresponding to the saturation of each pixel point can be determined, and the saturation of each pixel point is then enhanced based on the determined factor.
Fig. 3b shows the second correspondence, used when lip weakening is required; its structure is the same. Based on the LUT diagram in fig. 3b, the adjustment factor corresponding to the saturation of each pixel point can be determined, and the saturation of each pixel point is then weakened based on the determined factor.
In the scheme of the present disclosure, fusing the original saturation of each pixel point with the adjusted saturation to obtain the adjusted face image includes:
determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point based on the first weight and the second weight to obtain an adjusted face image.
The original saturation of each pixel point and the adjusted saturation contribute differently to the adjusted face image. A first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation can therefore be determined: the first weight reflects the contribution of the original saturation to the adjusted face image, and the second weight reflects the contribution of the adjusted saturation. Fusing the original saturation of each pixel point with the adjusted saturation based on the first weight and the second weight makes the lip makeup effect in the adjusted face image more natural. It is to be understood that the first weight and the second weight sum to 1.
In practical application, the first weight and the second weight may be preconfigured based on how much the original saturation and the adjusted saturation of each pixel point contribute to the adjusted face image, or determined in real time from the original and adjusted saturation of each pixel point. The larger the weight, the larger the contribution; for example, if the first weight is larger than the second weight, the lip makeup effect in the adjusted face image is influenced more by the lip makeup effect corresponding to the original saturation of each pixel point.
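A minimal sketch of the fusion step, assuming the two weights sum to 1 and may be either scalars or per-pixel weight maps:

```python
import numpy as np

def fuse_saturation(orig_sat: np.ndarray,
                    adjusted_sat: np.ndarray,
                    w1, w2) -> np.ndarray:
    """Fuse original and adjusted saturation per pixel. w1 and w2 may be
    scalars or weight maps with the same shape as the saturation arrays."""
    return w1 * orig_sat + w2 * adjusted_sat
```

For example, fuse_saturation(orig, adjusted, 0.8, 0.2) reproduces the weighting in the 0.2-strength example given below.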
In an alternative scheme of the present disclosure, determining a first weight corresponding to the original saturation and a second weight corresponding to the adjusted saturation of each pixel point may include at least one of the following schemes:
the first method, the lip makeup adjustment instruction further includes adjustment intensity indication information, determines a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation, and includes:
determining the weight of the facial image to be processed based on the adjustment intensity indication information, and taking the weight of the facial image to be processed as a first weight;
based on the first weight, a second weight is determined.
The adjustment strength indication information indicates whether the user wants to enhance or weaken the lips in the face image to be processed, i.e., whether saturation enhancement or saturation weakening is to be applied relative to the original saturation. Because the first and second weights are determined based on the user's adjustment intention, the lip makeup effect in the adjusted face image better reflects that intention, and the adjusted lips better match the user's preference. When the first and second weights are determined based on the adjustment strength indication information, the weight corresponding to the original saturation of each pixel point is the first weight, and the weight corresponding to the adjusted saturation is the second weight.
In an alternative of the disclosure, the weight determined from the adjustment strength indication information may be used as the second weight: the larger the value of the adjustment strength indication information, the larger the adjustment strength of the adjusted saturation and the larger the second weight. As an example, if the adjustment strength indication information is 0.2, the first weight corresponding to the original saturation of each pixel point is 0.8 and the second weight corresponding to the adjusted saturation is 0.2.
It can be understood that if the adjustment strength indication information ranges from -1 to 1, then for the adjusted saturation the adjustment strengths corresponding to indication values 0.3 and -0.3 are the same, and the adjustment strength corresponding to 0.6 is greater than that corresponding to 0.3.
In the scheme of the disclosure, if the adjustment strength indication information takes values in -1 to 1, its absolute value (between 0 and 1) can be directly used as the first weight or the second weight; if it is used as the first weight alpha, the second weight is 1 - alpha.
In an alternative of the present disclosure, the adjustment strength indication information may also determine whether the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction. For example, adjustment strength indication information in a first setting range corresponds to a lip makeup enhancement adjustment instruction, and adjustment strength indication information in a second setting range corresponds to a lip makeup weakening adjustment instruction.
As an example, suppose the adjustment strength indication information ranges from -1 to 1, the first setting range is -1 to 0, and the second setting range is 0 to 1, where 0 means the original saturation of each pixel point in the lip region is not adjusted. When the adjustment strength indication information lies within -1 to 0, the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction; when it lies within 0 to 1, it is a lip makeup weakening adjustment instruction.
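A sketch of this mapping, following only the example ranges above; the range boundaries and the use of the absolute value as the second weight are taken from the examples, and the function name is illustrative:

```python
def parse_adjustment_strength(strength: float):
    """Interpret adjustment strength indication information in [-1, 1].

    Per the example ranges: values in -1..0 select lip makeup enhancement,
    values in 0..1 select weakening, and 0 leaves the saturation unchanged.
    The absolute value is taken as the second weight (of the adjusted
    saturation), matching the example where 0.2 gives weights 0.8 and 0.2.
    """
    if strength < 0:
        mode = "enhance"
    elif strength > 0:
        mode = "weaken"
    else:
        mode = "none"
    w2 = abs(strength)   # weight of the adjusted saturation
    w1 = 1.0 - w2        # weight of the original saturation
    return mode, w1, w2
```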
Second method: determining the first weight corresponding to the original saturation of each pixel point and the second weight corresponding to the adjusted saturation includes:
determining a lip reference point in a face image to be processed;
and respectively determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point, and determining a first weight and a second weight based on each distance.
The specific method for determining the lip reference point in the face image to be processed, and exactly which key points need to be detected, can be configured in advance according to actual needs; the embodiments of the present disclosure are not specifically limited in this respect. For example, the lip reference point may be detected directly in the lip region of the face image to be processed, or key points of other face parts may be detected and the lip reference point calculated from them.
After the lip reference point is determined, the distance between the lip reference point and each pixel point in the lip region can be calculated, and the first and second weights determined from these distances. Specifically, determining the first and second weights based on the distances may be: determining the first weight based on each distance, and then determining the second weight based on the first weight. The closer a pixel point is to the lip reference point, the closer its saturation is to that of the reference point, so the smaller the adjustment strength for that pixel point and the smaller the corresponding weight. When the first and second weights are determined from the distance between each pixel point in the lip region and the lip reference point, the weights of different pixel points may differ or coincide; when the original saturation of each pixel point is adjusted based on its own weights, the adjustment can be made more accurately.
In an example of the present disclosure, the distance between two pixel points may be calculated by a suitable algorithm; for example, the Gaussian distance between each pixel point in the lip region and the lip reference point may be calculated with a Gaussian function, and this Gaussian distance taken as the distance between the two points. A sketch is given below.
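A minimal sketch of the Gaussian computation; the spread parameter sigma and the mapping from closeness to weight are assumptions, since the text does not fix them:

```python
import numpy as np

def gaussian_closeness(lip_pixels: np.ndarray,
                       ref_point: np.ndarray,
                       sigma: float = 20.0) -> np.ndarray:
    """Gaussian function of the distance from each lip pixel to the lip
    reference point: 1.0 at the reference point, decaying with distance.
    lip_pixels is an (N, 2) array of coordinates; sigma is an assumed
    spread parameter."""
    d2 = np.sum((lip_pixels.astype(np.float64) - ref_point) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Per the description, pixels closer to the reference point get a smaller
# adjustment weight, so one plausible mapping uses the complement:
#   second_weight = 1.0 - gaussian_closeness(lip_pixels, ref_point)
```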
In examples of the present disclosure, the lip region includes an upper lip region and a lower lip region, the lip reference points may include an upper lip reference point and/or a lower lip reference point, and the lip reference points may be determined based on the upper lip reference point and/or the lower lip reference point.
Wherein determining the lip reference point based on the upper lip reference point and/or the lower lip reference point comprises any of:
first, when the lip reference point includes only the upper lip reference point, the upper lip reference point may be directly used as the lip reference point. Then the gaussian distance gaussian dist between the upper lip reference point and each pixel point in the lip region can be calculated when determining the distance between each pixel point in the lip region and the lip reference point.
Second, when the lip reference point includes only the lower lip reference point, the lower lip reference point may be directly used as the lip reference point. Then the gaussian distance gaussian dist between the lower lip reference point and each pixel point in the lip region can be calculated when determining the distance between each pixel point in the lip region and the lip reference point.
Thirdly, when the lip reference points include an upper lip reference point and a lower lip reference point, both the upper lip reference point and the lower lip reference point may be used as the lip reference points, and when the distance between each pixel point in the lip region and the lip reference point is determined, the gaussian distance gaussian dist between the upper lip reference point in the lip region and each pixel point in the upper lip region and the gaussian distance gaussian dist between each pixel point in the lower lip region and each pixel point in the lower lip region may be calculated respectively.
Fourthly, when the lip reference points include the upper lip reference point and the lower lip reference point, the lip reference point may be determined based on the upper lip reference point and the lower lip reference point, for example, a key point for determining the middle position of the lip region based on the upper lip reference point and the lower lip reference point is the lip reference point. Then the gaussian distance gaussian dist between the key point at the middle position and each pixel point in the lip region can be calculated when determining the distance between each pixel point in the lip region and the lip reference point.
In practical application, when determining the lip reference point, two reference key points, an upper lip reference point and a lower lip reference point, can be determined from the upper lip region and the lower lip region respectively. Then, when the original saturation of each pixel point in the lip region is adjusted, the original saturation of the pixel points in the upper lip region and that of the pixel points in the lower lip region can be adjusted separately, so that each of the two regions is adjusted accurately.
In the scheme of the disclosure, the lip reference point can be calculated from key points of other face parts. As an example, consider the 106 face key points detected by a face key-point detection tool; each key point can be denoted p_i, where 1 ≤ i ≤ 106. The 106 face key points include key points for every part of the face, and the face contour and the five sense organs can be described accurately through them.
For example, a middle key point corresponding to the upper lip region can be selected from the 106 face key points as the upper lip reference point; similarly, a middle key point corresponding to the lower lip region can be selected as the lower lip reference point, as in the sketch below.
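A sketch of selecting lip reference points from the 106 key points; the concrete indices of the middle upper- and lower-lip points are placeholders that depend on the key-point scheme of the detector actually used:

```python
import numpy as np

def lip_reference_points(keypoints: np.ndarray,
                         upper_lip_idx: int = 98,
                         lower_lip_idx: int = 102):
    """Pick lip reference points from 106 detected face key points.

    keypoints is a (106, 2) array; since arrays are 0-indexed, key point
    p_i corresponds to keypoints[i - 1]. The default index values are
    placeholders, not the indices of any specific detector.
    """
    upper_ref = keypoints[upper_lip_idx]
    lower_ref = keypoints[lower_lip_idx]
    middle_ref = (upper_ref + lower_ref) / 2.0   # middle of the lip region (case 4)
    return upper_ref, lower_ref, middle_ref
```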
Third method: the lip makeup adjustment instruction further includes adjustment strength indication information, and determining the first weight corresponding to the original saturation of each pixel point and the second weight corresponding to the adjusted saturation includes:
determining a lip reference point in a face image to be processed;
determining the distance between each pixel point in the lip area and a lip reference point in the face image to be processed;
determining a first weight and a second weight based on the adjustment strength indication information and the respective distances.
In the scheme of the disclosure, the first and second weights may be determined based on both the adjustment strength indication information and the distance between each pixel point in the lip region and the lip reference point; that is, the original saturation of each pixel point in the lip region is taken into account at the same time as the user's intention, so the determined first and second weights are more accurate.
When determining the first and second weights based on the adjustment strength indication information and the respective distances, a weight A may be determined from the adjustment strength indication information, a weight B from the respective distances, and weight A and weight B fused, e.g., averaged, to obtain the first weight. The manner of fusing weight A and weight B is not limited in the alternatives of the present disclosure, and all such manners fall within the scope of the present disclosure; one minimal sketch follows.
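A minimal sketch of this fusion, using the averaging option named above; other fusions of weight A and weight B are equally possible:

```python
def weights_from_strength_and_distance(weight_a: float, weight_b: float):
    """Average the strength-based weight A and the distance-based weight B
    to obtain the first weight, then derive the second weight so that the
    two weights sum to 1."""
    w1 = 0.5 * (weight_a + weight_b)   # averaging is one option the text names
    return w1, 1.0 - w1
```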
To better explain the scheme of the present disclosure, the face image processing method of the present disclosure is described in detail below with reference to figs. 4 to 6:
As shown in fig. 4, before the user publishes the face image to be processed, the lip makeup in it may be adjusted based on the method of the present disclosure, so that the lip makeup effect in the adjusted face image is more natural and shows no "eaten makeup" effect.
Step 1: obtain the user's lip makeup adjustment instruction for the face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction.
Step 2: adjust, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region of the face image to be processed to obtain an adjusted saturation.
Step 3: determine the lip reference point in the lip region; in the face image shown in fig. 5, point A is the lip reference point. Based on the lip reference point, the distance between each pixel point in the lip region and reference point A can be determined; based on these distances, the first weight corresponding to the original saturation of each pixel point and the second weight corresponding to the adjusted saturation can be determined; and based on the first weight and the second weight, the original saturation of each pixel point is fused with the adjusted saturation to obtain the adjusted face image.
If the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction, the adjusted face image may be as shown in the face image in fig. 5. If the lip makeup adjustment instruction is a lip makeup weakening adjustment instruction, the adjusted face image may be as shown in the face image in fig. 6.
Based on the same principle as the method shown in fig. 1, an embodiment of the present disclosure further provides a face image processing apparatus 20. As shown in fig. 7, the apparatus 20 may include an instruction acquisition module 210, a preliminary adjustment module 220, and a fusion module 230, wherein:
the instruction obtaining module 210 is configured to obtain a lip makeup adjustment instruction of the face image to be processed by the user, where the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup reduction adjustment instruction;
the preliminary adjustment module 220 is configured to perform corresponding adjustment on the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction, so as to obtain an adjusted saturation;
and the fusion module 230 is configured to fuse the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image.
According to the scheme in the embodiment of the disclosure, when a user's lip makeup adjustment instruction for a face image to be processed is received, the original saturation of each pixel point in the lip region of the face image to be processed can be adjusted accordingly based on the instruction to obtain an adjusted saturation, and the original saturation of each pixel point is then fused with the adjusted saturation to obtain the adjusted face image.
In the embodiment of the present disclosure, when adjusting, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region of the face image to be processed to obtain the adjusted saturation, the preliminary adjustment module is specifically configured to:
determining the original saturation of each pixel point in the lip region in the face image to be processed;
determining an adjusting factor corresponding to each pixel point based on the original saturation of each pixel point and a pre-configured corresponding relationship, wherein the corresponding relationship comprises the corresponding relationship between each saturation and the adjusting factor corresponding to each saturation;
and correspondingly adjusting the original saturation of each pixel point based on the lip makeup adjusting instruction and the adjusting factor corresponding to each pixel point to obtain the adjusted saturation.
In the embodiment of the present disclosure, when fusing the original saturation of each pixel point with the adjusted saturation to obtain the adjusted face image, the fusion module is specifically configured to:
determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point based on the first weight and the second weight to obtain an adjusted face image.
In the embodiment of the present disclosure, the lip makeup adjustment instruction further includes adjustment strength indication information, and when determining the first weight corresponding to the original saturation of each pixel point and the second weight corresponding to the adjusted saturation, the fusion module is specifically configured to:
determining a weight for the face image to be processed based on the adjustment strength indication information, and taking this weight as the first weight;
based on the first weight, a second weight is determined.
In the embodiment of the present disclosure, when determining the first weight corresponding to the original saturation and the second weight corresponding to the adjusted saturation of each pixel point, the fusion module is specifically configured to:
determining a lip reference point in a face image to be processed;
respectively determining the distance between each pixel point in the lip area and a lip reference point in the face image to be processed;
based on the distances, a first weight and a second weight are determined.
In the embodiment of the present disclosure, the lip makeup adjustment instruction further includes adjustment strength indication information, and when determining the first weight corresponding to the original saturation of each pixel point and the second weight corresponding to the adjusted saturation, the fusion module is specifically configured to:
determining a lip reference point in a face image to be processed;
determining the distance between each pixel point in the lip area and a lip reference point in the face image to be processed;
determining a first weight and a second weight based on the adjustment strength indication information and the respective distances.
In an embodiment of the present disclosure, the apparatus further includes:
and the lip region determining module is used for determining the lip region in the face image to be processed based on the preconfigured lip region template.
The image processing apparatus of the embodiments of the present disclosure can execute the face image processing method provided by the embodiments of the present disclosure, and its implementation principle is similar. The actions executed by each module of the face image processing apparatus correspond to the steps of the face image processing method in the embodiments of the present disclosure; for a detailed functional description of each module of the face image processing apparatus, reference may be made to the description of the corresponding face image processing method above, which is not repeated here.
Based on the same principle as the face image processing method in the embodiment of the present disclosure, an embodiment of the present disclosure further provides an electronic device, which may include but is not limited to: a processor and a memory; a memory for storing computer operating instructions; and the processor is used for executing the method shown in the embodiment by calling the computer operation instruction.
Based on the same principle as the face image processing method in the embodiments of the present disclosure, an embodiment of the present disclosure further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method shown in the above embodiments, which is not described here again.
Based on the same principle as the method in the embodiment of the present disclosure, reference is made to fig. 8, which shows a schematic structural diagram of an electronic device (e.g., a terminal device or a server in fig. 1) 600 suitable for implementing the embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as the processing device 601 hereinafter, and the memory may include at least one of a Read Only Memory (ROM)602, a Random Access Memory (RAM)603 and a storage device 608 hereinafter, which are specifically shown as follows:
as shown in fig. 8, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (Hyper Text transfer protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction; adjust, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region of the face image to be processed to obtain an adjusted saturation; and fuse the original saturation of each pixel point with the adjusted saturation to obtain an adjusted face image.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a module or unit does not constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [Example One] there is provided a face image processing method, including:
acquiring a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction;
based on the lip makeup adjustment instruction, correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed to obtain the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image.
According to one or more embodiments of the present disclosure, the correspondingly adjusting, based on the lip makeup adjustment instruction, the original saturation of each pixel point in the lip region in the face image to be processed to obtain an adjusted saturation includes:
determining the original saturation of each pixel point in the lip region in the face image to be processed;
determining an adjustment factor corresponding to each pixel point based on the original saturation of each pixel point and a preconfigured correspondence, wherein the correspondence comprises a mapping between each saturation value and the adjustment factor corresponding to that saturation value;
and correspondingly adjusting the original saturation of each pixel point based on the lip makeup adjustment instruction and the adjustment factor corresponding to each pixel point to obtain the adjusted saturation.
According to one or more embodiments of the present disclosure, the fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image includes:
determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point based on the first weight and the second weight to obtain an adjusted face image.
According to one or more embodiments of the present disclosure, the lip makeup adjustment instruction further includes adjustment intensity indication information, and the determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation includes:
determining the weight of the face image to be processed based on the adjustment intensity indication information, and taking the weight of the face image to be processed as the first weight;
determining the second weight based on the first weight.
According to one or more embodiments of the present disclosure, the determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation includes:
determining a lip reference point in the face image to be processed;
respectively determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point;
determining the first weight and the second weight based on the respective distances.
According to one or more embodiments of the present disclosure, the lip makeup adjustment instruction further includes adjustment intensity indication information, and the determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation includes:
determining a lip reference point in the face image to be processed;
determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point;
determining the first weight and the second weight based on the adjustment intensity indication information and the respective distances.
According to one or more embodiments of the present disclosure, the method further comprises:
and determining the lip region in the face image to be processed based on a preconfigured lip region template.
According to one or more embodiments of the present disclosure, [Example Two] there is provided a face image processing apparatus, including:
the instruction acquisition module is used for acquiring a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction;
the preliminary adjustment module is used for correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction to obtain the adjusted saturation;
and the fusion module is used for fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image.
According to one or more embodiments of the present disclosure, when correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction to obtain the adjusted saturation, the preliminary adjustment module is specifically configured to:
determining the original saturation of each pixel point in the lip region in the face image to be processed;
determining an adjustment factor corresponding to each pixel point based on the original saturation of each pixel point and a preconfigured correspondence, wherein the correspondence comprises a mapping between each saturation value and the adjustment factor corresponding to that saturation value;
and correspondingly adjusting the original saturation of each pixel point based on the lip makeup adjustment instruction and the adjustment factor corresponding to each pixel point to obtain the adjusted saturation.
According to one or more embodiments of the present disclosure, the fusion module is specifically configured to, when fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image:
determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point based on the first weight and the second weight to obtain an adjusted face image.
According to one or more embodiments of the present disclosure, the lip makeup adjustment instruction further includes adjustment intensity indication information, and the fusion module is specifically configured to, when determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation:
determining the weight of the face image to be processed based on the adjustment intensity indication information, and taking the weight of the face image to be processed as the first weight;
determining the second weight based on the first weight.
According to one or more embodiments of the present disclosure, when determining the first weight corresponding to the original saturation of each pixel point and the second weight corresponding to the adjusted saturation, the fusion module is specifically configured to:
determining a lip reference point in the face image to be processed;
respectively determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point;
determining the first weight and the second weight based on the respective distances.
According to one or more embodiments of the present disclosure, the lip makeup adjustment instruction further includes adjustment intensity indication information, and the fusion module is specifically configured to, when determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation:
determining a lip reference point in the face image to be processed;
determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point;
determining the first weight and the second weight based on the adjustment intensity indication information and the respective distances.
According to one or more embodiments of the present disclosure, the apparatus further comprises:
and the lip region determining module is used for determining the lip region in the face image to be processed based on a preconfigured lip region template.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to technical solutions formed by the particular combination of the features described above, but also encompasses other technical solutions formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the features described above with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A face image processing method is characterized by comprising the following steps:
acquiring a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction;
based on the lip makeup adjustment instruction, correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed to obtain the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image.
2. The method according to claim 1, wherein the correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction to obtain the adjusted saturation comprises:
determining the original saturation of each pixel point in the lip region in the face image to be processed;
determining an adjustment factor corresponding to each pixel point based on the original saturation of each pixel point and a preconfigured correspondence, wherein the correspondence comprises a mapping between each saturation value and the adjustment factor corresponding to that saturation value;
and correspondingly adjusting the original saturation of each pixel point based on the lip makeup adjustment instruction and the adjustment factor corresponding to each pixel point to obtain the adjusted saturation.
3. The method according to claim 1, wherein the fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image comprises:
determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation;
and fusing the original saturation and the adjusted saturation of each pixel point based on the first weight and the second weight to obtain an adjusted face image.
4. The method according to claim 3, wherein the lip cosmetic adjustment instruction further includes adjustment intensity indication information, and the determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation comprises:
determining the weight of the face image to be processed based on the adjustment intensity indication information, and taking the weight of the face image to be processed as the first weight;
determining the second weight based on the first weight.
5. The method of claim 3, wherein the determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation comprises:
determining a lip reference point in the face image to be processed;
respectively determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point;
determining the first weight and the second weight based on the respective distances.
6. The method according to claim 3, wherein the lip cosmetic adjustment instruction further includes adjustment intensity indication information, and the determining a first weight corresponding to the original saturation of each pixel point and a second weight corresponding to the adjusted saturation comprises:
determining a lip reference point in the face image to be processed;
determining the distance between each pixel point in the lip region in the face image to be processed and the lip reference point;
determining the first weight and the second weight based on the adjustment intensity indication information and the respective distances.
7. The method according to any one of claims 1 to 6, further comprising:
and determining the lip region in the face image to be processed based on a preconfigured lip region template.
8. A face image processing apparatus, comprising:
the instruction acquisition module is used for acquiring a user's lip makeup adjustment instruction for a face image to be processed, wherein the lip makeup adjustment instruction is a lip makeup enhancement adjustment instruction or a lip makeup weakening adjustment instruction;
the preliminary adjustment module is used for correspondingly adjusting the original saturation of each pixel point in the lip region in the face image to be processed based on the lip makeup adjustment instruction to obtain the adjusted saturation;
and the fusion module is used for fusing the original saturation and the adjusted saturation of each pixel point to obtain an adjusted face image.
9. An electronic device, comprising:
a processor and a memory;
the memory is used for storing computer operation instructions;
the processor is used for executing the method of any one of claims 1 to 7 by calling the computer operation instructions.
10. A computer-readable medium having computer program instructions stored thereon for causing a computer to perform the method of any of claims 1 to 7.
CN202010407588.8A 2020-05-14 2020-05-14 Face image processing method and device, electronic equipment and computer readable medium Active CN111556303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010407588.8A CN111556303B (en) 2020-05-14 2020-05-14 Face image processing method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010407588.8A CN111556303B (en) 2020-05-14 2020-05-14 Face image processing method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111556303A true CN111556303A (en) 2020-08-18
CN111556303B CN111556303B (en) 2022-07-15

Family

ID=72004703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010407588.8A Active CN111556303B (en) 2020-05-14 2020-05-14 Face image processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111556303B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013799A1 (en) * 2003-06-26 2008-01-17 Fotonation Vision Limited Method of Improving Orientation and Color Balance of Digital Images Using Face Detection Information
JP2009053981A (en) * 2007-08-28 2009-03-12 Kao Corp Makeup simulation device
CN109639960A (en) * 2017-10-05 2019-04-16 卡西欧计算机株式会社 Image processing apparatus, image processing method and recording medium
CN107800966A (en) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing
CN108012081A (en) * 2017-12-08 2018-05-08 北京百度网讯科技有限公司 Intelligent face beautification method, apparatus, terminal and computer-readable recording medium
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762212A (en) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113762212B (en) * 2021-09-27 2024-06-11 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111556303B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
US11488293B1 (en) Method for processing images and electronic device
CN110418064B (en) Focusing method and device, electronic equipment and storage medium
CN111583103B (en) Face image processing method and device, electronic equipment and computer storage medium
CN111784568A (en) Face image processing method and device, electronic equipment and computer readable medium
CN111047507A (en) Training method of image generation model, image generation method and device
CN110570383B (en) Image processing method and device, electronic equipment and storage medium
CN111833242A (en) Face transformation method and device, electronic equipment and computer readable medium
CN111583102B (en) Face image processing method and device, electronic equipment and computer storage medium
CN112767238A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113542902A (en) Video processing method and device, electronic equipment and storage medium
CN115358919A (en) Image processing method, device, equipment and storage medium
CN110719407A (en) Picture beautifying method, device, equipment and storage medium
CN111556303B (en) Face image processing method and device, electronic equipment and computer readable medium
CN111314620A (en) Photographing method and apparatus
CN110717467A (en) Head pose estimation method, device, equipment and storage medium
CN112819691B (en) Image processing method, device, equipment and readable storage medium
CN113850212A (en) Image generation method, device, equipment and storage medium
CN110619602B (en) Image generation method and device, electronic equipment and storage medium
CN112700385A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110555799A (en) Method and apparatus for processing video
CN116363239A (en) Method, device, equipment and storage medium for generating special effect diagram
CN115358959A (en) Generation method, device and equipment of special effect graph and storage medium
CN115578299A (en) Image generation method, device, equipment and storage medium
CN114119413A (en) Image processing method and device, readable medium and mobile terminal
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
