CN112258605A - Special effect adding method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112258605A
CN112258605A (application CN202011110352.4A)
Authority
CN
China
Prior art keywords
region
hair
area
image
special effect
Prior art date
Legal status
Pending
Application number
CN202011110352.4A
Other languages
Chinese (zh)
Inventor
武珊珊 (Wu Shanshan)
赵松涛 (Zhao Songtao)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority: CN202011110352.4A
Publication: CN112258605A
Related application: PCT/CN2021/105513 (WO2022077970A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70 Denoising; Smoothing
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a special effect adding method and apparatus, an electronic device, and a storage medium. The method includes: in response to a hair-color special effect selection instruction, acquiring a person image after entering a special effect shooting mode; determining a first hair region and a face region in the person image; determining a second hair region based on the first hair region and the face region; performing guided filtering based on the second hair region and the person image to determine a region to be dyed; and rendering the region to be dyed in a preset color. With the method and apparatus of the disclosure, the efficiency with which an electronic device adds special effects to images can be improved.

Description

Special effect adding method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of special effect adding technologies, and in particular, to a special effect adding method and apparatus, an electronic device, and a storage medium.
Background
With the continuous improvement of smartphone camera capabilities, more and more people use smartphones to take photos and videos to record wonderful moments in their lives.
When taking videos or photos with a smartphone, users often add various special effects to the captured images using the shooting software installed on the phone, for example adding a hair dyeing special effect to a person image. However, when a hair dyeing special effect is added to a captured person image, the facial skin in the image is often dyed as well, because the hair region cannot be segmented accurately. As a result, the user does not obtain an image that meets the special effect requirements and has to apply the special effect to the person image again, which reduces the efficiency with which the electronic device adds special effects to images.
Disclosure of Invention
The disclosure provides a special effect adding method and apparatus, an electronic device, and a storage medium, to at least solve the problem in the related art that the efficiency of adding special effects to images on an electronic device is low. The technical solutions of the disclosure are as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a special effect adding method applied to an electronic device, the method including:
in response to a hair-color special effect selection instruction, acquiring a person image after entering a special effect shooting mode; determining a first hair region and a face region in the person image;
determining a second hair region based on the first hair region and the face region;
performing guided filtering based on the second hair region and the person image to determine a region to be dyed;
and rendering the region to be dyed in a preset color.
In one possible implementation, the determining a first hair region and a face region in the person image includes:
inputting the person image into a semantic segmentation model, the semantic segmentation model being used for performing semantic segmentation processing on an input image;
and determining the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
In one possible implementation, the determining a second hair region based on the first hair region and the face region includes:
determining a first bang region in the person image according to the first hair region and the face region;
performing attenuation processing on the first bang region to obtain a second bang region;
and determining the second hair region based on the second bang region and the first hair region.
In one possible implementation, the determining a first bang region in the person image according to the first hair region and the face region includes:
performing dilation processing on the first hair region and the face region respectively to obtain a dilated first hair region and a dilated face region;
and determining an overlapping region between the dilated first hair region and the dilated face region as the first bang region in the person image.
In one possible implementation, the performing dilation processing on the first hair region and the face region respectively to obtain a dilated first hair region and a dilated face region includes:
determining a minimum rectangle covering the first hair region;
querying a dilation coefficient corresponding to the length of the short side of the minimum rectangle;
and performing dilation processing on the first hair region and the face region respectively by using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In one possible implementation, the performing attenuation processing on the first bang region to obtain a second bang region includes:
acquiring a pixel mean value of the first bang region;
querying an attenuation coefficient corresponding to the pixel mean value;
and reducing each pixel value in the first bang region according to the attenuation coefficient, to obtain the second bang region.
In one possible implementation, the performing guided filtering based on the second hair region and the person image to determine a region to be dyed includes:
determining a target channel image corresponding to the color channel with the largest variance in the person image;
adjusting the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image;
and performing guided filtering processing on the second hair region by using the adjusted image, to obtain the region to be dyed.
In one possible implementation, the rendering the region to be dyed in a preset color includes:
acquiring a target rendering color corresponding to the special effect shooting mode, and performing color rendering on the region to be dyed based on the target rendering color;
and/or,
in response to a hair color selection instruction received via a hair color selection entry, acquiring a target rendering color corresponding to the color selection instruction, and performing color rendering on the region to be dyed based on the target rendering color.
According to a second aspect of the embodiments of the present disclosure, there is provided a special effect adding apparatus including:
a response unit configured to acquire, in response to a hair-color special effect selection instruction, a person image after entering a special effect shooting mode;
a segmentation unit configured to determine a first hair region and a face region in the person image;
a determination unit configured to determine a second hair region based on the first hair region and the face region;
a guided filtering unit configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed;
and a rendering unit configured to render the region to be dyed in a preset color.
In one possible implementation, the segmentation unit is specifically configured to input the person image into a semantic segmentation model, the semantic segmentation model being used for performing semantic segmentation processing on an input image; and to determine the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
In one possible implementation, the determination unit is specifically configured to determine a first bang region in the person image according to the first hair region and the face region; perform attenuation processing on the first bang region to obtain a second bang region; and determine the second hair region based on the second bang region and the first hair region.
In one possible implementation, the determination unit is specifically configured to perform dilation processing on the first hair region and the face region respectively to obtain a dilated first hair region and a dilated face region; and determine an overlapping region between the dilated first hair region and the dilated face region as the first bang region in the person image.
In one possible implementation, the determination unit is specifically configured to determine a minimum rectangle covering the first hair region; query a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and perform dilation processing on the first hair region and the face region respectively by using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In one possible implementation, the determination unit is specifically configured to acquire a pixel mean value of the first bang region; query an attenuation coefficient corresponding to the pixel mean value; and reduce each pixel value in the first bang region according to the attenuation coefficient, to obtain the second bang region.
In one possible implementation, the guided filtering unit is specifically configured to determine a target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, the image contrast of the adjusted image being greater than that of the target channel image; and perform guided filtering processing on the second hair region by using the adjusted image, to obtain the region to be dyed.
In one possible implementation, the rendering unit is specifically configured to acquire a target rendering color corresponding to the special effect shooting mode and perform color rendering on the region to be dyed based on the target rendering color; and/or, in response to a hair color selection instruction received via a hair color selection entry, acquire a target rendering color corresponding to the color selection instruction and perform color rendering on the region to be dyed based on the target rendering color.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor implements the special effect adding method according to the first aspect or any one of the possible implementation manners of the first aspect when executing the computer program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements a special effects addition method as described in the first aspect or any one of the possible implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, the program product comprising a computer program, the computer program being stored in a readable storage medium, from which the computer program is read and executed by at least one processor of a device, such that the device performs the special effects addition method of any one of the possible implementations of the first aspect.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects: a person image is acquired after entering a special effect shooting mode in response to a hair-color special effect selection instruction; a first hair region and a face region in the person image are determined; a second hair region is determined based on the first hair region and the face region; guided filtering is performed based on the second hair region and the person image to determine a region to be dyed; and finally the region to be dyed is rendered in a preset color. In this way, missing or spurious boundaries in the region to be dyed can be avoided, the region to be dyed is guaranteed to have good edge characteristics, non-hair regions in the image are prevented from being affected by the rendering processing, and the hair region in the person image can be dyed more realistically and accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a special effects addition method according to an example embodiment.
FIG. 2 is a flow diagram illustrating another special effects addition method according to an example embodiment.
Fig. 3 is a block diagram illustrating an effect adding apparatus according to an exemplary embodiment.
Fig. 4 is an internal block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flow chart illustrating a special effects addition method according to an exemplary embodiment, including the following steps. In practice, the electronic device 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
In step S110, in response to the hair-color special effect selection instruction, the person image is acquired after entering the special effect shooting mode.
The hair-color special effect selection instruction may be an instruction by which the user selects to have the electronic device enter the special effect shooting mode.
The special effect shooting mode may be a mode in which a special effect of hair dyeing is added to a shot image.
The person image may be an image including a person to be photographed.
In a specific implementation, an electronic device provided with image shooting software may first display a shooting page; the shooting page includes a hair-color special effect selection entry. In practical applications, when the user performs a triggering operation on the hair-color special effect selection entry, the hair-color special effect selection instruction is input to the electronic device. After receiving the hair-color special effect selection instruction, the electronic device responds to it and enters the special effect shooting mode. Once the electronic device has successfully entered the special effect shooting mode, it acquires an image including the photographed person, that is, a person image.
In step S120, a first hair region and a face region in the person image are determined.
In a specific implementation, after acquiring the person image, the electronic device may perform semantic segmentation on the person image through a pre-trained semantic segmentation model to determine the first hair region and the face region in the person image.
The pre-trained semantic segmentation model is used for performing semantic segmentation processing on the input image.
Specifically, the process by which the electronic device determines the first hair region and the face region in the person image includes: the electronic device inputs the person image into the pre-trained semantic segmentation model; the electronic device then acquires the semantic segmentation result output by the model, and determines the first hair region and the face region in the person image based on that result.
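As an illustrative sketch (the patent does not specify the segmentation model's output format), assume the model returns a per-pixel label map in which hypothetical label values mark hair and face; the first hair region and the face region can then be extracted as binary masks:

```python
import numpy as np

# Label values are hypothetical; the patent does not specify the
# segmentation model's label scheme.
HAIR_LABEL, FACE_LABEL = 1, 2

def split_masks(label_map):
    """Split a per-pixel semantic label map (H, W) into binary
    hair and face masks (the first hair region and the face region)."""
    hair_mask = (label_map == HAIR_LABEL)
    face_mask = (label_map == FACE_LABEL)
    return hair_mask, face_mask

# Toy 4x4 segmentation result: hair on top, face below.
labels = np.array([[1, 1, 0, 0],
                   [1, 1, 1, 0],
                   [2, 2, 2, 0],
                   [0, 2, 2, 2]])
hair, face = split_masks(labels)
```

The two masks are disjoint by construction, which matches the later steps that treat the hair and face regions as separate inputs.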
In step S130, a second hair region is determined based on the first hair region and the face region.
In a specific implementation, after determining the first hair region and the face region in the person image, the electronic device may determine the second hair region based on them. Specifically, the process of determining the second hair region based on the first hair region and the face region includes: the electronic device determines a first bang region in the person image according to the first hair region and the face region; the electronic device then performs attenuation processing on the first bang region to obtain a second bang region.
The second bang region is the first bang region after attenuation processing.
Finally, the electronic device determines the second hair region based on the second bang region and the first hair region. For example, the electronic device may adjust the bang portion of the first hair region based on the second bang region, and take the adjusted hair region as the second hair region.
In step S140, guided filtering is performed based on the second hair region and the person image, and a region to be dyed is determined.
In a specific implementation, the electronic device may perform guided filtering processing on the second hair region using the person image, so that the filtered second hair region has clear edge features; the electronic device then takes the guided-filtered second hair region as the region to be dyed.
In step S150, the region to be dyed is rendered in a preset color.
In a specific implementation, after determining the region to be dyed in the person image, the electronic device may perform color adjustment processing on the region to be dyed in a manner corresponding to the hair-color special effect selection instruction, to obtain a processed person image. Specifically, the electronic device may determine a target hair color according to the hair-color special effect selection instruction, and then adjust the color level of each pixel in the region to be dyed based on the target hair color, so that the hair color of the region to be dyed matches the target hair color, thereby obtaining the processed person image.
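The color-level adjustment is not spelled out in the text; a plain alpha blend toward the target hair color, sketched below with an assumed `strength` parameter, is one common way to realize it:

```python
import numpy as np

def render_hair_color(image, dye_mask, target_color, strength=1.0):
    """Blend a target hair color into the image wherever the soft
    region-to-be-dyed mask is non-zero. A plain alpha blend is an
    assumption; the patent only says pixel color levels are adjusted
    to match the target hair color."""
    alpha = (dye_mask.astype(np.float32) * strength)[..., None]  # (H, W, 1)
    color = np.asarray(target_color, dtype=np.float32)           # (3,)
    out = image.astype(np.float32) * (1.0 - alpha) + color * alpha
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 2x2 gray image; dye only the top-left pixel fully.
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
dyed = render_hair_color(img, mask, target_color=(180, 60, 200))
```

Because the mask is soft, hair edges fade gradually into the untouched background instead of showing a hard dye boundary.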
In the special effect adding method above, a person image is acquired after entering the special effect shooting mode in response to a hair-color special effect selection instruction; a first hair region and a face region in the person image are determined; a second hair region is determined based on the first hair region and the face region; guided filtering is performed based on the second hair region and the person image to determine a region to be dyed; and finally the region to be dyed is rendered in a preset color. In this way, missing or spurious boundaries in the region to be dyed can be avoided, the region to be dyed is guaranteed to have good edge characteristics, non-hair regions in the image are prevented from being affected by the rendering processing, and the hair region in the person image can be dyed more realistically and accurately.
In an exemplary embodiment, determining the first bang region in the person image according to the first hair region and the face region includes: performing dilation processing on the first hair region and the face region respectively to obtain a dilated first hair region and a dilated face region; and determining an overlapping region between the dilated first hair region and the dilated face region as the first bang region in the person image.
In a specific implementation, the process by which the electronic device determines the first bang region according to the first hair region and the face region includes: the electronic device performs dilation processing on the first hair region and the face region respectively to obtain a dilated first hair region and a dilated face region. Specifically, the electronic device may acquire the current hair thickness of the photographed person in the image to be processed, and then perform the dilation processing on the first hair region and the face region based on the current hair thickness.
For example, the electronic device may determine a target dilation coefficient corresponding to the current hair thickness from a pre-established positive correlation between hair thickness and dilation coefficient. The electronic device then performs dilation processing on the first hair region and the face region based on the target dilation coefficient, to obtain the dilated first hair region and the dilated face region.
Finally, the electronic device determines the overlapping region between the dilated first hair region and the dilated face region, and takes it as the first bang region in the person image.
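The dilation-and-overlap step can be sketched as follows; the shift-based NumPy dilation stands in for a library routine such as OpenCV's `cv2.dilate`, and the toy masks are purely illustrative:

```python
import numpy as np

def dilate(mask, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element,
    implemented with NumPy shifts; cv2.dilate would be the usual choice."""
    h, w = mask.shape
    padded = np.pad(mask.astype(bool), k)
    out = np.zeros((h, w), dtype=bool)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def first_bang_region(hair_mask, face_mask, k=1):
    """First bang region: overlap of the dilated hair and face masks."""
    return dilate(hair_mask, k) & dilate(face_mask, k)

# Toy example: hair occupies the top two rows, the face the bottom two;
# after one pixel of dilation they overlap along the hairline.
hair = np.zeros((4, 4), dtype=bool); hair[:2] = True
face = np.zeros((4, 4), dtype=bool); face[2:] = True
bangs = first_bang_region(hair, face, k=1)
```

Dilating both masks before intersecting them makes the overlap tolerant to small segmentation gaps along the hairline, which is the robustness benefit the embodiment claims.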
In the technical solution of this embodiment, the first hair region and the face region are respectively dilated to obtain a dilated first hair region and a dilated face region, and the overlapping region between them is determined as the first bang region in the person image; this improves the robustness of the process of determining the first bang region.
In an exemplary embodiment, the performing dilation processing on the first hair region and the face region respectively to obtain a dilated first hair region and a dilated face region includes: determining a minimum rectangle covering the first hair region; querying a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and performing dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In a specific implementation, the process by which the electronic device dilates the first hair region and the face region includes: the electronic device determines, in the mask map of the hair region, the smallest rectangle that can cover the first hair region. The electronic device then obtains the length of the short side of this rectangle and determines the corresponding dilation coefficient from a pre-established positive correlation between short-side length and dilation coefficient. Finally, the electronic device dilates the first hair region and the face region with that coefficient, to obtain the dilated first hair region and the dilated face region.
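A minimal sketch of the coefficient lookup, assuming an axis-aligned minimum rectangle and a hypothetical one-pixel-per-50-pixels mapping (the patent only requires a positive correlation between short-side length and coefficient):

```python
import numpy as np

def expansion_coefficient(hair_mask):
    """Dilation kernel radius from the short side of the axis-aligned
    minimum rectangle covering the hair mask. The short-side -> coefficient
    mapping (one pixel of dilation per 50 px of short side) is illustrative;
    the patent only states the relation is positively correlated."""
    ys, xs = np.nonzero(hair_mask)
    short_side = min(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    return max(1, int(short_side) // 50)

# Two toy hair masks of different apparent thickness.
small = np.zeros((200, 200), dtype=bool); small[10:130, 20:100] = True  # short side 80
large = np.zeros((200, 200), dtype=bool); large[0:160, 0:160] = True    # short side 160
```

Tying the kernel size to the bounding rectangle makes the dilation scale with how large the hair appears in the frame, so close-up and distant subjects get proportionally similar treatment.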
In the technical solution of this embodiment, the minimum rectangle covering the first hair region is determined; the dilation coefficient corresponding to the length of its short side is queried; and the first hair region and the face region are dilated with this coefficient to obtain the dilated regions. In this way, the first bang region subsequently determined from the dilated first hair region and the dilated face region adapts well to the hair thickness of the photographed person, improving the realism of the hair dyeing rendering result.
In an exemplary embodiment, attenuating the first bang region to obtain the second bang region includes: acquiring a pixel mean value of the first bang region; querying an attenuation coefficient corresponding to the pixel mean value; and reducing each pixel value in the first bang region according to the attenuation coefficient, to obtain the second bang region.
The pixel values of the attenuated first bang region, i.e. the second bang region, are smaller than the pixel values of the first bang region in the mask map.
In a specific implementation, the process by which the electronic device attenuates the first bang region to obtain the second bang region includes: the electronic device acquires the pixel values of the first bang region and computes their mean; it then queries the attenuation coefficient corresponding to that mean in the positive correlation between pixel mean and attenuation coefficient, and attenuates each pixel value in the first bang region according to the coefficient, to obtain the second bang region.
The statistic used for the first bang region may be the pixel mean of the region, the pixel variance of the region, or the pixel median of the region, which is not specifically limited in this disclosure. For example, when the pixel mean of the first bang region is 120, the attenuation coefficient may be 0.5; when the pixel mean is 150, the attenuation coefficient may be 0.6; and so on. The positive correlation between pixel mean and attenuation coefficient can be determined from experimental results.
In the technical solution of this embodiment, the pixel mean value of the first bang region is acquired; the attenuation coefficient corresponding to the pixel mean value is queried; and each pixel value in the first bang region is attenuated according to the coefficient, to obtain the second bang region. In this way, the pixel values of the first bang region can be adaptively attenuated according to the pixel distribution of the region, so that the dyeing effect of the bang region after the hair-color special effect is added does not look too harsh, improving the realism of hair-color rendering in the bang region.
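The lookup can be sketched as below; the thresholds are guesses made consistent with the two examples in the text (mean 120 giving 0.5, mean 150 giving 0.6), since the actual table is said to come from experiments:

```python
import numpy as np

def attenuate_bangs(bang_mask):
    """Attenuate a soft (0-255) bang-region mask by a coefficient looked
    up from its pixel mean. The thresholds below are hypothetical, chosen
    to be consistent with the examples in the text."""
    nonzero = bang_mask[bang_mask > 0]
    mean = float(nonzero.mean()) if nonzero.size else 0.0
    if mean < 135:          # e.g. mean 120 -> 0.5
        coeff = 0.5
    elif mean < 165:        # e.g. mean 150 -> 0.6
        coeff = 0.6
    else:
        coeff = 0.7
    return np.rint(bang_mask.astype(np.float32) * coeff).astype(np.uint8)

mask = np.full((4, 4), 120, dtype=np.uint8)
soft = attenuate_bangs(mask)
```

Lower mask values in the bang region translate into a weaker dye blend there, which is exactly the "not too harsh" effect the embodiment describes.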
In an exemplary embodiment, the determining the region to be dyed based on the second hair region and the person image by the guided filtering includes: determining a target channel image corresponding to a color channel with the maximum variance in the character image; adjusting the image contrast of the target channel image to obtain an adjusted image; and performing guide filtering processing on the second hair area by adopting the adjusted image to obtain an area to be dyed.
The image contrast of the adjusted image is greater than that of the target channel image.
In a specific implementation, when the electronic device performs guided filtering based on the second hair region and the person image to determine the region to be dyed, the process specifically includes: the electronic device may determine, among the three RGB channels of the image to be processed, the color channel with the largest variance, and then determine the target channel image corresponding to that color channel in the person image.
Then, the electronic device adjusts the image contrast of the target channel image to obtain an adjusted image whose contrast is greater than that of the target channel image. Specifically, the electronic device eliminates pixel points with specified pixel values in the target channel image, and then stretches the remaining pixel values to a pixel value range, so that the minimum pixel value of the stretched image equals the minimum of that range and the maximum pixel value equals the maximum of that range. For example, the pixel value range may be 0 to 255, or the pixel value range of the selected channel image. A histogram of the target channel image may be drawn before the pixel points with specified values are eliminated, in order to determine which values to eliminate. Finally, the electronic device uses the adjusted image as the guide image and the second hair region as the filtered image, and performs guided filtering processing to obtain the region to be dyed.
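The guided filtering step itself can be sketched with He et al.'s classic formulation. This pure-NumPy version is an illustration only; in practice an optimized implementation such as OpenCV's `cv2.ximgproc.guidedFilter` would likely be used, and the radius and regularization values below are assumptions.

```python
import numpy as np

def _box_mean(img: np.ndarray, r: int) -> np.ndarray:
    """Mean over a (2r+1)x(2r+1) window, computed with padded
    cumulative sums so no external dependency is needed."""
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    k = 2 * r + 1
    s = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return s / (k * k)

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  r: int = 8, eps: float = 1e-3) -> np.ndarray:
    """Guided filter: edges of `guide` steer the smoothing of `src`.

    guide: uint8 single-channel guide image (e.g. the contrast-
    stretched max-variance channel); src: uint8 mask to be filtered
    (e.g. the second hair region).
    """
    I = guide.astype(np.float64) / 255.0
    p = src.astype(np.float64) / 255.0
    mean_I, mean_p = _box_mean(I, r), _box_mean(p, r)
    var_I = _box_mean(I * I, r) - mean_I * mean_I
    cov_Ip = _box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # local linear coefficient per window
    b = mean_p - a * mean_I
    q = _box_mean(a, r) * I + _box_mean(b, r)
    return (q.clip(0.0, 1.0) * 255.0).astype(np.uint8)
```

Using the high-contrast channel as the guide snaps the soft hair mask to the real hair boundary, which is why the region to be dyed follows individual strands more faithfully than the raw segmentation mask.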
According to the technical solution of this embodiment, the target channel image corresponding to the color channel with the largest variance in the person image is determined, and its image contrast is adjusted to obtain a high-contrast adjusted image. Guiding the filtering of the second hair region with this single-channel adjusted image keeps the amount of computation performed by the electronic device as low as possible and improves the real-time performance of the hair dyeing rendering process.
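The channel selection and contrast stretch described above can be sketched as follows. Percentile-based clipping is an assumed concrete reading of "eliminating pixel points with specified pixel values" (the patent suggests choosing them from a histogram), and the function name is hypothetical.

```python
import numpy as np

def prepare_guide_image(person_rgb: np.ndarray) -> np.ndarray:
    """Pick the RGB channel with the largest variance and stretch
    its contrast to the full 0-255 range.

    person_rgb: uint8 array of shape (H, W, 3). The 1st/99th
    percentile clipping below stands in for the patent's
    histogram-based elimination of specified pixel values.
    """
    variances = [person_rgb[..., c].astype(np.float64).var() for c in range(3)]
    target = person_rgb[..., int(np.argmax(variances))].astype(np.float32)
    lo, hi = np.percentile(target, (1, 99))
    if hi <= lo:
        # Degenerate (near-constant) channel: nothing to stretch.
        return target.astype(np.uint8)
    stretched = (target.clip(lo, hi) - lo) / (hi - lo) * 255.0
    return stretched.astype(np.uint8)
```

The resulting single-channel, high-contrast image would then serve as the guide image for the guided filtering of the second hair region.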
In an exemplary embodiment, rendering the region to be dyed in a preset color includes: obtaining a target rendering hair color corresponding to the special effect shooting mode, and performing color rendering on the region to be dyed based on the target rendering hair color; and/or, in response to a hair color selection instruction applied at the hair color selection entry, obtaining the target rendering hair color corresponding to the hair color selection instruction, and performing color rendering on the region to be dyed based on the target rendering hair color.
In a specific implementation, when the electronic device renders the region to be dyed in the preset color, the process specifically includes: the electronic device may obtain the target rendering hair color corresponding to the special effect shooting mode.
Of course, when the electronic device enters the special effect shooting mode, the special effect shooting mode interface currently displayed by the electronic device further includes a hair color selection entry, through which the user switches among different hair color rendering effects. The electronic device may respond to a hair color selection instruction applied by the user at the hair color selection entry, and then obtain the target rendering hair color corresponding to that instruction.
Finally, the electronic device performs color rendering on the region to be dyed based on the target rendering hair color. Specifically, the electronic device may perform toning processing on the region to be dyed based on the target rendering hair color to obtain the toned region, in which the hair color matches the target rendering hair color.
According to the above technical solution, in the process of rendering the region to be dyed in the preset color, the electronic device obtains the target rendering hair color corresponding to the special effect shooting mode, and/or responds to the hair color selection instruction applied at the hair color selection entry to obtain the target rendering hair color corresponding to that instruction, and then performs color rendering on the region to be dyed based on the target rendering hair color. The rendered person image can thus meet the user's hair color special effect requirements without the person image having to be processed again, improving the efficiency with which the electronic device adds special effects to images.
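One plausible form of the toning step is a soft alpha blend of the target hair color into the image, weighted by the region-to-be-dyed mask. The blend formula, the `strength` parameter, and the function name are illustrative assumptions; the patent does not fix a particular toning algorithm.

```python
import numpy as np

def render_hair_color(person_rgb: np.ndarray,
                      dye_mask: np.ndarray,
                      target_rgb,
                      strength: float = 0.6) -> np.ndarray:
    """Blend target_rgb into person_rgb wherever dye_mask is high.

    person_rgb: uint8 (H, W, 3); dye_mask: uint8 (H, W) soft mask
    in [0, 255], e.g. the guided-filter output; target_rgb: tuple
    of three ints. strength caps how far pixels move toward the
    target color (an assumed tunable, not from the patent).
    """
    alpha = (dye_mask.astype(np.float32) / 255.0 * strength)[..., None]
    target = np.array(target_rgb, dtype=np.float32).reshape(1, 1, 3)
    out = person_rgb.astype(np.float32) * (1.0 - alpha) + target * alpha
    return out.clip(0, 255).astype(np.uint8)
```

Because the attenuated bangs mask carries lower values than the main hair mask, the same blend automatically dyes the bangs more lightly than the rest of the hair.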
Fig. 2 is a flow chart illustrating another special effect adding method according to an exemplary embodiment, including the following steps.

In step S202, in response to a hair color special effect selection instruction, a person image is acquired after entering the special effect shooting mode.

In step S204, the person image is input into a semantic segmentation model; the semantic segmentation model is used for performing semantic segmentation processing on the input image.

In step S206, a first hair region and a face region in the person image are determined based on the semantic segmentation result output by the semantic segmentation model.

In step S208, expansion processing is performed on the first hair region and the face region respectively to obtain an expanded first hair region and an expanded face region.

In step S210, the overlapping region between the expanded first hair region and the expanded face region is determined as the first bangs region in the person image.

In step S212, attenuation processing is performed on the first bangs region to obtain a second bangs region.

In step S214, the second hair region is determined based on the second bangs region and the first hair region.

In step S216, the target channel image corresponding to the color channel with the largest variance in the person image is determined.

In step S218, the image contrast of the target channel image is adjusted to obtain an adjusted image, whose image contrast is greater than that of the target channel image.

In step S220, guided filtering processing is performed on the second hair region using the adjusted image to obtain the region to be dyed.

In step S222, the region to be dyed is rendered in a preset color.

It should be noted that, for the specific limitations of the above steps, reference may be made to the specific limitations of the special effect adding method above, which are not repeated here.
It should be understood that although the steps in the flowcharts of Fig. 1 and Fig. 2 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in Fig. 1 and Fig. 2 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential but may alternate with other steps or with at least some sub-steps or stages of other steps.
Fig. 3 is a block diagram illustrating an effect adding apparatus according to an example embodiment. Referring to fig. 3, the apparatus includes:
a response unit 310 configured to acquire, in response to a hair color special effect selection instruction, a person image after entering a special effect shooting mode;
a segmentation unit 320 configured to perform determining a first hair region and a face region in the person image;
a determining unit 330 configured to perform determining a second hair region based on the first hair region and the face region;
a guided filtering unit 340 configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed;
a rendering unit 350 configured to render the region to be dyed in a preset color.
In one embodiment, the segmentation unit 320 is specifically configured to input the person image into a semantic segmentation model, the semantic segmentation model being used for performing semantic segmentation processing on an input image, and to determine the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
In one embodiment, the determining unit 330 is specifically configured to determine the first bangs region in the person image according to the first hair region and the face region; perform attenuation processing on the first bangs region to obtain a second bangs region; and determine the second hair region based on the second bangs region and the first hair region.
In one embodiment, the determining unit 330 is specifically configured to perform expansion processing on the first hair region and the face region respectively to obtain an expanded first hair region and an expanded face region, and to determine the overlapping region between the expanded first hair region and the expanded face region as the first bangs region in the person image.
In one embodiment, the determining unit 330 is specifically configured to determine the smallest rectangle covering the first hair region; query the expansion coefficient corresponding to the length of the short side of the smallest rectangle; and perform expansion processing on the first hair region and the face region respectively using the expansion coefficient, to obtain the expanded first hair region and the expanded face region.
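A sketch of this expansion-and-overlap step follows, assuming the expansion coefficient simply scales the short side of the hair region's bounding rectangle into a dilation radius. That mapping, the default coefficient, and the helper names are assumptions; the patent only says the coefficient is looked up from the short-side length.

```python
import numpy as np

def _dilate(mask: np.ndarray, radius: int) -> np.ndarray:
    """Binary dilation with a (2*radius+1)-square structuring element,
    done with shifted maxima to avoid external dependencies."""
    out = mask.copy()
    h, w = mask.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.zeros_like(mask)
            ys = slice(max(dy, 0), h + min(dy, 0))
            xs = slice(max(dx, 0), w + min(dx, 0))
            yd = slice(max(-dy, 0), h + min(-dy, 0))
            xd = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[yd, xd] = mask[ys, xs]
            out = np.maximum(out, shifted)
    return out

def first_bangs_region(hair: np.ndarray, face: np.ndarray,
                       coeff: float = 0.1) -> np.ndarray:
    """Expand both boolean masks and intersect them.

    The radius is coeff times the short side of the hair region's
    bounding rectangle, one plausible reading of 'expansion
    coefficient corresponding to the short-side length'.
    """
    ys, xs = np.nonzero(hair)
    if ys.size == 0:
        return np.zeros_like(hair)
    short_side = min(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    radius = max(1, int(short_side * coeff))
    return _dilate(hair, radius) & _dilate(face, radius)
```

Scaling the radius with the hair region's size keeps the detected bangs band proportionate whether the face is close to the camera or far away.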
In one embodiment, the determining unit 330 is specifically configured to obtain the pixel mean of the first bangs region, query the attenuation coefficient corresponding to the pixel mean, and attenuate each pixel value in the first bangs region according to the attenuation coefficient to obtain the second bangs region.
In one embodiment, the guided filtering unit 340 is specifically configured to determine the target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, whose image contrast is greater than that of the target channel image; and perform guided filtering processing on the second hair region using the adjusted image to obtain the region to be dyed.
In one embodiment, the rendering unit 350 is specifically configured to obtain the target rendering hair color corresponding to the special effect shooting mode and perform color rendering on the region to be dyed based on the target rendering hair color; and/or, in response to a hair color selection instruction applied at the hair color selection entry, obtain the target rendering hair color corresponding to the hair color selection instruction and perform color rendering on the region to be dyed based on the target rendering hair color.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a block diagram illustrating an apparatus 400 for performing a special effects addition method according to an example embodiment. For example, the device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 4, device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an interface for input/output (I/O) 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the device 400. Examples of such data include instructions for any application or method operating on device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 406 provide power to the various components of device 400. Power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 400 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the device 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 can detect the open/closed state of the device 400 and the relative positioning of components, such as the display and keypad of the device 400; the sensor component 414 can also detect a change in the position of the device 400 or of a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and a change in the temperature of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the device 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A special effect adding method, applied to an electronic device, wherein the method comprises:
responding to a hair color special effect selection instruction, and acquiring a person image after entering a special effect shooting mode;
determining a first hair region and a face region in the person image;
determining a second hair region based on the first hair region and the face region;
performing guided filtering based on the second hair region and the person image to determine a region to be dyed; and
rendering the region to be dyed in a preset color.
2. The special effect adding method according to claim 1, wherein the determining a first hair region and a face region in the person image comprises:
inputting the person image into a semantic segmentation model, the semantic segmentation model being used for performing semantic segmentation processing on an input image; and
determining the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
3. The special effect adding method according to claim 1, wherein the determining a second hair region based on the first hair region and the face region comprises:
determining a first bangs region in the person image according to the first hair region and the face region;
performing attenuation processing on the first bangs region to obtain a second bangs region; and
determining the second hair region based on the second bangs region and the first hair region.
4. The special effect adding method according to claim 3, wherein the determining a first bangs region in the person image according to the first hair region and the face region comprises:
performing expansion processing on the first hair region and the face region respectively to obtain an expanded first hair region and an expanded face region; and
determining an overlapping region between the expanded first hair region and the expanded face region as the first bangs region in the person image.
5. The special effect adding method according to claim 4, wherein the performing expansion processing on the first hair region and the face region respectively to obtain an expanded first hair region and an expanded face region comprises:
determining a smallest rectangle covering the first hair region;
querying an expansion coefficient corresponding to the length of the short side of the smallest rectangle; and
performing expansion processing on the first hair region and the face region respectively using the expansion coefficient, to obtain the expanded first hair region and the expanded face region.
6. The special effect adding method according to claim 1, wherein the performing guided filtering based on the second hair region and the person image to determine a region to be dyed comprises:
determining a target channel image corresponding to the color channel with the largest variance in the person image;
adjusting the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than that of the target channel image; and
performing guided filtering processing on the second hair region using the adjusted image to obtain the region to be dyed.
7. The special effect adding method according to claim 1, wherein the rendering the region to be dyed in a preset color comprises:
obtaining a target rendering hair color corresponding to the special effect shooting mode; and
performing color rendering on the region to be dyed based on the target rendering hair color;
and/or,
responding to a hair color selection instruction applied at a hair color selection entry, and obtaining a target rendering hair color corresponding to the hair color selection instruction; and
performing color rendering on the region to be dyed based on the target rendering hair color.
8. A special effect adding apparatus, applied to an electronic device, wherein the apparatus comprises:
a response unit configured to acquire, in response to a hair color special effect selection instruction, a person image after entering a special effect shooting mode;
a segmentation unit configured to determine a first hair region and a face region in the person image;
a determining unit configured to determine a second hair region based on the first hair region and the face region;
a guided filtering unit configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed; and
a rendering unit configured to render the region to be dyed in a preset color.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the special effects addition method of any of claims 1 to 7.
10. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the special effects addition method of any of claims 1 to 7.
CN202011110352.4A 2020-10-16 2020-10-16 Special effect adding method and device, electronic equipment and storage medium Pending CN112258605A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011110352.4A CN112258605A (en) 2020-10-16 2020-10-16 Special effect adding method and device, electronic equipment and storage medium
PCT/CN2021/105513 WO2022077970A1 (en) 2020-10-16 2021-07-09 Method and apparatus for adding special effects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011110352.4A CN112258605A (en) 2020-10-16 2020-10-16 Special effect adding method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112258605A true CN112258605A (en) 2021-01-22

Family

ID=74244564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011110352.4A Pending CN112258605A (en) 2020-10-16 2020-10-16 Special effect adding method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112258605A (en)
WO (1) WO2022077970A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883821A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113129319A (en) * 2021-04-29 2021-07-16 北京市商汤科技开发有限公司 Image processing method, image processing device, computer equipment and storage medium
WO2022077970A1 (en) * 2020-10-16 2022-04-21 北京达佳互联信息技术有限公司 Method and apparatus for adding special effects
CN114758027A (en) * 2022-04-12 2022-07-15 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2023109829A1 (en) * 2021-12-17 2023-06-22 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110730303A (en) * 2019-10-25 2020-01-24 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium
CN110807780A (en) * 2019-10-23 2020-02-18 北京达佳互联信息技术有限公司 Image processing method and device
CN111127591A (en) * 2019-12-24 2020-05-08 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808136B (en) * 2017-10-31 2020-06-12 Oppo广东移动通信有限公司 Image processing method, image processing device, readable storage medium and computer equipment
CN110189340B (en) * 2019-06-03 2022-01-21 北京达佳互联信息技术有限公司 Image segmentation method and device, electronic equipment and storage medium
CN112258605A (en) * 2020-10-16 2021-01-22 北京达佳互联信息技术有限公司 Special effect adding method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807780A (en) * 2019-10-23 2020-02-18 北京达佳互联信息技术有限公司 Image processing method and device
CN110730303A (en) * 2019-10-25 2020-01-24 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium
CN111127591A (en) * 2019-12-24 2020-05-08 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022077970A1 (en) * 2020-10-16 2022-04-21 北京达佳互联信息技术有限公司 Method and apparatus for adding special effects
CN112883821A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112883821B (en) * 2021-01-27 2024-02-20 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113129319A (en) * 2021-04-29 2021-07-16 北京市商汤科技开发有限公司 Image processing method, image processing device, computer equipment and storage medium
WO2023109829A1 (en) * 2021-12-17 2023-06-22 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114758027A (en) * 2022-04-12 2022-07-15 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2023197780A1 (en) * 2022-04-12 2023-10-19 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2022077970A1 (en) 2022-04-21

Similar Documents

Publication Publication Date Title
US10565763B2 (en) Method and camera device for processing image
CN112258605A (en) Special effect adding method and device, electronic equipment and storage medium
CN105095881B (en) Face recognition method, face recognition device and terminal
CN107025419B (en) Fingerprint template inputting method and device
CN107944367B (en) Face key point detection method and device
CN108462833B (en) Photographing method, photographing device and computer-readable storage medium
CN107015648B (en) Picture processing method and device
CN108154466B (en) Image processing method and device
CN112330570B (en) Image processing method, device, electronic equipment and storage medium
CN107730448B (en) Beautifying method and device based on image processing
CN107463052B (en) Shooting exposure method and device
CN107341777B (en) Picture processing method and device
CN110580688B (en) Image processing method and device, electronic equipment and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN107507128B (en) Image processing method and apparatus
CN112004020B (en) Image processing method, image processing device, electronic equipment and storage medium
CN108961156B (en) Method and device for processing face image
CN107730443B (en) Image processing method and device and user equipment
CN115914721A (en) Live broadcast picture processing method and device, electronic equipment and storage medium
CN107085822B (en) Face image processing method and device
CN107122356B (en) Method and device for displaying face value and electronic equipment
CN113315904B (en) Shooting method, shooting device and storage medium
CN111586296B (en) Image capturing method, image capturing apparatus, and storage medium
CN111373409A (en) Method and terminal for acquiring color value change
CN114418865A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination