CN110730303B - Image hair dyeing processing method, device, terminal and storage medium - Google Patents

Image hair dyeing processing method, device, terminal and storage medium

Info

Publication number
CN110730303B
CN110730303B (application CN201911026154.7A)
Authority
CN
China
Prior art keywords
image
rendering
hair
region
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911026154.7A
Other languages
Chinese (zh)
Other versions
CN110730303A
Inventor
傅熠君
赵艺
王梦娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911026154.7A
Publication of CN110730303A
Application granted
Publication of CN110730303B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • AHUMAN NECESSITIES
    • A45HAND OR TRAVELLING ARTICLES
    • A45DHAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D44/00Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • A45D44/005Other cosmetic or toiletry articles, e.g. for hairdressers' rooms for selecting or displaying personal cosmetic colours or hairstyle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image hair dyeing processing method, device, terminal and storage medium, relating to the technical field of image processing. The method comprises the following steps: displaying an image shooting interface; collecting a user image through a camera; respectively performing color adjustment processing on a left hair region and a right hair region in the user image to obtain a target image; and displaying the target image in the image shooting interface. Compared with the related art, in which only a single color adjustment can be applied to the entire hair region, the technical solution provided by the embodiments of the application can perform color adjustment on the left hair region and the right hair region separately, so that the left and right hair regions have different color effects. This enriches the hair dyeing effects and improves the image effect of the finally displayed processed image.

Description

Image hair dyeing processing method, device, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an image hair dyeing processing method, an image hair dyeing processing device, a terminal and a storage medium.
Background
With the development of Internet technology, various image-based special effect processing functions have become popular. For example, related applications provide a virtual hair dyeing function that adjusts the color of the user's hair in an image.
In the related art, the image virtual hair dyeing process may include the following steps: first, acquiring a target image; then, extracting a region to be adjusted from the target image; and then performing hair color adjustment processing on the region to be adjusted, thereby obtaining a target image with adjusted hair color.
In the above related art, the hair dyeing effect is limited to a single effect.
Disclosure of Invention
The embodiments of the application provide an image hair dyeing processing method, an image hair dyeing processing apparatus, a terminal and a storage medium, which can be used to solve the technical problem in the related art that the hair dyeing effect is limited to a single effect. The technical solutions are as follows:
in one aspect, an embodiment of the present application provides an image hair dyeing processing method, where the method includes:
displaying an image shooting interface;
collecting a user image through a camera;
respectively carrying out color adjustment processing on the left hair area and the right hair area in the user image to obtain a target image; wherein the left hair region and the right hair region in the target image have different color effects;
and displaying the target image in the image shooting interface.
In another aspect, an embodiment of the present application provides an image hair dyeing processing apparatus, the apparatus comprising:
the interface display module is used for displaying an image shooting interface;
the image acquisition module is used for acquiring a user image through the camera;
the color adjusting module is used for respectively carrying out color adjustment processing on the left hair area and the right hair area in the user image to obtain a target image; wherein the left hair region and the right hair region in the target image have different color effects;
and the image display module is used for displaying the target image in the image shooting interface.
In yet another aspect, an embodiment of the present application provides a terminal, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the image hair dyeing processing method according to the above aspect.
In yet another aspect, an embodiment of the present application provides a computer-readable storage medium, which stores at least one instruction, at least one program, a code set, or a set of instructions, which is loaded and executed by a processor to implement the image hair dyeing processing method according to the above aspect.
In still another aspect, the present application provides a computer program product, which, when executed by a processor, is configured to implement the above image hair dyeing processing method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the color adjustment processing is respectively carried out on the left hair area and the right hair area in the collected user image, so that the left hair area and the right hair area in the finally displayed target image have different color effects. Compare in correlation technique, only can carry out monochromatic adjustment to whole hair region, the technical scheme that this application embodiment provided can carry out the color adjustment to left side hair region and right side hair region respectively simultaneously, makes left and right sides hair region have different color effects to hair dyeing effect has been enriched, makes the image effect of the processing back image that finally shows promoted.
Drawings
Fig. 1 is a flowchart of an image hair dyeing processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an image hair dyeing processing method according to another embodiment of the present application;
FIG. 3 is a diagram illustrating an example of determining a rendering area in a user image according to the present application;
FIG. 4 is a schematic diagram illustrating a determined boundary line of the present application;
FIG. 5 is a schematic diagram illustrating a boundary line under different face offset angles according to the present application;
FIG. 6 is a schematic diagram illustrating a first rendered image and a second rendered image of the present application;
fig. 7 is a diagram schematically showing a color conversion table according to the present application;
FIG. 8 is a schematic diagram illustrating the acquisition of a rendered image according to one embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating a hair dye effect material of the present application;
FIG. 10 is a schematic diagram illustrating a third rendered image according to the present disclosure;
FIG. 11 is a schematic diagram illustrating the acquisition of a target image according to one embodiment of the present disclosure;
FIG. 12 is a diagram illustrating a mapping relationship between a material configuration mask and a rendering region according to the present application;
FIG. 13 is a schematic diagram illustrating a target image of the present application;
FIG. 14 is a schematic diagram illustrating another target image of the present application;
fig. 15 is a block diagram of an image hair dyeing processing apparatus according to an embodiment of the present application;
fig. 16 is a block diagram of an image hair dyeing processing apparatus according to another embodiment of the present application;
fig. 17 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
The image hair dyeing processing method can be applied to a terminal, and the terminal can be an electronic device with an image shooting function, such as a mobile phone, a tablet personal computer, a smart camera, a wearable device and the like.
A target application program is installed in the terminal, and the target application program has an image hair dyeing function. The target application may be a hairdressing application, a video application, a game application, an instant messaging application, or the like.
The technical solution of the present application will be described below by means of several embodiments.
Referring to fig. 1, a flowchart of an image hair dyeing processing method according to an embodiment of the present application is shown. This embodiment is described using an example in which the method is applied to the terminal described above. The method may include the following steps:
and step 101, displaying an image shooting interface.
The user operates the target application program installed in the terminal, and the terminal displays the image shooting interface.
Optionally, the image shooting interface includes a shooting preview area, a shooting parameter setting area, and a shooting control area. The shooting preview area is used for displaying the framing picture; the shooting parameter setting area is used for displaying shooting parameter setting controls, such as a flash switch control, an HDR (High Dynamic Range) image switch control, a filter control, and the like; the shooting control area is used for displaying shooting controls, such as a shutter control, a camera switching control, an album display control, and the like.
The embodiment of the application does not limit the layout of different functional areas in the image shooting interface, and the setting modes and types of the controls in the different functional areas.
And step 102, acquiring a user image through a camera.
The target application program can call the camera, and when the camera is opened, the terminal can collect the user image through the camera.
Optionally, the camera may be a front camera or a rear camera. When the camera is a front camera, the acquired user image can be the user image of the user who uses the terminal at present; when the camera is a rear camera, the acquired user image can be user images of other users.
Optionally, the user image includes a hair region of the user. The hair region refers to a region occupied by the user's hair in the user image.
Optionally, the user image may be a photographed picture, a locally stored picture, or a picture downloaded from a network. The user image may be an image of any frame of a captured video.
And step 103, respectively performing color adjustment processing on the left hair region and the right hair region in the user image to obtain a target image.
After the user image is acquired, color adjustment processing may be performed on the left hair region and the right hair region in the user image, respectively, to obtain a target image, where the left hair region and the right hair region in the target image have different color effects.
The left hair region refers to the hair region corresponding to the left half face, and the right hair region refers to the hair region corresponding to the right half face. The left half face refers to the half of the face that includes the left eye, the left side of the nose, and the left side of the lips; the right half face refers to the half that includes the right eye, the right side of the nose, and the right side of the lips.
Alternatively, the color effect of the left hair region may be a monochrome color effect or a multicolor color effect. Similarly, the color effect of the right hair region may be a monochrome color effect or a multicolor color effect.
For example, the left hair region and the right hair region in the captured user image are both black, and then the color of the left hair region may be adjusted to blue and the color of the right hair region may be adjusted to red, that is, the color effect of the left hair region is blue and the color effect of the right hair region is red in the target image.
For another example, the collected user image may have black left and right hair regions, and then the left hair region may be adjusted to have alternate colors of blue and purple, and the right hair region may be adjusted to have alternate colors of red and yellow, that is, the target image may have alternate color effects of blue and purple for the left hair region and alternate color effects of red and yellow for the right hair region.
And step 104, displaying the target image in the image shooting interface.
After the target image is acquired, the target image may be displayed in an image capturing interface.
To sum up, according to the technical solution provided by the embodiments of the application, color adjustment processing is performed on the left hair region and the right hair region in the acquired user image respectively, so that the left hair region and the right hair region in the finally displayed target image have different color effects. Compared with the related art, in which only a single color adjustment can be applied to the entire hair region, the technical solution provided by the embodiments of the application can perform color adjustment on the left hair region and the right hair region separately, so that the left and right hair regions have different color effects. This enriches the hair dyeing effects and improves the image effect of the finally displayed processed image.
Referring to fig. 2, a flowchart of an image hair dyeing processing method according to another embodiment of the present application is shown. This embodiment is described using an example in which the method is applied to the terminal described above. The method may include the following steps:
step 201, displaying an image shooting interface.
This step is the same as or similar to the step 101 in the embodiment of fig. 1, and is not described here again.
Step 202, collecting a user image through a camera.
This step is the same as or similar to the step 102 in the embodiment of fig. 1, and is not repeated here.
Step 203, determining the boundary line in the user image.
The boundary line is a line determined based on key points of the face region in the user image. The face area refers to an area where a face in the user image is located. The face includes forehead, eyebrow, eye, nose, mouth, chin and cheek.
The key points are points located on the symmetry line of the human face; the regions to the left and right of the symmetry line are symmetric or approximately symmetric. The key points include, but are not limited to, at least one of the following: the eyebrow center point, the philtrum midpoint (the midpoint of the philtrum, between the nose and the upper lip), the nose tip point, the lip center point, the forehead center point, the chin center point, and the like.
Alternatively, the boundary line may be a line passing through the eyebrow center point and the philtrum midpoint. Of course, in other examples, the boundary line may also be a line passing through the nose tip point and the philtrum midpoint, a line passing through the lip center point and the forehead center point, or a line passing through the chin center point and the forehead center point, which is not limited in the embodiments of the application. Optionally, the boundary line is a line passing through a first key point and a second key point, where the first key point is located on the face symmetry line in the upper half of the face region, and the second key point is located on the face symmetry line in the lower half of the face region. For example, the first key point may be the forehead center point and the second key point may be the chin center point.
By determining a line passing through key points of the face region as the boundary line, the left hair region and the right hair region can be segmented more accurately by the boundary line.
Optionally, determining the boundary line in the user image may include the following sub-steps:
(1) a rendering region in the user image is determined.
The rendering region refers to a rectangular region including a hair region in the user image. That is, the rendering area is a rectangular area that can include all of the hair area in the user image.
Optionally, the rendering area refers to a smallest rectangular area including a hair area in the user image. Therefore, the calculation amount can be reduced, and the rendering efficiency can be improved.
In some other examples, the rendering area may also be a circular area, an elliptical area, or a polygonal area, which is not limited by the embodiment of the present application.
Optionally, the determining the rendering area in the user image may include the following steps:
<1> determining a face offset angle in a user image.
The face offset angle is used to represent the angle by which the face center line deviates from the gravity direction.
The face offset angle may be a positive value or a negative value. In one example, when the face offset angle is a positive value, it indicates that the face center line is offset to the right by a certain angle; when the face offset angle is a negative value, the face central line is offset by a certain angle leftwards. In another example, when the face offset angle is a positive value, it indicates that the face center line is offset to the left by a certain angle; when the human face offset angle is a negative value, the human face central line is offset to the right by a certain angle.
Optionally, the face offset angle may be the face roll angle, whose value range is [0, 180] or [0, -180].
Alternatively, the face offset angle may not distinguish between positive and negative values, in which case the face offset direction cannot be derived from the face offset angle alone. In this case, the face offset direction in the user image can also be determined separately.
And <2> determining the center line of the face according to the face offset angle and the gravity direction.
After the face offset angle is determined, the center line of the face in the user image can be determined by combining the gravity direction.
<3> determining a rendering area according to the center line of the face and the hair area.
After the center line of the face is determined, a rendering area can be determined by combining the hair area in the user image, and the rendering area can be a rectangular frame comprising the hair area.
The face center line is perpendicular to the top edge and the bottom edge of the rendering area. It should be noted that the top edge of the rendering area is the edge of the rendering area located above the top of the head, and the bottom edge of the rendering area is the edge parallel to the top edge.
Illustratively, as shown in fig. 3, a schematic diagram of determining the rendering area in the user image is shown. As shown in part (a) of fig. 3, the face offset angle is -3 degrees, that is, the face center line deviates 3 degrees to the right of the gravity direction, so the face center line can be determined from the face offset angle and the gravity direction. Then, as shown in part (b) of fig. 3, based on the face center line, a rectangle can be found such that the face center line is perpendicular to its top and bottom edges and the rectangle includes the hair region in the user image; this rectangular frame can be used as the rendering area. A minimal sketch of this construction follows.
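The following Python sketch illustrates one way to compute such a rendering area. It assumes a binary hair mask and a roll angle in degrees measured against the gravity direction; the function and argument names are illustrative and do not come from the patent.

```python
import numpy as np

def rendering_region(hair_mask: np.ndarray, roll_deg: float) -> np.ndarray:
    """Smallest rectangle containing the hair region whose top/bottom
    edges are perpendicular to the face center line (a sketch; the
    input format is an assumption)."""
    ys, xs = np.nonzero(hair_mask)            # hair pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(np.float64)

    # Rotate coordinates so the face center line becomes vertical.
    t = np.deg2rad(roll_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    rotated = pts @ rot.T

    # Axis-aligned bounding box in the rotated frame.
    x0, y0 = rotated.min(axis=0)
    x1, y1 = rotated.max(axis=0)
    corners = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]])

    # Rotate the four corners back to image coordinates (A, C, D, F).
    return corners @ rot
```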
(2) Determining the eyebrow center point, the philtrum midpoint, and the forehead center point of the face region.
After the rendering area is determined, the eyebrow center point, the philtrum midpoint, and the forehead center point of the face region can be further determined.
Optionally, the eyebrow center point, the philtrum midpoint, and the forehead center point of the face region may be determined by face detection and feature point positioning. Alternatively, they may be determined by calling a face recognition model. The embodiments of the application do not limit this.
(3) A first intersection and a second intersection are obtained.
The first intersection point is the intersection of the line connecting the eyebrow center point and the forehead center point with the top edge of the rendering area. The second intersection point is the intersection of the bottom edge of the rendering area with the straight line that passes through the philtrum midpoint and is perpendicular to the bottom edge of the user image.
It should be noted that the top edge of the rendering area is the edge of the rendering area located above the top of the head, and the bottom edge of the rendering area is the edge parallel to the top edge. The bottom edge of the user image is the edge of the user image located below the head, parallel to the edge above the head.
(4) Sequentially connecting the first intersection point, the eyebrow center point, the philtrum midpoint, and the second intersection point to obtain the boundary line.
After the first intersection point, the eyebrow center point, the philtrum midpoint, and the second intersection point are obtained, they are connected in sequence to obtain the boundary line.
Illustratively, as shown in FIG. 4, a schematic diagram of determining the boundary line is shown. First, the rendering area ACDF of the user image A0C0D0F0 may be determined; then, the eyebrow center point G, the philtrum midpoint H, and the forehead center point K of the face region are determined. Next, the line connecting the eyebrow center point G and the forehead center point K is extended to intersect the top edge AC of the rendering area ACDF, giving the first intersection point B. The vertical line passing through the philtrum midpoint H and perpendicular to the bottom edge D0F0 of the user image A0C0D0F0 intersects the bottom edge DF of the rendering area ACDF, giving the second intersection point E. Finally, the first intersection point B, the eyebrow center point G, the philtrum midpoint H, and the second intersection point E are connected in sequence to obtain the boundary line BGHE.
Based on the above method for determining the boundary line, the boundary line can be accurately determined regardless of the face offset angle in the user image.
Illustratively, as shown in fig. 5, a schematic diagram of the boundary line under different face offset angles is shown. As can be seen from fig. 5, no matter what the face offset angle in the user image is, the boundary line can be determined by the above method; a sketch of the construction is given below.
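As a concrete illustration of sub-steps (2) to (4), the following Python sketch computes the polyline BGHE of FIG. 4 from the three key points and the edges of the rendering area. The point names follow the figure; the two-dimensional argument layout is an assumption.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through p1-p2 and p3-p4."""
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # zero if the lines are parallel
    s = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + s * d1

def boundary_polyline(G, K, H, A, C, D, F, image_down):
    """Boundary line BGHE of FIG. 4 (a sketch).

    G: eyebrow center point, K: forehead center point, H: philtrum
    midpoint; A-C and D-F: top and bottom edges of the rendering area;
    image_down: unit vector perpendicular to the user image's bottom
    edge. All arguments are 2-D numpy arrays."""
    B = line_intersection(G, K, A, C)               # first intersection point
    E = line_intersection(H, H + image_down, D, F)  # second intersection point
    return [B, G, H, E]   # connected in sequence: the boundary line
```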
Step 204, determining the left hair region and the right hair region based on the boundary line.
After the boundary line is determined, the left hair region and the right hair region may be further determined based on it. Here, the left hair region refers to the hair region located on the left side of the boundary line, and the right hair region refers to the hair region located on the right side of the boundary line.
With continued reference to fig. 4, the left hair region may refer to the region bounded by ABGHEF, and the right hair region may refer to the region bounded by CBGHED.
And step 205, adjusting the pixel values of the left hair area and the right hair area respectively to obtain a target image.
After the left hair region and the right hair region are determined, the pixel values of the respective regions may be adjusted to obtain the target image.
In a possible embodiment, the adjusting the pixel values of the left hair region and the right hair region respectively to obtain the target image includes:
(1) performing first rendering processing on the rendering area to obtain a first rendering image; and performing second rendering processing on the rendering area to obtain a second rendering image, wherein the first rendering image and the second rendering image have different color effects.
Illustratively, as shown in fig. 6, a schematic diagram of a first rendered image and a second rendered image is exemplarily shown. Part (a) of fig. 6 is a first rendered image; part (b) of fig. 6 is a second rendered image, and the first rendered image and the second rendered image have different color effects.
Optionally, the performing the first rendering process on the rendering area to obtain the first rendered image may include the following steps:
and <1> converting the pixel value of each pixel in the rendering area according to the first color conversion table to obtain the converted rendering area.
The first color conversion table is used to convert the pixel value of a pixel: looking up an input pixel value in the first color conversion table yields the converted pixel value corresponding to the current pixel value.
Illustratively, as shown in fig. 7, a schematic diagram of a color conversion table is shown. The color conversion table may be a 512 × 512 picture composed of 8 × 8 = 64 large squares, each of which is 64 × 64 pixels. Each of the 64 large squares corresponds to one value of the B (blue) channel; within each large square, the horizontal axis corresponds to the value of the R (red) channel and the vertical axis corresponds to the value of the G (green) channel.
The R, G, and B values each range from 0 to 255, while each large square spans only 64 pixels along each axis, so the 256 channel values cannot correspond one-to-one to the 64 pixels. The step between adjacent R values along the horizontal axis of each large square is therefore 256/64 = 4, giving the R-channel set [0, 4, 8, 12, 16, ..., 252]; similarly, the step of the G channel is 256/64 = 4, giving the G-channel set [0, 4, 8, 12, 16, ..., 252].
Therefore, the pixel value before conversion can be used to locate the corresponding coordinates in the color conversion table, and the pixel value found there is the converted pixel value. A lookup sketch follows.
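Under the layout just described, the lookup can be sketched as follows in Python. The row-major ordering of the 64 B-channel tiles is an assumption, since the description above does not spell it out.

```python
import numpy as np

def lookup(pixel, lut):
    """Convert one (R, G, B) pixel with a 512x512x3 color conversion
    table laid out as an 8x8 grid of 64x64 tiles (one tile per
    quantized B value)."""
    r, g, b = (c // 4 for c in pixel)   # quantize 0..255 to 0..63 (step 4)
    row = (b // 8) * 64 + g             # tile row, then G on the vertical axis
    col = (b % 8) * 64 + r              # tile column, then R on the horizontal axis
    return lut[row, col]                # the converted pixel value
```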
And <2> rendering the converted rendering area by adopting the first hair dyeing effect material to obtain a first rendering image.
The first hair dye effect material is a hair dye effect material provided by a designer in advance.
Optionally, the first hair dyeing effect material comprises at least one of: a soft light material, a multiply material (positive film overlay), a strong light (hard light) material, and an overlay material. In addition, other materials, such as a blurring material, may also be included, which is not limited in the embodiments of the application.
Optionally, the rendering the converted rendering region with the first hair dyeing effect material to obtain the first rendered image may include the following steps:
performing soft light processing on the converted rendering area using the soft light material to obtain a first intermediate image;
performing multiply processing on the first intermediate image using the multiply material to obtain a second intermediate image;
performing strong light processing on the second intermediate image using the strong light material to obtain a third intermediate image;
and performing overlay processing on the third intermediate image using the overlay material to obtain the first rendered image.
Illustratively, as shown in fig. 8: part (a) of fig. 8 is the rendering area in the user image; the pixel values in the rendering area are converted according to the first color conversion table to obtain the converted rendering area shown in part (b) of fig. 8. Then, soft light processing may be performed on the converted rendering area using the soft light material to obtain the first intermediate image shown in part (c) of fig. 8; multiply processing is then performed on the first intermediate image using the multiply material to obtain the second intermediate image shown in part (d) of fig. 8; strong light processing is performed on the second intermediate image using the strong light material to obtain the third intermediate image shown in part (e) of fig. 8; finally, overlay processing is performed on the third intermediate image using the overlay material to obtain the first rendered image shown in part (f) of fig. 8. Formulas for these blend steps are sketched below.
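The patent does not give formulas for these materials, but the names correspond to standard blend modes. The sketch below uses the common multiply, hard light, and overlay definitions and the Pegtop soft light variant, all on float images in [0, 1]; treating each material as a full-size image layer is an assumption.

```python
import numpy as np

def multiply(base, blend):            # positive film overlay (multiply)
    return base * blend

def hard_light(base, blend):          # strong light (hard light)
    return np.where(blend < 0.5,
                    2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))

def overlay(base, blend):             # overlay: hard light with roles swapped
    return hard_light(blend, base)

def soft_light(base, blend):          # soft light (Pegtop variant)
    return (1 - 2 * blend) * base ** 2 + 2 * blend * base

def first_rendering(converted, m):
    """Chain the four materials in the order of steps <1>-<4> above;
    `m` maps material names to float arrays shaped like `converted`."""
    img = soft_light(converted, m["soft_light"])   # first intermediate image
    img = multiply(img, m["multiply"])             # second intermediate image
    img = hard_light(img, m["hard_light"])         # third intermediate image
    return overlay(img, m["overlay"])              # first rendered image
```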
It should be noted that rendering the converted rendering area using the first hair dyeing effect material may include at least one of the above four steps, and when the first hair dyeing effect material further includes other materials, other processing steps may also be included, which is not limited in the embodiments of the application.
The step of performing the second rendering processing on the rendering area to obtain the second rendering image is the same as or similar to the step of performing the first rendering processing on the rendering area to obtain the first rendering image. The method mainly comprises the following steps:
and <1> converting the pixel value of each pixel in the rendering area according to the second color conversion table to obtain the converted rendering area.
And <2> rendering the converted rendering area by adopting a second hair dyeing effect material to obtain a second rendering image.
The step of rendering the converted rendering area by using the second hair dyeing effect material to obtain the second rendering image is the same as or similar to the step of rendering the converted rendering area by using the first hair dyeing effect material to obtain the first rendering image, and the steps are not repeated here.
Optionally, before the first rendering processing is performed on the rendering area to obtain the first rendered image and the second rendering processing is performed on the rendering area to obtain the second rendered image, the first color conversion table, the second color conversion table, the first hair dyeing effect material, and the second hair dyeing effect material need to be obtained.
Illustratively, as shown in fig. 9, obtaining the first rendered image requires the first color conversion table and the first hair dyeing effect materials, which include a soft light material, a multiply material, a strong light material, and an overlay material. Obtaining the second rendered image requires the second color conversion table and the second hair dyeing effect materials, which likewise include a soft light material, a multiply material, a strong light material, and an overlay material.
(2) Extracting pixel values of a region corresponding to the left hair region in the first rendering image to obtain first region pixel values; extracting pixel values of a region corresponding to the right hair region in the second rendering image to obtain second region pixel values;
after the first rendered image is acquired, the position of the left hair region may be combined to determine a region corresponding to the left hair region in the first rendered image, and further, the pixel value of the region may be extracted to obtain the pixel value of the first region.
Similarly, after the second rendered image is acquired, a region corresponding to the right hair region in the second rendered image may be determined according to the position of the right hair region, and further, a pixel value of the region may be extracted to obtain a second region pixel value.
(3) And filling the pixel values of the first area into the left hair area, and filling the pixel values of the second area into the right hair area to obtain a third rendering image.
After determining the first region pixel values and the second region pixel values, the first region pixel values may be filled into the left hair region, and the second region pixel values may be filled into the right hair region, so as to obtain a third rendered image. In the third rendered image, the left and right sides of the borderline have different color effects.
Illustratively, as shown in fig. 10, a schematic diagram of a third rendered image is illustrated. Part (a) of fig. 10 shows a first rendered image having a first color effect; part (b) of fig. 10 shows a second rendered image having a second color effect; part (c) of fig. 10 shows a third rendered image having a first color effect on the left side and a second color effect on the right side.
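A compact Python sketch of steps (2) and (3): pixels to the left of the boundary line are taken from the first rendered image and pixels to its right from the second. Representing the two sides as a boolean mask derived from the boundary line is an assumption about the data layout.

```python
import numpy as np

def third_rendering(first_img, second_img, left_mask):
    """Compose the third rendered image: left of the boundary line from
    the first rendered image, right of it from the second.
    left_mask: HxW boolean array, True to the left of the boundary line."""
    return np.where(left_mask[..., None], first_img, second_img)
```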
(4) And obtaining a target image according to the third rendering image and the hair segmentation reference image.
The hair segmentation reference image is an image corresponding to the user image and segmented with a hair region and a non-hair region. The hair segmentation reference map may also be referred to as a hair region mask map.
Optionally, an image segmentation model can be called to process the user image to obtain the hair segmentation reference map; the image segmentation model is used for segmenting the hair region and the non-hair region in the user image. The image segmentation model may be a MobileNetV2 model, a ResNet50 model, a MobileNetV1 model, a DeepLabV3+ model, or the like, which is not limited in the embodiments of the application.
Illustratively, as shown in fig. 11, a schematic diagram of acquiring a target image is exemplarily shown. Part (a) in fig. 11 is a third rendered image, part (b) in fig. 11 is a hair segmentation reference map, and according to the third rendered image and the hair segmentation reference map, a hair region can be determined from the third rendered image, and a target image as shown in part (c) in fig. 11 is further obtained.
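The composition in FIG. 11 can be sketched as an alpha blend with the hair segmentation reference map. Treating the map as a float matte in [0, 1] is an assumption; a hard 0/1 mask behaves the same way.

```python
import numpy as np

def compose_target(user_image, rendered, hair_matte):
    """Keep rendered pixels inside the hair region and the original
    user-image pixels elsewhere."""
    m = hair_matte[..., None]               # broadcast over the RGB channels
    return m * rendered + (1 - m) * user_image
```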
In another possible embodiment, the adjusting the pixel values of the left hair region and the right hair region respectively to obtain the target image includes:
(1) performing first rendering processing on the rendering area to obtain a first rendering image; and performing second rendering processing on the rendering area to obtain a second rendering image, wherein the first rendering image and the second rendering image have different color effects.
This step is the same as or similar to that described above and will not be described further herein.
(2) And determining a pixel value calculation mode of each pixel in the rendering area according to the mapping relation between the material configuration mask graph and the rendering area.
The material configuration mask map is used for providing a pixel value calculation mode of each pixel.
Illustratively, as shown in fig. 12, a schematic diagram of the mapping relationship between the material configuration mask map and the rendering area is shown. As shown in part (a) of fig. 12, the material configuration mask map may be a rectangular image. The material configuration mask map includes four vertices A', C', D', and F'; a top edge center point B'; a bottom edge center point E'; a one-third position point G' of the line connecting the top edge center point B' and the bottom edge center point E'; and a two-thirds position point H' of that line. The rendering area includes four vertices A, C, D, and F; the first intersection point B, the eyebrow center point G, the philtrum midpoint H, and the second intersection point E.
The four vertices A', C', D', and F' of the material configuration mask map have mapping relationships with the four vertices A, C, D, and F of the rendering area; the top edge center point B' of the material configuration mask map has a mapping relationship with the first intersection point B; the bottom edge center point E' of the material configuration mask map has a mapping relationship with the second intersection point E; the one-third position point G' of the line connecting B' and E' has a mapping relationship with the eyebrow center point G; and the two-thirds position point H' of that line has a mapping relationship with the philtrum midpoint H.
Optionally, the determining the pixel value of each pixel in the rendering area according to the pixel value calculation mode of each pixel in the rendering area, the first rendering image and the second rendering image includes the following steps:
for the ith pixel in the rendering area, acquiring the pixel value of the corresponding pixel of the ith pixel in the first rendering image and the pixel value of the corresponding pixel in the second rendering image; determining the pixel value of the ith pixel according to the pixel value calculation mode of the ith pixel, the pixel value of the corresponding pixel of the ith pixel in the first rendering image and the pixel value of the corresponding pixel in the second rendering image; wherein i is a positive integer.
The pixel value of the ith pixel may be calculated by a weighted sum of the pixel value of the corresponding pixel of the ith pixel in the first rendered image and the pixel value of the corresponding pixel of the ith pixel in the second rendered image.
For example, assuming that the pixel value of the corresponding pixel of the i-th pixel in the first rendered image is X1, and the pixel value of the corresponding pixel in the second rendered image is X2, the pixel value of the i-th pixel may be calculated as (1 - W) × X1 + W × X2, where W represents the weight of the second rendered image's pixel value in the pixel value of the i-th pixel. A vectorized sketch follows.
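Vectorized over the whole rendering area, the formula reads as below. Sampling the per-pixel weight W from the material configuration mask map through the FIG. 12 mapping is assumed to have been done already.

```python
import numpy as np

def blend_renderings(weight, first_img, second_img):
    """Per-pixel weighted sum (1 - W) * X1 + W * X2, where `weight` is
    the HxW array of W values read from the material configuration
    mask map."""
    w = weight[..., None]                   # broadcast over the RGB channels
    return (1 - w) * first_img + w * second_img
```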
(3) And determining the pixel value of each pixel in the rendering area according to the pixel value calculation mode of each pixel in the rendering area, the first rendering image and the second rendering image.
(4) And filling the pixel value of each pixel in the rendering area into the rendering area to obtain a fourth rendering image.
This step is the same as or similar to that described above and will not be described further herein.
(5) And obtaining a target image according to the fourth rendering image and the hair segmentation reference image.
The hair segmentation reference image corresponds to the user image and is an image segmented by a hair region and a non-hair region.
This step is the same as or similar to that described above and will not be described further herein.
With this pixel-value calculation approach, the target image is computed from the pixel values of corresponding pixels in both the first rendered image and the second rendered image, so the transition between the two color effects across the left hair region and the right hair region is more natural.
In yet another possible embodiment, the target image may be obtained by:
(1) Converting the pixel value of each pixel in the rendering area according to the first color conversion table to obtain a converted rendering area. (2) Rendering the converted rendering area using the first hair dyeing effect material to obtain a first rendered image. (3) Obtaining a first target image according to the rendered image obtained in step (2) and the hair segmentation reference map, where the hair region of the first target image has a first color effect. (4) Converting the pixel value of each pixel in the rendering area according to the second color conversion table to obtain a converted rendering area. (5) Rendering the converted rendering area using the second hair dyeing effect material to obtain a second rendered image. (6) Obtaining a second target image according to the rendered image obtained in step (5) and the hair segmentation reference map, where the hair region of the second target image has a second color effect. (7) Extracting pixel values of the region corresponding to the left hair region in the first target image to obtain first region pixel values, and extracting pixel values of the region corresponding to the right hair region in the second target image to obtain second region pixel values. (8) Filling the first region pixel values into the left hair region and the second region pixel values into the right hair region to obtain the target image.
In yet another possible embodiment, the target image may be obtained by:
(1) Converting the pixel value of each pixel in the rendering area according to the first color conversion table to obtain a converted rendering area. (2) Rendering the converted rendering area using the first hair dyeing effect material to obtain a first rendered image. (3) Obtaining a first target image according to the rendered image obtained in step (2) and the hair segmentation reference map, where the hair region of the first target image has a first color effect. (4) Converting the pixel value of each pixel in the rendering area according to the second color conversion table to obtain a converted rendering area. (5) Rendering the converted rendering area using the second hair dyeing effect material to obtain a second rendered image. (6) Obtaining a second target image according to the rendered image obtained in step (5) and the hair segmentation reference map, where the hair region of the second target image has a second color effect. (7) Determining the pixel value calculation mode of each pixel in the rendering area according to the mapping relationship between the material configuration mask map and the rendering area. (8) Determining the pixel value of each pixel in the hair region according to the pixel value calculation mode of each pixel in the rendering area, the first target image, and the second target image. (9) Filling the pixel values of all pixels in the hair region into the hair region to obtain the target image.
And step 206, displaying the target image in the image shooting interface.
This step is the same as or similar to the step 104 in the embodiment of fig. 1, and is not described here again.
In summary, according to the technical solution provided by the embodiments of the application, a boundary line in the user image is accurately determined, the left hair region and the right hair region are then determined according to the boundary line, and the pixel values of the left hair region and the right hair region are adjusted respectively, so as to obtain an image in which the left and right hair regions have different color effects. Because the boundary line is determined from key points of the face region (such as the eyebrow center point, the philtrum midpoint, and the forehead center point), it can be accurately determined no matter how the face offset angle changes, so the left hair region and the right hair region can be accurately determined and are guaranteed to have different color effects.
In addition, by determining a line passing through key points of the face region as the boundary line, the left hair region and the right hair region can be segmented more accurately by the boundary line, and the boundary line can be determined accurately regardless of the face offset angle in the user image.
In addition, with the pixel-value calculation approach, the obtained target image is computed from the pixel values of corresponding pixels in the first rendered image and the second rendered image, so the transition between the two color effects across the left hair region and the right hair region is more natural.
The following describes advantageous effects of the technical solutions provided by the embodiments of the present application from the terminal side.
As shown in fig. 13, a schematic diagram of a target image is shown. The user can switch the camera to the selfie mode. The image shooting interface 130 includes an image shooting control 131, and the user can shoot an image by clicking the image shooting control 131. The image shooting interface 130 further includes an image hair dyeing control 132; after the user clicks the image hair dyeing control 132, a target image is obtained, where the left hair region and the right hair region in the target image have different color effects. Optionally, the user may also trigger the hair dyeing processing by voice or expression. In other embodiments, the hair dyeing processing may be performed once a human face is detected. The embodiments of the application do not limit this. As shown in fig. 14, when the face offset angle of the user changes, the boundary line can still be accurately determined by the technical solution provided in the embodiments of the application, ensuring that the left hair region and the right hair region have different, unmixed color effects. Even if the user turns the head at will during shooting, the left hair region and the right hair region are guaranteed to each retain a different color effect.
The technical scheme provided by the embodiment of the application provides a brand-new hair dyeing effect and provides more choices for users.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 15, a block diagram of an image hair dyeing processing apparatus according to an embodiment of the present application is shown. The apparatus has the function of implementing the above image hair dyeing processing method examples; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be the terminal described above, or may be provided on the terminal. The apparatus 1500 may include: an interface display module 1510, an image acquisition module 1520, a color adjustment module 1530, and an image display module 1540.
And an interface display module 1510 configured to display an image capturing interface.
And an image acquisition module 1520, configured to acquire a user image through the camera.
The color adjusting module 1530 is configured to perform color adjustment processing on the left hair region and the right hair region in the user image, respectively, to obtain a target image; wherein the left hair region and the right hair region in the target image have different color effects.
An image display module 1540, configured to display the target image in the image capturing interface.
To sum up, according to the technical solution provided by the embodiments of the application, color adjustment processing is performed on the left hair region and the right hair region in the acquired user image respectively, so that the left hair region and the right hair region in the finally displayed target image have different color effects. Compared with the related art, in which only a single color adjustment can be applied to the entire hair region, the technical solution provided by the embodiments of the application can perform color adjustment on the left hair region and the right hair region separately, so that the left and right hair regions have different color effects. This enriches the hair dyeing effects and improves the image effect of the finally displayed processed image.
In some possible designs, as shown in fig. 16, the color adjustment module 1530 includes:
a boundary determining unit 1531 configured to determine a boundary in the user image, where the boundary is a line determined based on a key point of a face region in the user image.
An area determination unit 1532 configured to determine the left hair area and the right hair area according to the dividing line, the left hair area being a hair area located on the left side of the dividing line, and the right hair area being a hair area located on the right side of the dividing line.
The pixel adjusting unit 1533 is configured to adjust pixel values of the left hair region and the right hair region respectively to obtain the target image.
In some possible designs, the boundary determining unit 1531 is configured to: determine a rendering area in the user image, where the rendering area refers to a rectangular area including the hair region in the user image; determine the eyebrow center point, the philtrum midpoint, and the forehead center point of the face region; acquire a first intersection point and a second intersection point, where the first intersection point is the intersection of the line connecting the eyebrow center point and the forehead center point with the top edge of the rendering area, and the second intersection point is the intersection of the bottom edge of the rendering area with the straight line passing through the philtrum midpoint and perpendicular to the bottom edge of the user image; and sequentially connect the first intersection point, the eyebrow center point, the philtrum midpoint, and the second intersection point to obtain the boundary line.
In some possible designs, the boundary determining unit 1531 is configured to determine a face offset angle in the user image, where the face offset angle is used to represent the angle by which the face center line deviates from the gravity direction; determine the face center line according to the face offset angle and the gravity direction; and determine the rendering area according to the face center line and the hair region, where the face center line is perpendicular to the top edge and the bottom edge of the rendering area.
In some possible designs, the pixel adjustment unit 1533 is configured to perform a first rendering process on the rendering region to obtain a first rendered image; performing second rendering processing on the rendering area to obtain a second rendering image, wherein the first rendering image and the second rendering image have different color effects; extracting pixel values of a region corresponding to the left hair region in the first rendering image to obtain first region pixel values; extracting pixel values of a region corresponding to the right hair region in the second rendering image to obtain second region pixel values; filling the first region pixel values into the left hair region, and filling the second region pixel values into the right hair region to obtain a third rendered image; obtaining the target image according to the third rendering image and the hair segmentation reference image; the hair segmentation reference image is an image which corresponds to the user image and is segmented with a hair region and a non-hair region.
In some possible designs, the pixel adjustment unit 1533 is configured to perform a first rendering process on the rendering region to obtain a first rendered image; performing second rendering processing on the rendering area to obtain a second rendering image, wherein the first rendering image and the second rendering image have different color effects; determining a pixel value calculation mode of each pixel in the rendering area according to a mapping relation between a material configuration mask graph and the rendering area; determining the pixel value of each pixel in the rendering area according to the pixel value calculation mode of each pixel in the rendering area, the first rendering image and the second rendering image; filling the pixel value of each pixel in the rendering area to obtain a fourth rendering image; obtaining the target image according to the fourth rendering image and the hair segmentation reference image; the hair segmentation reference image is an image which corresponds to the user image and is segmented with a hair region and a non-hair region.
In some possible designs, the material configuration mask map is a rectangular image; the four vertices of the material configuration mask map have the mapping relationships with the four vertices of the rendering area; the top edge center point of the material configuration mask map has the mapping relationship with the first intersection point; the bottom edge center point of the material configuration mask map has the mapping relationship with the second intersection point; the one-third position point of the line connecting the top edge center point and the bottom edge center point of the material configuration mask map has the mapping relationship with the eyebrow center point; and the two-thirds position point of that line has the mapping relationship with the philtrum midpoint.
In some possible designs, the pixel adjusting unit 1533 is configured to, for an ith pixel in the rendering region, obtain a pixel value of a corresponding pixel of the ith pixel in the first rendering image and a pixel value of a corresponding pixel in the second rendering image; determining the pixel value of the ith pixel according to the pixel value calculation mode of the ith pixel, the pixel value of the corresponding pixel of the ith pixel in the first rendering image and the pixel value of the corresponding pixel in the second rendering image; wherein i is a positive integer.
In some possible designs, the pixel adjusting unit 1533 is configured to convert the pixel value of each pixel in the rendering region according to a first color conversion table to obtain a converted rendering region; and rendering the converted rendering area by adopting a first hair dyeing effect material to obtain a first rendering image.
In some possible designs, the pixel adjusting unit 1533 is configured to perform soft light processing on the converted rendering area using the soft light material to obtain a first intermediate image; perform multiply processing on the first intermediate image using the multiply material to obtain a second intermediate image; perform strong light processing on the second intermediate image using the strong light material to obtain a third intermediate image; and perform overlay processing on the third intermediate image using the overlay material to obtain the first rendered image.
In some possible designs, as shown in fig. 16, the apparatus 1500 further comprises: a model processing module 1550.
The model processing module 1550 is configured to invoke an image segmentation model to process the user image to obtain the hair segmentation reference image, wherein the image segmentation model is used to segment the hair region and the non-hair region in the user image.
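A sketch of what invoking such a model could look like with ONNX Runtime; the model file name, the 256x256 input size and the single-channel probability output layout are all assumptions made for illustration, as no concrete network is named here:

```python
import numpy as np
import cv2
import onnxruntime as ort

# "hair_segmentation.onnx" and the NCHW float input are assumptions.
session = ort.InferenceSession("hair_segmentation.onnx")

def hair_segmentation_reference(user_bgr):
    """Run the segmentation model and return a hair/non-hair mask at
    the user image's own resolution (255 = hair, 0 = non-hair)."""
    h, w = user_bgr.shape[:2]
    inp = cv2.resize(user_bgr, (256, 256)).astype(np.float32) / 255.0
    inp = inp.transpose(2, 0, 1)[np.newaxis]  # HWC -> NCHW
    prob = session.run(None, {session.get_inputs()[0].name: inp})[0]
    mask = (prob[0, 0] > 0.5).astype(np.uint8) * 255  # assumed 1x1xHxW output
    return cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
```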
In some possible designs, as shown in fig. 16, the apparatus 1500 further comprises: a dynamic generation module 1560.
A dynamic generation module 1560, configured to generate a dynamic picture or video including the target image.
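A minimal sketch of this step using imageio (an assumed choice of encoder; any encoder would do), given the processed target images as RGB frames:

```python
import imageio.v2 as imageio

def export_dynamic_outputs(frames, gif_path="dyed.gif", mp4_path="dyed.mp4"):
    """Write a dynamic picture (GIF) and a video from a sequence of
    target images (RGB uint8 arrays of equal size)."""
    imageio.mimsave(gif_path, frames, duration=0.1)  # ~10 fps GIF
    imageio.mimsave(mp4_path, frames, fps=25)        # needs imageio-ffmpeg
```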
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the functional modules described above is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for the details of their implementation, refer to the method embodiments, which are not repeated here.
Referring to fig. 17, a block diagram of a terminal according to an embodiment of the present application is shown. In general, terminal 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1701 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1702 is used to store at least one instruction, at least one program, a set of codes, or a set of instructions to be executed by the processor 1701 to implement the image hair dyeing processing method provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device may include: at least one of a communication interface 1704, a display screen 1705, audio circuitry 1706, a camera assembly 1707, a positioning assembly 1708, and a power supply 1709.
Those skilled in the art will appreciate that the architecture shown in fig. 17 does not constitute a limitation on terminal 1700: the terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions which, when executed by a processor, implements the above image hair dyeing processing method.
In an exemplary embodiment, there is also provided a computer program product which, when executed by a processor, implements the above image hair dyeing processing method.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. An image hair dyeing processing method, characterized in that the method comprises:
displaying an image shooting interface;
collecting a user image through a camera;
determining a rendering area in the user image; wherein the rendering region refers to a rectangular region including a hair region in the user image;
determining an eyebrow center point, a philtrum midpoint and a forehead center point of a face region in the user image;
acquiring a first intersection point and a second intersection point; wherein the first intersection point is the intersection of the line connecting the eyebrow center point and the forehead center point with the top edge of the rendering region, and the second intersection point is the intersection of the bottom edge of the rendering region with a straight line that passes through the philtrum midpoint and is perpendicular to the bottom edge of the user image;
sequentially connecting the first intersection point, the eyebrow center point, the philtrum midpoint and the second intersection point to obtain a dividing line in the user image;
determining a left hair region and a right hair region according to the dividing line, the left hair region being the hair region of the rendering region located to the left of the dividing line, and the right hair region being the hair region of the rendering region located to the right of the dividing line;
performing color adjustment processing on the left hair region and the right hair region in the user image respectively to obtain a target image; wherein the left hair region and the right hair region in the target image have different color effects;
and displaying the target image in the image shooting interface.
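For illustration, a minimal Python sketch of the geometric construction in claim 1, with (x, y) pixel coordinates (origin at the top left) and the rendering region given as (x0, y0, x1, y1); all function and argument names are illustrative:

```python
import numpy as np

def dividing_line(brow, forehead, philtrum, region):
    """Build the dividing line of claim 1.

    brow, forehead and philtrum are (x, y) face keypoints; region is
    the rendering region as (x0, y0, x1, y1). Returns the four points
    to connect in order: [first_intersection, brow, philtrum,
    second_intersection].
    """
    x0, y0, x1, y1 = region
    bx, by = brow
    fx, fy = forehead
    # First intersection: extend the eyebrow-centre -> forehead-centre
    # line until it meets the region's top edge y = y0 (assumes the
    # two keypoints are not at the same height, i.e. fy != by).
    t = (y0 - by) / (fy - by)
    first = (bx + t * (fx - bx), float(y0))
    # Second intersection: the vertical through the philtrum midpoint
    # meets the region's bottom edge y = y1.
    second = (float(philtrum[0]), float(y1))
    return [first, brow, philtrum, second]

def split_left_right(hair_mask, polyline):
    """Label hair pixels left/right of the piecewise-linear dividing
    line by interpolating the line's x at each row of the mask."""
    h, w = hair_mask.shape
    pts = np.asarray(polyline, dtype=float)
    rows = np.arange(h)
    line_x = np.interp(rows, pts[:, 1], pts[:, 0])  # x of the line per row
    cols = np.arange(w)[np.newaxis, :]
    left = (hair_mask > 0) & (cols < line_x[:, np.newaxis])
    right = (hair_mask > 0) & (cols >= line_x[:, np.newaxis])
    return left, right
```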
2. The method according to claim 1, wherein the performing color adjustment processing on the left hair region and the right hair region in the user image to obtain a target image comprises: adjusting the pixel values of the left hair region and the right hair region respectively to obtain the target image.
3. The method of claim 1, wherein the determining a rendering region in the user image comprises:
determining a face offset angle in the user image, wherein the face offset angle represents the angle by which the face center line deviates from the direction of gravity;
determining the face center line according to the face offset angle and the direction of gravity;
determining the rendering region according to the face center line and the hair region; wherein the face center line is perpendicular to the top and bottom edges of the rendering region.
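A sketch of one way the construction in claim 3 could be realized with OpenCV: rotate by the face offset angle so the face center line becomes vertical, then take the bounding box of the equally rotated hair mask as the rectangular rendering region; the sign convention of the angle is an assumption:

```python
import cv2
import numpy as np

def rendering_region(user_bgr, hair_mask, face_offset_deg):
    """Rotate by the face offset angle so the face centre line is
    vertical (hence perpendicular to the region's top and bottom
    edges), then take the rotated hair mask's bounding box as the
    rectangular rendering region. Assumes a non-empty mask and a
    positive angle meaning counter-clockwise, as in OpenCV.
    """
    h, w = user_bgr.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), face_offset_deg, 1.0)
    img = cv2.warpAffine(user_bgr, M, (w, h))
    mask = cv2.warpAffine(hair_mask, M, (w, h), flags=cv2.INTER_NEAREST)
    ys, xs = np.nonzero(mask)
    return img, (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```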
4. The method according to claim 1, wherein the adjusting the pixel values of the left hair region and the right hair region respectively to obtain the target image comprises:
performing first rendering processing on the rendering region to obtain a first rendering image; performing second rendering processing on the rendering region to obtain a second rendering image, wherein the first rendering image and the second rendering image have different color effects;
extracting pixel values of a region corresponding to the left hair region in the first rendering image to obtain first region pixel values; extracting pixel values of a region corresponding to the right hair region in the second rendering image to obtain second region pixel values;
filling the first region pixel values into the left hair region, and filling the second region pixel values into the right hair region to obtain a third rendered image;
obtaining the target image according to the third rendering image and the hair segmentation reference image; the hair segmentation reference image is an image which corresponds to the user image and is segmented with a hair region and a non-hair region.
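The extract-and-fill step of claim 4 amounts to masked copying between the two rendering images; a minimal sketch, assuming boolean left/right hair masks already aligned with the images:

```python
import numpy as np

def fill_third_rendered_image(base, first_render, second_render,
                              left_mask, right_mask):
    """Copy left-hair pixels from the first rendering image and
    right-hair pixels from the second into the base image; the masks
    are the boolean left and right hair regions."""
    out = base.copy()
    out[left_mask] = first_render[left_mask]
    out[right_mask] = second_render[right_mask]
    return out
```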
5. The method according to claim 1, wherein the adjusting the pixel values of the left hair region and the right hair region respectively to obtain the target image comprises:
performing first rendering processing on the rendering region to obtain a first rendering image; performing second rendering processing on the rendering region to obtain a second rendering image, wherein the first rendering image and the second rendering image have different color effects;
determining the pixel value calculation mode of each pixel in the rendering region according to a mapping relation between a material configuration mask map and the rendering region;
determining the pixel value of each pixel in the rendering region according to the pixel value calculation mode of each pixel in the rendering region, the first rendering image and the second rendering image;
filling in the pixel value of each pixel in the rendering region to obtain a fourth rendering image;
obtaining the target image according to the fourth rendering image and the hair segmentation reference image; the hair segmentation reference image is an image which corresponds to the user image and is segmented with a hair region and a non-hair region.
6. The method according to claim 5, wherein the material configuration mask map is a rectangular image;
the four vertices of the material configuration mask map have the mapping relation with the four vertices of the rendering region;
the top-edge center point of the material configuration mask map has the mapping relation with the first intersection point;
the bottom-edge center point of the material configuration mask map has the mapping relation with the second intersection point;
the one-third position point of the line connecting the top-edge center point and the bottom-edge center point of the material configuration mask map has the mapping relation with the eyebrow center point;
and the two-thirds position point of the line connecting the top-edge center point and the bottom-edge center point of the material configuration mask map has the mapping relation with the philtrum midpoint.
7. The method according to claim 5, wherein the determining the pixel value of each pixel in the rendering region according to the pixel value calculation mode of each pixel in the rendering region, the first rendering image and the second rendering image comprises:
for the ith pixel in the rendering region, acquiring the pixel value of the pixel corresponding to the ith pixel in the first rendering image and the pixel value of the corresponding pixel in the second rendering image;
determining the pixel value of the ith pixel according to the pixel value calculation mode of the ith pixel, the pixel value of the pixel corresponding to the ith pixel in the first rendering image, and the pixel value of the corresponding pixel in the second rendering image;
wherein i is a positive integer.
8. The method according to claim 4 or 5, wherein the performing first rendering processing on the rendering region to obtain a first rendering image comprises:
converting the pixel value of each pixel in the rendering region according to a first color conversion table to obtain a converted rendering region;
and rendering the converted rendering region with a first hair dyeing effect material to obtain the first rendering image.
9. The method according to claim 8, wherein the rendering the converted rendering region with a first hair dyeing effect material to obtain the first rendering image comprises:
performing soft-light blending on the converted rendering region with a soft-light material to obtain a first intermediate image;
performing multiply blending on the first intermediate image with a multiply material to obtain a second intermediate image;
performing hard-light blending on the second intermediate image with a hard-light material to obtain a third intermediate image;
and performing overlay blending on the third intermediate image with an overlay material to obtain the first rendering image.
10. The method according to claim 4 or 5, further comprising:
calling an image segmentation model to process the user image to obtain the hair segmentation reference image;
wherein the image segmentation model is used to segment the hair region and the non-hair region in the user image.
11. The method of claim 1, further comprising, after displaying the target image in the image shooting interface:
generating a dynamic picture or video including the target image.
12. An image hair dyeing processing apparatus, comprising:
the interface display module is used for displaying an image shooting interface;
the image acquisition module is used for acquiring a user image through the camera;
a color adjustment module, configured to: determine a rendering region in the user image, wherein the rendering region refers to a rectangular region including a hair region in the user image; determine an eyebrow center point, a philtrum midpoint and a forehead center point of a face region in the user image; acquire a first intersection point and a second intersection point, wherein the first intersection point is the intersection of the line connecting the eyebrow center point and the forehead center point with the top edge of the rendering region, and the second intersection point is the intersection of the bottom edge of the rendering region with a straight line that passes through the philtrum midpoint and is perpendicular to the bottom edge of the user image; sequentially connect the first intersection point, the eyebrow center point, the philtrum midpoint and the second intersection point to obtain a dividing line in the user image; determine a left hair region and a right hair region according to the dividing line, the left hair region being the hair region of the rendering region located to the left of the dividing line, and the right hair region being the hair region of the rendering region located to the right of the dividing line; and perform color adjustment processing on the left hair region and the right hair region in the user image respectively to obtain a target image, wherein the left hair region and the right hair region in the target image have different color effects;
and the image display module is used for displaying the target image in the image shooting interface.
13. A terminal, characterized in that it comprises a processor and a memory in which at least one instruction, at least one program, set of codes or set of instructions is stored, which is loaded and executed by the processor to implement the method according to any one of claims 1 to 11.
14. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of any of claims 1 to 11.
CN201911026154.7A 2019-10-25 2019-10-25 Image hair dyeing processing method, device, terminal and storage medium Active CN110730303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911026154.7A CN110730303B (en) 2019-10-25 2019-10-25 Image hair dyeing processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911026154.7A CN110730303B (en) 2019-10-25 2019-10-25 Image hair dyeing processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110730303A CN110730303A (en) 2020-01-24
CN110730303B true CN110730303B (en) 2022-07-12

Family

ID=69223203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911026154.7A Active CN110730303B (en) 2019-10-25 2019-10-25 Image hair dyeing processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110730303B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586444B (en) * 2020-06-05 2022-03-15 广州繁星互娱信息科技有限公司 Video processing method and device, electronic equipment and storage medium
CN112258605A (en) * 2020-10-16 2021-01-22 北京达佳互联信息技术有限公司 Special effect adding method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484664B (en) * 2014-12-31 2018-03-20 小米科技有限责任公司 Face picture treating method and apparatus
CN109658330B (en) * 2018-12-10 2023-12-26 广州市久邦数码科技有限公司 Color development adjusting method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003263492A (en) * 1999-10-29 2003-09-19 Kao Corp Hair color advice system
CN101458763A (en) * 2008-10-30 2009-06-17 中国人民解放军国防科学技术大学 Automatic human face identification method based on image weighting average
CN107705248A (en) * 2017-10-31 2018-02-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN108629819A (en) * 2018-05-15 2018-10-09 北京字节跳动网络技术有限公司 Image hair dyeing treating method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
trent1985. "Deep Learning AI Beauty Series: AI Hairdressing Algorithm" (深度学习AI美颜系列—-AI美发算法), 27 July 2018, pp. 7-10 *

Also Published As

Publication number Publication date
CN110730303A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN111127591B (en) Image hair dyeing processing method, device, terminal and storage medium
CN111445564B (en) Face texture image generation method, device, computer equipment and storage medium
WO2019101113A1 (en) Image fusion method and device, storage medium, and terminal
US9142054B2 (en) System and method for changing hair color in digital images
KR102290985B1 (en) Image lighting method, apparatus, electronic device and storage medium
EP3992919A1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN104282002B (en) A kind of quick beauty method of digital picture
CN111369644A (en) Face image makeup trial processing method and device, computer equipment and storage medium
US20140254939A1 (en) Apparatus and method for outputting information on facial expression
KR102386642B1 (en) Image processing method and apparatus, electronic device and storage medium
CN108550185A (en) Beautifying faces treating method and apparatus
CN113723385B (en) Video processing method and device and neural network training method and device
CN111652123B (en) Image processing and image synthesizing method, device and storage medium
CN110730303B (en) Image hair dyeing processing method, device, terminal and storage medium
EP4276754A1 (en) Image processing method and apparatus, device, storage medium, and computer program product
CN110503599B (en) Image processing method and device
CN112686800B (en) Image processing method, device, electronic equipment and storage medium
US20240013358A1 (en) Method and device for processing portrait image, electronic equipment, and storage medium
US8971636B2 (en) Image creating device, image creating method and recording medium
CN112634155A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
KR101513931B1 (en) Auto-correction method of composition and image apparatus with the same technique
WO2022022260A1 (en) Image style transfer method and apparatus therefor
CN112150387B (en) Method and device for enhancing stereoscopic impression of five sense organs on human images in photo
CN105160329B (en) A kind of tooth recognition methods, system and camera terminal based on YUV color spaces

Legal Events

Code Description
PB01 Publication
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40020822; country of ref document: HK)
SE01 Entry into force of request for substantive examination
GR01 Patent grant