CN113781293A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents

Image processing method, image processing device, electronic equipment and computer readable storage medium

Info

Publication number
CN113781293A
CN113781293A (application CN202111051761.6A)
Authority
CN
China
Prior art keywords
target object
key point
position information
point position
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111051761.6A
Other languages
Chinese (zh)
Inventor
沈翀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111051761.6A
Publication of CN113781293A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium, and belongs to the technical field of image processing. The method includes: acquiring a decoration material for a target object in an image and the key point position information of the decoration material; determining a position transformation relationship from the decoration material to the target object based on the key point position information of the target object and of the decoration material; performing position transformation on the key point position information of the decoration material according to this relationship to obtain target key point position information; adjusting the shape of the target object with the target key point position information as the new key point position information of the target object; and pasting the decoration material onto the shape-adjusted target object to obtain the processed image. Because the shape of the target object is adjusted first and the decoration material is then pasted onto the adjusted target object, the shape of the decoration material is better preserved and the fit between the decoration material and the target object is improved.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Beauty effects are widely used in short-video, live-streaming and similar applications. Because such an effect pastes a decoration material onto the corresponding part of an image, it is also called makeup pasting.
Taking eyebrow makeup as an example, the goal is to paste an eyebrow material at the position of the eyebrows of a face in the image. In the related art, the eyebrow material is pasted directly at the eyebrow position, but the eyebrows in the image usually do not match the shape of the eyebrow material, so the eyebrow material easily fails to fit the eyebrows in the image. Other makeup effects have similar problems.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium, to at least solve the problem in the related art that the decoration effect of a decoration material is poor. The technical solution of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring modification material information of a target object in an image, wherein the modification material information comprises modification materials and key point position information of the modification materials;
determining a position transformation relation from the modified material to the target object based on the key point position information of the target object and the key point position information of the modified material;
based on the position conversion relation, performing position conversion processing on the key point position information of the modification material to obtain target key point position information;
adjusting the shape of the target object by taking the target key point position information as new key point position information of the target object;
and pasting the decoration material to the target object with the adjusted shape to obtain a processed image.
In some possible embodiments, determining the positional transformation relationship between the decoration material and the target object based on the key point position information of the target object and the key point position information of the decoration material includes:
determining N first preset key points based on the key point position information of the target object, and performing region division based on the N first preset key points to obtain at least two first regions, wherein N is an integer not less than 4;
determining N second preset key points based on the key point position information of the modified material, and performing region division based on the N second preset key points to obtain at least two second regions, wherein the at least two second regions correspond to the at least two first regions one to one;
determining the position conversion relation between each second area and the corresponding first area;
and determining the position conversion relation between each second area and the corresponding first area as the position conversion relation between the decoration material and the target object.
In some possible embodiments, performing position transformation processing on the key point position information of the modification material based on the position transformation relationship to obtain target key point position information includes:
determining a second area corresponding to each key point information of the decoration material;
and performing position conversion processing on the key point information based on the position conversion relation between the second area and the corresponding first area to obtain the position information of the target key point.
In some possible embodiments, determining the positional transformation relationship between the decoration material and the target object based on the key point position information of the target object and the key point position information of the decoration material includes:
determining M third preset key points based on the key point position information of the target object, and performing region construction based on the M third preset key points to obtain a third region, wherein M is an integer not less than 3;
determining M fourth preset key points based on the key point position information of the modified material, and performing region construction based on the M fourth preset key points to obtain a fourth region;
and determining the position conversion relation between the fourth area and the third area as the position conversion relation between the decoration materials and the target object.
In some possible embodiments, adjusting the shape of the target object with the target keypoint location information as new keypoint location information of the target object includes:
based on the corresponding relation between each subdivision grid of the target object and the key points of the target object, selecting first key point position information corresponding to the subdivision grid from the key point position information of the target object, and selecting second key point position information corresponding to the subdivision grid from the target key point position information;
generating a first grid corresponding to the subdivision grid based on the position information of the first key point, and generating a second grid corresponding to the subdivision grid based on the position information of the second key point;
and pasting a part of the target object corresponding to the first grid to the second grid to obtain the target object with the shape adjusted.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
an acquisition unit configured to perform acquisition of decoration material information of a target object in an image, the decoration material information including decoration material and key point position information of the decoration material;
a determination unit configured to perform determination of a position transformation relationship from the decoration material to the target object based on the key point position information of the target object and the key point position information of the decoration material;
the transformation unit is configured to execute position transformation processing on the key point position information of the modification material based on the position transformation relation to obtain target key point position information;
an adjusting unit configured to perform adjusting a shape of the target object with the target keypoint location information as new keypoint location information of the target object;
and a pasting unit configured to perform pasting of the decoration material onto the shape-adjusted target object to obtain a processed image.
In some possible embodiments, the determining unit is specifically configured to perform:
determining N first preset key points based on the key point position information of the target object, and performing region division based on the N first preset key points to obtain at least two first regions, wherein N is an integer not less than 4;
determining N second preset key points based on the key point position information of the modified material, and performing region division based on the N second preset key points to obtain at least two second regions, wherein the at least two second regions correspond to the at least two first regions one to one;
determining the position conversion relation between each second area and the corresponding first area;
and determining the position conversion relation between each second area and the corresponding first area as the position conversion relation between the decoration material and the target object.
In some possible embodiments, the transformation unit is specifically configured to perform:
determining a second area corresponding to each key point information of the decoration material;
and performing position conversion processing on the key point information based on the position conversion relation between the second area and the corresponding first area to obtain the position information of the target key point.
In some possible embodiments, the determining unit is specifically configured to perform:
determining M third preset key points based on the key point position information of the target object, and performing region construction based on the M third preset key points to obtain a third region, wherein M is an integer not less than 3;
determining M fourth preset key points based on the key point position information of the modified material, and performing region construction based on the M fourth preset key points to obtain a fourth region;
and determining the position conversion relation between the fourth area and the third area as the position conversion relation between the decoration materials and the target object.
In some possible embodiments, the adjusting unit is specifically configured to perform:
based on the corresponding relation between each subdivision grid of the target object and the key points of the target object, selecting first key point position information corresponding to the subdivision grid from the key point position information of the target object, and selecting second key point position information corresponding to the subdivision grid from the target key point position information;
generating a first grid corresponding to the subdivision grid based on the position information of the first key point, and generating a second grid corresponding to the subdivision grid based on the position information of the second key point;
and pasting a part of the target object corresponding to the first grid to the second grid to obtain the target object with the shape adjusted.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement any of the image processing methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform any one of the above-mentioned image processing methods.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product including program code which, when run on an electronic device, causes the electronic device to perform any one of the image processing methods described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
Decoration material information of a target object in an image is acquired, the information including the decoration material and its key point position information; a position transformation relationship from the decoration material to the target object is determined based on the key point position information of the target object and of the decoration material; the key point position information of the decoration material is transformed according to this relationship to obtain target key point position information; the shape of the target object is adjusted with the target key point position information as its new key point position information; and the decoration material is then pasted onto the shape-adjusted target object to obtain the processed image. Because the shape of the target object is first adjusted to match the shape of the decoration material and the decoration material is then pasted onto it, the shape of the decoration material is well preserved and the fit between the decoration material and the target object is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flow diagram illustrating a method of determining a position transformation relationship from the decoration material to the target object, according to an exemplary embodiment.
Fig. 3 is a schematic diagram illustrating division of an eyebrow region according to an embodiment of the present disclosure.
Fig. 4 is a flow diagram illustrating another method of determining a position transformation relationship from the decoration material to the target object, according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating division of an area of an eyebrow according to another embodiment of the present disclosure.
FIG. 6 is a flow chart illustrating a method of adjusting the shape of a target object, according to an example embodiment.
Fig. 7 is a schematic diagram of the subdivision grids of an eyebrow according to an embodiment of the disclosure.
Fig. 8 is a schematic diagram illustrating an eyebrow material according to an exemplary embodiment.
FIG. 9 is a diagram illustrating an eyebrow in an image, according to an example embodiment.
FIG. 10 is a diagram illustrating key points of an eyebrow in an image, according to an example embodiment.
Fig. 11a is a schematic diagram illustrating division of an eyebrow region in an image according to an exemplary embodiment.
Fig. 11b is a schematic diagram illustrating division of regions of eyebrow material according to an exemplary embodiment.
FIG. 12 is a diagram illustrating the location of key points of an eyebrow in accordance with an exemplary embodiment.
Fig. 13 is a schematic diagram illustrating an adjusted shape of an eyebrow in an image according to an embodiment of the disclosure.
Fig. 14 is a schematic diagram illustrating the effect of pasting the eyebrow material of fig. 8 onto the eyebrow of fig. 9 according to an embodiment of the disclosure.
Fig. 15 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 16 is a schematic diagram illustrating an electronic device for implementing an image processing method according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The scheme of the embodiments of the present disclosure is suitable for scenarios in which a user applies a beauty effect while shooting images or videos, and also for scenarios in which technicians test the rendering effect of a beauty effect in the backend.
The aspects of the present disclosure are described below with reference to specific embodiments.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment, the flowchart including the following steps.
S101: and acquiring decoration material information of the target object in the image, wherein the decoration material information comprises the decoration material and the key point position information of the decoration material.
The decoration material is a material that has a beautifying effect on the target object. For example, when the target object is an eyebrow, the decoration material is an eyebrow material; when the target object is a lip, the decoration material is a lip material; when the target object is a nail, the decoration material is a nail material.
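As a minimal sketch only, the material information of S101 can be pictured as a small container holding the material image and its key points; the type and field names below are assumptions made for illustration and are not prescribed by the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DecorationMaterial:
    """Hypothetical container for the decoration material information of S101."""
    image: np.ndarray      # RGBA material image, e.g. an eyebrow texture
    keypoints: np.ndarray  # (K, 2) array of key point positions in material coordinates

@dataclass
class TargetObject:
    """Hypothetical container for the detected target object in the input image."""
    keypoints: np.ndarray  # (K, 2) array of key point positions in image coordinates
```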
S102: and determining the position conversion relation from the modified material to the target object based on the key point position information of the target object and the key point position information of the modified material.
In some embodiments, the position transformation relationship from the decoration material to the target object may be determined according to the process illustrated in FIG. 2, which includes the following steps.
S201 a: determining N first preset key points based on the key point position information of the target object, and performing region division based on the N first preset key points to obtain at least two first regions, wherein N is an integer not less than 4.
Taking a target object that is an eyebrow and a decoration material that is an eyebrow material as an example, fig. 3 is a schematic diagram illustrating the division of an eyebrow region according to an embodiment of the present disclosure. In fig. 3, the small circles represent the key points of the eyebrow (the key points at the brow head and brow tail are covered by large circles), and a single eyebrow has 10 key points.
In a specific implementation, for a single eyebrow, 6 key points (i.e., 6 first preset key points) can be selected from its 10 key points, and region division can then be performed based on these 6 key points to obtain the two first regions corresponding to that eyebrow.
Specifically, one vertex may be determined from the 2 uppermost key points of the eyebrow (for example, the average of their coordinates is taken as the coordinates of the vertex), another vertex may be determined from the 2 lowermost key points of the eyebrow (again, for example, as the average of their coordinates), and the brow-head and brow-tail key points are each taken as a vertex, giving 4 vertices (the 4 large dots of the single eyebrow in fig. 3). The region formed by these 4 vertices may then be divided into a left region and a right region (i.e., the two first regions).
It should be noted that the key points of the single eyebrow shown in fig. 3 are only an example; if key points at the positions of the 4 large circles in fig. 3 can be detected directly, the corresponding key points can be selected directly, that is, at least 4 first preset key points are selected.
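As a rough illustration of S201a for a single eyebrow, the sketch below builds the 4 vertices and splits the quadrilateral they form into a left and a right triangle. The key-point indices passed in are placeholders, and splitting along the top-bottom diagonal is an assumption made for this sketch rather than a requirement of the disclosure.

```python
import numpy as np

def divide_eyebrow_regions(kps, head_idx, tail_idx, top_idx, bottom_idx):
    """Region division of S201a: 10 eyebrow key points -> two triangular regions.

    kps: (10, 2) key points of one eyebrow; the index arguments say which key
    points play the roles of brow head, brow tail, uppermost pair and lowermost
    pair, and are assumptions for this illustration.
    """
    head = np.float32(kps[head_idx])                          # brow-head key point as a vertex
    tail = np.float32(kps[tail_idx])                          # brow-tail key point as a vertex
    top = np.float32(kps[list(top_idx)]).mean(axis=0)         # vertex from the 2 uppermost key points
    bottom = np.float32(kps[list(bottom_idx)]).mean(axis=0)   # vertex from the 2 lowermost key points

    # Split the quadrilateral (head, top, tail, bottom) along the top-bottom
    # diagonal into the two "first regions".
    left_region = np.float32([head, top, bottom])
    right_region = np.float32([tail, top, bottom])
    return left_region, right_region
```

Running the same routine on the material key points yields the two second regions of S202a in one-to-one correspondence with the first regions.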
S202 a: and determining N second preset key points based on the key point position information of the modified material, and performing region division based on the N second preset key points to obtain at least two second regions, wherein the at least two second regions correspond to the at least two first regions one to one.
In specific implementation, the second preset key points determined from the modification material and the first preset key points determined from the target object may be in one-to-one correspondence, and the same region division rule may be used for the second preset key points and the first preset key points, so that the finally obtained second regions and the first regions are also in one-to-one correspondence.
In addition, the division manner of the regions of the modification material is similar to that of the target object, and is not described herein again.
S203 a: and determining the position conversion relation between each second area and the corresponding first area.
For example, a position transformation matrix from each second region to the corresponding first region is determined.
S204 a: and determining the position conversion relation between each second area and the corresponding first area as the position conversion relation between the decoration material and the target object.
In this way, the target object and the decoration material are each divided into at least two regions in one-to-one correspondence, and the position transformation relationship from the decoration material to the target object is determined region by region. Key points at different positions in the decoration material can subsequently be adjusted with different position transformation relationships, which preserves the shape of the decoration material as much as possible and improves the decoration effect of the decoration material on the target object.
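When the regions are triangles, as in the eyebrow example, the per-region position transformation matrix of S203a can be estimated with OpenCV's cv2.getAffineTransform, which maps exactly three source points to three destination points. A minimal sketch under that triangle assumption:

```python
import cv2
import numpy as np

def region_affine_matrices(second_regions, first_regions):
    """For each pair (second region of the material, corresponding first region
    of the target object), estimate the 2x3 affine matrix from material
    coordinates to image coordinates."""
    matrices = []
    for src_tri, dst_tri in zip(second_regions, first_regions):
        src = np.float32(src_tri).reshape(3, 2)  # triangle vertices in the material
        dst = np.float32(dst_tri).reshape(3, 2)  # corresponding vertices on the target object
        matrices.append(cv2.getAffineTransform(src, dst))
    return matrices
```

For regions with four or more vertices, a least-squares fit such as cv2.estimateAffinePartial2D could be used instead; the choice of estimator is not fixed by the disclosure.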
In some embodiments, the position transformation relationship from the decoration material to the target object may be determined according to the process illustrated in FIG. 4, which includes the following steps.
S401 a: and determining M third preset key points based on the key point position information of the target object, and performing region construction based on the M third preset key points to obtain a third region, wherein M is an integer not less than 3.
Taking a target object that is an eyebrow and a decoration material that is an eyebrow material as an example, fig. 5 is a schematic diagram illustrating another way of dividing an eyebrow region provided by an embodiment of the present disclosure. In fig. 5, the small circles represent the key points of the eyebrow (the key points at the brow head and brow tail are covered by large circles), and a single eyebrow has 10 key points.
In a specific implementation, for a single eyebrow, 6 key points (i.e., 6 third preset key points) can be selected from its 10 key points, and a region can then be constructed based on these 6 key points to obtain the third region corresponding to that eyebrow.
Specifically, one vertex may be determined from the 4 key points in the middle of the eyebrow (for example, the average of their coordinates is taken as the coordinates of the vertex), and the brow-head and brow-tail key points are each taken as a vertex, giving 3 vertices (the 3 large dots of the single eyebrow in fig. 5). The region formed by these 3 vertices is taken as the third region.
It should be noted that the key points of the single eyebrow shown in fig. 5 are only an example; if key points at the positions of the 3 large circles in fig. 5 can be detected directly, the corresponding key points can be selected directly, that is, at least 3 third preset key points are selected.
S402 a: and determining M fourth preset key points based on the key point position information of the modified material, and performing region construction based on the M fourth preset key points to obtain a fourth region.
In specific implementation, the fourth preset key point determined from the modification material and the third preset key point determined from the target object may be in one-to-one correspondence, and the region construction manner for the modification material is similar to that for the target object, and is not described herein again.
S403 a: and determining the position conversion relation from the fourth area to the third area as the position conversion relation from the decoration material to the target object.
In this way, one region is constructed for the target object and one for the decoration material, and the position transformation relationship from the region corresponding to the decoration material to the region corresponding to the target object is taken as the position transformation relationship from the decoration material to the target object. Key points at different positions in the decoration material can then be adjusted with the same position transformation relationship, which helps speed up the position transformation and thus the decoration of the target object by the decoration material.
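The FIG. 4 variant reduces to a single triangle per side and therefore a single matrix. A sketch under the same placeholder-index assumptions as the earlier region-division sketch:

```python
import cv2
import numpy as np

def single_region_matrix(material_kps, target_kps, head_idx, tail_idx, middle_idx):
    """S401a/S402a/S403a: build one triangle for the material and one for the
    target object, and return the single affine matrix from material to target
    object. All index arguments are placeholders for this illustration."""
    def triangle(kps):
        head = np.float32(kps[head_idx])
        tail = np.float32(kps[tail_idx])
        middle = np.float32(kps[list(middle_idx)]).mean(axis=0)  # vertex from the middle key points
        return np.float32([head, middle, tail])

    # Fourth region (material) -> third region (target object).
    return cv2.getAffineTransform(triangle(material_kps), triangle(target_kps))
```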
S103: and performing position conversion processing on the key point position information of the modification material based on the position conversion relation to obtain the target key point position information.
In some embodiments, there are multiple position transformation relationships between the decoration material and the target object. In this case, the second region corresponding to each key point of the decoration material may be determined, and position transformation processing is then performed on that key point based on the position transformation relationship between that second region and the corresponding first region to obtain the target key point position information.
Therefore, the key points at different positions in the modification material are subjected to position conversion processing by adopting different position conversion relations, so that the shape of the modification material is kept to the maximum extent, and the modification effect of the modification material on the target object is improved.
In some embodiments, there is a single position transformation relationship between the decoration material and the target object. In this case, that relationship is used directly to perform position transformation processing on the key point position information of the decoration material to obtain the target key point position information.
In this way, the key points at different positions in the decoration material are adjusted with the same position transformation relationship, which helps speed up the position transformation and thus the decoration of the target object by the decoration material.
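Either way, S103 amounts to running every material key point through the matrix of the region it belongs to; cv2.transform applies a 2x3 affine matrix to an array of points. How a key point is assigned to a region is left open by the disclosure, so the lookup below is simply passed in as data.

```python
import cv2
import numpy as np

def transform_material_keypoints(material_kps, region_of_point, matrices):
    """Position transformation of S103.

    material_kps: (K, 2) key points of the decoration material.
    region_of_point: length-K sequence giving, for each key point, the index of
        the second region it belongs to (all zeros covers the single-relation
        case of FIG. 4).
    matrices: list of 2x3 affine matrices obtained in S102.
    """
    target_kps = np.zeros((len(material_kps), 2), dtype=np.float32)
    for i, pt in enumerate(material_kps):
        m = matrices[region_of_point[i]]
        # cv2.transform expects points shaped (N, 1, 2); transform one point here.
        target_kps[i] = cv2.transform(np.float32([[pt]]), m)[0, 0]
    return target_kps
```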
S104: and adjusting the shape of the target object by taking the position information of the target key point as the position information of a new key point of the target object.
In specific implementation, the shape of the target object may be adjusted according to the process shown in fig. 6, where the process includes the following steps:
S601 a: Based on the corresponding relation between each subdivision grid of the target object and the key points of the target object, selecting first key point position information corresponding to the subdivision grid from the key point position information of the target object, and selecting second key point position information corresponding to the subdivision grid from the target key point position information.
Taking a target object as an eyebrow as an example, fig. 7 is a schematic diagram of the subdivision grids of an eyebrow provided in the embodiment of the present disclosure. Each triangle is a subdivision grid, at least one vertex of each subdivision grid is a key point of the eyebrow, and the positional relationship between the vertices that are not key points and the vertices that are key points in each subdivision grid is preset.
For each of the subdivision grids in fig. 7, based on the pre-established correspondence between the subdivision grid and the key points of the target object, the first key point position information corresponding to the subdivision grid is selected from the key point position information of the target object, and the second key point position information corresponding to the subdivision grid is selected from the target key point position information.
S602 a: and generating a first grid corresponding to the subdivision grid based on the position information of the first key point, and generating a second grid corresponding to the subdivision grid based on the position information of the second key point.
In a specific implementation, the first grid corresponding to the subdivision grid can be generated based on the position information of the first key point and the construction rule corresponding to that subdivision grid; similarly, the second grid corresponding to the subdivision grid can be generated based on the position information of the second key point and the same construction rule.
S603 a: and pasting a part of the target object corresponding to the first grid to the second grid to obtain the target object with the adjusted shape.
In this way, the key points corresponding to each subdivision grid are selected from the original key points and from the new key points of the target object respectively, and the part of the target object corresponding to the grid formed by the original key points is pasted onto the grid formed by the new key points. This is equivalent to using the target object before the shape adjustment to fill in, grid by grid, the corresponding grid regions of the target object after the shape adjustment, which makes the shape-adjusted target object look more real and natural.
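One common way to realize S603a is to warp the bounding-box patch of the first grid with the triangle-to-triangle affine matrix and copy it into the second grid under a triangle mask. The sketch below shows that idea with OpenCV; it is not the implementation mandated by the disclosure, and it reads from the original image while writing into a separate output so that overlapping grids do not read already-warped pixels.

```python
import cv2
import numpy as np

def paste_first_grid_to_second(src_img, dst_img, first_tri, second_tri):
    """Paste the part of src_img inside the first (original) triangular grid onto
    the second (new) triangular grid of dst_img; pixels outside the second grid
    are left untouched."""
    first_tri = np.float32(first_tri)
    second_tri = np.float32(second_tri)

    # Work inside the bounding rectangles of the two triangles to keep the warp small.
    x, y, w, h = cv2.boundingRect(np.int32(first_tri))
    X, Y, W, H = cv2.boundingRect(np.int32(second_tri))
    src_local = np.float32(first_tri - [x, y])
    dst_local = np.float32(second_tri - [X, Y])

    # Affine matrix mapping the first grid onto the second grid, then warp the patch.
    m = cv2.getAffineTransform(src_local, dst_local)
    warped = cv2.warpAffine(src_img[y:y + h, x:x + w], m, (W, H),
                            flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)

    # Triangle mask so that only the inside of the second grid is overwritten.
    mask = np.zeros((H, W), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_local), 255)
    roi = dst_img[Y:Y + H, X:X + W]
    roi[mask > 0] = warped[mask > 0]
```

Calling this once per subdivision grid, with dst_img initialized as a copy of the original image, yields the shape-adjusted target object of S603a.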
S105: and attaching the decoration material to the target object with the adjusted shape to obtain a processed image.
Because the shape of the target object is now consistent with that of the decoration material (for example, their shape similarity exceeds a preset value such as 85%), the decoration material can be pasted as a whole onto the shape-adjusted target object in step S105, which speeds up the decoration of the target object by the decoration material.
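The whole-material paste of S105 is then a plain alpha composite, assuming the material has already been laid out in image coordinates (for example by warping it with the matrices of S102) and carries an alpha channel:

```python
import numpy as np

def paste_material(image, material_rgba):
    """Alpha-blend an RGBA decoration material over an image of the same size."""
    alpha = material_rgba[..., 3:4].astype(np.float32) / 255.0  # (H, W, 1) opacity
    color = material_rgba[..., :3].astype(np.float32)
    blended = alpha * color + (1.0 - alpha) * image.astype(np.float32)
    return blended.astype(image.dtype)
```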
In the embodiment of the present disclosure, the shape of the target object is adjusted so that it matches the shape of the decoration material, and the decoration material is then pasted onto the shape-adjusted target object. This not only preserves the shape of the decoration material but also improves the fit between the decoration material and the target object. In addition, the scheme of the embodiment is simple and places low demands on the computing performance of the executing device, which makes it well suited to capturing images in real time on a mobile terminal.
The following describes the scheme of the embodiment of the present disclosure, taking as an example a target object that is an eyebrow and a decoration material that is an eyebrow material.
Fig. 8 is a schematic diagram of an eyebrow material provided by an embodiment of the disclosure, and fig. 9 is a schematic diagram of an eyebrow in an image provided by an embodiment of the disclosure. In specific implementation, the eyebrow material shown in fig. 8 can be attached to the position of the eyebrow in fig. 9 according to the following steps:
the first step is as follows: the key points of the eyebrows in the image are detected to obtain a key point set P0, and referring to fig. 10, 10 white points (white points at the eyebrows and the eyebrows are covered by black points) in fig. 10 are the key point set P0. For convenience of subsequent description, the key point set in the eyebrow material map is denoted as P1.
The second step: determine the affine transformation relationship from the eyebrow material to the eyebrows in the image.
For each single eyebrow, 6 key points are selected from the key point set P0 as shown in fig. 11a: key point 1 to key point 6. Key point 1 and key point 6 may be used directly as vertices; one vertex is generated from key point 3 and key point 4 (for example, the average of their coordinates is taken as the coordinates of the vertex), and another vertex is generated from key point 2 and key point 5 (again, for example, as the average of their coordinates). The region formed by these 4 vertices (the 4 black dots in fig. 11a) is then divided into two triangular regions: region 1 (the left triangular region) and region 2 (the right triangular region).
Similarly, 6 key points are selected from the key point set P1 as shown in fig. 11b: key point 1' to key point 6'. Key point 1' and key point 6' may be used directly as vertices; one vertex is generated from key point 3' and key point 4' (for example, the average of their coordinates is taken as the coordinates of the vertex), and another vertex is generated from key point 2' and key point 5'. The region formed by these 4 vertices is then divided into two triangular regions: region 1' (the left triangular region) and region 2' (the right triangular region).
Since the eyebrows in the image and the eyebrow material have the same number of key points, and the key points represent the same positions, the regions obtained by dividing the target object and the decoration material in the above process correspond one to one: region 1 corresponds to region 1', and region 2 corresponds to region 2'.
Further, based on the positions of the vertices of region 1' and of region 1, an affine transformation matrix M1 from region 1' to region 1 is calculated (M1 characterizes the affine transformation relationship from region 1' to region 1); likewise, based on the positions of the vertices of region 2' and of region 2, an affine transformation matrix M2 from region 2' to region 2 is calculated (M2 characterizes the affine transformation relationship from region 2' to region 2).
Through the above steps, two affine transformation matrices are obtained for each of the left and right eyebrows.
The third step: based on the affine transformation relationship between the eyebrow material and the eyebrow in the image, a new key point set P2 of the eyebrow in the image is calculated.
In a specific implementation, affine transformation may be performed with the matrix M1 on the key points corresponding to region 1' in fig. 11b (i.e., the 5 key points on the left half of the left eyebrow) and with the matrix M2 on the key points corresponding to region 2' in fig. 11b (i.e., the 5 key points on the right half of the left eyebrow), and the key points obtained after the affine transformation are taken as the new key point set P2 of the eyebrows in the image.
In this way, different affine transformation matrices are used for key points at different positions in the eyebrow material, and the transformed key points serve as the new key points of the eyebrows, so that the positions of the brow head and brow tail are kept as unchanged as possible and the eyebrow width remains essentially the same, which allows the eyebrow material to be pasted better at the eyebrow position in the image afterwards.
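A sketch of this third step, assuming the material key points P1 are ordered so that the first half belongs to region 1' and the second half to region 2'; the actual ordering of the key points is an assumption made here for illustration.

```python
import cv2
import numpy as np

def compute_new_keypoints(p1, m1, m2):
    """Apply M1 to the region-1' key points and M2 to the region-2' key points
    to obtain the new key point set P2 of the eyebrow in the image."""
    p1 = np.float32(p1)
    half = len(p1) // 2
    left = cv2.transform(p1[:half].reshape(-1, 1, 2), m1).reshape(-1, 2)
    right = cv2.transform(p1[half:].reshape(-1, 1, 2), m2).reshape(-1, 2)
    return np.vstack([left, right])
```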
Fig. 12 is a schematic diagram of positions of key points of an eyebrow, where white points represent original key points of the eyebrow in an image, and black points represent new key points of the eyebrow in the image obtained after affine transformation.
The fourth step: based on the key point set P0 and the key point set P2, the shape of the eyebrows in the image is adjusted so that the shape of the eyebrows in the image matches the shape of the eyebrow material.
In a specific implementation, based on the correspondence between each subdivision grid of the eyebrow in fig. 7 and the key points of the eyebrow, a first key point corresponding to the subdivision grid is selected from the key point set P0 and a second key point corresponding to the subdivision grid is selected from the key point set P2. A first grid corresponding to the subdivision grid is generated based on the position information of the first key point, and a second grid is generated based on the position information of the second key point. The part of the eyebrow corresponding to the first grid is then pasted onto the second grid, completing the adjustment of the eyebrow shape in the image.
Fig. 13 is a schematic diagram of the adjusted shape of an eyebrow in an image according to an embodiment of the disclosure. Comparing fig. 13 with fig. 9, it can be seen that the shape of the eyebrow in the image has changed considerably and is now consistent with the shape of the eyebrow material.
The fifth step: paste the eyebrow material at the position of the eyebrows in the image.
Because the shape of the eyebrows in the image has been adjusted to be consistent with the shape of the eyebrow material, the eyebrow material can be pasted directly, as a whole, at the position of the eyebrows in the image, which improves the pasting speed while preserving the pasting effect.
Fig. 14 is a schematic diagram illustrating the effect of pasting the eyebrow material of fig. 8 onto the eyebrow of fig. 9 according to an embodiment of the present disclosure. It can be seen that the shape of the eyebrow in the image is the same as the shape of the eyebrow material, and the eyebrow material fits the eyebrow in the image well.
In the embodiment of the present disclosure, before the eyebrow material is pasted, the shape of the eyebrows in the image is adjusted region by region so that it is consistent with the shape of the eyebrow material, and the eyebrow material is then pasted at the position of the eyebrows in the image. As a result, the eyebrows in the image keep the shape of the eyebrow material, and the eyebrow material fits the eyebrows in the image well. In addition, the scheme of the embodiment is simple and places low demands on the computing performance of the executing device, so it is also suitable for shooting scenarios on a mobile terminal.
Fig. 15 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment, including an acquisition unit 1501, a determination unit 1502, a transformation unit 1503, an adjustment unit 1504 and a pasting unit 1505.
An acquisition unit 1501 configured to perform acquisition of decoration material information of a target object in an image, the decoration material information including a decoration material and key point position information of the decoration material;
a determining unit 1502 configured to perform determination of a position transformation relationship from the decoration material to the target object based on the key point position information of the target object and the key point position information of the decoration material;
a transformation unit 1503 configured to perform position transformation processing on the key point position information of the modification material based on the position transformation relationship to obtain target key point position information;
an adjusting unit 1504 configured to perform adjusting the shape of the target object with the target keypoint location information as new keypoint location information of the target object;
a pasting unit 1505 configured to perform pasting of the decoration material to the shape-adjusted target object, resulting in a processed image.
In some possible embodiments, the determining unit 1502 is specifically configured to perform:
determining N first preset key points based on the key point position information of the target object, and performing region division based on the N first preset key points to obtain at least two first regions, wherein N is an integer not less than 4;
determining N second preset key points based on the key point position information of the modified material, and performing region division based on the N second preset key points to obtain at least two second regions, wherein the at least two second regions correspond to the at least two first regions one to one;
determining the position conversion relation between each second area and the corresponding first area;
and determining the position conversion relation between each second area and the corresponding first area as the position conversion relation between the decoration material and the target object.
In some possible implementations, the transformation unit 1503 is specifically configured to perform:
determining a second area corresponding to each key point information of the decoration material;
and performing position conversion processing on the key point information based on the position conversion relation between the second area and the corresponding first area to obtain the position information of the target key point.
In some possible embodiments, the determining unit 1502 is specifically configured to perform:
determining M third preset key points based on the key point position information of the target object, and performing region construction based on the M third preset key points to obtain a third region, wherein M is an integer not less than 3;
determining M fourth preset key points based on the key point position information of the modified material, and performing region construction based on the M fourth preset key points to obtain a fourth region;
and determining the position conversion relation between the fourth area and the third area as the position conversion relation between the decoration materials and the target object.
In some possible embodiments, the adjusting unit 1504 is specifically configured to perform:
based on the corresponding relation between each subdivision grid of the target object and the key points of the target object, selecting first key point position information corresponding to the subdivision grid from the key point position information of the target object, and selecting second key point position information corresponding to the subdivision grid from the target key point position information;
generating a first grid corresponding to the subdivision grid based on the position information of the first key point, and generating a second grid corresponding to the subdivision grid based on the position information of the second key point;
and pasting a part of the target object corresponding to the first grid to the second grid to obtain the target object with the shape adjusted.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 16 is a schematic structural diagram of an electronic device 1600 provided in an embodiment of the present disclosure. The device includes physical components such as a transceiver 1601 and a processor 1602, where the processor 1602 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit, a programmable logic circuit, a large-scale integrated circuit or a digital processing unit. The transceiver 1601 is used for data transmission and reception between the electronic device and other devices.
The electronic device may further comprise a memory 1603 for storing the software instructions executed by the processor 1602; it may also store other data required by the electronic device, such as identification information of the electronic device, encryption information of the electronic device and user data. The memory 1603 may be a volatile memory such as a random-access memory (RAM); it may also be a non-volatile memory such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or it may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without being limited thereto. The memory 1603 may also be a combination of the above.
The specific connection medium between the processor 1602, the memory 1603 and the transceiver 1601 is not limited in the embodiments of the present disclosure. In fig. 16, the embodiment is described only by taking the case where the memory 1603, the processor 1602 and the transceiver 1601 are connected by a bus 1604 as an example; the bus is shown as a thick line in fig. 16, and the connection manner between other components is merely schematic and not limiting. The bus may be divided into an address bus, a data bus, a control bus and so on. For ease of illustration, only one thick line is shown in fig. 16, but this does not mean that there is only one bus or one type of bus.
The processor 1602 may be dedicated hardware or a processor running software, and when the processor 1602 runs software, the processor 1602 reads software instructions stored in the memory 1603 and executes the image processing method involved in the foregoing embodiment under the drive of the software instructions.
In an exemplary embodiment, the present disclosure also provides a computer-readable storage medium whose instructions, when executed by a processor such as the processor 1602 of the electronic device 1600, enable the electronic device to perform any of the image processing methods described above. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In some possible implementations, various aspects of the image processing method provided by the present disclosure may also be implemented in the form of a program product, which includes program code, and when the program product is run on an electronic device, the electronic device may be caused to execute the image processing method referred to in the foregoing embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring modification material information of a target object in an image, wherein the modification material information comprises modification materials and key point position information of the modification materials;
determining a position transformation relation from the modified material to the target object based on the key point position information of the target object and the key point position information of the modified material;
based on the position conversion relation, performing position conversion processing on the key point position information of the modification material to obtain target key point position information;
adjusting the shape of the target object by taking the target key point position information as new key point position information of the target object;
and pasting the decoration material to the target object with the adjusted shape to obtain a processed image.
2. The method of claim 1, wherein determining the position transformation relationship from the decoration material to the target object based on the key point position information of the target object and the key point position information of the decoration material comprises:
determining N first preset key points based on the key point position information of the target object, and performing region division based on the N first preset key points to obtain at least two first regions, wherein N is an integer not less than 4;
determining N second preset key points based on the key point position information of the modified material, and performing region division based on the N second preset key points to obtain at least two second regions, wherein the at least two second regions correspond to the at least two first regions one to one;
determining the position conversion relation between each second area and the corresponding first area;
and determining the position conversion relation between each second area and the corresponding first area as the position conversion relation between the decoration material and the target object.
3. The method according to claim 2, wherein performing a position transformation process on the keypoint location information of the decoration material based on the position transformation relationship to obtain target keypoint location information comprises:
determining a second area corresponding to each key point information of the decoration material;
and performing position conversion processing on the key point information based on the position conversion relation between the second area and the corresponding first area to obtain the position information of the target key point.
4. The method of claim 1, wherein determining the position transformation relationship from the decoration material to the target object based on the key point position information of the target object and the key point position information of the decoration material comprises:
determining M third preset key points based on the key point position information of the target object, and performing region construction based on the M third preset key points to obtain a third region, wherein M is an integer not less than 3;
determining M fourth preset key points based on the key point position information of the modified material, and performing region construction based on the M fourth preset key points to obtain a fourth region;
and determining the position conversion relation between the fourth area and the third area as the position conversion relation between the decoration materials and the target object.
5. The method of claim 1, wherein adjusting the shape of the target object with the target key point position information as the new key point position information of the target object comprises:
for each subdivision grid of the target object, selecting, based on the correspondence between the subdivision grid and the key points of the target object, first key point position information corresponding to the subdivision grid from the key point position information of the target object, and second key point position information corresponding to the subdivision grid from the target key point position information;
generating a first grid corresponding to the subdivision grid based on the first key point position information, and generating a second grid corresponding to the subdivision grid based on the second key point position information;
and pasting the part of the target object corresponding to the first grid onto the second grid to obtain the shape-adjusted target object.
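A minimal Python sketch of the per-grid pasting in claim 5, assuming triangular subdivision grids, three-channel images, and an OpenCV affine warp per triangle; the helper and variable names are illustrative only.

import numpy as np
import cv2

def paste_grid(src_img, dst_img, first_tri, second_tri):
    # Paste the part of the target object under `first_tri` onto `second_tri`.
    r1 = cv2.boundingRect(np.float32([first_tri]))
    r2 = cv2.boundingRect(np.float32([second_tri]))
    patch = src_img[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    t1 = np.float32([(x - r1[0], y - r1[1]) for x, y in first_tri])
    t2 = np.float32([(x - r2[0], y - r2[1]) for x, y in second_tri])
    A = cv2.getAffineTransform(t1, t2)
    warped = cv2.warpAffine(patch, A, (r2[2], r2[3]),
                            flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2], 3), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t2), (1, 1, 1))
    roi = dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = roi * (1 - mask) + warped * mask

Repeating this for every pair of first and second grids yields the shape-adjusted target object onto which the decoration material is then pasted.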
6. An image processing apparatus, characterized by comprising:
an acquisition unit configured to acquire decoration material information of a target object in an image, the decoration material information including a decoration material and key point position information of the decoration material;
a determination unit configured to determine a position conversion relation between the decoration material and the target object based on key point position information of the target object and the key point position information of the decoration material;
a transformation unit configured to perform position conversion processing on the key point position information of the decoration material based on the position conversion relation to obtain target key point position information;
an adjusting unit configured to adjust the shape of the target object with the target key point position information as new key point position information of the target object;
and a mapping unit configured to paste the decoration material onto the shape-adjusted target object to obtain a processed image.
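To show how the five units could fit together, here is a deliberately simplified, self-contained Python sketch that models the whole position conversion relation as a single affine transform and pastes an RGBA decoration material with a plain alpha blend; all names and the single-transform simplification are assumptions, not the claimed apparatus.

import numpy as np
import cv2

def process_image(image, target_kps, material_rgba, material_kps):
    # determination unit: estimate the material -> target position conversion relation
    A, _ = cv2.estimateAffinePartial2D(np.float32(material_kps), np.float32(target_kps))

    # transformation unit: convert the material key points into target key point positions
    ones = np.ones((len(material_kps), 1), np.float32)
    target_new_kps = np.hstack([np.float32(material_kps), ones]) @ A.T

    # adjusting unit: placeholder; a full implementation would warp the target
    # object from target_kps to target_new_kps (see the claim 5 sketch above)
    adjusted = image.copy()

    # mapping unit: warp the RGBA material into the image frame and alpha-blend it
    h, w = adjusted.shape[:2]
    warped = cv2.warpAffine(material_rgba, A, (w, h))
    alpha = warped[:, :, 3:4].astype(np.float32) / 255.0
    out = adjusted.astype(np.float32) * (1 - alpha) + warped[:, :, :3].astype(np.float32) * alpha
    return out.astype(np.uint8), target_new_kps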
7. The apparatus according to claim 6, characterized in that the determination unit is specifically configured to:
determine N first preset key points based on the key point position information of the target object, and perform region division based on the N first preset key points to obtain at least two first regions, wherein N is an integer not less than 4;
determine N second preset key points based on the key point position information of the decoration material, and perform region division based on the N second preset key points to obtain at least two second regions, wherein the at least two second regions are in one-to-one correspondence with the at least two first regions;
determine the position conversion relation between each second region and the corresponding first region;
and determine the position conversion relation between each second region and the corresponding first region as the position conversion relation between the decoration material and the target object.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 5.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any of claims 1 to 5.
10. A computer program product, characterized in that the computer program product includes program code which, when run on an electronic device, causes the electronic device to carry out the image processing method according to any one of claims 1 to 5.
CN202111051761.6A 2021-09-08 2021-09-08 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN113781293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111051761.6A CN113781293A (en) 2021-09-08 2021-09-08 Image processing method, image processing device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111051761.6A CN113781293A (en) 2021-09-08 2021-09-08 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113781293A true CN113781293A (en) 2021-12-10

Family

ID=78841940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111051761.6A Pending CN113781293A (en) 2021-09-08 2021-09-08 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113781293A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829930A (en) * 2019-01-15 2019-05-31 深圳市云之梦科技有限公司 Face image processing process, device, computer equipment and readable storage medium storing program for executing
CN109859098A (en) * 2019-01-15 2019-06-07 深圳市云之梦科技有限公司 Facial image fusion method, device, computer equipment and readable storage medium storing program for executing
CN110223218A (en) * 2019-05-16 2019-09-10 北京达佳互联信息技术有限公司 Face image processing process, device, electronic equipment and storage medium
CN111563855A (en) * 2020-04-29 2020-08-21 百度在线网络技术(北京)有限公司 Image processing method and device
US20210201458A1 (en) * 2019-08-28 2021-07-01 Beijing Sensetime Technology Development Co., Ltd. Face image processing method and apparatus, image device, and storage medium

Similar Documents

Publication Publication Date Title
CN109859098B (en) Face image fusion method and device, computer equipment and readable storage medium
CN108986016B (en) Image beautifying method and device and electronic equipment
CN109063560B (en) Image processing method, image processing device, computer-readable storage medium and terminal
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
EP3839879B1 (en) Facial image processing method and apparatus, image device, and storage medium
US20200234034A1 (en) Systems and methods for face reenactment
CN110852949B (en) Point cloud data completion method and device, computer equipment and storage medium
CN110223218B (en) Face image processing method and device, electronic equipment and storage medium
WO2020101960A1 (en) Pose-variant 3d facial attribute generation
CN109584327B (en) Face aging simulation method, device and equipment
CN111275650B (en) Beauty treatment method and device
CN104715447A (en) Image synthesis method and device
CN108765265B (en) Image processing method, device, terminal equipment and storage medium
CN110021000B (en) Hairline repairing method and device based on layer deformation
CN111383232A (en) Matting method, matting device, terminal equipment and computer-readable storage medium
CN115239861A (en) Face data enhancement method and device, computer equipment and storage medium
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN113658035B (en) Face transformation method, device, equipment, storage medium and product
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
KR20210041534A (en) Image processing method, device and electronic device
CN113781293A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111652807A (en) Eye adjustment method, eye live broadcast method, eye adjustment device, eye live broadcast device, electronic equipment and storage medium
CN115239856A (en) Animation generation method and device for 3D virtual object, terminal device and medium
CN114820988A (en) Three-dimensional modeling method, device, equipment and storage medium
CN114519663A (en) Image-based deformation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination