WO2020001013A1 - Image processing method, apparatus, computer-readable storage medium, and terminal - Google Patents

Image processing method, apparatus, computer-readable storage medium, and terminal

Info

Publication number
WO2020001013A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
triangulation mesh
key point
triangle
included angle
Prior art date
Application number
PCT/CN2019/073074
Other languages
English (en)
French (fr)
Inventor
邓涵
赖锦锋
刘志超
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Priority to US16/980,323 (US11017580B2)
Publication of WO2020001013A1

Classifications

    • G06T 15/04 3D [Three Dimensional] image rendering: Texture mapping
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06V 40/161 Human faces, e.g. facial parts, sketches or expressions: Detection; Localisation; Normalisation
    • G06T 17/20 Three dimensional [3D] modelling: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06V 40/171 Human faces: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/193 Eye characteristics, e.g. of the iris: Preprocessing; Feature extraction

Definitions

  • the present disclosure relates to the field of image technology, and in particular, to an image processing method, device, computer-readable storage medium, and terminal.
  • Among existing approaches, a relatively mature one is the face image meshing method, of which triangulation is an example. The idea of triangulation is to mark a number of corresponding feature points on the source image and the target image, and to divide each image into several triangular regions according to those feature points; Delaunay triangulation is the variant most commonly used.
  • The technical problem addressed by the present disclosure is to provide an image processing method that at least partially solves the problem of how to improve the user experience. Correspondingly, an image processing apparatus, an image processing hardware device, a computer-readable storage medium, and an image processing terminal are also provided.
  • An image processing method includes: identifying eye key points on a face image; obtaining auxiliary key points by interpolation, wherein the auxiliary key points and the eye key points form a first triangulation mesh at a predetermined eye position on the face image; and transforming an eye makeup effect image to the predetermined eye position according to the first triangulation mesh.
  • the eye makeup effect image includes at least one of eyelashes, double eyelids, single eyelids, eye shadows, and eyeliners.
  • the method further includes: detecting the face image in response to a user's selection of the eye makeup effect image.
  • Further, the obtaining of auxiliary key points by interpolation includes: obtaining a corresponding second triangulation mesh on a standard template, wherein the eye makeup effect image is drawn on the standard template; and determining the auxiliary key points on the first triangulation mesh according to the second triangulation mesh, wherein the similarity between corresponding triangles in the first triangulation mesh and the second triangulation mesh is within a first preset error range.
  • Further, determining the auxiliary key points on the first triangulation mesh according to the second triangulation mesh includes: determining a first included angle between a first connection line and a second connection line according to the second triangulation mesh, wherein the first connection line is the line between the first eye key point and the second eye key point, the first eye key point being adjacent to the second eye key point, the second connection line is the line between the second eye key point and the first auxiliary key point, and the first eye key point, the second eye key point, and the first auxiliary key point are the three vertices of a first triangle in the first triangulation mesh; determining a second included angle between a third connection line and a fourth connection line according to the second triangulation mesh, wherein the third connection line is the line between the second eye key point and the third eye key point, the second eye key point being adjacent to the third eye key point, the fourth connection line is the line between the second eye key point and the second auxiliary key point, and the second eye key point, the third eye key point, and the second auxiliary key point are the three vertices of a second triangle in the first triangulation mesh; and determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh.
  • Further, determining the first included angle between the first connection line and the second connection line in the first triangulation mesh according to the second triangulation mesh includes: determining a first corresponding triangle on the second triangulation mesh corresponding to the first triangle; and determining the first included angle, wherein a first difference between the first included angle and a first corresponding included angle on the first corresponding triangle is within a second preset error range.
  • Further, determining the second included angle between the third connection line and the fourth connection line according to the second triangulation mesh includes: determining a second corresponding triangle on the second triangulation mesh corresponding to the second triangle; and determining the second included angle, wherein a second difference between the second included angle and a second corresponding included angle on the second corresponding triangle is within the second preset error range.
  • Further, determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh includes: determining a first ratio between the first connection line and the first corresponding connection line in the second triangulation mesh; and determining the first auxiliary key point according to the first ratio and the first included angle.
  • Further, determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh also includes: determining a second ratio between the third connection line and the edge of the second triangulation mesh corresponding to the third connection line; and determining the second auxiliary key point according to the second ratio and the second included angle.
  • the minimum value of the second preset error range is 0.
  • Further, the method comprises: determining the degree of opening and closing of the eyes on the face image according to the eye key points; and determining the first difference and the second difference according to the degree of opening and closing.
  • Further, determining the first difference and the second difference according to the degree of opening and closing includes: when the degree of opening and closing reaches a preset maximum value, setting the first difference and the second difference to the minimum value of the second preset error range; and when the degree of opening and closing reaches a preset minimum value, setting the first difference and the second difference to the maximum value of the second preset error range.
  • triangles in the second triangulation mesh are equilateral triangles.
  • Further, transforming the eye makeup effect image to the predetermined eye position according to the first triangulation mesh includes: determining a correspondence between the first triangulation mesh and the second triangulation mesh; and transforming the eye makeup effect image in the second triangulation mesh to the predetermined eye position on the face image in the first triangulation mesh according to the correspondence.
  • An image processing device includes:
  • a recognition module configured to identify eye key points on a face image;
  • an interpolation module configured to obtain auxiliary key points by interpolation, wherein the auxiliary key points and the eye key points form a first triangulation mesh at a predetermined eye position on the face image; and
  • a transformation module configured to transform an eye makeup effect image to the predetermined eye position according to the first triangulation mesh.
  • the eye makeup effect image includes at least one of eyelashes, double eyelids, single eyelids, eye shadows, and eyeliners.
  • Further, the device includes:
  • a response module configured to detect the face image in response to the user's selection of the eye makeup effect image.
  • Further, the interpolation module includes:
  • an acquisition submodule configured to obtain a corresponding second triangulation mesh on a standard template, wherein the eye makeup effect image is drawn on the standard template; and
  • a first determining submodule configured to determine the auxiliary key points on the first triangulation mesh according to the second triangulation mesh, wherein the similarity between corresponding triangles in the first triangulation mesh and the second triangulation mesh is within a first preset error range.
  • Further, the first determining submodule includes:
  • a second determining submodule configured to determine a first included angle between a first connection line and a second connection line according to the second triangulation mesh; wherein the first connection line is the line between the first eye key point and the second eye key point, the first eye key point being adjacent to the second eye key point; the second connection line is the line between the second eye key point and the first auxiliary key point; and the first eye key point, the second eye key point, and the first auxiliary key point are the three vertices of a first triangle in the first triangulation mesh;
  • a third determining submodule configured to determine a second included angle between a third connection line and a fourth connection line according to the second triangulation mesh; wherein the third connection line is the line between the second eye key point and the third eye key point, the second eye key point being adjacent to the third eye key point; the fourth connection line is the line between the second eye key point and the second auxiliary key point; and the second eye key point, the third eye key point, and the second auxiliary key point are the three vertices of a second triangle in the first triangulation mesh; and
  • a fourth determining submodule configured to determine the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh.
  • the second determining submodule includes:
  • a fifth determining submodule configured to determine a first corresponding triangle on the second triangulation mesh corresponding to the first triangle; and
  • a sixth determining submodule configured to determine the first included angle; wherein a first difference between the first included angle and the first corresponding included angle on the first corresponding triangle is within the second preset error range.
  • the third determining submodule includes:
  • a seventh determining submodule configured to determine a second corresponding triangle on the second triangulation mesh corresponding to the second triangle; and
  • an eighth determining submodule configured to determine the second included angle; wherein a second difference between the second included angle and the second corresponding included angle on the second corresponding triangle is within the second preset error range.
  • the fourth determining submodule includes:
  • a ninth determining submodule configured to determine a first ratio between the first connection line and a first corresponding connection line corresponding to the first connection line in the second triangulation mesh;
  • a tenth determination sub-module is configured to determine the first auxiliary key point according to the first ratio and the first included angle.
  • the fourth determining submodule includes:
  • An eleventh determining submodule is configured to determine a second ratio between the third line and an edge corresponding to the third line in the second triangulation mesh;
  • a twelfth determination sub-module is configured to determine the second auxiliary key point according to the second ratio and the second included angle.
  • the minimum value of the second preset error range is 0.
  • a first determining module configured to determine the degree of opening and closing of the eyes on the face image according to the eye key points; and
  • a second determining module configured to determine the first difference and the second difference according to the degree of opening and closing.
  • the second determining module includes:
  • a first setting submodule configured to set the first difference and the second difference to the minimum value of the second preset error range when the degree of opening and closing reaches a preset maximum value; and
  • a second setting submodule configured to set the first difference and the second difference to the maximum value of the second preset error range when the degree of opening and closing reaches a preset minimum value.
  • triangles in the second triangulation mesh are equilateral triangles.
  • the transformation module includes:
  • a thirteenth determining submodule is configured to determine a correspondence between the first triangulation mesh and the second triangulation mesh
  • a transformation submodule configured to transform the eye makeup effect image in the second triangulation mesh to the predetermined eye position on the face image in the first triangulation mesh according to the correspondence.
  • An image processing hardware device includes:
  • a memory for storing non-transitory computer-readable instructions; and
  • a processor configured to run the computer-readable instructions such that, when the instructions are executed by the processor, the steps of any one of the foregoing image processing method solutions are implemented.
  • A computer-readable storage medium is configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the steps of any one of the image processing method solutions described above.
  • An image processing terminal includes any one of the image processing devices described above.
  • Embodiments of the present disclosure provide an image processing method, an image processing device, an image processing hardware device, a computer-readable storage medium, and an image processing terminal.
  • The image processing method includes: identifying eye key points on a face image; obtaining auxiliary key points by interpolation, wherein the auxiliary key points and the eye key points form a first triangulation mesh at a predetermined eye position on the face image; and transforming the eye makeup effect image to the predetermined eye position according to the first triangulation mesh.
  • The embodiments of the present disclosure obtain auxiliary key points by interpolation around the eyes according to the detected eye key points, and transform the standard eye makeup effect image to the predetermined eye position on the face image according to the triangulation mesh formed by the eye key points and the auxiliary key points. This solves the problem of large differences in triangulation mesh shape caused by different people and different eye states, so that the expected eye makeup effect image can be well applied to the eyes of different people in different eye states, thereby improving the user experience.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of step S2 in the embodiment shown in FIG. 1;
  • FIG. 3 is a schematic flowchart of step S22 in the embodiment shown in FIG. 2;
  • FIG. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an image processing hardware device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an image processing terminal according to an embodiment of the present disclosure.
  • In the related art, the source image (the face image) and the target image (the standard template where the eye makeup effect image is located) are usually triangulated using a combination of eye key points and eyebrow key points to form triangulation meshes, and the target image is transformed onto the source image based on the triangles at corresponding positions in the two meshes.
  • The biggest problem with this method is that eyebrow shapes differ from person to person; in particular, some people's eyebrows may be raised sharply, so that the positions of the eyebrow key points on that part differ greatly from the eyebrow key points elsewhere. This causes the triangles formed by the eye and eyebrow key points to differ in shape from the corresponding triangles in the triangulation mesh on the standard template, and the differences can be large. Because the eye makeup effect image is drawn based on the triangulation mesh on the standard template, transforming it to the predetermined eye position on the face image according to the triangulation principle then easily causes distortion. Moreover, when beautification is applied to the eyes online, the state of the eyes changes constantly, and so does the positional relationship between the eyes and the eyebrows; especially when the change is severe (for example, when the user raises an eyebrow), the eye makeup effect image cannot follow the changes in eye state on the face image and adjust dynamically, which degrades the user experience.
  • an embodiment of the present disclosure provides an image processing method.
  • the image processing method mainly includes the following steps S1 to S3, wherein:
  • Step S1 Identify eye key points on the face image.
  • the face image may be an offline face image obtained through face recognition or a face image recognized online, which is not limited in this disclosure.
  • The eye key points may be the key points at the positions of the eyes obtained by detecting the key points of the facial features, for example: two key points at the left and right corners of the eye; one or more key points distributed on the upper eyelid; and one or more key points distributed on the lower eyelid. Eye key points can be used to identify the eye contour.
  • Step S2: Obtain auxiliary key points by interpolation; wherein the auxiliary key points and the eye key points form a first triangulation mesh at the predetermined eye position on the face image.
  • triangulation refers to marking a number of corresponding key points on a face image, and dividing the entire face image into a plurality of triangle regions according to the key points, and the triangle regions are connected to form a triangulation mesh.
  • the auxiliary keypoints and eye keypoints obtained by interpolation are the vertices on the triangle region in the triangulation mesh.
  • the auxiliary key points can be obtained by interpolation using the triangulation method according to actual needs.
  • An eye key point or auxiliary key point may be a vertex of a single triangle region, a shared vertex of two adjacent triangle regions, or a shared vertex of three adjacent triangle regions at the same time, depending on its position in the first triangulation mesh. An illustrative construction of such a mesh is sketched below.
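  • As a concrete illustration only (the disclosure does not prescribe a particular triangulation routine, so the use of Delaunay and all coordinates below are assumptions), a first triangulation mesh over the eye key points and auxiliary key points could be built as follows:

```python
# Illustrative sketch: building a triangulation mesh from eye key points
# plus interpolated auxiliary key points. Coordinates are hypothetical and
# Delaunay is used only as an example triangulation method.
import numpy as np
from scipy.spatial import Delaunay

# Detected eye key points (eye corners plus upper/lower eyelid points).
eye_keypoints = np.array([
    [100, 200], [160, 200],              # left and right eye corners
    [115, 185], [130, 180], [145, 185],  # upper eyelid
    [115, 212], [130, 216], [145, 212],  # lower eyelid
], dtype=np.float64)

# Auxiliary key points interpolated around the eye contour: on the lateral
# extensions of the corners, above the upper eyelid, below the lower eyelid.
auxiliary_keypoints = np.array([
    [85, 200], [175, 200],
    [115, 165], [130, 160], [145, 165],
    [115, 232], [130, 236], [145, 232],
], dtype=np.float64)

points = np.vstack([eye_keypoints, auxiliary_keypoints])
mesh = Delaunay(points)  # the first triangulation mesh
print(mesh.simplices)    # each row holds the vertex indices of one triangle
```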
  • Step S3: Transform the eye makeup effect image to the predetermined eye position according to the first triangulation mesh.
  • the eye makeup effect image may be an image preset by the system.
  • The eye makeup effect image is triangulated in advance, and the correspondence between the triangles obtained by the triangulation can be used to transform the eye makeup effect image to the predetermined eye position on the face image.
  • the image processing system may also automatically apply eye makeup to the face image, which is not limited in this disclosure.
  • the image processing system first obtains a face image to be processed, and then performs face detection on the face image. After detecting the face area, perform key point detection on the face, and obtain eye key points on the face image.
  • all key points on the face image can be detected, including key points on eyebrows, nose, mouth, eyes, contours of the face, etc. In another embodiment, only key points at predetermined positions of the eyes may be detected.
  • The eye key points may include: two key points at the left and right corners of the eye; a key point at the highest point of the upper eyelid together with the two key points to its left and right; and a key point at the lowest point of the lower eyelid together with the two key points to its left and right, for a total of 8 key points.
  • fewer or more eye keypoints may be obtained according to actual needs and the detection method of the face keypoints, which is not limited in the present disclosure.
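  • For concreteness, the sketch below extracts a comparable set of eye key points with a generic 68-point facial landmark model; the use of dlib, the model file name, and the landmark indices are assumptions for illustration, since the disclosure does not specify a detector.

```python
# Illustrative sketch: obtaining eye key points with a 68-point landmark
# model. dlib and the indices below are assumptions, not the disclosure's
# prescribed method; any facial key point detector would work.
import dlib

detector = dlib.get_frontal_face_detector()
# The model file path below is hypothetical.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_eye_keypoints(gray_image):
    faces = detector(gray_image)
    if len(faces) == 0:
        return None  # no face detected
    shape = predictor(gray_image, faces[0])
    # In the 68-point convention, indices 36-41 outline one eye (two
    # corners, two upper-eyelid points, two lower-eyelid points); the
    # 8-point layout described above would add the eyelid extrema.
    return [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
```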
  • The auxiliary key points can be obtained by interpolation according to the principle of triangulation and the eye makeup effect image selected by the user.
  • the position of the auxiliary key point can be selected based on the position of the eye key point.
  • the auxiliary key point can be selected around the contour of the eye, for example, on the upper eyelid, lower eyelid, and the lateral extension of the eye corner.
  • the first triangulation mesh includes a plurality of triangles, and a vertex of each triangle is an eye key point or an auxiliary key point.
  • An eyebrow-raising action on the face image therefore does not cause the triangles in the first triangulation mesh to deform significantly, so when the first triangulation mesh is used to transform the eye makeup effect image to the predetermined eye position, no distortion like that in the prior art is produced, and the user experience is greatly improved.
  • In this embodiment, the auxiliary key points are obtained by interpolation around the eyes of the face, and the standard eye makeup effect image is transformed to the predetermined eye position on the face image according to the triangulation mesh formed by the eye key points and the auxiliary key points. This solves the problem of large differences in triangulation mesh shape caused by different people and different eye states, achieving the technical effect that the expected eye makeup effect image can be well applied to the eyes of different people in different eye states, thereby improving the user experience.
  • the eye makeup effect image includes at least one of eyelashes, double eyelids, single eyelids, eye shadows, and eyeliners.
  • At least one of eyelashes, double eyelids, single eyelids, eye shadow, and eyeliner can be applied automatically to a face image by the image processing system, and the transformed effect is the same as the effect on the standard template, without distortion, which greatly improves the user experience.
  • Optionally, the method may further include: detecting the face image in response to the user's selection of the eye makeup effect image.
  • the image processing system may provide a variety of eye makeup effect images in advance, and the eye makeup effect images are designed on a standard template preset by the image processing system.
  • Users can add eye makeup effects to face images through image processing systems.
  • the image processing system may first obtain a picture or a video frame of the eye makeup effect to be added by the user.
  • the user can upload pictures including face images through the interface provided by the image processing system, and process the face images on the pictures offline, or obtain the user's avatar video frame in real time through the camera, and process the avatar video frame online. Whether it is offline processing or online processing, after the user selects an eye makeup effect image, a face image is detected from a picture or a video frame.
  • the process of detecting a face image is to determine whether a face exists in the picture or video frame to be detected, and if it exists, return information such as the size and position of the face.
  • There are many methods for detecting a face image, such as skin color detection, motion detection, and edge detection.
  • Any method for detecting a face image can be combined with the embodiments of the present disclosure to complete the detection of a face image.
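  • As one possible realization (an assumption, since the disclosure leaves the detection method open), face detection can be sketched with an OpenCV cascade classifier:

```python
# Illustrative sketch: detecting face regions in a picture or video frame.
# OpenCV's Haar cascade is only one possible detector; skin color, motion,
# or edge based detectors would serve equally.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) rectangle: the size and position
    # information mentioned above.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```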
  • A face image is generated for each detected face.
  • In this embodiment, the user's selection of an eye makeup effect image is used as a trigger event, and image processing is performed to add the eye makeup effect image to the face image specified by the user, which adds interest and improves the user experience.
  • step S2, that is, the step of obtaining auxiliary key points by interpolation, may include:
  • Step S21: Obtain a corresponding second triangulation mesh on the standard template; wherein the eye makeup effect image is drawn on the standard template.
  • Step S22: Determine the auxiliary key points on the first triangulation mesh according to the second triangulation mesh; wherein the similarity between corresponding triangles in the first triangulation mesh and the second triangulation mesh is within the first preset error range.
  • the eye makeup effect image is drawn on a standard template of an image processing system.
  • the standard template includes a standard face image, and the standard face image is triangulated in advance to form a second triangulation mesh. That is, the eye makeup effect image is correspondingly drawn in the second triangulated mesh.
  • If the corresponding triangles in the two meshes differ too much in shape, the eye makeup effect image will be distorted when it is transformed.
  • Therefore, the auxiliary key points can be determined based on the second triangulation mesh on the standard template, so that corresponding triangles in the first triangulation mesh and the second triangulation mesh are as similar as possible; that is, the similarity between the two is controlled within the first preset error range.
  • the corresponding triangle refers to a triangle on a detected part of the face image and a triangle on a corresponding part on the standard face image.
  • For example, an eye key point detected on the outer corner of the face image, an auxiliary key point on the lateral extension line of the outer eye corner, and another auxiliary key point above that auxiliary key point form a triangle a; the eye key point on the outer corner of the standard face image, the auxiliary key point on the lateral extension line of the outer eye corner, and another auxiliary key point above that auxiliary key point form a triangle b. Triangle a and triangle b are then corresponding triangles.
  • The value of the first preset error range can be set according to the actual situation and is not limited here.
  • In this embodiment, corresponding triangles in the first triangulation mesh and the second triangulation mesh are made as similar as possible, so that when the eye makeup effect image drawn in the second triangulation mesh is added to the eye position of the face image where the first triangulation mesh is located, the eye makeup effect image is not distorted by differences between eyes or by the state of the eyes in the face image, which improves the user experience. A similarity measure is sketched below.
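  • For illustration only (the angle-based measure and all names below are assumptions), the similarity between two corresponding triangles could be quantified as follows:

```python
# Illustrative sketch: an angle-based similarity error between two
# corresponding triangles, to be checked against the first preset error
# range when placing auxiliary key points.
import math

def triangle_angles(p0, p1, p2):
    def angle_at(a, b, c):
        # Interior angle at vertex a of triangle (a, b, c).
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return angle_at(p0, p1, p2), angle_at(p1, p2, p0), angle_at(p2, p0, p1)

def similarity_error(tri_a, tri_b):
    """Maximum difference between corresponding angles, in radians;
    0 means the two triangles are exactly similar."""
    return max(abs(a - b)
               for a, b in zip(triangle_angles(*tri_a), triangle_angles(*tri_b)))

# Usage: keep similarity_error(face_triangle, template_triangle) within
# the first preset error range.
```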
  • step S22, that is, the step of determining the auxiliary key points on the first triangulation mesh according to the second triangulation mesh, may include:
  • Step S31: Determine a first included angle between a first connection line and a second connection line in the first triangulation mesh according to the second triangulation mesh; wherein the first connection line is the line between the first eye key point and the second eye key point, the first eye key point being adjacent to the second eye key point; the second connection line is the line between the second eye key point and the first auxiliary key point; and the first eye key point, the second eye key point, and the first auxiliary key point are the three vertices of a first triangle in the first triangulation mesh.
  • Step S32: Determine a second included angle between a third connection line and a fourth connection line according to the second triangulation mesh; wherein the third connection line is the line between the second eye key point and the third eye key point, the second eye key point being adjacent to the third eye key point; the fourth connection line is the line between the second eye key point and the second auxiliary key point; and the second eye key point, the third eye key point, and the second auxiliary key point are the three vertices of a second triangle in the first triangulation mesh.
  • Step S33: Determine the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh.
  • Specifically, the sizes of the vertex angles of each triangle in the second triangulation mesh may be determined first, the corresponding angles in the first triangulation mesh are then determined according to the principle that corresponding angles of similar triangles are equal, and the auxiliary key points are determined from those angles.
  • The first triangle and the second triangle in the first triangulation mesh are adjacent triangles. Two vertices of the first triangle are detected eye key points, namely the first eye key point and the second eye key point, and its other vertex is the first auxiliary key point to be determined; the first connection line of the first triangle is the line between the first eye key point and the second eye key point, and the second connection line is the line between the second eye key point and the first auxiliary key point. The second triangle is adjacent to the first triangle; two of its vertices are auxiliary key points, namely the first auxiliary key point and the second auxiliary key point, and its other vertex is the second eye key point, that is, it shares a vertex with the first triangle.
  • The second triangulation mesh has two triangles corresponding to the first triangle and the second triangle. Two vertices of the first corresponding triangle, which corresponds to the first triangle, are eye key points on the standard face image, detected by key point detection when the standard template is established and triangulated; its other vertex is a first corresponding auxiliary key point selected around the contour of the eye. The selection principle can be based on the actual situation, for example selecting the first corresponding auxiliary key point so that the corresponding triangle is an equilateral or isosceles triangle. The second corresponding triangle, which corresponds to the second triangle, shares two vertices with the first corresponding triangle, namely one eye key point and the first corresponding auxiliary key point; its other vertex is a selected second corresponding auxiliary key point, whose selection principle is the same as that of the first corresponding auxiliary key point.
  • In this embodiment, the second triangulation mesh is established in advance, that is, the corresponding auxiliary key points in the second triangulation mesh are selected and defined in advance. Then, when determining the auxiliary key points on the first triangulation mesh, once the two included angles of the corresponding triangles in the second triangulation mesh are determined, the above-mentioned first included angle and second included angle in the first and second triangles of the first triangulation mesh can be determined.
  • After the first included angle and the second included angle are determined, the first auxiliary key point and the second auxiliary key point can be determined according to the first included angle, the second included angle, and the second triangulation mesh. The auxiliary key points in the other triangles of the first triangulation mesh can then be determined according to the same principle.
  • step S31, that is, determining the first included angle between the first connection line and the second connection line according to the second triangulation mesh, may include: determining the first corresponding triangle on the second triangulation mesh corresponding to the first triangle; and determining the first included angle, wherein the first difference between the first included angle and the angle corresponding to the first included angle on the first corresponding triangle is within the second preset error range.
  • Since the first triangulation mesh and the second triangulation mesh have corresponding triangles, that is, the triangles of corresponding parts in the face image correspond almost or completely to each other, then, with the second triangulation mesh already established, the first corresponding triangle corresponding to the first triangle in the first triangulation mesh is determined first, and the size of the first included angle in the first triangle can then be determined from the first corresponding triangle according to the similarity principle of triangles. For example, when the first triangle and the first corresponding triangle are exactly similar, the first difference between the first included angle and the first corresponding included angle in the first corresponding triangle may be 0.
  • the second preset error range can be set according to the actual situation.
  • For example, the second preset error range can be [0, δ], where δ can be 20 degrees; this is not specifically limited here.
  • step S32, that is, determining the second included angle between the third connection line and the fourth connection line according to the second triangulation mesh, may include:
  • The determination of the second included angle is similar to that of the first included angle. Since the first triangulation mesh and the second triangulation mesh have corresponding triangles, that is, the triangles of corresponding parts in the face image correspond almost or completely to each other, then, with the second triangulation mesh already established, the second corresponding triangle corresponding to the second triangle in the first triangulation mesh is determined first, and the size of the second included angle in the second triangle is then determined from the second corresponding triangle according to the similarity principle of triangles. For example, when the second triangle and the second corresponding triangle are exactly similar, the second difference between the second included angle and the second corresponding included angle in the second corresponding triangle may be 0.
  • the second preset error range can be set according to the actual situation.
  • For example, the second preset error range can be [0, δ], where δ can be 20 degrees; this is not specifically limited here.
  • Step S33, that is, the step of determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh, may include: determining the first ratio between the first connection line and the first corresponding connection line in the second triangulation mesh; and determining the first auxiliary key point according to the first ratio and the first included angle.
  • Specifically, after an included angle and one of its sides are determined, the other side forming the included angle can be determined according to the ratio between corresponding sides of similar triangles. The first connection line is the line between two eye key points of the first triangle in the first triangulation mesh, so its length is determined; the length of the side of the first corresponding triangle corresponding to the first connection line is also determined, so the first ratio between the first connection line and its corresponding side can be determined. Therefore, the exact position of the first auxiliary key point can be determined on the basis of the first connection line according to the similar-triangle principle, the first ratio, and the size of the first included angle, as sketched below.
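  • A minimal sketch of this construction, assuming the included angle and the template side lengths have already been read off the second triangulation mesh (all names here are illustrative):

```python
# Illustrative sketch of the similar-triangle construction: place the first
# auxiliary key point from the first connection line, the first ratio, and
# the first included angle. The sign of the angle selects the side of the
# line on which the auxiliary point lies.
import math

def place_auxiliary_point(eye_kp1, eye_kp2, template_edge_len,
                          template_aux_edge_len, included_angle):
    """eye_kp1, eye_kp2: the first and second eye key points (x, y).
    template_edge_len: length of the template side corresponding to the
        first connection line.
    template_aux_edge_len: length of the template side running toward the
        corresponding auxiliary key point.
    included_angle: the first included angle in radians, taken from the
        template triangle, optionally offset within the second preset
        error range."""
    (x1, y1), (x2, y2) = eye_kp1, eye_kp2
    # First ratio between the detected line and its template counterpart.
    line_len = math.hypot(x1 - x2, y1 - y2)
    ratio = line_len / template_edge_len
    # By similarity, the side toward the auxiliary point scales equally.
    aux_len = ratio * template_aux_edge_len
    # Rotate the direction eye_kp2 -> eye_kp1 by the included angle to get
    # the direction eye_kp2 -> auxiliary key point.
    base = math.atan2(y1 - y2, x1 - x2)
    theta = base + included_angle
    return (x2 + aux_len * math.cos(theta), y2 + aux_len * math.sin(theta))
```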
  • Step S33 may also include: determining the second ratio between the third connection line and the edge of the second triangulation mesh corresponding to the third connection line; and determining the second auxiliary key point according to the second ratio and the second included angle.
  • Specifically, after an included angle and one of its sides are determined, the other side forming the included angle can be determined according to the ratio between corresponding sides of similar triangles. The third connection line is the line between an eye key point of the second triangle in the first triangulation mesh and the first auxiliary key point, so once the first auxiliary key point is determined, the length of the third connection line is determined; the length of the side corresponding to the third connection line on the second corresponding triangle is also determined, so the second ratio between the third connection line and its corresponding side can be determined. Therefore, the exact position of the second auxiliary key point can be determined on the basis of the third connection line according to the similar-triangle principle, the second ratio, and the size of the second included angle.
  • In an optional embodiment, the minimum value of the second preset error range is zero. When the differences are zero, the effect of transforming the eye makeup effect image onto the face image is best: the first triangle and its first corresponding triangle are exactly similar, so the error between the first included angle and the first corresponding included angle is 0, and the second triangle and its second corresponding triangle are likewise exactly similar, so the error between the second included angle and the second corresponding included angle is also 0.
  • The eyes on the standard face image of the standard template are always open, but in actual application the detected state of the eyes on the face image changes constantly: open at one moment, closed at another. Therefore, corresponding triangles in the first triangulation mesh and the second triangulation mesh may not be exactly similar, and the error between corresponding included angles is then not zero, but the error can be kept within the second preset error range.
  • In an optional embodiment, the image processing method may further include: determining the degree of opening and closing of the eyes on the face image according to the eye key points; and determining the first difference and the second difference according to the degree of opening and closing.
  • On the standard template the eyes are fully open, and the degree of eye opening and closing in this state can be set to the maximum; the closed state of the eyes can be regarded as the minimum degree of opening and closing. Therefore, taking the standard template as the reference, when the degree of opening and closing of the eyes on the detected face image is consistent with that on the standard face image, corresponding triangles in the first triangulation mesh and the second triangulation mesh can be made most similar, so the error between corresponding angles is smallest, that is, the first difference and the second difference are smallest; the smaller the degree of opening and closing of the eyes on the detected face image, the larger the first difference and the second difference. The first difference and the second difference may be equal or unequal, as long as both are within the second preset error range.
  • The degree of eye opening and closing can be determined from the positions of the eye key points, for example from the difference between the vertical coordinate of the eyelid key point with the largest vertical coordinate among the eye key points and the vertical coordinate of an eye-corner key point; the smaller the difference, the smaller the degree of opening and closing.
  • Specifically, the step of determining the first difference and the second difference according to the degree of opening and closing may include: when the degree of opening and closing reaches a preset maximum value, setting the first difference and the second difference to the minimum value of the second preset error range; and when the degree of opening and closing reaches a preset minimum value, setting the first difference and the second difference to the maximum value of the second preset error range.
  • That is, when the degree of opening and closing of the eyes on the detected face image is largest, consistent with the degree of opening and closing in the standard template, the included angle of each triangle in the first triangulation mesh can be set so that its difference from the corresponding angle in the corresponding triangle of the second triangulation mesh is the minimum value of the second preset error range; corresponding triangles in the two meshes are then most similar. When the degree of opening and closing of the eyes on the detected face image is smallest, the included angle of each triangle in the first triangulation mesh can be set so that its difference from the corresponding angle in the second triangulation mesh is the maximum value of the second preset error range; the similarity error between corresponding triangles then reaches its maximum. Therefore, when the eyes on the face image are fully open, the first difference and the second difference can be set to the minimum value, and when the eyes are closed, they can be set to the maximum value. A sketch of this mapping follows.
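  • A minimal sketch of this heuristic, assuming a linear mapping between the opening degree and the allowed angle error (the linearity and all names below are assumptions for illustration):

```python
# Illustrative sketch: estimate the eye opening degree from key point
# coordinates and map it to the allowed angle difference within the second
# preset error range (minimum error when fully open, maximum when closed).
import math

def eye_opening_degree(upper_eyelid_kp, eye_corner_kp):
    # Vertical distance between the highest upper-eyelid key point and an
    # eye-corner key point; smaller means the eye is more closed.
    return abs(upper_eyelid_kp[1] - eye_corner_kp[1])

def angle_error(opening, opening_min, opening_max,
                err_min=0.0, err_max=math.radians(20)):
    # Normalize the opening degree to [0, 1] and interpolate linearly:
    # fully open -> err_min, fully closed -> err_max.
    t = (opening - opening_min) / max(opening_max - opening_min, 1e-6)
    t = min(max(t, 0.0), 1.0)
    return err_max - t * (err_max - err_min)
```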
  • the triangles in the second triangulation mesh are equilateral triangles.
  • the triangles in the second triangulation mesh on the standard template are all equilateral triangles, that is, each angle of the triangles is 60 degrees.
  • When the first triangulation mesh is established, the angle with an eye key point as its vertex can then be set to 60 degrees plus an error, where the error lies within the second preset error range and varies with the detected degree of opening and closing of the eyes on the face image.
  • using the image processing method of this embodiment can make the transformation of the eye makeup effect image achieve a good effect, and it is not easy to cause distortion.
  • In an optional embodiment, step S3, that is, transforming the eye makeup effect image to the predetermined eye position according to the first triangulation mesh, may include:
  • determining the correspondence between the first triangulation mesh and the second triangulation mesh; and transforming the eye makeup effect image in the second triangulation mesh to the predetermined eye position according to the correspondence.
  • Specifically, after the auxiliary key points are determined, a first triangulation mesh is formed at the predetermined eye position on the face image. The correspondence between the first triangulation mesh and the second triangulation mesh, that is, the correspondence between the vertex coordinates of each pair of triangles, is determined, and the image in each triangle region of the second triangulation mesh is transformed to the corresponding triangle region of the first triangulation mesh according to the coordinate correspondence, thereby transforming the eye makeup effect image, as sketched below.
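  • For illustration (OpenCV and the helper below are assumptions, not the disclosure's prescribed implementation), the per-triangle transform can be sketched as follows:

```python
# Illustrative sketch: warp the eye makeup effect image triangle by
# triangle, using the affine map defined by each pair of corresponding
# triangle vertices in the second (template) and first (face) meshes.
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """src_tri, dst_tri: 3x2 arrays of corresponding vertex coordinates."""
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    # Affine map defined by the coordinate correspondence of the vertices.
    m = cv2.getAffineTransform(src_tri, dst_tri)
    warped = cv2.warpAffine(src_img, m, (dst_img.shape[1], dst_img.shape[0]))
    # Keep only the pixels inside the destination triangle region.
    mask = np.zeros(dst_img.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)
    dst_img[mask > 0] = warped[mask > 0]

# For every pair of corresponding triangles (template mesh -> face mesh):
#     warp_triangle(effect_image, face_image, template_tri, face_tri)
```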
  • The following are device embodiments of the present disclosure, which can be used to perform the steps implemented by the method embodiments of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, refer to the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides an image processing apparatus.
  • the apparatus may perform the steps described in the above-mentioned embodiment of the image processing method.
  • the device mainly includes: an identification module 41, an interpolation module 42, and a transformation module 43.
  • the recognition module 41 is used for identifying eye key points on the face image
  • the interpolation module 42 is used for interpolation to obtain auxiliary key points; wherein the auxiliary key points and the eye key points form the face image A first triangulation mesh at a predetermined position of the eye
  • a transformation module 43 is configured to transform an eye makeup effect image to a predetermined position of the eye according to the first triangulation mesh;
  • the face image may be an offline face image obtained through face recognition or a face image recognized online, which is not limited in this disclosure.
  • The eye key points may be the key points at the positions of the eyes obtained by detecting the key points of the facial features, for example: two key points at the left and right corners of the eye; one or more key points distributed on the upper eyelid; and one or more key points distributed on the lower eyelid. Eye key points can be used to identify the eye contour.
  • triangulation refers to marking a number of corresponding key points on a face image, and dividing the entire face image into a plurality of triangle regions according to the key points, and the triangle regions are connected to form a triangulation mesh.
  • the auxiliary keypoints and eye keypoints obtained by interpolation are the vertices on the triangle region in the triangulation mesh.
  • the interpolation module 42 may use triangulation method interpolation to obtain auxiliary keypoints according to actual needs.
  • An eye key point or auxiliary key point may be a vertex of a single triangle region, a shared vertex of two adjacent triangle regions, or a shared vertex of three adjacent triangle regions at the same time, depending on its position in the first triangulation mesh.
  • the eye makeup effect image may be an image preset by the system.
  • The eye makeup effect image is triangulated in advance, and the correspondence between the triangles obtained by the triangulation can be used to transform the eye makeup effect image to the predetermined eye position on the face image.
  • the image processing system may also automatically apply eye makeup to the face image, which is not limited in this disclosure.
  • the image processing system first obtains a face image to be processed, and then performs face detection on the face image. After detecting the face area, perform key point detection on the face, and obtain eye key points on the face image.
  • the recognition module 41 can detect all key points on the face image, including eyebrows, nose, mouth, eyes, contours of the face, and the like. In another embodiment, the recognition module 41 may also detect only the key points of the eye position.
  • The eye key points may include: two key points at the left and right corners of the eye; a key point at the highest point of the upper eyelid together with the two key points to its left and right; and a key point at the lowest point of the lower eyelid together with the two key points to its left and right, for a total of 8 key points.
  • fewer or more eye keypoints may be obtained according to actual needs and the detection method of the face keypoints, which is not limited in the present disclosure.
  • The interpolation module 42 can obtain the auxiliary key points by interpolation according to the principle of triangulation and the eye makeup effect image selected by the user.
  • the position of the auxiliary key point can be selected based on the position of the eye key point.
  • the auxiliary key point can be selected around the contour of the eye, for example, on the upper eyelid, lower eyelid, and the lateral extension of the eye corner.
  • the first triangulation mesh includes a plurality of triangles, and a vertex of each triangle is an eye key point or an auxiliary key point.
  • An eyebrow-raising action on the face image therefore does not cause the triangles in the first triangulation mesh to deform significantly, so when the first triangulation mesh is used to transform the eye makeup effect image to the predetermined eye position, no distortion like that in the prior art is produced, and the user experience is greatly improved.
  • In this embodiment, the auxiliary key points are obtained by interpolation around the eyes of the face, and the standard eye makeup effect image is transformed to the predetermined eye position on the face image according to the triangulation mesh formed by the eye key points and the auxiliary key points. This solves the problem of large differences in triangulation mesh shape caused by different people and different eye states, achieving the technical effect that the expected eye makeup effect image can be well applied to the eyes of different people in different eye states, thereby improving the user experience.
  • the eye makeup effect image includes at least one of eyelashes, double eyelids, single eyelids, eye shadows, and eyeliners.
  • At least one of eyelashes, double eyelids, single eyelids, eye shadow, and eyeliner can be applied automatically to a face image by the image processing system, and the transformed effect is the same as the effect on the standard template, without distortion, which greatly improves the user experience.
  • Optionally, the device may further include:
  • the response module is configured to detect the face image in response to a selected event of the eye makeup effect image by a user.
  • the image processing system may provide a variety of eye makeup effect images in advance, and the eye makeup effect images are designed on a standard template preset by the image processing system.
  • Users can add eye makeup effects to face images through image processing systems. After the user selects an eye makeup effect image provided by the image processing system, the image processing system may first obtain a picture or a video frame of the eye makeup effect to be added by the user. The user can upload pictures including face images through the interface provided by the image processing system, and process the face images on the pictures offline, or obtain the user's avatar video frame in real time through the camera, and process the avatar video frame online.
  • the response module detects a face image from a picture or a video frame.
  • the process of detecting a face image is to determine whether a face exists in the picture or video frame to be detected, and if it exists, return information such as the size and position of the face.
  • There are many methods for detecting a face image, such as skin color detection, motion detection, and edge detection.
  • Any method for detecting a face image can be combined with the embodiments of the present disclosure to complete the detection of a face image.
  • A face image is generated for each detected face.
  • In this embodiment, the user's selection of an eye makeup effect image is used as a trigger event, and image processing is performed to add the eye makeup effect image to the face image specified by the user, which adds interest and improves the user experience.
  • the interpolation module 42 may include:
  • a first determining submodule configured to determine the auxiliary key point on the first triangulation mesh according to the second triangulation mesh; wherein the first triangulation mesh and a second triangulation mesh The similarity between corresponding triangles in the triangulation mesh is within a first preset error range.
  • the eye makeup effect image is drawn on a standard template of an image processing system.
  • the standard template includes a standard face image, and the standard face image is triangulated in advance to form a second triangulation mesh. That is, the eye makeup effect image is correspondingly drawn in the second triangulated mesh.
  • If the corresponding triangles in the two meshes differ too much in shape, the eye makeup effect image will be distorted when it is transformed.
  • Therefore, the auxiliary key points can be determined based on the second triangulation mesh on the standard template, so that corresponding triangles in the first triangulation mesh and the second triangulation mesh are as similar as possible; that is, the similarity between the two is controlled within the first preset error range.
  • the corresponding triangle refers to a triangle on a detected part of the face image and a triangle on a corresponding part on the standard face image.
  • For example, an eye key point detected on the outer corner of the face image, an auxiliary key point on the lateral extension line of the outer eye corner, and another auxiliary key point above that auxiliary key point form a triangle a; the eye key point on the outer corner of the standard face image, the auxiliary key point on the lateral extension line of the outer eye corner, and another auxiliary key point above that auxiliary key point form a triangle b. Triangle a and triangle b are then corresponding triangles.
  • The value of the first preset error range can be set according to the actual situation and is not limited here.
  • In this embodiment, corresponding triangles in the first triangulation mesh and the second triangulation mesh are made as similar as possible, so that when the eye makeup effect image drawn in the second triangulation mesh is added to the eye position of the face image where the first triangulation mesh is located, the eye makeup effect image is not distorted by differences between eyes or by the state of the eyes in the face image, which improves the user experience.
  • the first determining module may include:
  • a second determining submodule determines a first angle between a first line and a second line in the first triangulation mesh according to the second triangulation mesh; wherein the first The line is a line between the first eye key point and the second eye key point, and the first eye key point is adjacent to the second eye key point; the second line is the second eye key point A line between a point and a first auxiliary key point; the first eye key point, the second eye key point, and the first auxiliary key point are three of the first triangle in the first triangulation mesh Vertices
  • a third determining sub-module determines a second included angle between a third connection line and a fourth connection line according to the second triangulation mesh; wherein the third connection line is a key of the second eye The line between the second eye point and the third eye key point, the second eye point and the third eye pendant are adjacent; the fourth line is the second eye point and the second auxiliary key A line between the points; the second eye key point, the third eye key point, and the second auxiliary key point are three vertices of a second triangle in the first triangulation mesh;
  • a fourth determining sub-module configured to determine the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh.
  • when the auxiliary key points are determined on the principle that corresponding triangles in the first and second triangulation meshes should be as similar as possible, the vertex angles of each triangle in the second triangulation mesh may be determined first, and the corresponding vertex angles in the first triangulation mesh are then determined according to the principle that corresponding angles of similar triangles are equal. Finally, with the vertex angles of each triangle in the first triangulation mesh determined and the eye key points fixed as triangle vertices, the auxiliary key points are determined.
  • for example, the first triangle and the second triangle in the first triangulation mesh are adjacent triangles. Two vertices of the first triangle are detected eye key points, namely the first eye key point and the second eye key point, and the other vertex of the first triangle is the first auxiliary key point to be determined; the first line in the first triangle is the line between the first eye key point and the second eye key point, and the second line is the line between the second eye key point and the first auxiliary key point. The second triangle is adjacent to the first triangle: two of its vertices are auxiliary key points, namely the first auxiliary key point and the second auxiliary key point, and the other vertex is the second eye key point, i.e. the two triangles share a common vertex.
  • the second triangulation mesh contains two triangles corresponding to the first triangle and the second triangle. Two vertices of the first corresponding triangle, which corresponds to the first triangle, are eye key points on the standard face image; these can be detected by key point detection when the standard template is established and triangulated. The other vertex is the first corresponding auxiliary key point selected around the eye contour; the selection principle can be set according to the actual situation, for example choosing the first corresponding auxiliary key point so that the triangle is equilateral or isosceles. The second corresponding triangle, which corresponds to the second triangle, shares two vertices with the first corresponding triangle, namely one eye key point of the first corresponding triangle and the first corresponding auxiliary key point, while its other vertex is the selected second corresponding auxiliary key point, chosen on the same principle as the first corresponding auxiliary key point.
  • the second triangulation mesh is established in advance; that is, the corresponding auxiliary key points in the second triangulation mesh are selected and defined beforehand. Then, when determining the auxiliary key points on the first triangulation mesh, once the two included angles of the corresponding triangles in the second triangulation mesh are determined, the first included angle and the second included angle in the first triangle and the second triangle of the first triangulation mesh can be determined.
  • with the second triangulation mesh determined, the first auxiliary key point and the second auxiliary key point can be determined, according to the principle of similar triangles, from the first included angle, the second included angle, and the second triangulation mesh. The auxiliary key points in the other triangles of the first triangulation mesh can then be determined on the same principle.
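  • A hedged sketch of this step: the included angle is read off the corresponding triangle of the pre-built standard mesh and reused on the face mesh. The coordinates below are illustrative stand-ins for standard-mesh vertices, not values from the disclosure:

```python
import numpy as np

def included_angle_at(vertex, end1, end2):
    """Angle (radians) at `vertex` between the lines vertex->end1 and vertex->end2."""
    v1 = np.asarray(end1, float) - vertex
    v2 = np.asarray(end2, float) - vertex
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Illustrative first corresponding triangle on the standard mesh: two eye key
# points and the first corresponding auxiliary key point (equilateral here).
e1_std = np.array([0.0, 0.0])
e2_std = np.array([1.0, 0.0])
a1_std = np.array([0.5, np.sqrt(3.0) / 2.0])
# By the similar-triangle principle, this angle (plus an error kept within the
# second preset error range) becomes the first included angle on the face mesh.
first_angle = included_angle_at(e2_std, e1_std, a1_std)  # pi/3, i.e. 60 degrees
```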
  • the second determining sub-module may include:
  • a fifth determining sub-module configured to determine, on the second triangulation mesh, a first corresponding triangle corresponding to the first triangle;
  • a sixth determining sub-module configured to determine the first included angle; wherein a first difference between the first included angle and the first corresponding included angle on the first corresponding triangle is within a second preset error range.
  • since the first triangulation mesh and the second triangulation mesh have corresponding triangles, that is, the triangles at corresponding parts of the face image correspond almost or completely, then with the second triangulation mesh already determined, the fifth determining sub-module first determines the first corresponding triangle that corresponds to the first triangle in the first triangulation mesh, and the sixth determining sub-module can then determine the size of the first included angle in the first triangle according to the triangle similarity principle and the first corresponding triangle. For example, when the first triangle and the first corresponding triangle are completely similar, the first difference between the first included angle and the first corresponding included angle in the first corresponding triangle may be 0. In practice it may be difficult to make every pair of corresponding triangles completely similar, so a certain error between the first included angle and the first corresponding included angle is allowed, as long as it is kept within the second preset error range.
  • the second preset error range can be set according to the actual situation; for example, it can be [0, α], where α can be 20 degrees, which is not specifically limited here.
  • the third determining sub-module may include:
  • a seventh determining submodule configured to determine a second corresponding triangle corresponding to the second triangle on the second triangulation mesh
  • an eighth determining sub-module configured to determine the second included angle; wherein a second difference between the second included angle and the second corresponding included angle on the second corresponding triangle is within the second preset error range.
  • the second included angle is determined in a way similar to the first. Since the first triangulation mesh and the second triangulation mesh have corresponding triangles, that is, the triangles at corresponding parts of the face image correspond almost or completely, then with the second triangulation mesh already determined, the seventh determining sub-module first determines the second corresponding triangle that corresponds to the second triangle in the first triangulation mesh, and the eighth determining sub-module can then determine the size of the second included angle according to the triangle similarity principle and the second corresponding triangle. For example, when the second triangle and the second corresponding triangle are completely similar, the second difference between the second included angle and the second corresponding included angle in the second corresponding triangle may be 0.
  • the second preset error range can be set according to the actual situation; for example, it can be [0, α], where α can be 20 degrees, which is not specifically limited here.
  • the fourth determining sub-module may include:
  • a ninth determining submodule configured to determine a first ratio between the first connection line and a first corresponding connection line corresponding to the first connection line in the second triangulation mesh;
  • a tenth determination sub-module is configured to determine the first auxiliary key point according to the first ratio and the first included angle.
  • according to the triangle similarity principle, when one side of a triangle and an included angle on that side are determined, the other side forming the included angle can be determined from the ratio between corresponding sides of similar triangles.
  • for example, the first line is the line between two eye key points on the first triangle in the first triangulation mesh, so the length of the first line is determined; the length of the side of the first corresponding triangle that corresponds to the first line is also determined, so the first ratio between the first line and its corresponding side can be determined. The tenth determining sub-module can therefore determine the exact position of the first auxiliary key point according to the similar-triangle principle, the first ratio, and the size of the first included angle on the first line.
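  • A minimal sketch of how such a computation could be realized (the helper name and the rotation-direction convention are assumptions; the sub-module is defined only functionally above): the known line's direction is rotated by the included angle around the shared eye key point and scaled by the similar-triangle ratio:

```python
import numpy as np

def place_auxiliary_point(p_from, vertex, angle, ratio, std_side_len, ccw=True):
    """Place an auxiliary key point: the new side starts at `vertex`, makes `angle`
    (radians) with the known line vertex->p_from, and has length ratio * std_side_len,
    where `ratio` is |known line| / |its standard-mesh counterpart| and `std_side_len`
    is the standard-mesh length of the side being reconstructed."""
    d = np.asarray(p_from, float) - vertex
    d /= np.linalg.norm(d)
    theta = angle if ccw else -angle  # which side of the eye the point lies on
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(vertex, float) + (ratio * std_side_len) * (rot @ d)
```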
  • the fourth determining sub-module may include:
  • An eleventh determining submodule is configured to determine a second ratio between the third line and an edge corresponding to the third line in the second triangulation mesh;
  • a twelfth determination sub-module is configured to determine the second auxiliary key point according to the second ratio and the second included angle.
  • similar to the determination of the first auxiliary key point, when one side of a triangle and an included angle on that side are determined, the other side forming the included angle can be determined from the ratio between corresponding sides of similar triangles.
  • for example, the third line is the line between an eye key point on the second triangle in the first triangulation mesh and the first auxiliary key point, so once the first auxiliary key point is determined, the length of the third line is determined; the length of the side of the second corresponding triangle that corresponds to the third line is also determined, so the second ratio between the third line and its corresponding side can be determined. The twelfth determining sub-module can therefore determine the exact position of the second auxiliary key point according to the similar-triangle principle, the second ratio, and the size of the second included angle on the third line.
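  • Under the same assumptions, the second auxiliary key point can be obtained by reusing the sketch above, now anchored on the third line, whose length becomes known once the first auxiliary key point is fixed (all values below are illustrative stand-ins):

```python
# Hypothetical inputs: an eye key point on the second triangle, the first auxiliary
# key point just computed, and side lengths taken from the pre-built standard mesh.
e_k = np.array([1.0, 0.0])
aux1 = np.array([1.5, 0.8])
std_third_len, std_fourth_len = 1.0, 1.0
second_ratio = np.linalg.norm(aux1 - e_k) / std_third_len
aux2 = place_auxiliary_point(aux1, e_k, np.radians(60.0), second_ratio, std_fourth_len)
```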
  • the minimum value of the second preset error range is zero.
  • when the corresponding triangles in the first and second triangulation meshes are kept as similar as possible, the eye makeup effect image achieves the best result when transformed onto the face image. In this case the first triangle and its first corresponding triangle can be considered completely similar, so the error between the first included angle and the first corresponding included angle is 0; the second triangle and its second corresponding triangle are likewise completely similar, so the error between the second included angle and the second corresponding included angle is also 0. Usually, the eyes in the standard face image on the standard template are always open, but in actual use the state of the eyes in the detected face image changes constantly: open at one moment, possibly closed at another. In the closed state, for example, the corresponding triangles in the first and second triangulation meshes may not be completely similar, so the error between corresponding angles is not 0 either. To guarantee the effect, the error can still be kept within the second preset error range.
  • the image processing apparatus may further include:
  • a first determining module configured to determine the degree of opening and closing of the eyes on the face image according to the eye key points;
  • a second determining module is configured to determine the first difference value and the second difference value according to the opening and closing degree.
  • usually, in the standard face image on the standard template pre-established by the image processing system, the eyes are fully open; the degree of opening and closing in this state can be set as the maximum, while the closed state of the eyes can be considered the minimum. Therefore, taking the standard template as the reference, when the degree of opening and closing of the eyes on the detected face image is consistent with that on the standard face image, the corresponding triangles in the first triangulation mesh and the second triangulation mesh can be set to be the most similar, so the error between corresponding angles is the smallest, i.e. the first difference and the second difference are minimal; and the smaller the degree of opening and closing of the eyes on the detected face image, the less similar the corresponding triangles in the two meshes, and the larger the error between corresponding angles, i.e. the larger the first difference and the second difference.
  • note that the first difference and the second difference may be equal or unequal, as long as both are guaranteed to be within the second preset error range.
  • the degree of opening and closing of the eyes can be determined from the positions of the eye key points, for example from the difference in vertical coordinate between the eyelid key point with the largest vertical coordinate among the eye key points and an eye key point at the eye corner: the larger the difference, the larger the degree of opening and closing; the smaller the difference, the smaller the degree of opening and closing.
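  • A simplified sketch of such an openness measure (assuming an (N, 2) array of detected eye key points; the exact key-point indexing is not specified by the disclosure, so the vertical span is used here as a stand-in for the eyelid-to-corner difference):

```python
import numpy as np

def eye_openness(eye_kps):
    """Crude degree of eye opening: vertical distance between the extreme eyelid
    key point and the rest of the eye key points (larger span = more open eye)."""
    ys = np.asarray(eye_kps, float)[:, 1]
    return float(ys.max() - ys.min())
```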
  • the second determining module may include:
  • a first setting submodule configured to set the first difference and the second difference to the minimum of the second preset error range when the opening and closing degree reaches a preset maximum
  • a second setting submodule is configured to set the first difference value and the second difference value to a maximum value of the second preset error range when the opening and closing degree reaches a preset minimum value.
  • when the degree of opening and closing of the eyes on the detected face image is at its maximum, i.e. consistent with that in the standard template, the first setting sub-module can set the difference between each triangle's included angle in the first triangulation mesh and the corresponding included angle of the corresponding triangle in the second triangulation mesh to the minimum of the second preset error range, so that the corresponding triangles in the first and second triangulation meshes are the most similar; and when the degree of opening and closing of the eyes on the detected face image is at its minimum, the second setting sub-module can set that difference to the maximum of the second preset error range, so that the similarity error between corresponding triangles in the first and second triangulation meshes reaches its maximum. Therefore, when the eyes on the face image are fully open, the first difference and the second difference can be set to the minimum value; when the eyes in the face image are closed, the first difference and the second difference can be set to the maximum value.
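  • One plausible realization is to interpolate the first and second differences between the bounds of the second preset error range according to the openness; this is a sketch under the assumption of a linear mapping, since the disclosure only fixes the two endpoints:

```python
import numpy as np

def angle_error_from_openness(openness, max_openness, err_min=0.0, err_max=20.0):
    """Map the degree of eye opening to the allowed angle difference (degrees):
    fully open -> err_min (triangles most similar), closed -> err_max."""
    t = float(np.clip(openness / max_openness, 0.0, 1.0))
    return err_max + t * (err_min - err_max)
```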
  • the triangles in the second triangulation mesh are equilateral triangles.
  • the triangles in the second triangulation mesh on the standard template are all equilateral triangles, that is, each angle of the triangles is 60 degrees.
  • then, in the corresponding triangles of the first triangulation mesh, the angle with an eye key point as its vertex can be set to 60 degrees plus an error; the error is within the second preset error range and varies with the degree of opening and closing of the eyes on the detected face image.
  • in this case, the image processing method of this embodiment makes the transformation of the eye makeup effect image achieve a good result and does not easily cause distortion.
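  • Continuing the sketches above: with an equilateral standard mesh, the angle at an eye key point on the face mesh would simply be 60 degrees plus the openness-dependent error (values illustrative):

```python
openness, max_openness = 12.0, 15.0  # e.g. pixels of vertical eye span
face_mesh_angle = 60.0 + angle_error_from_openness(openness, max_openness)
```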
  • the transformation module 43 may include:
  • a thirteenth determining submodule is configured to determine a correspondence between the first triangulation mesh and the second triangulation mesh
  • a transformation sub-module configured to transform, according to the correspondence, the eye makeup effect image in the second triangulation mesh to the predetermined eye position on the face image in the first triangulation mesh.
  • after the auxiliary key points are determined, the first triangulation mesh is formed at the predetermined eye position on the face image. When the eye makeup effect image on the standard template is transformed onto the detected face image, the thirteenth determining sub-module can determine the correspondence between the first triangulation mesh and the second triangulation mesh, i.e. the correspondence between the vertex coordinates of their triangles, and the transformation sub-module transforms the image within each triangle area of the second triangulation mesh into the corresponding triangle area of the first triangulation mesh according to that coordinate correspondence, completing the transformation of the eye makeup effect image.
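  • One conventional way to realize this per-triangle transformation is a piecewise affine warp, e.g. with OpenCV; this is a sketch of the general technique under the stated vertex correspondence, not necessarily the filer's implementation (assumes 3-channel images):

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Map the pixels inside src_tri (a triangle of the second/standard mesh) into
    dst_tri (the corresponding triangle of the first mesh) via an affine transform."""
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    xs, ys, ws, hs = cv2.boundingRect(src_tri)   # work on small patches for speed
    xd, yd, wd, hd = cv2.boundingRect(dst_tri)
    m = cv2.getAffineTransform(np.float32(src_tri - [xs, ys]),
                               np.float32(dst_tri - [xd, yd]))
    patch = src_img[ys:ys + hs, xs:xs + ws]
    warped = cv2.warpAffine(patch, m, (wd, hd), flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((hd, wd), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri - [xd, yd]), 255)
    roi = dst_img[yd:yd + hd, xd:xd + wd]
    np.copyto(roi, warped, where=mask[..., None].astype(bool))
```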
  • FIG. 5 is a hardware block diagram illustrating an image processing hardware device according to an embodiment of the present disclosure.
  • an image processing hardware device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
  • the memory 51 is configured to store non-transitory computer-readable instructions.
  • the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 52 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the image processing hardware device 50 to perform desired functions.
  • the processor 52 is configured to execute the computer-readable instructions stored in the memory 51, so that the image processing hardware device 50 executes all or some of the steps of the foregoing image processing methods of the embodiments of the present disclosure.
  • those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also fall within the protection scope of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 60 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 61 stored thereon.
  • when the non-transitory computer-readable instructions 61 are executed by a processor, all or part of the steps of the image processing methods of the foregoing embodiments of the present disclosure are performed.
  • the computer-readable storage medium 60 includes, but is not limited to: optical storage media (e.g. CD-ROM and DVD), magneto-optical storage media (e.g. MO), magnetic storage media (e.g. magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g. memory cards), and media with built-in ROM (e.g. ROM cartridges).
  • FIG. 7 is a schematic diagram illustrating a hardware structure of an image processing terminal according to an embodiment of the present disclosure. As shown in FIG. 7, the image processing terminal 70 includes the foregoing image processing apparatus embodiment.
  • the terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
  • the terminal may further include other components.
  • the image processing terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 77, an output unit 78, a memory 79, and so on.
  • FIG. 7 shows a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network.
  • the A / V input unit 73 is used to receive audio or video signals.
  • the user input unit 74 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • the sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration movement and direction of the terminal 70, and the like, and generates commands or signals for controlling the operation of the terminal 70.
  • the interface unit 76 functions as an interface through which at least one external device can connect with the terminal 70.
  • the output unit 78 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • the memory 79 may store software programs and the like for processing and control operations performed by the controller 77, or may temporarily store data that has been output or is to be output.
  • the memory 79 may include at least one type of storage medium.
  • the terminal 70 can cooperate with a network storage device that performs a storage function of the memory 79 through a network connection.
  • the controller 77 generally controls the overall operation of the terminal device.
  • the controller 77 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 77 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 71 receives external power or internal power under the control of the controller 77 and provides appropriate power required to operate each element and component.
  • Various embodiments of the image processing method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof.
  • for hardware implementation, various embodiments of the image processing method proposed by the present disclosure can be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, various embodiments of the image processing method proposed by the present disclosure may be implemented in the controller 77.
  • for software implementation, various embodiments of the image processing method proposed by the present disclosure may be implemented with a separate software module that allows execution of at least one function or operation.
  • the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the memory 79 and executed by the controller 77.
  • an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B, or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.
  • These decompositions and / or recombinations should be considered as equivalent solutions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure discloses an image processing method, an image processing apparatus, an image processing hardware device, a computer-readable storage medium, and an image processing terminal. The image processing method comprises: identifying eye key points on a face image; obtaining auxiliary key points by interpolation, wherein the auxiliary key points and the eye key points form a first triangulation mesh at a predetermined eye position on the face image; and transforming an eye makeup effect image to the predetermined eye position according to the first triangulation mesh. Embodiments of the present disclosure address the problem that the shape of the triangulation mesh differs considerably across different people and different eye states, so that the intended eye makeup effect can be applied well to the eyes of different people and in different eye states, thereby improving the user experience.

Description

图像处理方法、装置、计算机可读存储介质和终端
交叉引用
本公开引用于2018年06月28日递交的名称为“图像处理方法、装置、计算机可读存储介质和终端”的、申请号为201810687841.2的中国专利申请,其通过引用被全部并入本申请。
技术领域
本公开涉及一种图像技术领域,特别是涉及一种图像处理方法、装置、计算机可读存储介质和终端。
背景技术
近年来,人们越来越重视容貌的美丑,整形美容学科迅速发展。但人脸整容存在很大的风险性,医生不知整容后的结果到底是什么样的,而通过对人脸进行虚拟的不同程度的部件整形,可以解决上述问题。
在人脸图像变形方面,目前比较成熟的是人脸图像网格方法,这种方法包括三角剖分法。三角剖分方法的思路是在源图像和目标图像上标注若干对应的特征点,按照特征点把整张图像分割成若干块三角形区域。为了避免剖分出形状不优的三角形,Delaunay三角剖分通常为人们所使用。
对此,提供一种可获得良好用户体验效果的图像处理方法是亟需解决的技术问题。
发明内容
本公开解决的技术问题是提供一种图像处理方法,以至少部分地解决如何提高用户体验效果的技术问题。此外,还提供一种图像处理装置、图像处理硬件装置、计算机可读存储介质和图像处理终端。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种图像处理方法,包括:识别人脸图像上的眼部关键点;插值得到 辅助关键点;其中,所述辅助关键点与所述眼部关键点形成所述人脸图像上眼部预定位置处的第一三角剖分网格;根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处。进一步地,所述眼妆效果图像包括眼睫毛、双眼皮、单眼皮、眼影、眼线中的至少一个。
进一步地,所述识别人脸图像上的眼部关键点之前,还包括:响应于用户对所述眼妆效果图像的选定事件,检测所述人脸图像。
进一步地,所述插值得到辅助关键点,包括:获取标准模板上对应的第二三角剖分网格;其中,所述眼妆效果图像绘制在所述标准模板上;根据所述第二三角剖分网格确定所述第一三角剖分网格上的所述辅助关键点;其中,所述第一三角剖分网格和第二三角剖分网格中对应三角形之间的相似度在第一预设误差范围内。
进一步地，根据所述第二三角剖分网格确定所述第一三角剖分网格上的所述辅助关键点，包括：根据所述第二三角剖分网格确定所述第一三角剖分网格中第一连线与第二连线之间的第一夹角；其中，所述第一连线为第一眼部关键点和第二眼部关键点之间的连线，第一眼部关键点和第二眼部关键点相邻；第二连线为所述第二眼部关键点和第一辅助关键点之间的连线；所述第一眼部关键点、第二眼部关键点和第一辅助关键点为所述第一三角剖分网格中第一三角形的三个顶点；根据所述第二三角剖分网格确定第三连线与第四连线之间的第二夹角；其中，所述第三连线为所述第二眼部关键点与第三眼部关键点之间的连线，所述第二眼部关键点和第三眼部关键点相邻；第四连线为所述第二眼部关键点与第二辅助关键点之间的连线；所述第二眼部关键点、第三眼部关键点和第二辅助关键点为所述第一三角剖分网格中第二三角形的三个顶点；根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点。
进一步地,根据所述第二三角剖分网格确定所述第一三角剖分网格中第一连线与第二连线之间的第一夹角,包括:确定所述第二三角剖分网格上与所述第一三角形对应的第一对应三角形;确定所述第一夹角;其中,所述第一夹角与所述第一对应三角形上对应于所述第一夹角的第一对应夹角之间的第一差值在第二预设误差范围内。
进一步地,根据所述第二三角剖分网格确定第三连线与第四连线之间的第二夹角,包括:确定所述第二三角剖分网格上与所述第二三角形对应的第二对应三角形;确定所述第二夹角;其中,所述第二夹角与所述第二对应三角形上对应于所述第二夹角的第二对应夹角之间的第二差值在第二 预设误差范围内。
进一步地,根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点,包括:确定所述第一连线与所述第二三角剖分网格中对应于第一连线的第一对应连线之间的第一比例;根据所述第一比例以及所述第一夹角确定所述第一辅助关键点。
进一步地,根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点,包括:确定所述第三连线与所述第二三角剖分网格中对应于第三连线的边之间的第二比例;根据所述第二比例以及所述第二夹角确定所述第二辅助关键点。
进一步地,所述第二预设误差范围的最小值为0。
进一步地,还包括:根据所述眼部关键点确定所述人脸图像上眼睛的开合程度;根据所述开合程度确定所述第一差值和所述第二差值。
进一步地,根据所述开合程度确定所述第一差值和所述第二差值,包括:在所述开合程度达到预设最大值时,所述第一差值和第二差值设置为所述第二预设误差范围的最小值;在所述开合程度达到预设最小值时,所述第一差值和第二差值设置为所述第二预设误差范围的最大值。
进一步地,所述第二三角剖分网格中的三角形为等边三角形。
进一步地,根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处,包括:确定所述第一三角剖分网格与所述第二三角剖分网格之间的对应关系;根据所述对应关系将第二三角剖分网格中的所述眼妆效果图像变换至所述第一三角剖分网格中所述人脸图像上的眼部预定位置处。
为了实现上述目的,根据本公开的另一个方面,还提供以下技术方案:
一种图像处理装置,包括:
识别模块,用于识别人脸图像上的眼部关键点;
插值模块,用于插值得到辅助关键点;其中,所述辅助关键点与所述眼部关键点形成所述人脸图像上眼部预定位置处的第一三角剖分网格;
变换模块,用于根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处。
进一步地,所述眼妆效果图像包括眼睫毛、双眼皮、单眼皮、眼影、眼线中的至少一个。
进一步地,所述识别模块之前,还包括:
响应模块,用于响应于用户对所述眼妆效果图像的选定事件,检测所述人脸图像。
进一步地,所述插值模块,包括:
获取子模块,用于获取标准模板上对应的第二三角剖分网格;其中,所述眼妆效果图像绘制在所述标准模板上;
第一确定子模块,用于根据所述第二三角剖分网格确定所述第一三角剖分网格上的所述辅助关键点;其中,所述第一三角剖分网格和第二三角剖分网格中对应三角形之间的相似度在第一预设误差范围内。
进一步地,所述第一确定子模块,包括:
第二确定子模块，用于根据所述第二三角剖分网格确定所述第一三角剖分网格中第一连线与第二连线之间的第一夹角；其中，所述第一连线为第一眼部关键点和第二眼部关键点之间的连线，第一眼部关键点和第二眼部关键点相邻；第二连线为所述第二眼部关键点和第一辅助关键点之间的连线；所述第一眼部关键点、第二眼部关键点和第一辅助关键点为所述第一三角剖分网格中第一三角形的三个顶点；
第三确定子模块，用于根据所述第二三角剖分网格确定第三连线与第四连线之间的第二夹角；其中，所述第三连线为所述第二眼部关键点与第三眼部关键点之间的连线，所述第二眼部关键点和第三眼部关键点相邻；第四连线为所述第二眼部关键点与第二辅助关键点之间的连线；所述第二眼部关键点、第三眼部关键点和第二辅助关键点为所述第一三角剖分网格中第二三角形的三个顶点；
第四确定子模块,用于根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点。
进一步地,所述第二确定子模块,包括:
第五确定子模块,用于确定所述第二三角剖分网格上与所述第一三角形对应的第一对应三角形;
第六确定子模块,用于确定所述第一夹角;其中,所述第一夹角与所述第一对应三角形上对应于所述第一夹角的第一对应夹角之间的第一差值在第二预设误差范围内。
进一步地,所述第三确定子模块,包括:
第七确定子模块,用于确定所述第二三角剖分网格上与所述第二三角 形对应的第二对应三角形;
第八确定子模块,用于确定所述第二夹角;其中,所述第二夹角与所述第二对应三角形上对应于所述第二夹角的第二对应夹角之间的第二差值在第二预设误差范围内。
进一步地,所述第四确定子模块,包括:
第九确定子模块,用于确定所述第一连线与所述第二三角剖分网格中对应于第一连线的第一对应连线之间的第一比例;
第十确定子模块,用于根据所述第一比例以及所述第一夹角确定所述第一辅助关键点。
进一步地,所述第四确定子模块,包括:
第十一确定子模块,用于确定所述第三连线与所述第二三角剖分网格中对应于第三连线的边之间的第二比例;
第十二确定子模块,用于根据所述第二比例以及所述第二夹角确定所述第二辅助关键点。
进一步地,所述第二预设误差范围的最小值为0。
进一步地,还包括:
第一确定模块,用于根据所述眼部关键点确定所述人脸图像上眼睛的开合程度;
第二确定模块,用于根据所述开合程度确定所述第一差值和所述第二差值。
进一步地,所述第二确定模块,包括:
第一设置子模块,用于在所述开合程度达到预设最大值时,所述第一差值和第二差值设置为所述第二预设误差范围的最小值;
第二设置子模块,用于在所述开合程度达到预设最小值时,所述第一差值和第二差值设置为所述第二预设误差范围的最大值。
进一步地,所述第二三角剖分网格中的三角形为等边三角形。
进一步地,所述变换模块,包括:
第十三确定子模块,用于确定所述第一三角剖分网格与所述第二三角剖分网格之间的对应关系;
变换子模块,用于根据所述对应关系将第二三角剖分网格中的所述眼妆效果图像变换至所述第一三角剖分网格中所述人脸图像上的眼部预定位置处。
为了实现上述目的,根据本公开的又一个方面,还提供以下技术方案:
一种图像处理硬件装置,包括:
存储器,用于存储非暂时性计算机可读指令;以及
处理器,用于运行所述计算机可读指令,使得所述处理器执行时实现根据上述任一图像处理方法技术方案中所述的步骤。
为了实现上述目的,根据本公开的又一个方面,还提供以下技术方案:
一种计算机可读存储介质,用于存储非暂时性计算机可读指令,当所述非暂时性计算机可读指令由计算机执行时,使得所述计算机执行上述任一图像处理方法技术方案中所述的步骤。
为了实现上述目的,根据本公开的又一个方面,还提供以下技术方案:
一种图像处理终端,包括上述任一图像处理装置。
本公开实施例提供一种图像处理方法、图像处理装置、图像处理硬件装置、计算机可读存储介质和图像处理终端。其中,该图像处理方法包括识别人脸图像上的眼部关键点;插值得到辅助关键点;其中,所述辅助关键点与所述眼部关键点形成所述人脸图像上眼部预定位置处的第一三角剖分网格;根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处。本公开实施例通过采取该技术方案,可以根据人脸眼部关键点,在人脸眼部周围插值得到辅助关键点,并根据人脸眼部关键点和辅助关键点构成的三角剖分网格将标准眼妆效果图像变换在人脸眼部预定位置处,以解决由于不同人、在眼睛不同状态下时三角剖分网格形状差距较大的问题,从而实现不同人、在眼睛不同状态下都能够较好地为眼部添加预期眼妆效果图的技术效果,从而提高了用户体验效果。
上述说明仅是本公开技术方案的概述,为了能更清楚了解本公开的技术手段,而可依照说明书的内容予以实施,并且为让本公开的上述和其他目的、特征和优点能够更明显易懂,以下特举较佳实施例,并配合附图,详细说明如下。
附图说明
图1为根据本公开一个实施例的图像处理方法的流程示意图;
图2为图1所示实施例中步骤S2的流程示意图;
图3为图2所示实施例中步骤S22的流程示意图;
图4为根据本公开一个实施例的图像处理装置的结构示意图;
图5为根据本公开一个实施例的图像处理硬件装置的结构示意图;
图6为根据本公开一个实施例的计算机可读存储介质的结构示意图;
图7为根据本公开一个实施例的图像处理终端的结构示意图。
具体实施方式
以下通过特定的具体实例说明本公开的实施方式,本领域技术人员可由本说明书所揭露的内容轻易地了解本公开的其他优点与功效。显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。本公开还可以通过另外不同的具体实施方式加以实施或应用,本说明书中的各项细节也可以基于不同观点与应用,在没有背离本公开的精神下进行各种修饰或改变。需说明的是,在不冲突的情况下,以下实施例及实施例中的特征可以相互组合。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
需要说明的是,下文描述在所附权利要求书的范围内的实施例的各种方面。应显而易见,本文中所描述的方面可体现于广泛多种形式中,且本文中所描述的任何特定结构及/或功能仅为说明性的。基于本公开,所属领域的技术人员应了解,本文中所描述的一个方面可与任何其它方面独立地实施,且可以各种方式组合这些方面中的两者或两者以上。举例来说,可使用本文中所阐述的任何数目个方面来实施设备及/或实践方法。另外,可使用除了本文中所阐述的方面中的一或多者之外的其它结构及/或功能性实施此设备及/或实践此方法。
还需要说明的是,以下实施例中所提供的图示仅以示意方式说明本公开的基本构想,图式中仅显示与本公开中有关的组件而非按照实际实施时的组件数目、形状及尺寸绘制,其实际实施时各组件的型态、数量及比例可为一种随意的改变,且其组件布局型态也可能更为复杂。
另外,在以下描述中,提供具体细节是为了便于透彻理解实例。然而,所属领域的技术人员将理解,可在没有这些特定细节的情况下实践所述方面。
已有技术中,在对眼部进行美妆时,通常是将源图像(人脸图像)和目标图像(眼妆效果图像所在的标准模板)均采用眼部关键点和眉毛关键点结合起来,形成三角剖分网格,并基于两者的三角剖分网格上对应位置处的三角形,将目标图像变换到源图像上。但是这种方式最大的问题在于,每个人的眉毛形状不同,尤其有些人的眉毛上某一部位会存在上挑严重的情况,也即该部位上的眉毛关键点与其他部位的眉毛关键点的位置差变化较大,会导致眼部关键点和眉毛关键点形成的三角剖分网格中,上挑严重的关键点与眼部关键点形成的三角形的形状与其他三角形的形状差异较大,也就造成与标准模板上的三角剖分网格中位置对应的三角形之间的形状不对应,且差别较大,而眼妆效果图又是基于标准模板上的三角剖分网格进行绘制的,因此在根据三角剖分原理将眼妆效果图像变换到人脸图像上的眼部预定位置处时,容易造成畸变;且对眼睛进行在线美妆时,由于眼睛的状态时刻在改变,眼睛与眉毛之间的位置关系也在不停的变化,尤其在变化剧烈时(如用户挑眉时),无法使眼妆效果图像在人脸图像上随着眼睛状态的变化而动态调整,会影响用户体验。
因此,为了解决如何提高用户体验效果的技术问题,本公开实施例提供一种图像处理方法。如图1所示,该图像处理方法主要包括如下步骤S1至步骤S3。其中:
步骤S1:识别人脸图像上的眼部关键点。
其中,人脸图像可以是通过人脸识别得到的离线人脸图像,也可以是在线识别的人脸图像,本公开对此不作限定。其中,眼部关键点可以是通过人脸特征关键点检测得到的眼部位置处的关键点,例如左、右眼角处的两个关键点;上眼睑上分布的一个或多个关键点;下眼睑上分布的一个或多个关键点。眼部关键点可以用于标识出眼睛轮廓。
步骤S2:插值得到辅助关键点;其中,所述辅助关键点与所述眼部关键点形成所述人脸图像上眼部预定位置处的第一三角剖分网格。
其中,三角剖分是指在人脸图像上标注若干对应的关键点,按照关键点把整张人脸图像分割成若干块三角形区域,而该若干块三角形区域连接形成一三角剖分网格,插值得到的辅助关键点和眼部关键点为该三角剖分网格中三角形区域上的顶点。在眼部关键点已知的情况下,可以根据实际需求利用三角剖分方法插值得到辅助关键点。在该第一三角剖分网格中,眼部关键点或辅助关键点可以为一个三角形区域的顶点,也可以同时为两个相邻三角形区域的顶点,或者同时为三个相邻三角形区域的顶点,具体 根据眼部关键点或辅助关键点在第一三角剖分网格中的位置决定。
步骤S3:根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处。
其中,眼妆效果图像可以为***预置的图像,眼妆效果图像是预先经过三角剖分后的,在将眼妆效果图像变换到人脸图像上的眼部预定位置处时,可以通过三角剖分得到的对应三角形之间的关系,将眼妆效果图像变换到人脸图像上眼部预定位置处。
为了便于理解,下面以具体实施例对图像处理的过程进行详细说明。
本实施例中,用户通过图像处理***为自己或者他人的人脸图像进行在线或者离线进行眼部美妆时,可以从图像处理***预设的多个标准模板中选择自己喜欢的眼妆效果图像,并通过拖拽或者按下相应按钮的方式触发眼妆效果图像与人脸图像的变换过程。当然,在其他实施例中,也可以是图像处理***自动对人脸图像进行眼部美妆,本公开对此不做限制。图像处理***首先获取待处理的人脸图像,之后再对人脸图像进行人脸检测。在检测到人脸区域后,对人脸进行关键点检测,并得到人脸图像上的眼部关键点。
在一实施例中,可以将人脸图像上的所有关键点都检测出来,包括眉毛、鼻子、嘴巴、眼睛、脸外轮廓等多处部位的关键点。在另一实施例中,也可以只将眼部预定位置的关键点检测出来。
在一实施例中,眼部关键点可以包括左、右眼角处的两个关键点,上眼睑最高处的一个关键点以及该关键点左右两边的两个关键点,下眼睑最低处的一个关键点以及该关键点左右两边的两个关键点,总共可以为8个关键点。当然,在其他实施例中,可以根据实际需要以及所采用的人脸关键点的检测方法得到更少或者更多的眼部关键点,本公开在此不做限制。
在检测出眼部关键点之后,可以根据三角剖分的原理以及用户所选择的眼妆效果图像插值得到辅助关键点。辅助关键点的位置可以基于眼部关键点的位置来选择,辅助关键点可以选择在眼部轮廓周围,例如在上眼皮、下眼皮以及眼角横向延长线上,其与眼部关键点共同形成第一三角剖分网格。第一三角剖分网格包括多个三角形,每个三角形的顶点为眼部关键点或辅助关键点。由于辅助关键点位于上眼皮、下眼皮或者眼角横向延长线上,人脸图像上眉毛的挑眉动作等不会使得第一三角剖分网格中三角形发生较大的形变,因此在根据第一三角剖分网格将眼妆效果图像变换到眼部预定位置处时,不会产生类似已有技术中的畸变,大大提高了用户体验效 果。
本实施例通过采取上述技术方案,可以根据人脸眼部关键点,在人脸眼部周围插值得到辅助关键点,并根据人脸眼部关键点和辅助关键点构成的三角剖分网格将标准眼妆效果图像变换在人脸眼部预定位置处,以解决由于不同人、在眼睛不同状态下时三角剖分网格形状差距较大的问题,从而实现不同人、在眼睛不同状态下都能够较好地为眼部贴上预期眼妆效果图的技术效果,从而提高了用户体验效果。
在一个可选的实施例中,眼妆效果图像包括眼睫毛、双眼皮、单眼皮、眼影、眼线中的至少一个。
该可选的实施例中,可以通过图像处理***为人脸图像自动变换眼睫毛、双眼皮、单眼皮、眼影、眼线等中的至少一个,且变换后的效果与标准模板上的效果一样,不会产生畸变,大大提高了用户体验效果。
在一个可选的实施例中,步骤S1即识别人脸图像上的眼部关键点之前还可以包括:
响应于用户对所述眼妆效果图像的选定事件,检测所述人脸图像。
在该可选的实施例中,图像处理***可以预先提供多种眼妆效果图像,眼妆效果图像是设计在图像处理***预置的标准模板上的。用户可以通过图像处理***为人脸图像增加眼妆效果。用户在选定了图像处理***提供的某个眼妆效果图像后,图像处理***可以先获取用户待增加眼妆效果的图片或者视频帧。用户可以通过图像处理***提供的接口上传包括人脸图像的图片,并对图片上的人脸图像进行离线处理,或者通过摄像头实时获取用户的头像视频帧,并对头像视频帧进行在线处理。无论是离线处理还是在线处理,在用户选定了眼妆效果图像后,从图片或者视频帧检测人脸图像。检测人脸图像的过程就是判断待检测图片或视频帧中是否存在人脸,如果存在则返回人脸的大小、位置等信息。人脸图像的检测方法包括很多种,例如肤色检测、运动检测、边缘检测等等,相关的模型也有很多,本公开对此不做限制。任何人脸图像的检测方法都可以与本公开的实施例相结合,以完成人脸图像的检测。此外,如果检测到当前图片或视频帧中存在多个人脸,则对每个人脸都生成人脸图像。
该可选的实施例中,以用户选定眼妆效果图像为触发事件,执行图像处理,以便给用户指定的人脸图像添加眼妆效果图像,可以为用户增添趣味性,并且提供了用户体验效果。
在一个可选的实施例中,如图2所示,步骤S2即插值得到辅助关键点的步骤可以包括:
步骤S21:获取标准模板上对应的第二三角剖分网格;其中,所述眼妆效果图像绘制在所述标准模板上;
步骤S22:根据所述第二三角剖分网格确定所述第一三角剖分网格上的所述辅助关键点;其中,所述第一三角剖分网格和第二三角剖分网格中对应三角形之间的相似度在第一预设误差范围内。
该可选的实施例中,眼妆效果图像是绘制在图像处理***的标准模板上的。标准模板包括标准人脸图像,且该本标准人脸图像被预先进行过三角剖分,形成有第二三角剖分网格。也就是说,眼妆效果图像是对应绘制在第二三角剖分网格中的。
为了将眼妆效果图像变换到所检测到的人脸图像上去,且为了尽量避免变换后由于检测到的人脸图像与标准模板上的标准人脸图像之间的差别,而导致眼妆效果图像发生畸变,在获得辅助关键点时,可以基于标准模板上的第二三角剖分网格来确定,使得第一三角剖分网格和第二三角剖分网格中对应三角形尽可能相似,也即两者的相似度控制在第一预设误差范围内。对应三角形是指检测到的人脸图像上的某一部位上的三角形与标准人脸图像上对应部位处的三角形。以右眼为例,检测到的人脸图像上以外眼角上的眼部关键点、外眼角横向延伸线上辅助关键点以及该辅助关键点上方的另一辅助关键点构成三角形a,标准人脸图像上以外眼角上的眼部关键点、外眼角横向延伸线上辅助关键点以及该辅助关键点上方的另一辅助关键点构成三角形b,那么三角形a和三角形b为对应三角形。第一预设误差范围越小,第一三角剖分网格与第二三角剖分网格中对应三角形越相似,第一预设误差范围的值可以基于实际情况进行设置,本公开在此不做限制。
该可选的实施例中,通过辅助关键点的选择,使得第一三角剖分网格和第二三角剖分网格中对应三角形尽可能相似,这样在将第二三角剖分网格上绘制眼妆效果图像添加到第一三角剖分网格所在的人脸图像的眼部位置处时,不会因为人脸图像上眼睛的差异或者眼睛状态的不同而造成眼妆效果图像的畸变,提高了用户体验效果。
在一个可选的实施例中,如图3所示,步骤S22即根据所述第二三角剖分网格确定所述第一三角剖分网格上的所述辅助关键点的步骤可以包括:
步骤S31:根据所述第二三角剖分网格确定所述第一三角剖分网格中第一连线与第二连线之间的第一夹角;其中,所述第一连线为第一眼部关键点和第二眼部关键点之间的连线,第一眼部关键点和第二眼部关键点相邻;第二连线为所述第二眼部关键点和第一辅助关键点之间的连线;所述第一眼部关键点、第二眼部关键点和第一辅助关键点为所述第一三角剖分网格中第一三角形的三个顶点;
步骤S32：根据所述第二三角剖分网格确定第三连线与第四连线之间的第二夹角；其中，所述第三连线为所述第二眼部关键点与第三眼部关键点之间的连线，所述第二眼部关键点和第三眼部关键点相邻；第四连线为所述第二眼部关键点与第二辅助关键点之间的连线；所述第二眼部关键点、第三眼部关键点和第二辅助关键点为所述第一三角剖分网格中第二三角形的三个顶点；
步骤S33:根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点。
该可选的实施例中,根据第一三角剖分网格和第二三角剖分网格中对应三角形尽可能相似的原理确定辅助关键点时,可以先确定第二三角剖分网格中各个三角形的顶角的大小,进而根据相似三角形中对应角度相等的原理确定第一三角剖分网格中对应顶角的大小。最终在第一三角剖分网格中各个三角形的顶角大小确定、且眼部关键点作为三角形的顶点确定的情况下,确定辅助关键点。
例如,第一三角剖分网格中的第一三角形和第二三角形为相邻三角形,且第一三角形的两个顶点为检测得到的眼部关键点,分别为第一眼部关键点和第二眼部关键点,第一三角形的另一个顶点为待确定的第一辅助关键点,第一三角形中第一连线为第一眼部关键点和第二眼部关键点之间的连线,第二连线为第二眼部关键点和第一辅助关键点之间的连线。第二三角形与第一三角形相邻,其中两个顶点为辅助关键点,分别为第一辅助关键点和第二辅助关键点,而另一个顶点为第二眼部关键点,即与第一三角形具有共同的顶点。
第二三角形网格中具有与第一三角形和第二三角形对应的两个三角形,与第一三角形对应的第一对应三角形的两个顶点为标准人脸图像上的眼部关键点,其可以在建立标准模板,并对其三角剖分时通过关键点检测方法检测得到,另一个顶点为在眼部轮廓周围所选取的第一对应辅助关键点,其选取的原则可以根据实际情况而定,例如基于第二三角形为等边三 角形或等腰三角形的原则选择第一对应辅助关键点等。与第二三角形对应的第二对象三角形与第一对应三角形共有两个顶点,分别为第一对应三角形中的一个眼部关键点和第一对应辅助关键点,而另一个顶点为选取的第二对应辅助关键点,选取原理同第一对应辅助关键点。
第二三角剖分网格是预先建立好的,也就是说第二三角剖分网格中的对应辅助关键点都是预先选取并定义好的。那么在确定第一三角剖分网格上的辅助关键点时,只要确定了第二三角剖分网格中对应三角形的两个夹角后,就可以确定上述提到的第一三角形剖分网格中第一三角形和第二三角形中的第一夹角和第二夹角。
在第二三角剖分网格确定的情况下,根据相似三角形的原理,根据第一夹角、第二夹角以及第二三角剖分网格可以确定出第一辅助关键点和第二辅助关键点。那么第一三角剖分网格中的其他三角形中的辅助关键点也可以根据相同的原理确定。
在一个可选的实施例中,步骤S31即根据所述第二三角剖分网格确定第一连线与第二连线之间的第一夹角的步骤可以包括:
确定所述第二三角剖分网格上与所述第一三角形对应的第一对应三角形;
确定所述第一夹角;其中,所述第一夹角与所述第一对应三角形上对应于所述第一夹角的角度之间的第一差值在第二预设误差范围内。
该可选的实施例中,第一三角剖分网格和第二三角剖分网格具有对应的三角形,即人脸图像中对应部位的三角形几乎或者完全对应相似,那么在第二三角形剖分网格已经确定的情况下,先确定与第一三角剖分网格中的第一三角形对应的第一对应三角形,进而可以根据三角形相似原理以及第一对应三角形确定第一三角形中第一夹角的大小。例如,在第一三角形和第一对应三角形完全相似的情况下,可以使得第一夹角与第一对应三角形中对应的第一对应夹角的第一差值为0。当然,在实际操作中,可能很难做到第一三角形网格和第二三角形剖分网格中各个对应三角形都完全相似,做到尽可能相似也能达到相同的效果。因此,可以在确定第一夹角时,使得第一夹角与第一对应夹角之间具有一定的误差,而只要控制该误差在第二预设误差范围内即可。第二预设误差范围可以根据实际情况下设定,例如第二预设误差范围可以在[0,ɑ]之间,ɑ可以为20度,具体在此不做限制。
在一个可选的实施例中,步骤S32即根据所述第二三角剖分网格确定 第三连线与第四连线之间的第二夹角的步骤可以包括:
确定所述第二三角剖分网格上与所述第二三角形对应的第二对应三角形;
确定所述第二夹角;其中,所述第二夹角与所述第二对应三角形上对应于所述第二夹角的第二对应夹角之间的第二差值在第二预设误差范围内。
该可选的实施例中,第二夹角的确定方式与第一夹角类似。由于第一三角剖分网格和第二三角剖分网格具有对应的三角形,即人脸图像中对应部位的三角形几乎或者完全对应相似,那么在第二三角形剖分网格已经确定的情况下,先确定与第一三角剖分网格中的第二三角形对应的第二对应三角形,进而可以根据三角形相似原理以及第二对应三角形确定第二三角形中第二夹角的大小。例如,在第二三角形和第二对应三角形完全相似的情况下,可以使得第二夹角与第二对应三角形中对应的第二对应夹角的第一差值为0。当然,在实际操作中,可能很难做到第一三角形网格和第二三角形剖分网格中各个对应三角形都完全相似,做到尽可能相似也能达到相同的效果。因此,可以在确定第二夹角时,使得第二夹角与第二对应夹角之间具有一定的误差,而只要控制该误差在第二预设误差范围内即可。第二预设误差范围可以根据实际情况下设定,例如第二预设误差范围可以在[0,ɑ]之间,ɑ可以为20度,具体在此不做限制。
在一个可选的实施例中,步骤S33即根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点的步骤可以包括:
确定所述第一连线与所述第二三角剖分网格中对应于第一连线的边之间的第一比例;
根据所述第一比例以及所述第一夹角确定所述第一辅助关键点。
该可选的实施例中,根据三角形相似原理,在一条边以及该边上的一个夹角确定的情况下,形成该夹角的另一条边可以根据相似三角形之间对应边之间的比例来确定。
例如,本实施例中,第一连线为第一三角剖分网格中第一三角形上两个眼部关键点之间的连线,因此第一连线的长度确定;与第一三角形对应的第一对应三角形上与第一连线对应的边的长度也是确定的,也即第一连线和与之对应边之间的第一比例可以确定。因此可以根据相似三角形的原 理、第一比例以及第一连线上第一夹角的大小确定第一辅助关键点的确切位置。
在一个可选的实施例中,步骤S33即根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点的步骤可以包括:
确定所述第三连线与所述第二三角剖分网格中对应于第三连线的边之间的第二比例;
根据所述第二比例以及所述第二夹角确定所述第二辅助关键点。
该可选的实施例中,与第一辅助关键点的确定方式类似,可以根据三角形相似原理,在一条边以及该边上的一个夹角确定的情况下,形成该夹角的另一条边可以根据相似三角形之间对应边之间的比例来确定。
例如,本实施例中,第三连线为第一三角剖分网格中第二三角形上一个眼部关键点和第一辅助关键点之间的连线,因此在确定了第一辅助关键点之后,第三连线的长度确定;与第二三角形对应的第二对应三角形上与第三连线对应的边的长度也是确定的,也即第三连线和与之对应边之间的第二比例可以确定。因此可以根据相似三角形的原理、第二比例以及第三连线上第二夹角的大小确定第二辅助关键点的确切位置。
在一个可选的实施例中,第二预设误差范围的最小值为0。
该可选的实施例中，在第一三角剖分网格与第二三角剖分网格中，对应三角形之间尽可能保持相似的情况下，能够使得眼妆效果图像被变换到人脸图像上时，效果达到最佳，此时可以认为第一三角形和对应的第一对应三角形之间完全相似，那么第一夹角与第一对应夹角之间的误差为0；第二三角形和对应的第二对应三角形之间也完全相似，那么第二夹角和第二对应夹角之间的误差也为0。通常情况下，标准模板上标准人脸图像上的眼睛始终是睁开状态，而在实际应用过程中，由于检测到的人脸图像上的眼睛状态是不停变化的，某一时刻处于睁开状态，某一时刻可能处于闭合状态，因此例如在闭合状态下，第一三角剖分网格与第二三角剖分网格中对应三角形之间的相似度可能无法达到完全相似，因此两者对应角度之间的误差也不为0。当然，为了保证效果，误差可以保持在第二预设误差范围内。
在一个可选的实施例中,图像处理方法还可以包括:
根据所述眼部关键点确定所述人脸图像上眼睛的开合程度;
根据所述开合程度确定所述第一差值和所述第二差值。
该可选的实施例中,通常情况下,图像处理***所预先建立的标准模板上的标准人脸图像中,眼睛是完全睁开的状态,可以设定这种状态下眼睛的开合程度最大,而在眼睛闭合状态下可以认为开合程度最小。因此,可以以标准模板为准,检测到的人脸图像上眼睛的开合程度与标准人脸图像上眼睛的开合程度一致时,可以设定为第一三角剖分网格与第二三角剖分网格中对应三角形最为相似,那么对应夹角之间的误差也最小,也即第一差值和第二差值最小,而检测到的人脸图像上眼睛的开合程度越小,第一三角剖分网格与第二三角剖分网格中对应三角形之间的相似度越小,那么对应夹角之间的误差也越大,也即第一差值和第二差值越大。需要注意的是,第一差值和第二差值可以相等,也可以不相等,只要保证两者都在第二预设误差范围内即可。
眼睛的开合程度可以通过眼部关键点的位置来确定。例如,通过眼部关键点中纵坐标最大的眼睑上的眼部关键点和眼角上的眼部关键点之间纵坐标之差来确定,差值越大,可以认为开合程度越大,差值越小,开合程度越小。
在一可选实施例中,根据所述开合程度确定所述第一差值和所述第二差值的步骤可以包括:
在所述开合程度达到预设最大值时,所述第一差值和第二差值设置为所述第二预设误差范围的最小值;
在所述开合程度达到预设最小值时,所述第一差值和第二差值设置为所述第二预设误差范围的最大值。
该可选的实施例中，检测到的人脸图像上眼睛的开合程度最大，也即与标准模板中眼睛的开合程度一致时，可以将第一三角剖分网格中各个三角形的夹角大小设置成与第二三角剖分网格中对应三角形中对应夹角大小的差值为第二预设误差范围的最小值，即第一三角剖分网格与第二三角剖分网格中对应三角形最为相似；而检测到的人脸图像上眼睛的开合程度最小的情况下，可以将第一三角剖分网格中各个三角形的夹角大小设置成与第二三角剖分网格中对应三角形中对应夹角大小的差值为第二预设误差范围的最大值，即第一三角剖分网格与第二三角剖分网格中对应三角形之间的相似误差达到最大值。因此，在人脸图像上眼睛处于完全睁开的状态时，可以将第一差值和第二差值设置为最小值；而人脸图像上眼睛处于闭合状态时，可以将第一差值和第二差值设置为最大值。
在一可选实施例中,所述第二三角剖分网格中的三角形为等边三角形。
该可选的实现方式中,标准模板上的第二三角剖分网格中的三角形均为等边三角形,也即三角形的各个夹角均为60度。那么第一三角剖分网格上对应的三角形中,以眼部关键点为顶点的夹角可以设置为60度加上一个误差,该误差在第二预设误差范围内,且该误差根据检测到的人脸图像上眼睛的开合程度的不同而不同。这种情况下,采用本实施例的图像处理方法,可以使得眼妆效果图像的变换达到很好的效果,不容易产生畸变。
在一可选实施例中,步骤S3即根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处可以包括:
确定所述第一三角剖分网格与所述第二三角剖分网格之间的对应关系;
根据所述对应关系以及所述第二三角剖分网格中的图像变换所述第一三角剖分网格中的图像。
该可选的实现方式中,辅助关键点确定后,也即在人脸图像上的眼部预定位置处形成了第一三角剖分网格。那么在将标准模板上的眼妆效果图像变换到检测到人脸图像上时,可以通过第一三角剖分网格与第二三角剖分网格之间的对应关系,也即三角形之间各个顶点坐标的对应关系,并将第二三角剖分网格中各个三角形区域内的图像根据坐标对应关系变换到第一三角剖分网格中对应三角形区域中,实现眼妆效果图像的变换。
在上文中,虽然按照上述的顺序描述了图像处理方法实施例中的各个步骤,本领域技术人员应清楚,本公开实施例中的步骤并不必然按照上述顺序执行,其也可以倒序、并行、交叉等其他顺序执行,而且,在上述步骤的基础上,本领域技术人员也可以再加入其他步骤,这些明显变型或等同替换的方式也应包含在本公开的保护范围之内,在此不再赘述。
下面为本公开装置实施例,本公开装置实施例可用于执行本公开方法实施例实现的步骤,为了便于说明,仅示出了与本公开实施例相关的部分,具体技术细节未揭示的,请参照本公开方法实施例。
为了解决如何提高用户体验效果的技术问题,本公开实施例提供一种图像处理装置。该装置可以执行上述图像处理方法实施例中所述的步骤。如图4所示,该装置主要包括:识别模块41、插值模块42和变换模块43。其中,识别模块41用于识别人脸图像上的眼部关键点;插值模块42用于插值得到辅助关键点;其中,所述辅助关键点与所述眼部关键点形成所述人脸图像上眼部预定位置处的第一三角剖分网格;变换模块43用于根据所述第一三角剖分网格将眼妆效果图像变换到眼部预定位置处。
其中,人脸图像可以是通过人脸识别得到的离线人脸图像,也可以是在线识别的人脸图像,本公开对此不作限定。其中,眼部关键点可以是通过人脸特征关键点检测得到的眼部位置处的关键点,例如左、右眼角处的两个关键点;上眼睑上分布的一个或多个关键点;下眼睑上分布的一个或多个关键点。眼部关键点可以用于标识出眼睛轮廓。
其中,三角剖分是指在人脸图像上标注若干对应的关键点,按照关键点把整张人脸图像分割成若干块三角形区域,而该若干块三角形区域连接形成一三角剖分网格,插值得到的辅助关键点和眼部关键点为该三角剖分网格中三角形区域上的顶点。在眼部关键点已知的情况下,插值模块42可以根据实际需求利用三角剖分方法插值得到辅助关键点。在该第一三角剖分网格中,眼部关键点或辅助关键点可以为一个三角形区域的顶点,也可以同时为两个相邻三角形区域的顶点,或者同时为三个相邻三角形区域的顶点,具体根据眼部关键点或辅助关键点在第一三角剖分网格中的位置决定。
其中,眼妆效果图像可以为***预置的图像,眼妆效果图像是预先经过三角剖分后的,在将眼妆效果图像变换到人脸图像上的眼部预定位置处时,可以通过三角剖分得到的对应三角形之间的关系,将眼妆效果图像复制到人脸图像上眼部预定位置处。
为了便于理解,下面以具体实施例对图像处理的过程进行详细说明。
本实施例中,用户通过图像处理***为自己或者他人的人脸图像进行在线或者离线进行眼部美妆时,可以从图像处理***预设的多个标准模板中选择自己喜欢的眼妆效果图像,并通过拖拽或者按下相应按钮的方式触发眼妆效果图像与人脸图像的变换过程。当然,在其他实施例中,也可以是图像处理***自动对人脸图像进行眼部美妆,本公开对此不做限制。图像处理***首先获取待处理的人脸图像,之后再对人脸图像进行人脸检测。在检测到人脸区域后,对人脸进行关键点检测,并得到人脸图像上的眼部关键点。
在一实施例中,识别模块41可以将人脸图像上的所有关键点都检测出来,包括眉毛、鼻子、嘴巴、眼睛、脸外轮廓等。在另一实施例中,识别模块41也可以只将眼部位置的关键点检测出来。
在一实施例中,眼部关键点可以包括左、右眼角处的两个关键点,上眼睑最高处的一个关键点以及该关键点左右两边的两个关键点,下眼睑最低处的一个关键点以及该关键点左右两边的两个关键点,总共可以为8个 关键点。当然,在其他实施例中,可以根据实际需要以及所采用的人脸关键点的检测方法得到更少或者更多的眼部关键点,本公开在此不做限制。
识别模块41在检测出眼部关键点之后,插值模块42可以根据三角剖分的原理以及用户所选择的眼妆效果图像插值得到辅助关键点。辅助关键点的位置可以基于眼部关键点的位置来选择,辅助关键点可以选择在眼部轮廓周围,例如在上眼皮、下眼皮以及眼角横向延长线上,其与眼部关键点共同形成第一三角剖分网格。第一三角剖分网格包括多个三角形,每个三角形的顶点为眼部关键点或辅助关键点。由于辅助关键点位于上眼皮、下眼皮或者眼角横向延长线上,人脸图像上眉毛的挑眉动作等不会使得第一三角剖分网格中三角形发生较大的形变,因此在根据第一三角剖分网格将眼妆效果图像变换到眼部预定位置处时,不会产生类似已有技术中的畸变,大大提高了用户体验效果。
本实施例通过采取上述技术方案,可以根据人脸眼部关键点,在人脸眼部周围插值得到辅助关键点,并根据人脸眼部关键点和辅助关键点构成的三角剖分网格将标准眼妆效果图像变换在人脸眼部预定位置处,以解决由于不同人、在眼睛不同状态下时三角剖分网格形状差距较大的问题,从而实现不同人、在眼睛不同状态下都能够较好地为眼部贴上预期眼妆效果图的技术效果,从而提高了用户体验效果。
在一个可选的实施例中,眼妆效果图像包括眼睫毛、双眼皮、单眼皮、眼影、眼线中的至少一个。
该可选的实施例中,可以通过图像处理***为人脸图像自动变换眼睫毛、双眼皮、单眼皮、眼影、眼线等中的至少一个,且变换后的效果与标准模板上的效果一样,不会产生畸变,大大提高了用户体验效果。
在一个可选的实施例中,识别模块51之前还可以包括:
响应模块,用于响应于用户对所述眼妆效果图像的选定事件,检测所述人脸图像。
在该可选的实施例中,图像处理***可以预先提供多种眼妆效果图像,眼妆效果图像是设计在图像处理***预置的标准模板上的。用户可以通过图像处理***为人脸图像增加眼妆效果。用户在选定了图像处理***提供的某个眼妆效果图像后,图像处理***可以先获取用户待增加眼妆效果的图片或者视频帧。用户可以通过图像处理***提供的接口上传包括人脸图像的图片,并对图片上的人脸图像进行离线处理,或者通过摄像头实时获取用户的头像视频帧,并对头像视频帧进行在线处理。无论是离线处理还 是在线处理,在用户选定了眼妆效果图像后,响应模块从图片或者视频帧检测人脸图像。检测人脸图像的过程就是判断待检测图片或视频帧中是否存在人脸,如果存在则返回人脸的大小、位置等信息。人脸图像的检测方法包括很多种,例如肤色检测、运动检测、边缘检测等等,相关的模型也有很多,本公开对此不做限制。任何人脸图像的检测方法都可以与本公开的实施例相结合,以完成人脸图像的检测。此外,如果检测到当前图片或视频帧中存在多个人脸,则对每个人脸都生成人脸图像。
该可选的实施例中,以用户选定眼妆效果图像为触发事件,执行图像处理,以便给用户指定的人脸图像添加眼妆效果图像,可以为用户增添趣味性,并且提供了用户体验效果。
在一个可选的实施例中插值模块42可以包括:
获取子模块,用于获取标准模板上对应的第二三角剖分网格;其中,所述眼妆效果图像绘制在所述标准模板上;
第一确定子模块,用于根据所述第二三角剖分网格确定所述第一三角剖分网格上的所述辅助关键点;其中,所述第一三角剖分网格和第二三角剖分网格中对应三角形之间的相似度在第一预设误差范围内。
该可选的实施例中,眼妆效果图像是绘制在图像处理***的标准模板上的。标准模板包括标准人脸图像,且该本标准人脸图像被预先进行过三角剖分,形成有第二三角剖分网格。也就是说,眼妆效果图像是对应绘制在第二三角剖分网格中的。
为了将眼妆效果图像变换到所检测到的人脸图像上去,且为了尽量避免变换后由于检测到的人脸图像与标准模板上的标准人脸图像之间的差别,而导致眼妆效果图像发生畸变,在获得辅助关键点时,可以基于标准模板上的第二三角剖分网格来确定,使得第一三角剖分网格和第二三角剖分网格中对应三角形尽可能相似,也即两者的相似度控制在第一预设误差范围内。对应三角形是指检测到的人脸图像上的某一部位上的三角形与标准人脸图像上对应部位处的三角形。以右眼为例,检测到的人脸图像上以外眼角上的眼部关键点、外眼角横向延伸线上辅助关键点以及该辅助关键点上方的另一辅助关键点构成三角形a,标准人脸图像上以外眼角上的眼部关键点、外眼角横向延伸线上辅助关键点以及该辅助关键点上方的另一辅助关键点构成三角形b,那么三角形a和三角形b为对应三角形。第一预设误差范围越小,第一三角剖分网格与第二三角剖分网格中对应三角形越相似,第一预设误差范围的值可以基于实际情况进行设置,本公开在此不做 限制。
该可选的实施例中,通过辅助关键点的选择,使得第一三角剖分网格和第二三角剖分网格中对应三角形尽可能相似,这样在将第二三角剖分网格上绘制眼妆效果图像添加到第一三角剖分网格所在的人脸图像的眼部位置处时,不会因为人脸图像上眼睛的差异或者眼睛状态的不同而造成眼妆效果图像的畸变,提高了用户体验效果。
在一个可选的实施例中,所述第一确定模块可以包括:
第二确定子模块,根据所述第二三角剖分网格确定所述第一三角剖分网格中第一连线与第二连线之间的第一夹角;其中,所述第一连线为第一眼部关键点和第二眼部关键点之间的连线,第一眼部关键点和第二眼部关键点相邻;第二连线为所述第二眼部关键点和第一辅助关键点之间的连线;所述第一眼部关键点、第二眼部关键点和第一辅助关键点为所述第一三角剖分网格中第一三角形的三个顶点;
第三确定子模块，根据所述第二三角剖分网格确定第三连线与第四连线之间的第二夹角；其中，所述第三连线为所述第二眼部关键点与第三眼部关键点之间的连线，所述第二眼部关键点和第三眼部关键点相邻；第四连线为所述第二眼部关键点与第二辅助关键点之间的连线；所述第二眼部关键点、第三眼部关键点和第二辅助关键点为所述第一三角剖分网格中第二三角形的三个顶点；
第四确定子模块,根据所述第一夹角、第二夹角以及所述第二三角剖分网格确定所述第一辅助关键点和第二辅助关键点。
该可选的实施例中,根据第一三角剖分网格和第二三角剖分网格中对应三角形尽可能相似的原理确定辅助关键点时,可以先确定第二三角剖分网格中各个三角形的顶角的大小,进而根据相似三角形中对应角度相等的原理确定第一三角剖分网格中对应顶角的大小。最终在第一三角剖分网格中各个三角形的顶角大小确定、且眼部关键点作为三角形的顶点确定的情况下,确定辅助关键点。
例如,第一三角剖分网格中的第一三角形和第二三角形为相邻三角形,且第一三角形的两个顶点为检测得到的眼部关键点,分别为第一眼部关键点和第二眼部关键点,第一三角形的另一个顶点为待确定的第一辅助关键点,第一三角形中第一连线为第一眼部关键点和第二眼部关键点之间的连线,第二连线为第二眼部关键点和第一辅助关键点之间的连线。第二三角形与第一三角形相邻,其中两个顶点为辅助关键点,分别为第一辅助关键 点和第二辅助关键点,而另一个顶点为第二眼部关键点,即与第一三角形具有共同的顶点。
第二三角形网格中具有与第一三角形和第二三角形对应的两个三角形,与第一三角形对应的第一对应三角形的两个顶点为标准人脸图像上的眼部关键点,其可以在建立标准模板,并对其三角剖分时通过关键点检测方法检测得到,另一个顶点为在眼部轮廓周围所选取的第一对应辅助关键点,其选取的原则可以根据实际情况而定,例如基于第二三角形为等边三角形或等腰三角形的原则选择第一对应辅助关键点等。与第二三角形对应的第二对象三角形与第一对应三角形共有两个顶点,分别为第一对应三角形中的一个眼部关键点和第一对应辅助关键点,而另一个顶点为选取的第二对应辅助关键点,选取原理同第一对应辅助关键点。
第二三角剖分网格是预先建立好的,也就是说第二三角剖分网格中的对应辅助关键点都是预先选取并定义好的。那么在确定第一三角剖分网格上的辅助关键点时,只要确定了第二三角剖分网格中对应三角形的两个夹角后,就可以确定上述提到的第一三角形剖分网格中第一三角形和第二三角形中的第一夹角和第二夹角。
在第二三角剖分网格确定的情况下,根据相似三角形的原理,根据第一夹角、第二夹角以及第二三角剖分网格可以确定出第一辅助关键点和第二辅助关键点。那么第一三角剖分网格中的其他三角形中的辅助关键点也可以根据相同的原理确定。
在一个可选的实施例中,所述第二确定子模块可以包括:
第五确定子模块,确定所述第二三角剖分网格上与所述第一三角形对应的第一对应三角形;
第五确定子模块,确定所述第一夹角;其中,所述第一夹角与所述第一对应三角形上对应于所述第一夹角的角度之间的第一差值在第二预设误差范围内。
该可选的实施例中,第一三角剖分网格和第二三角剖分网格具有对应的三角形,即人脸图像中对应部位的三角形几乎或者完全对应相似,那么在第二三角形剖分网格已经确定的情况下,第五确定子模块先确定与第一三角剖分网格中的第一三角形对应的第一对应三角形,第六确定子模块进而可以根据三角形相似原理以及第一对应三角形确定第一三角形中第一夹角的大小。例如,在第一三角形和第一对应三角形完全相似的情况下,可以使得第一夹角与第一对应三角形中对应的第一对应夹角的第一差值为0。 当然,在实际操作中,可能很难做到第一三角形网格和第二三角形剖分网格中各个对应三角形都完全相似,做到尽可能相似也能达到相同的效果。因此,可以在确定第一夹角时,使得第一夹角与第一对应夹角之间具有一定的误差,而只要控制该误差在第二预设误差范围内即可。第二预设误差范围可以根据实际情况下设定,例如第二预设误差范围可以在[0,ɑ]之间,ɑ可以为20度,具体在此不做限制。
在一个可选的实施例中,所述第三确定子模块可以包括:
第七确定子模块,用于确定所述第二三角剖分网格上与所述第二三角形对应的第二对应三角形;
第八确定子模块,用于确定所述第二夹角;其中,所述第二夹角与所述第二对应三角形上对应于所述第二夹角的第二对应夹角之间的第二差值在第二预设误差范围内。
该可选的实施例中,第二夹角的确定方式与第一夹角类似。由于第一三角剖分网格和第二三角剖分网格具有对应的三角形,即人脸图像中对应部位的三角形几乎或者完全对应相似,那么在第二三角形剖分网格已经确定的情况下,第七确定子模块先确定与第一三角剖分网格中的第二三角形对应的第二对应三角形,第八确定子模块进而可以根据三角形相似原理以及第二对应三角形确定第二三角形中第二夹角的大小。例如,在第二三角形和第二对应三角形完全相似的情况下,可以使得第二夹角与第二对应三角形中对应的第二对应夹角的第一差值为0。当然,在实际操作中,可能很难做到第一三角形网格和第二三角形剖分网格中各个对应三角形都完全相似,做到尽可能相似也能达到相同的效果。因此,可以在确定第二夹角时,使得第二夹角与第二对应夹角之间具有一定的误差,而只要控制该误差在第二预设误差范围内即可。第二预设误差范围可以根据实际情况下设定,例如第二预设误差范围可以在[0,ɑ]之间,ɑ可以为20度,具体在此不做限制。
在一个可选的实施例中,所述第四确定子模块可以包括:
第九确定子模块,用于确定所述第一连线与所述第二三角剖分网格中对应于第一连线的第一对应连线之间的第一比例;
第十确定子模块,用于根据所述第一比例以及所述第一夹角确定所述第一辅助关键点。
该可选的实施例中,根据三角形相似原理,在一条边以及该边上的一 个夹角确定的情况下,形成该夹角的另一条边可以根据相似三角形之间对应边之间的比例来确定。
例如,本实施例中,第一连线为第一三角剖分网格中第一三角形上两个眼部关键点之间的连线,因此第一连线的长度确定;与第一三角形对应的第一对应三角形上与第一连线对应的边的长度也是确定的,也即第一连线和与之对应边之间的第一比例可以确定。因此第十确定子模块可以根据相似三角形的原理、第一比例以及第一连线上第一夹角的大小确定第一辅助关键点的确切位置。
在一个可选的实施例中,所述第四确定子模块可以包括:
第十一确定子模块,用于确定所述第三连线与所述第二三角剖分网格中对应于第三连线的边之间的第二比例;
第十二确定子模块,用于根据所述第二比例以及所述第二夹角确定所述第二辅助关键点。
该可选的实施例中,与第一辅助关键点的确定方式类似,可以根据三角形相似原理,在一条边以及该边上的一个夹角确定的情况下,形成该夹角的另一条边可以根据相似三角形之间对应边之间的比例来确定。
例如,本实施例中,第三连线为第一三角剖分网格中第二三角形上一个眼部关键点和第一辅助关键点之间的连线,因此在确定了第一辅助关键点之后,第三连线的长度确定;与第二三角形对应的第二对应三角形上与第三连线对应的边的长度也是确定的,也即第三连线和与之对应边之间的第二比例可以确定。因此第十二确定子模块可以根据相似三角形的原理、第二比例以及第三连线上第二夹角的大小确定第二辅助关键点的确切位置。
在一个可选的实施例中,第二预设误差范围的最小值为0。
该可选的实施例中，在第一三角剖分网格与第二三角剖分网格中，对应三角形之间尽可能保持相似的情况下，能够使得眼妆效果图像被变换到人脸图像上时，效果达到最佳，此时可以认为第一三角形和对应的第一对应三角形之间完全相似，那么第一夹角与第一对应夹角之间的误差为0；第二三角形和对应的第二对应三角形之间也完全相似，那么第二夹角和第二对应夹角之间的误差也为0。通常情况下，标准模板上标准人脸图像上的眼睛始终是睁开状态，而在实际应用过程中，由于检测到的人脸图像上的眼睛状态是不停变化的，某一时刻处于睁开状态，某一时刻可能处于闭合状态，因此例如在闭合状态下，第一三角剖分网格与第二三角剖分网格中对应三角形之间的相似度可能无法达到完全相似，因此两者对应角度之间的误差也不为0。当然，为了保证效果，误差可以保持在第二预设误差范围内。
在一个可选的实施例中,图像处理装置还可以包括:
第一确定模块,用于根据所述眼部关键点确定所述人脸图像上眼睛的开合程度;
第二确定模块,用于根据所述开合程度确定所述第一差值和所述第二差值。
该可选的实施例中,通常情况下,图像处理***所预先建立的标准模板上的标准人脸图像中,眼睛是完全睁开的状态,可以设定这种状态下眼睛的开合程度最大,而在眼睛闭合状态下可以认为开合程度最小。因此,可以以标准模板为准,检测到的人脸图像上眼睛的开合程度与标准人脸图像上眼睛的开合程度一致时,可以设定为第一三角剖分网格与第二三角剖分网格中对应三角形最为相似,那么对应夹角之间的误差也最小,也即第一差值和第二差值最小,而检测到的人脸图像上眼睛的开合程度越小,第一三角剖分网格与第二三角剖分网格中对应三角形之间的相似度越小,那么对应夹角之间的误差也越大,也即第一差值和第二差值越大。需要注意的是,第一差值和第二差值可以相等,也可以不相等,只要保证两者都在第二预设误差范围内即可。
眼睛的开合程度可以通过眼部关键点的位置来确定。例如,通过眼部关键点中纵坐标最大的眼睑上的眼部关键点和眼角上的眼部关键点之间纵坐标之差来确定,差值越大,可以认为开合程度越大,差值越小,开合程度越小。
在一可选实施例中,所述第二确定模块可以包括:
第一设置子模块,用于在所述开合程度达到预设最大值时,所述第一差值和第二差值设置为所述第二预设误差范围的最小值;
第二设置子模块,用于在所述开合程度达到预设最小值时,所述第一差值和第二差值设置为所述第二预设误差范围的最大值。
该可选的实施例中，检测到的人脸图像上眼睛的开合程度最大，也即与标准模板中眼睛的开合程度一致时，第一设置子模块可以将第一三角剖分网格中各个三角形的夹角大小设置成与第二三角剖分网格中对应三角形中对应夹角大小的差值为第二预设误差范围的最小值，即第一三角剖分网格与第二三角剖分网格中对应三角形最为相似；而检测到的人脸图像上眼睛的开合程度最小的情况下，第二设置子模块可以将第一三角剖分网格中各个三角形的夹角大小设置成与第二三角剖分网格中对应三角形中对应夹角大小的差值为第二预设误差范围的最大值，即第一三角剖分网格与第二三角剖分网格中对应三角形之间的相似误差达到最大值。因此，在人脸图像上眼睛处于完全睁开的状态时，可以将第一差值和第二差值设置为最小值；而人脸图像上眼睛处于闭合状态时，可以将第一差值和第二差值设置为最大值。
在一可选实施例中,所述第二三角剖分网格中的三角形为等边三角形。
该可选的实现方式中,标准模板上的第二三角剖分网格中的三角形均为等边三角形,也即三角形的各个夹角均为60度。那么第一三角剖分网格上对应的三角形中,以眼部关键点为顶点的夹角可以设置为60度加上一个误差,该误差在第二预设误差范围内,且该误差根据检测到的人脸图像上眼睛的开合程度的不同而不同。这种情况下,采用本实施例的图像处理方法,可以使得眼妆效果图像的变换达到很好的效果,不容易产生畸变。
在一可选实施例中,所述变换模块43可以包括:
第十三确定子模块,用于确定所述第一三角剖分网格与所述第二三角剖分网格之间的对应关系;
变换子模块,用于根据所述对应关系将第二三角剖分网格中的所述眼妆效果图像变换至所述第一三角剖分网格中所述人脸图像上的眼部预定位置处。
该可选的实现方式中,辅助关键点确定后,也即在人脸图像上的眼部预定位置处形成了第一三角剖分网格。那么在将标准模板上的眼妆效果图像变换到检测到人脸图像上时,第十三确定子模块可以确定第一三角剖分网格与第二三角剖分网格之间的对应关系,也即三角形之间各个顶点坐标的对应关系,变换子模块将第二三角剖分网格中各个三角形区域内的图像根据坐标对应关系变换到第一三角剖分网格中对应三角形区域中,实现眼妆效果图像的变换。
图5是图示根据本公开的实施例的图像处理硬件装置的硬件框图。如图5所示,根据本公开实施例的图像处理硬件装置50包括存储器51和处理器52。
该存储器51用于存储非暂时性计算机可读指令。具体地,存储器51可以包括一个或多个计算机程序产品,该计算机程序产品可以包括各种形 式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。该易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。该非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。
该处理器52可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其它形式的处理单元,并且可以控制图像处理硬件装置50中的其它组件以执行期望的功能。在本公开的一个实施例中,该处理器52用于运行该存储器51中存储的该计算机可读指令,使得该图像处理硬件装置50执行前述的本公开各实施例的图像处理方法的全部或部分步骤。
本领域技术人员应能理解,为了解决如何获得良好用户体验效果的技术问题,本实施例中也可以包括诸如通信总线、接口等公知的结构,这些公知的结构也应包含在本公开的保护范围之内。
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。
图6是图示根据本公开的实施例的计算机可读存储介质的示意图。如图6所示,根据本公开实施例的计算机可读存储介质60,其上存储有非暂时性计算机可读指令61。当该非暂时性计算机可读指令61由处理器运行时,执行前述的本公开各实施例的图像处理方法的全部或部分步骤。
上述计算机可读存储介质60包括但不限于:光存储介质(例如:CD-ROM和DVD)、磁光存储介质(例如:MO)、磁存储介质(例如:磁带或移动硬盘)、具有内置的可重写非易失性存储器的媒体(例如:存储卡)和具有内置ROM的媒体(例如:ROM盒)。
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。
图7是图示根据本公开实施例的图像处理终端的硬件结构示意图。如图7所示,该图像处理终端70包括上述图像处理装置实施例。
该终端设备可以以各种形式来实施,本公开中的终端设备可以包括但不限于诸如移动电话、智能电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、导航装置、车载终端设备、车载显示终端、车载电子后视镜等等的移动终端设备以及诸如数字TV、台式计算机等等的固定终端设备。
作为等同替换的实施方式,该终端还可以包括其他组件。如图7所示,该图像处理终端70可以包括电源单元71、无线通信单元72、A/V(音频/视频)输入单元73、用户输入单元74、感测单元75、接口单元76、控制 器77、输出单元78和存储器79等等。图7示出了具有各种组件的终端,但是应理解的是,并不要求实施所有示出的组件,也可以替代地实施更多或更少的组件。
其中,无线通信单元72允许终端70与无线通信***或网络之间的无线电通信。A/V输入单元73用于接收音频或视频信号。用户输入单元74可以根据用户输入的命令生成键输入数据以控制终端设备的各种操作。感测单元75检测终端70的当前状态、终端70的位置、用户对于终端70的触摸输入的有无、终端70的取向、终端70的加速或减速移动和方向等等,并且生成用于控制终端70的操作的命令或信号。接口单元76用作至少一个外部装置与终端70连接可以通过的接口。输出单元78被构造为以视觉、音频和/或触觉方式提供输出信号。存储器79可以存储由控制器77执行的处理和控制操作的软件程序等等,或者可以暂时地存储己经输出或将要输出的数据。存储器79可以包括至少一种类型的存储介质。而且,终端70可以与通过网络连接执行存储器79的存储功能的网络存储装置协作。控制器77通常控制终端设备的总体操作。另外,控制器77可以包括用于再现或回放多媒体数据的多媒体模块。控制器77可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。电源单元71在控制器77的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
本公开提出的图像处理方法的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,本公开提出的图像处理方法的各种实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,本公开提出的图像处理方法的各种实施方式可以在控制器77中实施。对于软件实施,本公开提出的图像处理方法的各种实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器79中并且由控制器77执行。
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。
以上结合具体实施例描述了本公开的基本原理,但是,需要指出的是,在本公开中提及的优点、优势、效果等仅是示例而非限制,不能认为这些 优点、优势、效果等是本公开的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本公开为必须采用上述具体的细节来实现。
本公开中涉及的器件、装置、设备、***的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、***。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。
另外,如在此使用的,在以“至少一个”开始的项的列举中使用的“或”指示分离的列举,以便例如“A、B或C的至少一个”的列举意味着A或B或C,或AB或AC或BC,或ABC(即A和B和C)。此外,措辞“示例的”不意味着描述的例子是优选的或者比其他例子更好。
还需要指出的是,在本公开的***和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本公开的等效方案。
可以不脱离由所附权利要求定义的教导的技术而进行对在此所述的技术的各种改变、替换和更改。此外,本公开的权利要求的范围不限于以上所述的处理、机器、制造、事件的组成、手段、方法和动作的具体方面。可以利用与在此所述的相应方面进行基本相同的功能或者实现基本相同的结果的当前存在的或者稍后要开发的处理、机器、制造、事件的组成、手段、方法或动作。因而,所附权利要求包括在其范围内的这样的处理、机器、制造、事件的组成、手段、方法或动作。
提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本公开。对这些方面的各种修改对于本领域技术人员而言是非常显而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本公开的范围。因此,本公开不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本公开的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。

Claims (18)

  1. An image processing method, comprising:
    identifying eye key points on a face image;
    obtaining auxiliary key points by interpolation; wherein the auxiliary key points and the eye key points form a first triangulation mesh at a predetermined eye position on the face image;
    transforming an eye makeup effect image to the predetermined eye position according to the first triangulation mesh.
  2. The method according to claim 1, wherein the eye makeup effect image comprises at least one of eyelashes, double eyelid, single eyelid, eye shadow, and eyeliner.
  3. The method according to claim 1, wherein before identifying the eye key points on the face image, the method further comprises:
    detecting the face image in response to a user's selection event on the eye makeup effect image.
  4. The method according to claim 1, wherein obtaining the auxiliary key points by interpolation comprises:
    acquiring a corresponding second triangulation mesh on a standard template; wherein the eye makeup effect image is drawn on the standard template;
    determining the auxiliary key points on the first triangulation mesh according to the second triangulation mesh; wherein the similarity between corresponding triangles in the first triangulation mesh and the second triangulation mesh is within a first preset error range.
  5. The method according to claim 4, wherein determining the auxiliary key points on the first triangulation mesh according to the second triangulation mesh comprises:
    determining, according to the second triangulation mesh, a first included angle between a first line and a second line in the first triangulation mesh; wherein the first line is the line between a first eye key point and a second eye key point, the first eye key point being adjacent to the second eye key point; the second line is the line between the second eye key point and a first auxiliary key point; and the first eye key point, the second eye key point, and the first auxiliary key point are three vertices of a first triangle in the first triangulation mesh;
    determining, according to the second triangulation mesh, a second included angle between a third line and a fourth line; wherein the third line is the line between the second eye key point and a third eye key point, the second eye key point being adjacent to the third eye key point; the fourth line is the line between the second eye key point and a second auxiliary key point; and the second eye key point, the third eye key point, and the second auxiliary key point are three vertices of a second triangle in the first triangulation mesh;
    determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh.
  6. The method according to claim 5, wherein determining, according to the second triangulation mesh, the first included angle between the first line and the second line in the first triangulation mesh comprises:
    determining, on the second triangulation mesh, a first corresponding triangle corresponding to the first triangle;
    determining the first included angle; wherein a first difference between the first included angle and a first corresponding included angle on the first corresponding triangle corresponding to the first included angle is within a second preset error range.
  7. The method according to claim 5, wherein determining, according to the second triangulation mesh, the second included angle between the third line and the fourth line comprises:
    determining, on the second triangulation mesh, a second corresponding triangle corresponding to the second triangle;
    determining the second included angle; wherein a second difference between the second included angle and a second corresponding included angle on the second corresponding triangle corresponding to the second included angle is within the second preset error range.
  8. The method according to claim 5, wherein determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh comprises:
    determining a first ratio between the first line and a first corresponding line in the second triangulation mesh corresponding to the first line;
    determining the first auxiliary key point according to the first ratio and the first included angle.
  9. The method according to claim 5, wherein determining the first auxiliary key point and the second auxiliary key point according to the first included angle, the second included angle, and the second triangulation mesh comprises:
    determining a second ratio between the third line and the side in the second triangulation mesh corresponding to the third line;
    determining the second auxiliary key point according to the second ratio and the second included angle.
  10. The method according to claim 6 or 7, wherein the minimum value of the second preset error range is 0.
  11. The method according to claim 6 or 7, further comprising:
    determining a degree of opening and closing of the eyes on the face image according to the eye key points;
    determining the first difference and the second difference according to the degree of opening and closing.
  12. The method according to claim 11, wherein determining the first difference and the second difference according to the degree of opening and closing comprises:
    when the degree of opening and closing reaches a preset maximum, setting the first difference and the second difference to the minimum of the second preset error range;
    when the degree of opening and closing reaches a preset minimum, setting the first difference and the second difference to the maximum of the second preset error range.
  13. The method according to any one of claims 4-9 and 12, wherein the triangles in the second triangulation mesh are equilateral triangles.
  14. The method according to any one of claims 4-9 and 12, wherein transforming the eye makeup effect image to the predetermined eye position according to the first triangulation mesh comprises:
    determining a correspondence between the first triangulation mesh and the second triangulation mesh;
    transforming, according to the correspondence, the eye makeup effect image in the second triangulation mesh to the predetermined eye position on the face image in the first triangulation mesh.
  15. An image processing apparatus, comprising:
    an identification module configured to identify eye key points on a face image;
    an interpolation module configured to obtain auxiliary key points by interpolation; wherein the auxiliary key points and the eye key points form a first triangulation mesh at a predetermined eye position on the face image;
    a transformation module configured to transform an eye makeup effect image to the predetermined eye position according to the first triangulation mesh.
  16. An image processing hardware device, comprising:
    a memory configured to store non-transitory computer-readable instructions; and
    a processor configured to execute the computer-readable instructions such that, when executing them, the processor implements the image processing method according to any one of claims 1-14.
  17. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the image processing method according to any one of claims 1-14.
  18. An image processing terminal, comprising the image processing apparatus according to claim 15.
PCT/CN2019/073074 2018-06-28 2019-01-25 图像处理方法、装置、计算机可读存储介质和终端 WO2020001013A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/980,323 US11017580B2 (en) 2018-06-28 2019-01-25 Face image processing based on key point detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810687841.2A CN109063560B (zh) 2018-06-28 2018-06-28 图像处理方法、装置、计算机可读存储介质和终端
CN201810687841.2 2018-06-28

Publications (1)

Publication Number Publication Date
WO2020001013A1 true WO2020001013A1 (zh) 2020-01-02

Family

ID=64817761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073074 WO2020001013A1 (zh) 2018-06-28 2019-01-25 图像处理方法、装置、计算机可读存储介质和终端

Country Status (3)

Country Link
US (1) US11017580B2 (zh)
CN (1) CN109063560B (zh)
WO (1) WO2020001013A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489311A (zh) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 一种人脸美化方法、装置、电子设备及存储介质

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063560B (zh) * 2018-06-28 2022-04-05 北京微播视界科技有限公司 图像处理方法、装置、计算机可读存储介质和终端
CN109472753B (zh) * 2018-10-30 2021-09-07 北京市商汤科技开发有限公司 一种图像处理方法、装置、计算机设备和计算机存储介质
CN110211211B (zh) * 2019-04-25 2024-01-26 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN110223218B (zh) * 2019-05-16 2024-01-12 北京达佳互联信息技术有限公司 人脸图像处理方法、装置、电子设备及存储介质
CA3154216A1 (en) * 2019-10-11 2021-04-15 Beyeonics Surgical Ltd. System and method for improved electronic assisted medical procedures
WO2021083133A1 (zh) * 2019-10-29 2021-05-06 广州虎牙科技有限公司 图像处理方法、装置、设备及存储介质
CN110941332A (zh) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 表情驱动方法、装置、电子设备及存储介质
CN110910308B (zh) * 2019-12-03 2024-03-05 广州虎牙科技有限公司 图像处理方法、装置、设备和介质
CN111369644A (zh) * 2020-02-28 2020-07-03 北京旷视科技有限公司 人脸图像的试妆处理方法、装置、计算机设备和存储介质
CN111563855B (zh) * 2020-04-29 2023-08-01 百度在线网络技术(北京)有限公司 图像处理的方法及装置
CN114095646B (zh) * 2020-08-24 2022-08-26 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN114095647A (zh) * 2020-08-24 2022-02-25 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN112257635A (zh) * 2020-10-30 2021-01-22 杭州魔点科技有限公司 人脸误检过滤的方法、***、电子装置和存储介质
CN113344837B (zh) * 2021-06-28 2023-04-18 展讯通信(上海)有限公司 人脸图像处理方法及装置、计算机可读存储介质、终端
CN113961746B (zh) * 2021-09-29 2023-11-21 北京百度网讯科技有限公司 视频生成方法、装置、电子设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7742623B1 (en) * 2008-08-04 2010-06-22 Videomining Corporation Method and system for estimating gaze target, gaze sequence, and gaze map from video
CN107680033A (zh) * 2017-09-08 2018-02-09 北京小米移动软件有限公司 图片处理方法及装置
CN107818543A (zh) * 2017-11-09 2018-03-20 北京小米移动软件有限公司 图像处理方法及装置
CN107977934A (zh) * 2017-11-10 2018-05-01 北京小米移动软件有限公司 图像处理方法及装置
CN109063560A (zh) * 2018-06-28 2018-12-21 北京微播视界科技有限公司 图像处理方法、装置、计算机可读存储介质和终端

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6879324B1 (en) * 1998-07-14 2005-04-12 Microsoft Corporation Regional progressive meshes
CN101814192A (zh) * 2009-02-20 2010-08-25 三星电子株式会社 真实感3d人脸重建的方法
CN103824269B (zh) * 2012-11-16 2017-03-29 广州三星通信技术研究有限公司 人脸特效处理方法以及***
US9443132B2 (en) * 2013-02-05 2016-09-13 Children's National Medical Center Device and method for classifying a condition based on image analysis
KR102365393B1 (ko) * 2014-12-11 2022-02-21 엘지전자 주식회사 이동단말기 및 그 제어방법
CN107330868B (zh) * 2017-06-26 2020-11-13 北京小米移动软件有限公司 图片处理方法及装置
CN107341777B (zh) * 2017-06-26 2020-12-04 北京小米移动软件有限公司 图片处理方法及装置
CN108492247A (zh) * 2018-03-23 2018-09-04 成都品果科技有限公司 一种基于网格变形的眼妆贴图方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7742623B1 (en) * 2008-08-04 2010-06-22 Videomining Corporation Method and system for estimating gaze target, gaze sequence, and gaze map from video
CN107680033A (zh) * 2017-09-08 2018-02-09 北京小米移动软件有限公司 图片处理方法及装置
CN107818543A (zh) * 2017-11-09 2018-03-20 北京小米移动软件有限公司 图像处理方法及装置
CN107977934A (zh) * 2017-11-10 2018-05-01 北京小米移动软件有限公司 图像处理方法及装置
CN109063560A (zh) * 2018-06-28 2018-12-21 北京微播视界科技有限公司 图像处理方法、装置、计算机可读存储介质和终端

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489311A (zh) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 一种人脸美化方法、装置、电子设备及存储介质
CN111489311B (zh) * 2020-04-09 2023-08-08 北京百度网讯科技有限公司 一种人脸美化方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
US20210074054A1 (en) 2021-03-11
CN109063560B (zh) 2022-04-05
CN109063560A (zh) 2018-12-21
US11017580B2 (en) 2021-05-25

Similar Documents

Publication Publication Date Title
WO2020001013A1 (zh) 图像处理方法、装置、计算机可读存储介质和终端
WO2020001014A1 (zh) 图像美化方法、装置及电子设备
WO2020019663A1 (zh) 基于人脸的特效生成方法、装置和电子设备
WO2019242271A1 (zh) 图像变形方法、装置及电子设备
WO2020019664A1 (zh) 基于人脸的形变图像生成方法和装置
WO2020024569A1 (zh) 动态生成人脸三维模型的方法、装置、电子设备
WO2020029554A1 (zh) 增强现实多平面模型动画交互方法、装置、设备及存储介质
WO2019237745A1 (zh) 人脸图像处理方法、装置、电子设备及计算机可读存储介质
WO2020037863A1 (zh) 三维人脸图像重建方法、装置和计算机可读存储介质
WO2020019665A1 (zh) 基于人脸的三维特效生成方法、装置和电子设备
WO2020019666A1 (zh) 人脸特效的多人脸跟踪方法、装置和电子设备
WO2021213067A1 (zh) 物品显示方法、装置、设备及存储介质
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN110072046B (zh) 图像合成方法和装置
US8976182B2 (en) Facial sketch creation device, configuration information generation device, configuration information generation method, and storage medium
TWI752419B (zh) 影像處理方法及裝置、圖像設備及儲存媒介
US11120535B2 (en) Image processing method, apparatus, terminal, and storage medium
WO2019237747A1 (zh) 图像裁剪方法、装置、电子设备及计算机可读存储介质
CN109698914A (zh) 一种闪电特效渲染方法、装置、设备及存储介质
WO2019237746A1 (zh) 图像合并的方法和装置
WO2020001015A1 (zh) 场景操控的方法、装置及电子设备
CN108921798A (zh) 图像处理的方法、装置及电子设备
WO2020037924A1 (zh) 动画生成方法和装置
WO2020029556A1 (zh) 自适应平面的方法、装置和计算机可读存储介质
WO2020029555A1 (zh) 用于平面间无缝切换的方法、装置和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19827184

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/04/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19827184

Country of ref document: EP

Kind code of ref document: A1