CN107680033B - Picture processing method and device - Google Patents

Picture processing method and device

Info

Publication number
CN107680033B
Authority
CN
China
Prior art keywords
points
determining
point
picture
contour
Prior art date
Legal status
Active
Application number
CN201710806085.6A
Other languages
Chinese (zh)
Other versions
CN107680033A (en
Inventor
陈志军
王倩
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710806085.6A priority Critical patent/CN107680033B/en
Publication of CN107680033A publication Critical patent/CN107680033A/en
Application granted granted Critical
Publication of CN107680033B publication Critical patent/CN107680033B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a picture processing method and device. The method comprises: determining face feature points and surrounding points of a picture, wherein an enclosing line formed by the surrounding points encloses the face feature points, and the face feature points comprise organ feature points and contour feature points; determining outward-expansion positions of the contour feature points according to the contour feature points and a preset outward-expansion ratio; determining offsets of the expanded contour feature points, the organ feature points, and the surrounding points; and rendering the current picture according to these offsets to obtain a beautified picture corresponding to the picture. This overcomes the problem that the face contour is not smooth after face thinning when the contour feature points lie too close to the actual face contour, increases the smoothness of the face boundary after beautification, improves the face-thinning effect, and improves the user experience.

Description

Picture processing method and device
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a method and an apparatus for processing an image.
Background
A thinner face is generally considered more attractive. Most existing mobile phones therefore provide beautification functions such as face thinning and eye enlargement for faces in pictures or videos shot by the user; the beautification is achieved by deforming the facial organs in the picture.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide a method and an apparatus for processing an image. The technical scheme is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a picture processing method, including:
Determining face characteristic points and surrounding points of the picture; the enclosing line is composed of the enclosing points and is used for enclosing the human face characteristic points; the face characteristic points comprise organ characteristic points and contour characteristic points;
determining the outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio;
determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion;
and rendering the current picture according to the offset of the expanded contour feature points, the organ feature points and the surrounding points to obtain a beauty picture corresponding to the picture.
In one embodiment, the determining the outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio includes:
determining the center point of the face;
determining the external expansion amount of the contour feature points far away from the central point according to the central point, the contour feature points and a preset external expansion proportion;
and determining the outward expansion position of the contour feature point according to the outward expansion amount of the contour feature point far away from the central point.
In one embodiment, the organ feature points include: eye feature points and nose feature points; the determining the center point of the face comprises:
respectively determining the central point and the nose tip point of the two eyes according to the eye characteristic point and the nose characteristic point;
and determining the center point of the face according to the center points of the two eyes and the nose tip point.
In one embodiment, the rendering the current picture according to the offset of the expanded contour feature point, the organ feature point, and the surrounding point to obtain a beauty picture corresponding to the picture includes:
performing triangularization subdivision on the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle;
and performing GPU rendering on the current picture according to the subdivision triangle, the contour feature points after the external expansion, the organ feature points and the offset of the surrounding points to obtain a beauty picture corresponding to the picture.
In one embodiment, the rendering the current picture according to the offset of the expanded contour feature point, the organ feature point, and the surrounding point to obtain a beauty picture corresponding to the picture includes:
performing triangularization subdivision on the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle;
shifting the subdivision triangle according to the offset of the contour feature points, the organ feature points and the surrounding points after the external expansion, and determining the offset of each pixel point on the split triangle after the offset;
determining the pixel value of each pixel point on the split triangle after the deviation according to the deviation of each pixel point on the split triangle after the deviation;
and determining a beauty picture corresponding to the picture according to the offset and the pixel value of each pixel point on the split triangle after the offset.
According to a second aspect of the embodiments of the present disclosure, there is provided a picture processing apparatus including:
the first determining module is used for determining face characteristic points and surrounding points of the picture; the enclosing line is composed of the enclosing points and is used for enclosing the human face characteristic points; the face characteristic points comprise organ characteristic points and contour characteristic points;
the second determining module is used for determining the outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion proportion;
the third determining module is used for determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion;
and the rendering module is used for rendering the current picture according to the offset of the expanded contour feature points, the organ feature points and the surrounding points to obtain a beauty picture corresponding to the picture.
In one embodiment, the second determining module includes:
the first determining submodule is used for determining the central point of the face;
the second determining submodule is used for determining the external expansion amount of the contour characteristic point far away from the central point according to the central point, the contour characteristic point and a preset external expansion proportion;
and the third determining submodule is used for determining the outward expansion position of the contour feature point according to the outward expansion amount of the contour feature point far away from the central point.
In one embodiment, the organ feature points include: eye feature points and nose feature points;
the first determination sub-module: respectively determining the central point and the nose tip point of the two eyes according to the eye characteristic point and the nose characteristic point; and determining the center point of the face according to the center points of the two eyes and the nose tip point.
In one embodiment, the rendering module includes:
the first subdivision submodule is used for triangularly subdividing the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle;
and the rendering submodule is used for performing GPU rendering on the current picture according to the subdivision triangle, the contour feature points after the external expansion, the organ feature points and the offset of the surrounding points to obtain a beauty picture corresponding to the picture.
In one embodiment, the rendering module includes:
the second subdivision submodule is used for triangularly subdividing the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle;
the fourth determining submodule is used for offsetting the subdivision triangle according to the offset of the expanded contour characteristic points, the organ characteristic points and the surrounding points and determining the offset of each pixel point on the offset subdivision triangle;
a fifth determining submodule, configured to determine a pixel value of each pixel point on the split triangle after the offset according to the offset of each pixel point on the split triangle after the offset;
and the sixth determining submodule is used for determining a beauty picture corresponding to the picture according to the offset and the pixel value of each pixel point on the split triangle after the offset.
According to a third aspect of the embodiments of the present disclosure, there is provided a picture processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining face characteristic points and surrounding points of the picture; the enclosing line is composed of the enclosing points and is used for enclosing the human face characteristic points; the face characteristic points comprise organ characteristic points and contour characteristic points; determining the outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio; determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion; and rendering the current picture according to the offset of the expanded contour feature points, the organ feature points and the surrounding points to obtain a beauty picture corresponding to the picture.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of the first aspect described above.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: by expanding the contour feature points outward on the basis of the contour feature points of the face, the probability that adjacent pixel points on the actual face contour are divided into the same subdivision triangle is increased; this overcomes the problem that the face contour is not smooth after face thinning when the contour feature points lie too close to the actual face contour, increases the smoothness of the face boundary after beautification, improves the face-thinning effect, and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a picture processing method according to an exemplary embodiment.
Fig. 2a is a scene diagram illustrating a picture processing method according to an exemplary embodiment.
Fig. 2b is a scene diagram illustrating a picture processing method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a picture processing method according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the related art, to meet user requirements, terminals such as smartphones support beautification processing such as face thinning for faces in pictures or videos. During such processing, the facial organs in the picture are deformed according to the located feature points. In practice, however, if the contour feature points lie too close to the actual face contour, adjacent pixel points on the actual contour are easily divided into different subdivision triangles; due to factors such as floating-point errors during calculation, the face contour after thinning is then not smooth and appears to jitter, which degrades the face-thinning effect and the user experience.
In order to solve the above problem, an embodiment of the present disclosure provides an image processing method, where the method includes: determining face characteristic points and surrounding points of the picture; wherein, the enclosing line formed by the enclosing points is used for enclosing the human face characteristic points; the human face characteristic points comprise organ characteristic points and contour characteristic points; determining the outward expansion positions of the contour feature points according to the contour feature points and a preset outward expansion ratio; determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion; and rendering the current picture according to the offset of the expanded contour characteristic points, organ characteristic points and surrounding points to obtain a beauty picture corresponding to the picture.
According to the picture processing method provided by the embodiments of the present disclosure, the contour feature points are expanded outward on the basis of the contour feature points of the face, which increases the probability that adjacent pixel points on the actual face contour are divided into the same subdivision triangle, overcomes the problem that the face contour is not smooth after face thinning when the contour feature points lie too close to the actual face contour, increases the smoothness of the face boundary after beautification, improves the face-thinning effect, and improves the user experience.
It should be noted that, in the embodiment of the present disclosure, the terminal is, for example, a smart phone, a tablet computer, a desktop computer, a notebook computer, or a wearable device (such as a bracelet, smart glasses, and the like).
Based on the above analysis, the following specific examples are proposed.
Fig. 1 is a flowchart illustrating a picture processing method according to an exemplary embodiment; the execution subject of the method may be a terminal. As shown in fig. 1, the method includes the following steps 101 to 104:
in step 101, determining face characteristic points and surrounding points of a picture; wherein, the enclosing line formed by the enclosing points is used for enclosing the human face characteristic points; the human face characteristic points comprise organ characteristic points and contour characteristic points;
in an example, in a scene of performing face beautifying processing such as face thinning on a picture, firstly, a face key point positioning algorithm is used for performing face feature point positioning on a current picture to position M face feature points, wherein M is a positive integer greater than 1; the human face characteristic points comprise organ characteristic points and contour characteristic points; for example, the face keypoint localization algorithm may include an Active Appearance Model (AAM), a supervised gradient descent method (SDM), a Convolutional Neural Network (CNN), and the like. Setting an enclosing line aiming at the face in the current picture, wherein the range of the enclosing line is larger than that of the face by a preset proportion, the enclosing line encloses M characteristic points, and the enclosing line cannot exceed the boundary of the current picture; and selecting N surrounding points on the surrounding line, wherein N is a positive integer greater than 1. When the face beautifying operations such as face thinning and the like are carried out on each organ of the human face in the picture, the deformation range of each organ of the human face is limited to be carried out in the surrounding line.
In step 102, determining an outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio;
Illustratively, the outward-expansion position of a contour feature point lies between the current position of the contour feature point and the enclosing line. Determining the outward-expansion position of the contour feature point according to the contour feature point and the preset outward-expansion ratio can be implemented in at least either of the following ways:
Mode 1: first, determine the center point of the face. The center point of the two eyes and the nose tip point are determined according to the eye feature points and the nose feature points, respectively (the organ feature points comprise eye feature points and nose feature points), and the center point of the face is then determined from the center point of the two eyes and the nose tip point. For example, the center point of the two eyes is the middle position of all the eye feature points, the nose tip point is the middle position of all the nose feature points, and the center point of the face lies midway between the center point of the two eyes and the nose tip point.
Then, determine the outward-expansion amount by which the contour feature point moves away from the center point according to the center point, the contour feature point, and the preset outward-expansion ratio, and determine the outward-expansion position of the contour feature point according to that amount. For example, calculate the distance between the contour feature point and the center point, multiply it by the preset outward-expansion ratio, and take the product as the outward-expansion amount of the contour feature point. The outward-expansion amount is the distance between the contour feature point and its outward-expansion position; the outward-expansion position, the contour feature point, and the center point lie on the same straight line, with the contour feature point between the center point and the outward-expansion position. As an example, the preset outward-expansion ratio may be an empirical value, such as 10%.
Referring to fig. 2a, face feature points 0 to 28 are shown, so M takes the value 29, and surrounding points 29 to 44 are shown, so N takes the value 16. The thick solid line along the edge of the face in fig. 2a is the actual contour line of the face. The black circles represent the contour feature points obtained by locating the face feature points of the picture; as can be seen, they all lie on the actual contour line of the face. In fig. 2a, points 0 to 12 are the outward-expansion positions of the contour feature points; points 14 to 16 are nose feature points, and the nose tip point M2 is the middle position of all the nose feature points; points 13 and 17 to 28 are eye feature points, and the center point M1 of the two eyes is the middle position of all the eye feature points. The center point M0 of the face lies midway between M1 and M2, so its coordinates can be computed as M0 = (M1 + M2) / 2. For example, in fig. 2a, the expanded contour feature point 3 represents the outward-expansion position of the contour feature point M3, and 3 lies on the same straight line as M3 and M0.
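A short sketch of mode 1 under the conventions above: the center of the two eyes and the nose tip are taken as the means of their landmark groups, the face center is their midpoint, and each contour point is pushed away from the center by the preset ratio (10% in the example) of its distance to it. The grouping of landmarks into eye/nose/contour arrays is assumed to come from the key point detector.

```python
import numpy as np

def expand_contour_points(contour_pts, eye_pts, nose_pts, ratio=0.1):
    """Return the outward-expansion positions of the contour feature points
    and the face center M0.

    contour_pts, eye_pts, nose_pts: (K, 2), (E, 2), (S, 2) float arrays."""
    m1 = eye_pts.mean(axis=0)    # center of the two eyes: middle of eye landmarks
    m2 = nose_pts.mean(axis=0)   # nose tip: middle of nose landmarks
    m0 = (m1 + m2) / 2.0         # face center: M0 = (M1 + M2) / 2
    # expanded position lies on the ray from M0 through each contour point,
    # beyond the point by `ratio` times its distance to M0
    expanded = m0 + (contour_pts - m0) * (1.0 + ratio)
    return expanded, m0
```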
Mode 2: connect the contour feature points in sequence with a smooth curve to obtain a contour line formed by the contour feature points; expand this contour line outward by the preset outward-expansion ratio; then, for each contour feature point, determine the line connecting it to the center point, and take the intersection of that line with the expanded contour line as the outward-expansion position of the contour feature point.
In step 103, determining the offset of the contour feature points, the organ feature points and the surrounding points after the external expansion;
calculating the offset of the contour feature points and the organ feature points after the external expansion in the picture; the offset amount of the face feature point is information such as a movement distance and a movement direction of the face feature point determined by calculation when a face is beautified. Optionally, the offset of the surrounding point is 0; during the beautifying operation, the surrounding lines and the N surrounding points are kept still.
Taking any expanded contour feature point i in fig. 2a as an example, its offset may be calculated as Offset_i = α_i * (M0 - M_i), where α_i is the offset coefficient of the expanded contour feature point i. It should be noted that the offset coefficients of the individual expanded contour feature points are not necessarily the same, and they may be preset.
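The offset formula above as a sketch, where M_i is the position of expanded contour point i and M0 the face center; the coefficient values α_i are preset tuning parameters whose concrete values are not specified here and are an assumption of the example.

```python
import numpy as np

def contour_offsets(expanded_pts, m0, alphas):
    """Offset of each expanded contour point i: Offset_i = alpha_i * (M0 - M_i).
    `alphas` is a (K,) array of preset per-point coefficients; larger values
    pull the corresponding cheek/jaw points further toward the face center."""
    return alphas[:, None] * (m0[None, :] - expanded_pts)   # shape (K, 2)
```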
In step 104, rendering the current picture according to the offset of the expanded contour feature points, organ feature points and surrounding points to obtain a beauty picture corresponding to the picture.
For example, rendering the current picture according to the offsets of the expanded contour feature points, the organ feature points, and the surrounding points can be implemented in at least either of the following ways:
Mode a: real-time rendering based on a Graphics Processing Unit (GPU), comprising the following steps. Triangulate the picture according to the expanded contour feature points, the organ feature points, and the surrounding points to obtain subdivision triangles; referring to the triangulation result shown in fig. 2b, the face is divided into a number of subdivision triangles, and adjacent pixel points on the actual face contour fall into the same subdivision triangle as far as possible. Then perform GPU rendering on the current picture according to the subdivision triangles and the offsets of the expanded contour feature points, the organ feature points, and the surrounding points to obtain the beautified picture corresponding to the picture.
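A sketch of the data preparation a GPU path like mode a would need: the combined point set is triangulated, and each vertex gets a texture coordinate (its original position) and a displaced position (original position plus offset). The shader and draw-call code is omitted, and scipy's Delaunay triangulation is used here as a stand-in for whatever subdivision a real implementation performs.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_warp_mesh(points, offsets, img_w, img_h):
    """Triangulate the combined point set (expanded contour + organ + surrounding
    points) and return the triangle indices, per-vertex texture coordinates and
    displaced positions, i.e. the attributes a GPU pipeline would interpolate."""
    tri = Delaunay(points)                       # triangulation subdivision
    src = points.astype(np.float32)              # original positions -> texcoords
    dst = (points + offsets).astype(np.float32)  # displaced positions -> vertices
    texcoords = src / np.array([img_w, img_h], dtype=np.float32)  # normalise to [0, 1]
    return tri.simplices, texcoords, dst
```

Because the surrounding points have zero offset, the border of this mesh stays fixed, so the picture outside the enclosing line is rendered unchanged.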
Mode b: rendering based on a Central Processing Unit (CPU), comprising the following steps. 1) Triangulate the picture according to the expanded contour feature points, the organ feature points, and the surrounding points to obtain subdivision triangles; referring to the triangulation result shown in fig. 2b, the face is divided into a number of subdivision triangles, so adjacent pixel points on the actual face contour fall into the same subdivision triangle as far as possible, and beautification operations such as face thinning can be performed on the whole face. 2) Shift the subdivision triangles according to the offsets of the expanded contour feature points, the organ feature points, and the surrounding points to obtain the shifted subdivision triangles, and determine the offset of each pixel point on the shifted subdivision triangles relative to the original image; for example, affine transformation is used to calculate these per-pixel offsets. 3) Determine the pixel value of each pixel point on the shifted subdivision triangles according to its offset; for example, a bilinear interpolation algorithm is used. 4) Determine the beautified picture corresponding to the picture according to the offset and pixel value of each pixel point on the shifted subdivision triangles.
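A CPU-side sketch in the spirit of mode b, using OpenCV: the picture is triangulated, each triangle is mapped by the affine transform taking its original vertices to the shifted vertices, and pixel values are resampled with bilinear interpolation (cv2.INTER_LINEAR). This is an illustrative per-triangle warp under the assumption that the shifted mesh stays inside the picture (the surrounding points do not move), not the disclosure's exact procedure.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_face(img, points, offsets):
    """Warp `img` by moving each subdivision triangle from its original
    vertices (`points`) to the shifted vertices (`points + offsets`)."""
    out = img.copy()
    src_pts = points.astype(np.float32)
    dst_pts = (points + offsets).astype(np.float32)
    for tri in Delaunay(src_pts).simplices:
        s, d = src_pts[tri], dst_pts[tri]
        x, y, w, h = cv2.boundingRect(d)          # bounding box of the shifted triangle
        if w == 0 or h == 0:
            continue
        # affine map taking the source triangle onto the shifted triangle,
        # expressed in the destination bbox's local coordinates
        m = cv2.getAffineTransform(s, d - np.float32([x, y]))
        patch = cv2.warpAffine(img, m, (w, h), flags=cv2.INTER_LINEAR,
                               borderMode=cv2.BORDER_REFLECT)
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(d - [x, y]), 1)  # pixels inside the triangle
        out[y:y + h, x:x + w][mask > 0] = patch[mask > 0]
    return out
```

Warping per bounding box and masking to the triangle keeps the resampling local to each subdivision triangle, which mirrors the per-triangle affine-plus-bilinear procedure described above.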
The technical solution provided by the embodiments of the present disclosure expands the contour feature points outward on the basis of the contour feature points of the face, which increases the probability that adjacent pixel points on the actual face contour are divided into the same subdivision triangle, overcomes the problem that the face contour is not smooth after face thinning when the contour feature points lie too close to the actual face contour, increases the smoothness of the face boundary after beautification, improves the face-thinning effect, and thus improves the user experience.
Fig. 3 is a flowchart illustrating a picture processing method according to an exemplary embodiment. As shown in fig. 3, on the basis of the embodiment shown in fig. 1, the picture processing method according to the present disclosure may include the following steps 301 to 309:
in step 301, performing face feature point positioning on the picture by using a face key point positioning algorithm to determine face feature points of the picture; setting an enclosing line for the face in the current picture, and selecting an enclosing point on the enclosing line; the enclosing line formed by the enclosing points is used for enclosing each face characteristic point.
Exemplarily, the face feature points include organ feature points and contour feature points. The face key point positioning algorithm may be AAM, SDM, CNN, or the like.
In step 302, determining the center point of the face;
exemplary organ feature points include: eye feature points and nose feature points; the implementation step of determining the center point of the face may include: respectively determining the central point and the nose tip point of the two eyes according to the eye characteristic points and the nose characteristic points; and determining the center point of the face according to the center points of the two eyes and the nose tip point.
In step 303, determining an outward expansion amount of the contour feature point far from the central point according to the central point, the contour feature point and a preset outward expansion ratio;
In step 304, the outward-expansion positions of the contour feature points are determined according to the outward-expansion amount of the contour feature points away from the central point.
In step 305, determining the offset of the contour feature points, the organ feature points and the surrounding points after the external expansion;
Illustratively, the offset of each surrounding point is 0; the N surrounding points and the enclosing line do not move during the beautification operation.
In step 306, triangulating the picture according to the contour feature points, the organ feature points and the surrounding points after the external expansion to obtain a subdivision triangle;
in step 307, shifting the subdivision triangle according to the offset of the contour feature points, the organ feature points and the surrounding points after the external expansion, and determining the offset of each pixel point on the shifted subdivision triangle;
in step 308, determining the pixel value of each pixel point on the split triangle after the offset according to the offset of each pixel point on the split triangle after the offset;
in step 309, a beauty picture corresponding to the picture is determined according to the offset and the pixel value of each pixel point on the split triangle after the offset.
The technical solution provided by the embodiments of the present disclosure expands the contour feature points outward on the basis of the contour feature points of the face, which increases the probability that adjacent pixel points on the actual face contour are divided into the same subdivision triangle, overcomes the problem that the face contour is not smooth after face thinning when the contour feature points lie too close to the actual face contour, increases the smoothness of the face boundary after beautification, improves the face-thinning effect, and improves the user experience.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
FIG. 4 is a block diagram illustrating a picture processing device according to an example embodiment; the apparatus may be implemented in various ways, for example, with all of the components of the apparatus being implemented in a terminal, or with components of the apparatus being implemented in a coupled manner on the terminal side; the apparatus may implement the method related to the present disclosure through software, hardware, or a combination of the two, as shown in fig. 4, the image processing apparatus includes: a first determining module 401, a second determining module 402, a third determining module 403, and a rendering module 404, wherein:
the first determination module 401 is configured to determine face feature points and bounding points of a picture; wherein, the enclosing line formed by the enclosing points is used for enclosing the human face characteristic points; the human face characteristic points comprise organ characteristic points and contour characteristic points;
the second determining module 402 is configured to determine an outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio;
the third determination module 403 is configured to determine offsets of the contour feature points, the organ feature points, and the surrounding points after the flaring;
the rendering module 404 is configured to render the current picture according to the offset of the expanded contour feature points, organ feature points, and surrounding points, so as to obtain a beauty picture corresponding to the picture.
The device provided by the embodiment of the disclosure can be used for executing the technical scheme of the embodiment shown in fig. 1, and the execution mode and the beneficial effect are similar, and are not described again here.
In a possible implementation, as shown in fig. 5, in the picture processing apparatus shown in fig. 4, the second determining module 402 may include: a first determining submodule 501, a second determining submodule 502, and a third determining submodule 503, wherein:
the first determination submodule 501 is configured to determine a center point of a face;
the second determining submodule 502 is configured to determine an amount of outward expansion of the contour feature point away from the central point according to the central point, the contour feature point and a preset outward expansion ratio;
The third determining submodule 503 is configured to determine the outward-expansion position of the contour feature point according to the outward-expansion amount of the contour feature point away from the central point.
In one possible implementation, the first determining submodule 501 is configured to: determine the center point of the two eyes and the nose tip point according to the eye feature points and the nose feature points, respectively; and determine the center point of the face according to the center point of the two eyes and the nose tip point; wherein the organ feature points include eye feature points and nose feature points.
In a possible implementation, as shown in fig. 6, in the picture processing apparatus shown in fig. 4, the rendering module 404 may include: a first subdivision submodule 601 and a rendering submodule 602, wherein:
the first subdivision sub-module 601 is configured to triangulate a picture according to the contour feature points, the organ feature points and the surrounding points after the external expansion to obtain a subdivision triangle;
the rendering submodule 602 is configured to perform GPU rendering on the current picture according to the subdivision triangle and the offset of the expanded contour feature point, organ feature point, and surrounding point, to obtain a beauty picture corresponding to the picture.
In a possible implementation, as shown in fig. 7, in the picture processing apparatus shown in fig. 4, the rendering module 404 may include: a second subdivision submodule 701, a fourth determining submodule 702, a fifth determining submodule 703, and a sixth determining submodule 704, wherein:
the second subdivision submodule 701 is configured to triangulate a picture according to the expanded contour feature points, organ feature points and surrounding points to obtain a subdivision triangle;
the fourth determining submodule 702 is configured to shift the subdivision triangle according to the offsets of the expanded contour feature points, the organ feature points and the surrounding points, and determine the offset of each pixel point on the shifted subdivision triangle;
the fifth determining submodule 703 is configured to determine, according to the offset of each pixel point on the split triangle after the offset, a pixel value of each pixel point on the split triangle after the offset;
the sixth determining submodule 704 is configured to determine a beauty picture corresponding to the picture according to the offset and the pixel value of each pixel point on the split triangle after the offset.
Fig. 8 is a block diagram illustrating a picture processing device 800 according to an exemplary embodiment, where the picture processing device 800 may be implemented in various ways, such as implementing all components of the device in a terminal or implementing components of the device in a coupled manner on the terminal side; the picture processing apparatus 800 includes:
a processor 801;
a memory 802 for storing processor-executable instructions;
wherein the processor 801 is configured to:
determining face characteristic points and surrounding points of the picture; wherein, the enclosing line formed by the enclosing points is used for enclosing the human face characteristic points; the human face characteristic points comprise organ characteristic points and contour characteristic points;
determining the outward expansion positions of the contour feature points according to the contour feature points and a preset outward expansion ratio;
determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion;
and rendering the current picture according to the offset of the expanded contour characteristic points, organ characteristic points and surrounding points to obtain a beauty picture corresponding to the picture.
In one embodiment, the processor 801 may be further configured to:
determining the center point of the face;
determining the external expansion amount of the contour characteristic point far away from the central point according to the central point, the contour characteristic point and a preset external expansion proportion;
and determining the outward expansion positions of the contour feature points according to the outward expansion amount of the contour feature points far away from the central point.
In one embodiment, the processor 801 may be further configured to:
respectively determining the central point and the nose tip point of the two eyes according to the eye characteristic points and the nose characteristic points;
and determining the center point of the face according to the center points of the two eyes and the nose tip point.
In one embodiment, the processor 801 may be further configured to:
performing triangularization subdivision on the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle;
and performing GPU rendering on the current picture according to the subdivision triangle and the offset of the expanded contour characteristic point, the expanded organ characteristic point and the surrounding point to obtain a beauty picture corresponding to the picture.
In one embodiment, the processor 801 may be further configured to:
performing triangularization subdivision on the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle;
shifting the subdivision triangle according to the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion, and determining the offset of each pixel point on the split triangle after the offset;
determining the pixel value of each pixel point on the split triangle after the deviation according to the deviation of each pixel point on the split triangle after the deviation;
and determining a beauty picture corresponding to the picture according to the offset and the pixel value of each pixel point on the split triangle after the offset.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 9 is a block diagram illustrating a picture processing device in accordance with an exemplary embodiment; the picture processing apparatus 900 is applicable to a terminal; the picture processing device 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the picture processing device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations in the picture processing apparatus 900. Examples of such data include instructions for any application or method operating on picture processing device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 906 provides power to the various components of the picture processing device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the picture processing device 900.
The multimedia components 908 include a screen that provides an output interface between the picture processing device 900 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. When the picture processing apparatus 900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when picture processing device 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status evaluation of various aspects for the picture processing device 900. For example, the sensor component 914 may detect an open/closed state of the picture processing device 900, a relative positioning of components, such as a display and a keypad of the picture processing device 900, a change in position of the picture processing device 900 or a component of the picture processing device 900, the presence or absence of user contact with the picture processing device 900, orientation or acceleration/deceleration of the picture processing device 900, and a change in temperature of the picture processing device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communication between the picture processing apparatus 900 and other devices in a wired or wireless manner. The picture processing device 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the picture processing device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is provided that includes instructions, such as memory 904, that are executable by processor 920 of picture processing device 900 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 10 is a block diagram illustrating a picture processing apparatus according to an exemplary embodiment. For example, the image processing apparatus 1000 may be provided as a server. The picture processing device 1000 includes a processing component 1002 that further includes one or more processors, and memory resources, represented by memory 1003, for storing instructions, such as applications, that are executable by the processing component 1002. The application programs stored in memory 1003 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1002 is configured to execute instructions to perform the above-described methods.
The picture processing device 1000 may also include a power component 1006 configured to perform power management of the picture processing device 1000, a wired or wireless network interface 1005 configured to connect the picture processing device 1000 to a network, and an input/output (I/O) interface 1008. The picture processing device 1000 may operate based on an operating system stored in the memory 1003, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer-readable storage medium, in which instructions, when executed by a processor of a picture processing device 900 or a picture processing device 1000, enable the picture processing device 900 or the picture processing device 1000 to perform a picture processing method comprising:
determining face characteristic points and surrounding points of the picture; wherein, the enclosing line formed by the enclosing points is used for enclosing the human face characteristic points; the human face characteristic points comprise organ characteristic points and contour characteristic points;
determining the outward expansion positions of the contour feature points according to the contour feature points and a preset outward expansion ratio;
determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion;
and rendering the current picture according to the offset of the expanded contour characteristic points, organ characteristic points and surrounding points to obtain a beauty picture corresponding to the picture.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (6)

1. An image processing method, comprising:
determining face characteristic points and surrounding points of the picture; the enclosing line is composed of the enclosing points and is used for enclosing the human face characteristic points; the face characteristic points comprise organ characteristic points and contour characteristic points;
determining the outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio;
determining the offset of the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion;
rendering the picture according to the offset of the expanded contour feature points, the organ feature points and the surrounding points to obtain a beauty picture corresponding to the picture;
determining the outward expansion position of the contour feature point according to the contour feature point and a preset outward expansion ratio, wherein the method comprises the following steps: determining the center point of the face; determining the external expansion amount of the contour feature points far away from the central point according to the central point, the contour feature points and a preset external expansion proportion; determining the outward expansion position of the contour feature point according to the outward expansion amount of the contour feature point far away from the central point;
rendering the picture according to the offset of the expanded contour feature points, the organ feature points and the surrounding points to obtain a beauty picture corresponding to the picture, wherein the rendering comprises the following steps:
performing triangularization subdivision on the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle; performing GPU rendering on the picture according to the subdivision triangle, the contour feature points after the external expansion, and the offsets of the organ feature points and the surrounding points to obtain a beauty picture corresponding to the picture; or triangulating the picture according to the contour characteristic points, the organ characteristic points and the surrounding points after the external expansion to obtain a subdivision triangle; shifting the subdivision triangle according to the offset of the contour feature points, the organ feature points and the surrounding points after the external expansion, and determining the offset of each pixel point on the split triangle after the offset; determining the pixel value of each pixel point on the split triangle after the deviation according to the deviation of each pixel point on the split triangle after the deviation; and determining a beauty picture corresponding to the picture according to the offset and the pixel value of each pixel point on the split triangle after the offset.
2. The method of claim 1, wherein the organ feature points comprise eye feature points and nose feature points, and determining the center point of the face comprises:
determining a center point of the two eyes and a nose tip point according to the eye feature points and the nose feature points, respectively; and
determining the center point of the face according to the center point of the two eyes and the nose tip point.
3. A picture processing apparatus, comprising:
a first determining module, configured to determine face feature points and surrounding points of a picture, wherein a surrounding line formed by the surrounding points encloses the face feature points, and the face feature points comprise organ feature points and contour feature points;
a second determining module, configured to determine outward expansion positions of the contour feature points according to the contour feature points and a preset outward expansion ratio;
a third determining module, configured to determine offsets of the expanded contour feature points, the organ feature points, and the surrounding points; and
a rendering module, configured to render the picture according to the offsets of the expanded contour feature points, the organ feature points, and the surrounding points to obtain a beauty picture corresponding to the picture;
wherein the second determining module comprises:
a first determining submodule, configured to determine a center point of the face;
a second determining submodule, configured to determine, according to the center point, the contour feature points, and the preset outward expansion ratio, an outward expansion amount by which each contour feature point is moved away from the center point; and
a third determining submodule, configured to determine the outward expansion positions of the contour feature points according to the outward expansion amounts;
and wherein the rendering module comprises:
a first subdivision submodule, configured to triangulate the picture according to the expanded contour feature points, the organ feature points, and the surrounding points to obtain subdivision triangles; and
a rendering submodule, configured to perform GPU rendering on the picture according to the subdivision triangles and the offsets of the expanded contour feature points, the organ feature points, and the surrounding points to obtain the beauty picture corresponding to the picture;
or, alternatively, the rendering module comprises:
a second subdivision submodule, configured to triangulate the picture according to the expanded contour feature points, the organ feature points, and the surrounding points to obtain subdivision triangles;
a fourth determining submodule, configured to shift the subdivision triangles according to the offsets of the expanded contour feature points, the organ feature points, and the surrounding points, and determine an offset of each pixel point on the shifted subdivision triangles;
a fifth determining submodule, configured to determine a pixel value of each pixel point on the shifted subdivision triangles according to the offset of that pixel point; and
a sixth determining submodule, configured to determine the beauty picture corresponding to the picture according to the offsets and pixel values of the pixel points on the shifted subdivision triangles.
4. The apparatus of claim 3, wherein the organ feature points comprise eye feature points and nose feature points; and
the first determining submodule is configured to: determine a center point of the two eyes and a nose tip point according to the eye feature points and the nose feature points, respectively; and determine the center point of the face according to the center point of the two eyes and the nose tip point.
5. A picture processing apparatus, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determine face feature points and surrounding points of a picture, wherein a surrounding line formed by the surrounding points encloses the face feature points, and the face feature points comprise organ feature points and contour feature points;
determine outward expansion positions of the contour feature points according to the contour feature points and a preset outward expansion ratio;
determine offsets of the expanded contour feature points, the organ feature points, and the surrounding points; and
render the picture according to the offsets of the expanded contour feature points, the organ feature points, and the surrounding points to obtain a beauty picture corresponding to the picture;
the processor is further configured to: determine a center point of the face; determine, according to the center point, the contour feature points, and the preset outward expansion ratio, an outward expansion amount by which each contour feature point is moved away from the center point; and determine the outward expansion positions of the contour feature points according to the outward expansion amounts;
the processor is further configured to: triangulate the picture according to the expanded contour feature points, the organ feature points, and the surrounding points to obtain subdivision triangles, and perform GPU rendering on the picture according to the subdivision triangles and the offsets of the expanded contour feature points, the organ feature points, and the surrounding points to obtain the beauty picture corresponding to the picture; or
triangulate the picture according to the expanded contour feature points, the organ feature points, and the surrounding points to obtain subdivision triangles, shift the subdivision triangles according to the offsets of the expanded contour feature points, the organ feature points, and the surrounding points, determine an offset of each pixel point on the shifted subdivision triangles, determine a pixel value of each pixel point on the shifted subdivision triangles according to the offset of that pixel point, and determine the beauty picture corresponding to the picture according to the offsets and pixel values of the pixel points on the shifted subdivision triangles.
6. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the steps of the method of claim 1 or 2.
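The following Python sketch (using NumPy and SciPy) is included only to make the geometry of claims 1 and 2 concrete. It is not the patented implementation: the face-center rule, the outward-expansion formula, the thinning offsets, the use of Delaunay triangulation, and every name and coordinate below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay


def face_center(eye_points: np.ndarray, nose_tip: np.ndarray) -> np.ndarray:
    """Hypothetical face-center rule: the midpoint of the two eye centers,
    averaged with the nose tip (the claims say the center point is derived
    from these landmarks, not how)."""
    eyes_center = eye_points.mean(axis=0)
    return (eyes_center + nose_tip) / 2.0


def expand_contour(contour_points: np.ndarray, center: np.ndarray,
                   expand_ratio: float) -> np.ndarray:
    """Push each contour feature point away from the face center by
    expand_ratio times its distance to the center, giving the 'outward
    expansion positions' of claim 1 (the exact formula is an assumption)."""
    return contour_points + expand_ratio * (contour_points - center)


def thinning_offsets(expanded_contour: np.ndarray, center: np.ndarray,
                     strength: float) -> np.ndarray:
    """One plausible choice of per-point offsets for a face-thinning effect:
    pull the expanded contour points slightly back toward the center; organ
    and surrounding points would typically get zero offset so the face
    interior and the image border stay anchored."""
    return -strength * (expanded_contour - center)


if __name__ == "__main__":
    # Toy landmark coordinates in pixels (illustrative only).
    eyes = np.array([[120.0, 150.0], [200.0, 150.0]])        # eye centers
    nose_tip = np.array([160.0, 210.0])
    contour = np.array([[80.0, 200.0], [95.0, 260.0],        # jaw-line points
                        [160.0, 300.0], [225.0, 260.0], [240.0, 200.0]])
    surrounding = np.array([[0.0, 0.0], [320.0, 0.0],        # image corners used
                            [320.0, 400.0], [0.0, 400.0]])   # as surrounding points

    center = face_center(eyes, nose_tip)
    expanded = expand_contour(contour, center, expand_ratio=0.1)

    # Per-point offsets, in the same row order as all_points below.
    offsets = np.vstack([
        thinning_offsets(expanded, center, strength=0.05),   # contour moves inward
        np.zeros_like(eyes), np.zeros((1, 2)),                # organ points fixed
        np.zeros_like(surrounding),                           # border points fixed
    ])

    # Triangulate over all control points; each subdivision triangle would then
    # be warped by interpolating the offsets of its three vertices.
    all_points = np.vstack([expanded, eyes, nose_tip[None, :], surrounding])
    tri = Delaunay(all_points)

    print("face center:", center)
    print("first expanded contour point:", expanded[0], "offset:", offsets[0])
    print("number of subdivision triangles:", len(tri.simplices))
```

In the GPU branch of claim 1, the subdivision triangles and per-vertex offsets would typically be handed to the renderer as a mesh so the warp is interpolated across each triangle; the alternative branch performs the equivalent per-pixel offset and resampling directly, which is what the fifth and sixth determining submodules of claim 3 describe.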
CN201710806085.6A 2017-09-08 2017-09-08 Picture processing method and device Active CN107680033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710806085.6A CN107680033B (en) 2017-09-08 2017-09-08 Picture processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710806085.6A CN107680033B (en) 2017-09-08 2017-09-08 Picture processing method and device

Publications (2)

Publication Number Publication Date
CN107680033A (en) 2018-02-09
CN107680033B (en) 2021-02-19

Family

ID=61136331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710806085.6A Active CN107680033B (en) 2017-09-08 2017-09-08 Picture processing method and device

Country Status (1)

Country Link
CN (1) CN107680033B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470322B (en) * 2018-03-09 2022-03-18 北京小米移动软件有限公司 Method and device for processing face image and readable storage medium
CN108776957A (en) * 2018-05-25 2018-11-09 北京小米移动软件有限公司 Face image processing process and device
CN108921856B (en) * 2018-06-14 2022-02-08 北京微播视界科技有限公司 Image cropping method and device, electronic equipment and computer readable storage medium
CN109063560B (en) 2018-06-28 2022-04-05 北京微播视界科技有限公司 Image processing method, image processing device, computer-readable storage medium and terminal
CN109254775A (en) * 2018-08-30 2019-01-22 广州酷狗计算机科技有限公司 Image processing method, terminal and storage medium based on face
CN110070555A (en) * 2018-10-19 2019-07-30 北京微播视界科技有限公司 Image processing method, device, hardware device
CN109302628B (en) * 2018-10-24 2021-03-23 广州虎牙科技有限公司 Live broadcast-based face processing method, device, equipment and storage medium
CN109934766B (en) 2019-03-06 2021-11-30 北京市商汤科技开发有限公司 Image processing method and device
CN110097622B (en) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN110049351B (en) * 2019-05-23 2022-01-25 北京百度网讯科技有限公司 Method and device for deforming human face in video stream, electronic equipment and computer readable medium
CN111652794B (en) * 2019-07-05 2024-03-05 广州虎牙科技有限公司 Face adjusting and live broadcasting method and device, electronic equipment and storage medium
CN111915479B (en) * 2020-07-15 2024-04-26 抖音视界有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112767288B (en) * 2021-03-19 2023-05-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025831A (en) * 2006-02-24 2007-08-29 山东理工大学 Rapid precise constructing and shaping method for complex curved face product
CN101034482A (en) * 2006-03-07 2007-09-12 山东理工大学 Method for automatically generating complex components three-dimensional self-adapting finite element grid
CN102509338A (en) * 2011-09-20 2012-06-20 北京航空航天大学 Contour and skeleton diagram-based video scene behavior generation method
CN102819865A (en) * 2012-08-09 2012-12-12 成都理工大学 Modeling method for magnetotelluric three-dimensional geologic structure model
CN102999929A (en) * 2012-11-08 2013-03-27 大连理工大学 Triangular gridding based human image face-lift processing method
CN103337085A (en) * 2013-06-17 2013-10-02 大连理工大学 Efficient portrait face distortion method
CN103761536A (en) * 2014-01-28 2014-04-30 五邑大学 Human face beautifying method based on non-supervision optimal beauty features and depth evaluation model
CN106846241A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method of image co-registration, device and equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2389289B (en) * 2002-04-30 2005-09-14 Canon Kk Method and apparatus for generating models of individuals
KR100603602B1 (en) * 2004-12-13 2006-07-24 한국전자통신연구원 Method of surface reconstruction from unorganized sparse 3D points
US8208717B2 (en) * 2009-02-25 2012-06-26 Seiko Epson Corporation Combining subcomponent models for object image modeling

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025831A (en) * 2006-02-24 2007-08-29 山东理工大学 Rapid precise constructing and shaping method for complex curved face product
CN101034482A (en) * 2006-03-07 2007-09-12 山东理工大学 Method for automatically generating complex components three-dimensional self-adapting finite element grid
CN102509338A (en) * 2011-09-20 2012-06-20 北京航空航天大学 Contour and skeleton diagram-based video scene behavior generation method
CN102819865A (en) * 2012-08-09 2012-12-12 成都理工大学 Modeling method for magnetotelluric three-dimensional geologic structure model
CN102999929A (en) * 2012-11-08 2013-03-27 大连理工大学 Triangular gridding based human image face-lift processing method
CN103337085A (en) * 2013-06-17 2013-10-02 大连理工大学 Efficient portrait face distortion method
CN103761536A (en) * 2014-01-28 2014-04-30 五邑大学 Human face beautifying method based on non-supervision optimal beauty features and depth evaluation model
CN106846241A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method of image co-registration, device and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robust Face Alignment for Illumination and Pose Invariant Face Recognition; Fatih Kahraman et al.; 2007 IEEE Conference on Computer Vision and Pattern Recognition; 2007-07-16; pp. 1-7 *
Facial feature localization with adaptive region enhancement under complex illumination; Lin Kuicheng et al.; Chinese Journal of Scientific Instrument; 2014-02-28; Vol. 35, No. 2; pp. 292-298 *

Also Published As

Publication number Publication date
CN107680033A (en) 2018-02-09

Similar Documents

Publication Publication Date Title
CN107680033B (en) Picture processing method and device
CN107818543B (en) Image processing method and device
CN107958439B (en) Image processing method and device
CN107330868B (en) Picture processing method and device
CN107977934B (en) Image processing method and device
CN107657590B (en) Picture processing method and device and storage medium
US10032076B2 (en) Method and device for displaying image
CN107341777B (en) Picture processing method and device
KR101694643B1 (en) Method, apparatus, device, program, and recording medium for image segmentation
CN108470322B (en) Method and device for processing face image and readable storage medium
CN109087238B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN107464253B (en) Eyebrow positioning method and device
EP2927787B1 (en) Method and device for displaying picture
US11734804B2 (en) Face image processing method and apparatus, electronic device, and storage medium
CN107403144B (en) Mouth positioning method and device
CN109325908B (en) Image processing method and device, electronic equipment and storage medium
CN111243011A (en) Key point detection method and device, electronic equipment and storage medium
CN112767288A (en) Image processing method and device, electronic equipment and storage medium
CN111078170B (en) Display control method, display control device, and computer-readable storage medium
CN110728621B (en) Face changing method and device of face image, electronic equipment and storage medium
CN111290663A (en) Curved screen display method and device, terminal and storage medium
CN107239758B (en) Method and device for positioning key points of human face
US9665925B2 (en) Method and terminal device for retargeting images
CN107563957B (en) Eye image processing method and device
CN110502993B (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant