WO2020001014A1 - Image beautification method, device, and electronic apparatus - Google Patents

Image beautification method, device, and electronic apparatus

Info

Publication number
WO2020001014A1
WO2020001014A1 (PCT/CN2019/073075)
Authority
WO
WIPO (PCT)
Prior art keywords
image
edge
target object
anchor point
present disclosure
Prior art date
Application number
PCT/CN2019/073075
Other languages
English (en)
French (fr)
Inventor
邓涵
刘志超
赖锦锋
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Publication of WO2020001014A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a method and device for image beautification.
  • Applications downloaded from the network (Application, abbreviated: APP) can provide photographing effects with additional functions, for example low-light detection, beauty camera, and super-pixel features.
  • The beauty functions of electronic devices usually include effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, which can apply the same degree of beautification to all faces identified in an image.
  • embodiments of the present disclosure provide a method, an apparatus and an electronic device for beautifying an image, which at least partially solve the problems in the prior art.
  • an embodiment of the present disclosure provides a method for beautifying an image, including:
  • a second image is attached to the target object based on the anchor point.
  • the method further includes:
  • the calculating an anchor point based on the first edge includes:
  • the attaching the second image to the target object based on the anchor point includes:
  • the second image is attached to the target object based on the key point and the anchor point on the second image.
  • the key point on the second image is a preset key point.
  • the method further includes:
  • before acquiring the first edge on the target object, the method includes:
  • the obtaining the target object of the first image includes:
  • the foreground image and the background image of the first image are separated; a target object is obtained in the foreground image.
  • the second image includes one or more of an eyelash image, a double eyelid image, an eyeliner image, or an eye shadow image.
  • an embodiment of the present disclosure further provides a device for beautifying an image, including:
  • Acquisition module: used to acquire the first edge on the target object;
  • a calculation module configured to perform calculation based on the first edge to obtain an anchor point;
  • Attachment module: used for attaching a second image to the target object based on the anchor point.
  • the method further includes:
  • a second edge acquisition module for acquiring a second edge on the target object
  • Detection module: used to detect whether the anchor point obtained by the calculation module is located between the first edge and the second edge;
  • the judgment module is configured to judge the detection result, and if the detection result is no, perform calculation based on the first edge again to obtain a new anchor point.
  • the calculation module includes:
  • Key point acquisition module: used to acquire key points on the first edge;
  • Triangulation module: used to triangulate the target object based on the key points to obtain a triangular mesh;
  • Anchor point acquisition module: used to obtain an anchor point based on the triangular mesh.
  • the attachment module includes:
  • a second image acquisition module configured to acquire a second image
  • Second image keypoint extraction module: used to extract keypoints on the second image;
  • the second image fitting module is configured to fit the second image on the target object based on the key points and the anchor points on the second image.
  • the key points on the second image are preset key points.
  • the method further includes:
  • Error correction module: used to perform error correction on the anchor points obtained by the anchor point acquisition module.
  • the method further includes:
  • Target object acquisition module: used to acquire the target object of a first image.
  • the target object acquisition module includes: a separation module, used to separate the foreground image and the background image of the first image;
  • Object acquisition module: configured to acquire the target object in the foreground image.
  • the second image includes one or more of an eyelash image, a double eyelid image, an eyeliner image, or an eye shadow image.
  • an embodiment of the present disclosure further provides an electronic device.
  • the electronic device includes:
  • at least one processor; and
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the image beautification method according to any one of the first aspect.
  • an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium, where the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to execute the image beautification method according to any one of the first aspect.
  • in the image beautification method, device, electronic device, and non-transitory computer-readable storage medium provided in the embodiments of the present disclosure, the image beautification method obtains an anchor point through calculation, taking a first edge on a target object as the reference, thereby avoiding the image distortion problem that arises in the prior art when triangulation is performed with two references and one of them is distorted.
  • FIG. 1 is a flowchart of an image beautification method according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of detecting an anchor point based on a second edge according to an embodiment of the present disclosure
  • FIG. 3 is a flowchart of obtaining an anchor point by performing calculation based on a first edge according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of obtaining anchor points based on a triangular mesh according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart of a method for attaching a second image to a target object based on an anchor point according to an embodiment of the present disclosure
  • FIG. 6 is a principle block diagram of an image beautifying device according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic block diagram of a terminal according to an embodiment of the present disclosure.
  • triangulation is first explained by way of example.
  • the basic principle of triangulation is that, for a scattered point set on a planar domain, there exists one and only one triangulation such that the sum of the minimum interior angles of all triangles is the largest.
  • a triangulation method that meets this condition is called Delaunay triangulation. Because this method has a series of unique properties, it has been widely used in computer graphics processing, 3D modeling and other fields.
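The min-angle property described above can be illustrated with a small, self-contained Python sketch (an illustrative aside, not part of the disclosure; all function names and coordinates are hypothetical). For a convex quadrilateral, the Delaunay criterion chooses the diagonal whose two resulting triangles have the larger minimum interior angle:

```python
import math

def angles(a, b, c):
    """Interior angles of triangle abc, in radians."""
    def ang(p, q, r):  # angle at vertex q
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        cosv = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.acos(max(-1.0, min(1.0, cosv)))
    return [ang(b, a, c), ang(a, b, c), ang(a, c, b)]

def delaunay_diagonal(a, b, c, d):
    """For a convex quadrilateral a-b-c-d, pick the diagonal (ac or bd)
    whose two triangles have the larger minimum interior angle."""
    min_ac = min(min(angles(a, b, c)), min(angles(a, c, d)))
    min_bd = min(min(angles(a, b, d)), min(angles(b, c, d)))
    return "ac" if min_ac >= min_bd else "bd"

# A flat diamond: the short diagonal ac yields "fatter" triangles.
print(delaunay_diagonal((0, 0), (2, 1), (0, 2), (-2, 1)))  # -> ac
```

Repeating this edge-flip test over all adjacent triangle pairs is one classical way to arrive at the unique Delaunay triangulation.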
  • an embodiment of the present disclosure provides a method for image beautification.
  • the image beautification method includes the following steps:
  • S101 Obtain a first edge on a target object.
  • the target object to be beautified needs to be processed first to obtain the first edge.
  • the key point on the target object can be extracted, and the extracted key point can be used as the first edge.
  • first extract the key points of the eye part; the eye key points may be key points at the eye position obtained by detecting the facial feature key points, such as key points at the corners of the eye, one or more key points distributed on the upper eyelid, and one or more key points distributed on the lower eyelid.
  • Eye contours can be identified by extracting key points at the corners of the eyes, upper and lower eyelids.
  • key points at the corners of the eyes and upper eyelids are taken as the first edges.
  • S102 Perform calculation based on the first edge to obtain an anchor point.
  • the key points serving as the first edge can be translated by an equal distance, and the translated key points can correspondingly be used as anchor points.
  • It is also possible to select two adjacent key points among the key points serving as the first edge, construct a triangle using the line connecting the two adjacent key points as the base, and take the apex of the triangle as the anchor point.
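The equal-distance translation described above can be sketched as follows (a hypothetical illustration; the function name, coordinates, and direction vector are assumptions, not from the disclosure):

```python
import math

def translate_keypoints(points, distance, direction=(0.0, -1.0)):
    """Translate each first-edge key point by an equal distance along a unit
    direction (here: 'up' in image coordinates, where y grows downward)."""
    norm = math.hypot(*direction)
    ux, uy = direction[0] / norm, direction[1] / norm
    return [(x + distance * ux, y + distance * uy) for x, y in points]

# Hypothetical upper-eyelid key points (pixel coordinates).
edge = [(10.0, 50.0), (20.0, 45.0), (30.0, 44.0), (40.0, 45.0), (50.0, 50.0)]
anchors = translate_keypoints(edge, 8.0)
print(anchors[0])  # -> (10.0, 42.0)
```

Each translated point then serves as one candidate anchor point for the corresponding edge key point.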
  • the beautification of an image in the present disclosure is mainly affixing a sticker to a target object in the image.
  • the second image in this step is the sticker image.
  • sticker images such as eyelashes, double eyelids, eyeliners, or eyeshadows are applied to the eye parts of a face image
  • the user can apply makeup online to the eye parts of the face image through an application installed on an electronic device (such as a smartphone or tablet);
  • the application can also be used to automatically apply makeup to the eye part of the face image, which is not limited in this disclosure.
  • the application first obtains the face image to be processed, and then detects the face image. After detecting the face area, the key points are extracted from the face to obtain the key points on the face image, and then the key points of the eye parts are selected.
  • all keypoints on the face image can be extracted, including keypoints on multiple parts such as eyebrows, nose, mouth, eyes, and contours of the face. Then, only the key points of the eye part can be selected, or only the key points of the predetermined position of the eye can be extracted.
  • Taking the key points of one eye as an example: extract one key point at each corner of the eye; extract a key point at the highest point of the upper eyelid, together with two key points on its left and right sides; and extract a key point at the lowest point of the lower eyelid, together with two key points on its left and right sides, giving a total of 8 key points.
  • the extraction of eight key points in the present disclosure is only an exemplary description, and in actual applications, the number of key points that need to be extracted may be determined according to requirements.
  • a total of 5 key points, namely the two eye-corner key points and the three upper-eyelid key points, can be used as the first edge.
  • the positioning points can be obtained according to the principle of triangulation and the interpolation of the eye makeup effect image selected by the user.
  • the position of the anchor point can be selected based on the position of the key point as the first edge.
  • the anchor point can be selected around the contour of the eye, for example, on the upper eyelid, lower eyelid, and lateral extension of the corner of the eye.
  • the anchor points and the key points together form the first triangulation mesh.
  • the first triangulation mesh includes a plurality of triangles, and a vertex of each triangle is an eye key point or an anchor point.
  • since the anchor point is located on the upper eyelid, the lower eyelid, or the lateral extension of the eye corner, and is calculated based on the key points serving as the first edge, its position mainly depends on the shape of the eye and on those key points.
  • the shape of other parts of the face will not affect the position of the anchor point; for example, distortion caused by severely raised eyebrows in the face image will not change the anchor point's position. Therefore, the triangle shapes in the first triangulation mesh are relatively fixed, and the eye makeup effect is attached based on the first triangulation mesh.
  • the positioning points are calculated around the eye alone, and the triangulation mesh is formed from the eye key points and positioning points of the face.
  • the standard eye makeup effect image is transformed onto the predetermined position of the eye on the face, which solves the problem of large differences in triangulation-mesh shape across different people and different eye conditions, so that the expected eye makeup effect image can be well attached to the eyes in all cases, thereby improving the user experience.
  • the method further includes:
  • Key points are extracted on the target object as a second edge.
  • key points extracted on the eyebrows may be selected as the second edge.
  • if it is detected that the anchor point is not located between the eye and the eyebrow, the method returns to step S102 to recalculate the anchor point until the anchor point lies between the eye and the eyebrow.
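The detect-and-recompute loop described in this embodiment might be sketched as follows, using only the vertical (y) coordinate in image space; the halving of the translation step is an assumed recomputation strategy, not specified by the disclosure:

```python
def between_edges(anchor, first_edge_y, second_edge_y):
    """True if the anchor's y coordinate lies strictly between the first
    edge (eye) and the second edge (eyebrow) in image coordinates."""
    lo, hi = sorted((first_edge_y, second_edge_y))
    return lo < anchor[1] < hi

def compute_anchor(edge_y, step):
    # Stand-in for step S102: translate the first edge upward by `step` pixels.
    return (0.0, edge_y - step)

eye_y, brow_y = 50.0, 30.0
step = 25.0                       # deliberately overshoots past the eyebrow
anchor = compute_anchor(eye_y, step)
while not between_edges(anchor, eye_y, brow_y):
    step /= 2.0                   # recompute until the anchor is in range
    anchor = compute_anchor(eye_y, step)
print(anchor)  # -> (0.0, 37.5)
```

The loop terminates once the recalculated anchor falls inside the eye–eyebrow band, matching the "if the detection result is no, recompute" behaviour described above.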
  • obtaining a positioning point by performing calculation based on the first edge in step S102 includes:
  • the key points of the first edge are obtained. As in the above embodiment, 5 key points of the eye part are selected as the key points of the first edge.
  • S302 Triangulate the target object based on the key points to obtain a triangular mesh.
  • adjacent key points can be connected, and a straight line obtained by connecting adjacent key points is used as a base to construct a triangle, thereby obtaining a triangular mesh.
  • the vertices of the triangle are the anchor points.
  • points a, b, c, d, and e are the five key points on the first edge.
  • Lines ab, bc, cd, and de are used as base edges to construct triangles abf, bcg, cdh, and dei.
  • the vertices f, g, h, and i of triangle abf, triangle bcg, triangle cdh, and triangle dei are the anchor points.
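The construction of anchor points f, g, h, and i as triangle apexes over the base segments ab, bc, cd, and de can be sketched like this (the isosceles construction and the fixed height are illustrative assumptions, not the disclosure's exact method):

```python
import math

def apex(p, q, height):
    """Apex of an isosceles triangle built on base segment pq: the point at
    `height` along the base's normal from the midpoint of the base."""
    mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length   # unit normal to the base
    return (mx + height * nx, my + height * ny)

# Key points a..e along the first edge; anchors f, g, h, i over ab, bc, cd, de.
a, b, c, d, e = (0, 0), (10, 0), (20, 0), (30, 0), (40, 0)
f, g, h, i = apex(a, b, 5.0), apex(b, c, 5.0), apex(c, d, 5.0), apex(d, e, 5.0)
print(f)  # -> (5.0, 5.0)
```

Connecting each base with its apex reproduces the triangular mesh whose vertices are either edge key points or anchor points.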
  • step S103 of attaching the second image to the target object based on the anchor point includes:
  • the second image is an image that needs to be pasted onto the first image.
  • images such as eyelashes, double eyelids, eyeliners, and eye shadows are pasted to the eye parts of the face. Images such as eyelashes, double eyelids, eyeliners, and eye shadows are the second images.
  • the pixel values of the key points on the second image obtained in step S502 are written to the anchor points of the first image through an image processing algorithm, and the values of the other pixels can be written to the corresponding positions of the first image according to their positional relationship with the key points.
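Writing the second image onto the first image according to the key-point/anchor-point correspondence is typically done with a per-triangle affine transform. The following sketch (illustrative; the coordinates are invented) solves for the affine map that carries three sticker key points onto three anchor points, which can then be applied to every pixel of that triangle:

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Affine map taking the 3 key points of a sticker triangle onto the
    3 matching anchor points on the face image; returns a 3x2 matrix M so
    that [x, y, 1] @ M gives the destination point."""
    A = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])
    return np.linalg.lstsq(A, np.asarray(dst_tri, float), rcond=None)[0]

src = [(0, 0), (10, 0), (0, 10)]             # sticker key points
dst = [(100, 200), (120, 200), (100, 220)]   # anchor points on the face
M = affine_from_triangles(src, dst)

# Every other sticker pixel maps through the same transform.
p = np.array([5.0, 5.0, 1.0]) @ M
print(np.round(p))  # -> [110. 210.]
```

Because an affine map is fully determined by three point pairs, the interior pixels of each triangle follow the key points automatically, which is the positional relationship the paragraph above describes.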
  • the key point on the second image is a preset key point.
  • the number and positions of key points on the second image are set in advance.
  • since the key points on the second image are preset, in order to maintain correspondence with them, the anchor points on the first image need to be calculated based on those preset key points.
  • when constructing a triangle in step S302, it is necessary to obtain the angles of the corresponding triangle in the second image and construct the triangle with the same angles, so as to guarantee the correspondence between the calculated anchor points and the key points on the second image.
  • the degrees of ∠fab and ∠fba are determined according to the angles of the triangle at the corresponding position in the second image, thereby ensuring that triangle abf is similar to the triangle at the corresponding position in the second image.
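Constructing triangle abf with the same base angles as the corresponding sticker triangle can be sketched as follows (the function name and example angles are hypothetical; the apex is found where the rays from a and b at the prescribed angles intersect):

```python
import math

def apex_from_angles(a, b, alpha, beta):
    """Apex f of triangle abf given base ab and the base angles
    ∠fab = alpha and ∠fba = beta, so that the constructed triangle is
    similar to the matching triangle in the second image."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    ta, tb = math.tan(alpha), math.tan(beta)
    x = length * tb / (ta + tb)         # foot of the apex, measured from a
    y = x * ta                          # height of the apex above the base
    ux, uy = dx / length, dy / length   # unit vector along the base
    nx, ny = -uy, ux                    # unit normal to the base
    return (a[0] + x * ux + y * nx, a[1] + x * uy + y * ny)

f = apex_from_angles((0, 0), (10, 0), math.radians(45), math.radians(45))
print([round(v, 6) for v in f])  # -> [5.0, 5.0]
```

Keeping the two base angles equal to those of the sticker triangle fixes the third angle as well, so the two triangles are similar regardless of the base length.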
  • the method further includes: performing error correction on the positioning point.
  • a specific application scenario such as pasting eyelashes on the eye area
  • before step S101 of obtaining the first edge on the target object, the method further includes obtaining the target object of the first image.
  • when the application receives the command to beautify the image, it first needs to obtain the target object in the image.
  • the application can save a variety of eye makeup effect images in advance for users to choose.
  • the eye makeup effect images are designed on the application's standard template, and users can add eye makeup effects to face images through the application. After the user selects an eye makeup effect image provided by the application, the application first obtains the picture or video frame to which the user wants to add the eye makeup effect. The user can upload an image including a human face through the interface provided by the application and process the face offline, or the user's video frames can be obtained in real time through the camera and processed online.
  • face image detection needs to be performed first; it determines whether a human face exists in the picture or video frame to be detected, and if so, returns information such as the size and position of the face.
  • detection methods for face images such as skin color detection, motion detection, edge detection, etc.
  • detection models which are not limited in this disclosure.
  • a face image is generated for each face.
  • the key points of the face image can be obtained by extracting the key points of the face, thereby obtaining the target object in the image.
  • the following steps are further included:
  • the foreground and background of the first image are separated; a target object is obtained in the foreground.
  • the background and foreground of the image are separated, for example by the background subtraction method, the frame difference method, or the optical flow method. After the foreground and background are separated, most of the interference information in the foreground image is eliminated, and obtaining a target object in the foreground image is much easier than obtaining it in the entire image.
  • the person or person's head in the image is first separated from the background of the image to obtain a foreground image of the person or person's head. Face information is extracted from a foreground image containing a person or a person's head.
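As a minimal illustration of the frame difference method mentioned above (array sizes and the threshold are invented for the example; real foreground separation would add morphological cleanup):

```python
import numpy as np

def frame_difference_mask(frame, background, threshold=25):
    """Frame/background difference: pixels whose absolute difference from
    the background frame exceeds the threshold are marked as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200            # a bright "object" enters the scene
mask = frame_difference_mask(frame, background)
print(int(mask.sum()))  # -> 4
```

The resulting boolean mask selects the foreground region (here the person or person's head) in which the target object is then searched for.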
  • the present disclosure also provides a device for image beautification, including:
  • An acquisition module 602 configured to acquire a first edge on a target object
  • a calculation module 603, configured to perform calculation based on the first edge to obtain an anchor point
  • an attachment module 608 is configured to attach a second image to the target object based on the anchor point.
  • the image beautification device further includes: a second edge acquisition module 604: configured to acquire a second edge on the target object;
  • a detection module 605, configured to detect whether the positioning point obtained by the calculation module is located between the first edge and the second edge;
  • the judgment module 606 is configured to judge the detection result, and if the detection result is no, perform calculation based on the first edge again to obtain a new anchor point.
  • the calculation module 603 includes:
  • Key point acquisition module 6031: used to acquire key points on the first edge;
  • Triangulation module 6032: configured to triangulate the target object based on the key points to obtain a triangular mesh;
  • Anchor point acquisition module 6033: configured to obtain an anchor point based on the triangular mesh.
  • the attachment module 608 includes:
  • a second image acquisition module 6081 configured to acquire a second image;
  • Second image keypoint extraction module 6082: used to extract keypoints on the second image;
  • the second image fitting module 6083 is configured to fit the second image on the target object based on the key point and the anchor point on the second image.
  • the key point on the second image is a preset key point.
  • the method further includes:
  • An error correction module 607 is configured to perform error correction on the anchor points obtained by the anchor point acquisition module.
  • the method further includes:
  • Target object acquisition module 601: used to acquire the target object of a first image.
  • the target object acquisition module 601 includes: a separation module 6011, used to separate the foreground image and the background image of the first image;
  • An object acquisition module 6012 is configured to acquire a target object in the foreground image.
  • the second image includes one or more of an eyelash image, a double eyelid image, an eyeliner image, or an eye shadow image.
  • The overall schematic diagram of the image beautifying device is shown in FIG. 6.
  • FIG. 7 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure.
  • the electronic device 70 includes a memory 71 and a processor 72.
  • the memory 71 is configured to store non-transitory computer-readable instructions.
  • the memory 71 may include one or more computer program products, and the computer program product may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 72 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 70 to perform desired functions.
  • the processor 72 is configured to run the computer-readable instructions stored in the memory 71, so that the electronic device 70 performs all or part of the steps of image beautification of the embodiments of the present disclosure.
  • this embodiment may also include well-known structures such as a communication bus and an interface; these well-known structures should also be included within the protection scope of the present disclosure.
  • FIG. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 80 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 81 stored thereon.
  • the non-transitory computer-readable instructions 81 are executed by a processor, all or part of the image beautification steps of the aforementioned embodiments of the present disclosure are performed.
  • the computer-readable storage medium 80 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), non-volatile rewritable memory media (for example, memory card), and media with built-in ROM (for example, ROM cartridge).
  • FIG. 9 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 9, the terminal 90 includes the foregoing embodiment of an image beautifying device.
  • the terminal device may be implemented in various forms; the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
  • the terminal 90 may further include other components.
  • the terminal 90 may include a power supply unit 91, a wireless communication unit 92, an A / V (audio / video) input unit 93, a user input unit 94, a sensing unit 95, an interface unit 96, a controller 97, The output unit 98 and the storage unit 99 and so on.
  • FIG. 9 shows a terminal with various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 92 allows radio communication between the terminal 90 and a wireless communication system or network.
  • the A / V input unit 93 is used to receive audio or video signals.
  • the user input unit 94 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • the sensing unit 95 detects the current state of the terminal 90, the position of the terminal 90, the presence or absence of a user's touch input to the terminal 90, the orientation of the terminal 90, and the acceleration or deceleration movement and direction of the terminal 90, and generates commands or signals for controlling the operation of the terminal 90.
  • the interface unit 96 functions as an interface through which at least one external device can be connected to the terminal 90.
  • the output unit 98 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • the storage unit 99 may store software programs and the like for processing and control operations performed by the controller 97, or may temporarily store data that has been output or is to be output.
  • the storage unit 99 may include at least one type of storage medium.
  • the terminal 90 may cooperate with a network storage device that performs a storage function of the storage unit 99 through a network connection.
  • the controller 97 generally controls the overall operation of the terminal device.
  • the controller 97 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 97 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images.
  • the power supply unit 91 receives external power or internal power under the control of the controller 97 and provides appropriate power required to operate each element and component.
  • image beautification proposed by the present disclosure may be implemented using computer-readable media, such as computer software, hardware, or any combination thereof.
  • various embodiments of the image beautification proposed by the present disclosure can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments can be implemented in the controller 97.
  • the various embodiments of image beautification proposed by the present disclosure can be implemented with a separate software module that allows at least one function or operation to be performed.
  • the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the storage unit 99 and executed by the controller 97.
  • relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply that any such relationship exists between these entities or operations.
  • the block diagrams of the devices, apparatuses, equipment, and systems involved in this disclosure are only illustrative examples and are not intended to require or imply that they must be connected, arranged, and configured in the manner shown in the block diagrams; as those skilled in the art will realize, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "comprising," "including," "having," and the like are open words meaning "including but not limited to" and can be used interchangeably with it.
  • an "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.
  • These decompositions and / or recombinations should be considered as equivalent solutions of the present disclosure.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method, device, and electronic apparatus for image beautification, relating to the field of image beautification. The image beautification method includes: acquiring a first edge on a target object (S101); performing calculation based on the first edge to obtain an anchor point (S102); and attaching a second image to the target object based on the anchor point (S103). Taking the first edge on the target object as the reference, the anchor point is obtained through calculation, thereby avoiding the image distortion problem caused when triangulation is used in the prior art.

Description

Image beautification method, device, and electronic apparatus
This application claims priority to the Chinese patent application No. 201810690342.9, filed with the Chinese Patent Office on June 28, 2018 and entitled "图像美化方法、装置及电子设备" ("Image beautification method, device, and electronic apparatus"), the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of image processing, and in particular to a method and device for image beautification.
Background
Nowadays, when taking photographs with an electronic device, the built-in camera software can be used to achieve photographing effects, or an application (Application, abbreviated: APP) can be downloaded from the network to achieve photographing effects with additional functions, for example APPs providing low-light detection, beauty camera, and super-pixel functions. The beauty functions of electronic devices usually include effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces identified in an image.
Summary
In the prior art, when an image is beautified, it is first triangulated, and two reference targets often need to be selected for the triangulation. For example, when an eyeliner image is used to beautify a face, the upper edge of the eye and the eyebrow are taken as reference targets, and the region between the eye and the eyebrow is triangulated. However, the shape of each person's eyebrows is different, and some people's eyebrows protrude considerably in the middle; in this case, if the upper edge of the eye and the eyebrow are still taken as reference targets, the eyeliner image will be distorted. That is, when two reference targets are selected to beautify an image in the prior art, the beautified image suffers from distortion.
有鉴于此,本公开实施例提供了一种图像美化的方法、装置及电子设备,至少部分的解决现有技术中存在的问题。
In a first aspect, an embodiment of the present disclosure provides an image beautification method, including:
acquiring a first edge on a target object;
performing a computation based on the first edge to obtain anchor points;
attaching a second image to the target object based on the anchor points.
As a specific implementation of the embodiments of the present disclosure, after the step of performing a computation based on the first edge to obtain anchor points, the method further includes:
acquiring a second edge on the target object;
detecting whether the anchor points are located between the first edge and the second edge;
if the detection result is negative, performing the computation based on the first edge again to obtain new anchor points.
As a specific implementation of the embodiments of the present disclosure, the performing a computation based on the first edge to obtain anchor points includes:
acquiring key points on the first edge;
triangulating the target object based on the key points to obtain a triangular mesh; and obtaining the anchor points based on the triangular mesh.
As a specific implementation of the embodiments of the present disclosure, the attaching a second image to the target object based on the anchor points includes:
acquiring the second image;
extracting key points on the second image;
attaching the second image to the target object based on the key points on the second image and the anchor points.
As a specific implementation of the embodiments of the present disclosure, the key points on the second image are preset key points.
As a specific implementation of the embodiments of the present disclosure, after the anchor points are obtained based on the triangular mesh, the method further includes:
performing error correction on the anchor points.
As a specific implementation of the embodiments of the present disclosure, before the acquiring a first edge on the target object, the method includes:
acquiring a target object of a first image.
As a specific implementation of the embodiments of the present disclosure, the acquiring a target object of a first image includes:
separating the foreground and the background of the first image; and acquiring the target object in the foreground.
As a specific implementation of the embodiments of the present disclosure, the second image includes one or more of an eyelash image, a double-eyelid image, an eyeliner image, or an eyeshadow image.
In a second aspect, an embodiment of the present disclosure further provides an image beautification apparatus, including:
an acquisition module, configured to acquire a first edge on a target object;
a computation module, configured to perform a computation based on the first edge to obtain anchor points;
an attachment module, configured to attach a second image to the target object based on the anchor points.
As a specific implementation of the embodiments of the present disclosure, the apparatus further includes:
a second-edge acquisition module, configured to acquire a second edge on the target object;
a detection module, configured to detect whether the anchor points obtained by the computation module are located between the first edge and the second edge;
a judgment module, configured to judge the detection result and, if the detection result is negative, perform the computation based on the first edge again to obtain new anchor points.
As a specific implementation of the embodiments of the present disclosure, the computation module includes:
a key-point acquisition module, configured to acquire key points on the first edge;
a triangulation module, configured to triangulate the target object based on the key points to obtain a triangular mesh;
an anchor-point acquisition module, configured to obtain the anchor points based on the triangular mesh.
As a specific implementation of the embodiments of the present disclosure, the attachment module includes:
a second-image acquisition module, configured to acquire the second image;
a second-image key-point extraction module, configured to extract the key points on the second image;
a second-image attachment module, configured to attach the second image to the target object based on the key points on the second image and the anchor points.
As a specific implementation of the embodiments of the present disclosure, in the extracting of the key points on the second image, the key points on the second image are preset key points.
As a specific implementation of the embodiments of the present disclosure, the apparatus further includes:
an error-correction module, configured to perform error correction on the anchor points obtained by the anchor-point acquisition module.
As a specific implementation of the embodiments of the present disclosure, the apparatus further includes:
a target-object acquisition module, configured to acquire a target object of a first image.
As a specific implementation of the embodiments of the present disclosure, the target-object acquisition module includes: a separation module, configured to separate the foreground and the background of the first image;
an object acquisition module, configured to acquire the target object in the foreground.
As a specific implementation of the embodiments of the present disclosure, the second image includes one or more of an eyelash image, a double-eyelid image, an eyeliner image, or an eyeshadow image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the image beautification method of any one of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the image beautification method of any one of the first aspect.
In the image beautification method, apparatus, electronic device, and non-transitory computer-readable storage medium provided by the embodiments of the present disclosure, the anchor points are obtained by computation with the first edge on the target object as the reference, thereby avoiding the image distortion that arises in the prior art when triangulation is performed with two references and one of them is deformed.
The foregoing description is only an overview of the technical solutions of the present disclosure. In order to understand the technical means of the present disclosure more clearly so that they can be implemented in accordance with the contents of the specification, and to make the above and other objects, features, and advantages of the present disclosure more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an image beautification method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of detecting anchor points based on a second edge provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of obtaining anchor points by computation based on a first edge provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of obtaining anchor points based on a triangular mesh provided by an embodiment of the present disclosure;
Fig. 5 is a flowchart of attaching a second image to a target object based on anchor points provided by an embodiment of the present disclosure;
Fig. 6 is a schematic block diagram of an image beautification apparatus provided by an embodiment of the present disclosure;
Fig. 7 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a computer-readable storage medium provided by an embodiment of the present disclosure;
Fig. 9 is a schematic block diagram of a terminal provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
It should be clear that the implementations of the present disclosure are described below by way of specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The present disclosure can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present disclosure.
It should be noted that, where there is no conflict, the following embodiments and the features in the embodiments may be combined with each other.
Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein can be implemented independently of any other aspect, and two or more of these aspects can be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structures and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic way; the drawings show only the components related to the present disclosure rather than being drawn according to the number, shape, and size of the components in actual implementation. In actual implementation, the type, quantity, and proportion of each component can vary freely, and the component layout may also be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
For ease of understanding, triangulation is first explained by way of example. The basic principle of triangulation is that there necessarily exists one and only one triangulation of a scattered point set on a planar domain such that the sum of the minimum interior angles of all triangles is maximized. A triangulation satisfying this condition is called a Delaunay triangulation. Because this method has a series of unique properties, it has been widely used in fields such as computer graphics processing and 3D modeling.
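To make the Delaunay angle criterion concrete, here is a minimal Python/NumPy sketch (illustrative only, not part of the patent; the function names and toy quadrilateral are assumptions). For a single convex quadrilateral there are only two possible triangulations, one per diagonal, and the Delaunay choice is the diagonal whose two triangles have the larger smallest interior angle:

```python
import numpy as np

def angles_of(tri):
    """Return the three interior angles (radians) of a triangle given as a 3x2 array."""
    a, b, c = tri
    def ang(p, q, r):  # angle at vertex p between edges p->q and p->r
        v1, v2 = q - p, r - p
        cosv = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.arccos(np.clip(cosv, -1.0, 1.0))
    return np.array([ang(a, b, c), ang(b, a, c), ang(c, a, b)])

def delaunay_diagonal(quad):
    """For a convex quadrilateral (4 points, in order), choose the diagonal whose
    two triangles maximize the smallest interior angle (the Delaunay criterion)."""
    p0, p1, p2, p3 = [np.asarray(p, dtype=float) for p in quad]
    split_02 = [np.array([p0, p1, p2]), np.array([p0, p2, p3])]   # diagonal p0-p2
    split_13 = [np.array([p1, p2, p3]), np.array([p1, p3, p0])]   # diagonal p1-p3
    min02 = min(angles_of(t).min() for t in split_02)
    min13 = min(angles_of(t).min() for t in split_13)
    return (0, 2) if min02 >= min13 else (1, 3)

# A flat quadrilateral: the (0, 2) diagonal would create sliver triangles,
# so the max-min-angle rule prefers the short (1, 3) diagonal.
quad = [(0, 0), (10, 1), (20, 0), (10, -1)]
```

This is the "flip" test that Delaunay triangulation libraries apply to every pair of adjacent triangles.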
Referring to Fig. 1, an embodiment of the present disclosure provides an image beautification method, including the following steps:
S101: acquiring a first edge on a target object.
When beautifying an image, the target object to be beautified first needs to be processed to acquire the first edge. This may be done by extracting key points on the target object and taking the extracted key points as the first edge.
In a specific application scenario, for example when beautifying a face image, and specifically when pasting an eyeshadow sticker image onto the eye region of the face, the key points of the eye region are extracted first. The eye key points may be key points at the eye position obtained through facial-feature key-point detection, for example key points at the eye corners, one or more key points distributed on the upper eyelid, and one or more key points distributed on the lower eyelid. The extracted key points at the eye corners and on the upper and lower eyelids can delineate the eye contour. In a specific application, the key points at the eye corners and on the upper eyelid are taken as the first edge.
S102: performing a computation based on the first edge to obtain anchor points.
After the key points are extracted as the first edge, a computation is performed with these key points as the reference. For example, the key points serving as the first edge may be translated by an equal distance, and the translated key points then serve as the anchor points. Alternatively, pairs of adjacent key points may be selected from the key points serving as the first edge, and a triangle constructed with the line connecting two adjacent key points as its base; the apex of the triangle is then an anchor point.
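The "equal-distance translation" variant of S102 can be sketched as follows (Python/NumPy; `offset_keypoints` is a hypothetical helper name, and using the polyline's local normals as the translation direction is an illustrative assumption, not mandated by the patent):

```python
import numpy as np

def offset_keypoints(edge_pts, distance):
    """Translate each key point of the first edge by a fixed distance along the
    local normal of the edge polyline, yielding candidate anchor points."""
    pts = np.asarray(edge_pts, dtype=float)
    # Tangent at each point: central differences (one-sided at the endpoints).
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Rotate each tangent by 90 degrees to obtain the normal: (x, y) -> (-y, x).
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    return pts + distance * normals
```

For a horizontal upper-eyelid polyline this simply shifts every key point by the same vertical distance, which matches the equal-distance translation described above.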
S103: attaching a second image to the target object based on the anchor points.
In a specific application scenario, the image beautification in the present disclosure mainly pastes a sticker onto the target object in the image; the second image in this step is the sticker image. After the anchor points are obtained in step S102, the pixel values of the corresponding eyelash, double-eyelid, eyeliner, or eyeshadow sticker image are written into the corresponding positions of the face image with the anchor points as the reference.
In the application scenario of pasting sticker images such as eyelashes, double eyelids, eyeliner, or eyeshadow onto the eye region of a face image, when the user applies online or offline makeup to the eye region through an application installed on an electronic device (such as a smartphone or tablet), the user can select a favorite eye-makeup effect image from multiple preset standard templates and trigger the transformation between the eye-makeup effect image and the face image by dragging or pressing a corresponding button. Of course, the application may also apply the makeup to the eye region automatically, which is not limited by the present disclosure. The application first acquires the face image to be processed and then performs detection on it. After the face region is detected, key points are extracted from the face to obtain the key points on the face image, and the key points of the eye region are then selected.
In a specific application, when extracting key points from a face, all the key points on the face image may be extracted, including key points of the eyebrows, nose, mouth, eyes, outer face contour, and other regions, after which only the key points of the eye region are selected; alternatively, only the key points at predetermined eye positions may be extracted.
As for extracting the eye key points, taking one eye as an example, one key point is extracted at each of the two eye corners, one key point at the highest point of the upper eyelid together with two key points to its left and right, and one key point at the lowest point of the lower eyelid together with two key points to its left and right, for a total of 8 key points. Extracting 8 key points in the present disclosure is only an illustrative example; in practical applications, the number of key points to be extracted can be determined as needed. After the eye key points are extracted, the 5 key points consisting of the two eye-corner key points and the three upper-eyelid key points can be taken as the first edge.
After the eye key points are detected, the anchor points can be obtained by interpolation according to the principle of triangulation and the eye-makeup effect image selected by the user. The positions of the anchor points can be chosen based on the positions of the key points serving as the first edge; the anchor points can be placed around the eye contour, for example on the upper eyelid, the lower eyelid, and the lateral extension lines of the eye corners. The anchor points and the key points serving as the first edge together constitute a first triangulation mesh. The first triangulation mesh includes multiple triangles, and each triangle vertex is an eye key point or an anchor point. Since the anchor points lie on the upper eyelid, the lower eyelid, or the lateral extension lines of the eye corners, and are computed from the key points serving as the first edge, their positions and shape are mainly associated with those key points, and the shape of other parts of the face does not affect the positions of the anchor points. For example, the distortion caused by heavily protruding eyebrows in a face image will not change the positions of the anchor points, so the shapes of the triangles in the first triangulation mesh are relatively fixed. Therefore, when the eye-makeup effect image is transformed to the predetermined eye position according to the first triangulation mesh, no distortion like that in the prior art is produced, which greatly improves the user experience.
In this application, anchor points are computed around the eye with the key points serving as the first edge as the reference according to the eye key points of the face, and the standard eye-makeup effect image is transformed to the predetermined eye position according to the triangulation mesh formed by the eye key points and the anchor points. This solves the problem that the shape of the triangulation mesh differs greatly between different people and between different eye states, thereby achieving the technical effect that the expected eye-makeup effect image can be attached well to the eye for different people and in different eye states, and improving the user experience.
As a specific implementation of the embodiments of the present disclosure, after step S102 of performing a computation based on the first edge to obtain anchor points, the method further includes:
S201: acquiring a second edge on the target object.
Key points are extracted on the target object as the second edge.
In a specific application scenario, for example in a face image, the key points of the eyebrow may be extracted as the second key points.
S202: detecting whether the anchor points are located between the first edge and the second edge.
In a specific application scenario, when pasting eyeliner onto the eye region of a face image, it must be ensured that the eyeliner lies between the eye and the eyebrow and that its position is not higher than the eyebrow. Therefore, the key points at the eyebrow need to be extracted to constrain the position of the eyeliner.
S203: if the detection result is negative, performing the computation based on the first edge again to obtain new anchor points.
In a specific application scenario, if it is detected that the eyeliner is not located between the eye and the eyebrow, the process returns to step S102 to recompute the anchor points until they are located between the eye and the eyebrow.
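The S201-S203 loop — offset, check that the anchors stay between the two edges, and recompute with a smaller offset otherwise — might look like the following sketch (Python/NumPy, illustrative only: approximating each edge by its mean y-coordinate, moving anchors vertically, and halving the step on failure are assumptions of this sketch, with image y growing downward so the brow has a smaller y than the eye):

```python
import numpy as np

def anchors_between(anchors, first_edge_y, second_edge_y):
    """Check that every anchor lies strictly between the first edge (eye) and
    the second edge (brow), each edge approximated by its mean y-coordinate."""
    ys = np.asarray(anchors, dtype=float)[:, 1]
    lo, hi = sorted([float(np.mean(first_edge_y)), float(np.mean(second_edge_y))])
    return bool(np.all((ys > lo) & (ys < hi)))

def compute_anchors(edge_pts, first_edge_y, second_edge_y, step=2.0, max_iter=20):
    """Offset the first edge upward by `step`; if the result escapes the band
    between the two edges, shrink the step and recompute (S201-S203)."""
    pts = np.asarray(edge_pts, dtype=float)
    for _ in range(max_iter):
        anchors = pts - np.array([0.0, step])   # move up: y decreases
        if anchors_between(anchors, first_edge_y, second_edge_y):
            return anchors
        step *= 0.5                             # too far: halve the offset, retry
    raise ValueError("no valid anchor placement found")
```

Here an initial offset that overshoots the eyebrow is automatically reduced until the anchors fall inside the eye-to-brow band.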
As a specific implementation of the embodiments of the present disclosure, step S102 of performing a computation based on the first edge to obtain anchor points includes:
S301: acquiring key points on the first edge.
After the first edge is acquired, its key points are acquired; as in the embodiment above, the 5 key points of the eye region are selected as the key points of the first edge.
S302: triangulating the target object based on the key points to obtain a triangular mesh.
That is, after the key points of the first edge are acquired, adjacent key points can be connected, and triangles can be constructed with the straight lines connecting adjacent key points as bases, thereby obtaining a triangular mesh.
S303: obtaining anchor points based on the triangular mesh.
After the triangles are constructed in step S302, the apexes of the triangles are the anchor points.
As shown in Fig. 4, points a, b, c, d, and e are the 5 key points on the first edge. Triangles abf, bcg, cdh, and dei are constructed with segments ab, bc, cd, and de as bases respectively; the apexes f, g, h, and i of triangles abf, bcg, cdh, and dei are then the anchor points.
As a specific implementation of the embodiments of the present disclosure, step S103 of attaching a second image to the target object based on the anchor points includes:
S501: acquiring the second image.
The second image is the image to be pasted onto the first image. In a specific application scenario, images such as eyelashes, double eyelids, eyeliner, and eyeshadow are pasted onto the eye region of the face; such images are the second image.
S502: extracting feature data of the key points on the second image.
The key points on the second image corresponding to the anchor points are extracted, and the pixel values at those key points are extracted.
S503: attaching the feature data to the anchor points.
The pixel values at the key points of the second image obtained in step S502 are written onto the anchor points of the first image through an image processing algorithm, and the values of the other pixels are written into the corresponding positions of the first image according to their positional relationship with the key points.
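Writing the sticker's pixel values into the first image, as S503 describes, amounts in the simplest case to an alpha blend at the anchor-aligned position. The patent does not prescribe a specific blending formula; the following Python/NumPy sketch is an illustrative assumption for an axis-aligned paste:

```python
import numpy as np

def paste_sticker(base, sticker, alpha, top_left):
    """Write the sticker's pixel values into the base image at the position
    implied by the anchor points, blending with the sticker's alpha mask.
    base: HxWx3 image, sticker: hxwx3 image, alpha: hxw mask in [0, 1]."""
    out = base.astype(float).copy()
    h, w = sticker.shape[:2]
    r, c = top_left
    region = out[r:r + h, c:c + w]
    # Per-pixel blend: alpha=1 takes the sticker value, alpha=0 keeps the base.
    out[r:r + h, c:c + w] = alpha[..., None] * sticker + (1.0 - alpha[..., None]) * region
    return out
```

In the full method, each triangle of the sticker would first be warped so its key points land on the computed anchor points; this sketch shows only the final pixel-writing step.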
As a specific implementation of the embodiments of the present disclosure, the key points on the second image are preset key points.
That is, the number and positions of the key points on the second image are set in advance. Given that the key points on the second image are preset, in order for the anchor points on the first image to remain in correspondence with them, the anchor points need to be computed according to the key points on the second image.
For example, when constructing the triangles in step S302, the angles of the corresponding triangles in the second image need to be obtained, and the triangles are constructed according to those angles, thereby ensuring the correspondence between the computed anchor points and the key points on the second image.
As in Fig. 4, when constructing triangle abf, the degrees of ∠fab and ∠fba are determined according to the angles of the triangle at the corresponding position in the second image, thereby ensuring that triangle abf is similar to the triangle at the corresponding position in the second image.
As a specific implementation of the embodiments of the present disclosure, after the anchor points are obtained based on the triangular mesh, the method further includes: performing error correction on the anchor points.
In a specific application scenario, for example when pasting eyelashes onto the eye region, the positions of eyelashes relative to the eye are collected from multiple existing images, and a standard eyelash position line is obtained by computation. The obtained anchor points are then corrected against this position line: if the anchor points lie evenly on or on both sides of the standard position line, no correction is needed; if they lie on a single side of the standard position line, they need to be corrected by moving them so that they lie evenly on or on both sides of the standard position line.
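The error-correction step can be sketched as follows (Python/NumPy; representing the standard position line by a point and a direction, and projecting one-sided anchors orthogonally onto the line, are illustrative choices of this sketch, not details fixed by the patent):

```python
import numpy as np

def correct_anchors(anchors, line_point, line_dir):
    """Error-correct anchors against a standard position line: if all anchors
    lie on one side of the line, project them onto it; if they already
    straddle the line, keep them unchanged."""
    pts = np.asarray(anchors, dtype=float)
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    n = np.array([-d[1], d[0]])                       # unit normal of the line
    signed = (pts - np.asarray(line_point, dtype=float)) @ n  # signed distances
    if np.all(signed > 0) or np.all(signed < 0):      # one-sided: needs correction
        pts = pts - np.outer(signed, n)               # orthogonal projection
    return pts
```

Anchors that already sit on both sides of the standard line are left alone, matching the "evenly distributed" condition above.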
As a specific implementation of the embodiments of the present disclosure, before step S101 of acquiring a first edge on the target object, the method further includes acquiring a target object of a first image.
After the application receives a command to beautify an image, the target object first needs to be acquired in the image.
In a specific application scenario, for example when beautifying a face image, the face image in the picture must be acquired first.
The application can store multiple eye-makeup effect images in advance for the user to choose from; the eye-makeup effect images are designed on standard templates of the application. The user can add eye-makeup effects to a face image through the application. After the user selects an eye-makeup effect image provided by the application, the application can first acquire the picture or video frame to which the effect is to be added. The user can upload an image containing a face through an interface provided by the application and have the face processed offline, or the user's video frames can be acquired in real time through the camera and processed online. Whether processing offline or online, after the user selects the eye-makeup effect image and before the face image is acquired from the picture or video frame, face detection must first be performed. Face detection determines whether a face exists in the picture or video frame to be examined; if one exists, information such as the size and position of the face is returned. There are many face detection methods, such as skin-color detection, motion detection, and edge detection, and there are also many related detection models, none of which are limited by the present disclosure. In addition, if multiple faces are detected in the current picture or video frame, a face image is generated for each face.
After the face image is detected, the key points of the face image can be acquired by extracting facial key points, thereby acquiring the target object in the image.
As a specific implementation of the embodiments of the present disclosure, since the image contains other information besides the target object, the step of acquiring the target object in the image further includes the following steps in order to eliminate the interference of that information:
separating the foreground and the background of the first image; and acquiring the target object in the foreground.
That is, the background and foreground of the image are separated. Foreground-background separation methods include background differencing, frame differencing, optical-flow methods, and so on. After the foreground and background are separated, most of the interfering information in the foreground image has been eliminated, so acquiring the target object in the foreground image is much simpler than acquiring it in the whole image.
For example, in a specific application scenario, in an image containing a face, the person or the person's head is first separated from the background to obtain a foreground image of the person or the person's head, and the face information is then extracted from that foreground image.
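A minimal background-differencing sketch of the foreground/background separation mentioned above (Python/NumPy; the threshold value and the per-channel maximum are illustrative assumptions — in practice the background model would be estimated from previous frames):

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Background-differencing separation: pixels whose absolute difference
    from the background model exceeds the threshold are marked foreground."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    if diff.ndim == 3:              # color image: take the max over channels
        diff = diff.max(axis=2)
    return diff > threshold         # boolean foreground mask
```

Face detection and key-point extraction would then run only inside the returned mask, which is the simplification described in the paragraph above.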
The present disclosure further provides an image beautification apparatus, including:
an acquisition module 602, configured to acquire a first edge on a target object;
a computation module 603, configured to perform a computation based on the first edge to obtain anchor points;
an attachment module 608, configured to attach a second image to the target object based on the anchor points.
As a specific implementation of the embodiments of the present disclosure, the image beautification apparatus further includes: a second-edge acquisition module 604, configured to acquire a second edge on the target object;
a detection module 605, configured to detect whether the anchor points obtained by the computation module are located between the first edge and the second edge;
a judgment module 606, configured to judge the detection result and, if the detection result is negative, perform the computation based on the first edge again to obtain new anchor points.
As a specific implementation of the embodiments of the present disclosure, the computation module 603 includes:
a key-point acquisition module 6031, configured to acquire key points on the first edge;
a triangulation module 6032, configured to triangulate the target object based on the key points to obtain a triangular mesh;
an anchor-point acquisition module 6033, configured to obtain the anchor points based on the triangular mesh.
As a specific implementation of the embodiments of the present disclosure, the attachment module 608 includes:
a second-image acquisition module 6081, configured to acquire the second image;
a second-image key-point extraction module 6082, configured to extract the key points on the second image;
a second-image attachment module 6083, configured to attach the second image to the target object based on the key points on the second image and the anchor points.
As a specific implementation of the embodiments of the present disclosure, in the extracting of the key points on the second image:
the key points on the second image are preset key points.
As a specific implementation of the embodiments of the present disclosure, the apparatus further includes:
an error-correction module 607, configured to perform error correction on the anchor points obtained by the anchor-point acquisition module.
As a specific implementation of the embodiments of the present disclosure, the apparatus further includes:
a target-object acquisition module 601, configured to acquire a target object of a first image.
As a specific implementation of the embodiments of the present disclosure, the target-object acquisition module 601 includes: a separation module 6011, configured to separate the foreground and the background of the first image;
an object acquisition module 6012, configured to acquire the target object in the foreground.
As a specific implementation of the embodiments of the present disclosure, the second image includes one or more of an eyelash image, a double-eyelid image, an eyeliner image, or an eyeshadow image.
An overall schematic diagram of the image beautification apparatus is shown in Fig. 6.
Fig. 7 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in Fig. 7, an electronic device 70 according to an embodiment of the present disclosure includes a memory 71 and a processor 72. The memory 71 is configured to store non-transitory computer-readable instructions. Specifically, the memory 71 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like.
The processor 72 may be a central processing unit (CPU) or another form of processing unit having data-processing capability and/or instruction-execution capability, and may control other components in the electronic device 70 to perform desired functions. In an embodiment of the present disclosure, the processor 72 is configured to run the computer-readable instructions stored in the memory 71 so that the electronic device 70 performs all or part of the steps of the image beautification of the foregoing embodiments of the present disclosure.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also be included within the protection scope of the present disclosure.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which will not be repeated here.
Fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in Fig. 8, a computer-readable storage medium 80 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 81. When the non-transitory computer-readable instructions 81 are run by a processor, all or part of the steps of the image beautification of the foregoing embodiments of the present disclosure are performed.
The computer-readable storage medium 80 includes, but is not limited to, optical storage media (e.g., CD-ROM and DVD), magneto-optical storage media (e.g., MO), magnetic storage media (e.g., magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROM (e.g., ROM cartridges).
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which will not be repeated here.
Fig. 9 is a schematic diagram illustrating the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in Fig. 9, the terminal 90 includes the above-described image beautification apparatus embodiment.
The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, in-vehicle terminal devices, in-vehicle display terminals, and in-vehicle electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative implementation, the terminal 90 may also include other components. As shown in Fig. 9, the terminal 90 may include a power supply unit 91, a wireless communication unit 92, an A/V (audio/video) input unit 93, a user input unit 94, a sensing unit 95, an interface unit 96, a controller 97, an output unit 98, a storage unit 99, and so on. Fig. 9 shows a terminal with various components, but it should be understood that not all of the illustrated components are required to be implemented; more or fewer components may be implemented instead.
The wireless communication unit 92 allows radio communication between the terminal 90 and a wireless communication system or network. The A/V input unit 93 is configured to receive audio or video signals. The user input unit 94 can generate key input data according to commands input by the user to control various operations of the terminal device. The sensing unit 95 detects the current state of the terminal 90, the position of the terminal 90, the presence or absence of the user's touch input to the terminal 90, the orientation of the terminal 90, the acceleration or deceleration movement and direction of the terminal 90, and so on, and generates commands or signals for controlling the operation of the terminal 90. The interface unit 96 serves as an interface through which at least one external device can connect to the terminal 90. The output unit 98 is configured to provide output signals in a visual, audio, and/or tactile manner. The storage unit 99 can store software programs for the processing and control operations executed by the controller 97, or temporarily store data that has been output or is to be output. The storage unit 99 may include at least one type of storage medium. Moreover, the terminal 90 can cooperate with a network storage device that performs the storage function of the storage unit 99 through a network connection. The controller 97 generally controls the overall operation of the terminal device. In addition, the controller 97 may include a multimedia module for reproducing or playing back multimedia data. The controller 97 can perform pattern-recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images. The power supply unit 91 receives external power or internal power under the control of the controller 97 and provides the appropriate power required to operate the elements and components.
The various implementations of the image beautification proposed in the present disclosure may be implemented using a computer-readable medium, for example computer software, hardware, or any combination thereof. For hardware implementation, the various implementations of the image beautification proposed in the present disclosure may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such implementations may be carried out in the controller 97. For software implementation, the various implementations of the image beautification proposed in the present disclosure may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the storage unit 99 and executed by the controller 97.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which will not be repeated here.
The basic principles of the present disclosure have been described above with reference to specific embodiments. However, it should be noted that the merits, advantages, effects, and the like mentioned in the present disclosure are merely examples and not limitations, and it cannot be assumed that these merits, advantages, and effects are necessary for every embodiment of the present disclosure. In addition, the specific details disclosed above are only for the purposes of illustration and ease of understanding, not limitation, and the above details do not restrict the present disclosure to being implemented with those specific details.
In the present disclosure, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The block diagrams of the components, devices, equipment, and systems involved in the present disclosure are only illustrative examples and are not intended to require or imply that they must be connected, arranged, and configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these components, devices, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "comprising," "including," "having," and the like are open-ended words meaning "including but not limited to" and may be used interchangeably with them. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used herein means the phrase "such as but not limited to" and may be used interchangeably with it.
In addition, as used herein, an "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be pointed out that, in the systems and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations to the techniques described herein may be made without departing from the techniques of the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods, and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (12)

  1. An image beautification method, wherein the method comprises:
    acquiring a first edge on a target object;
    performing a computation based on the first edge to obtain anchor points;
    attaching a second image to the target object based on the anchor points.
  2. The image beautification method according to claim 1, wherein after the step of performing a computation based on the first edge to obtain anchor points, the method further comprises:
    acquiring a second edge on the target object;
    detecting whether the anchor points are located between the first edge and the second edge;
    if the detection result is negative, performing the computation based on the first edge again to obtain new anchor points.
  3. The image beautification method according to claim 1, wherein the performing a computation based on the first edge to obtain anchor points comprises:
    acquiring key points on the first edge;
    triangulating the target object based on the key points to obtain a triangular mesh;
    obtaining the anchor points based on the triangular mesh.
  4. The image beautification method according to claim 3, wherein the attaching a second image to the target object based on the anchor points comprises:
    acquiring the second image;
    extracting key points on the second image;
    attaching the second image to the target object based on the key points on the second image and the anchor points.
  5. The image beautification method according to claim 4, wherein:
    the key points on the second image are preset key points.
  6. The image beautification method according to claim 3, wherein after the anchor points are obtained based on the triangular mesh, the method further comprises:
    performing error correction on the anchor points.
  7. The image beautification method according to claim 1, wherein before the acquiring a first edge on the target object, the method comprises:
    acquiring a target object of a first image.
  8. The image beautification method according to claim 7, wherein the acquiring a target object of a first image comprises:
    separating the foreground and the background of the first image;
    acquiring the target object in the foreground.
  9. The image beautification method according to claim 3, wherein:
    the second image comprises one or more of an eyelash image, a double-eyelid image, an eyeliner image, or an eyeshadow image.
  10. An image beautification apparatus, wherein the apparatus comprises:
    an acquisition module, configured to acquire a first edge on a target object;
    a computation module, configured to perform a computation based on the first edge to obtain anchor points;
    an attachment module, configured to attach a second image to the target object based on the anchor points.
  11. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the image beautification method of any one of claims 1-9.
  12. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions, the computer instructions being configured to cause a computer to perform the image beautification method of any one of claims 1-9.
PCT/CN2019/073075 2018-06-28 2019-01-25 Image beautification method and apparatus, and electronic device WO2020001014A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810690342.9A CN108986016B (zh) 2018-06-28 2018-06-28 Image beautification method and apparatus, and electronic device
CN201810690342.9 2018-06-28

Publications (1)

Publication Number Publication Date
WO2020001014A1 true WO2020001014A1 (zh) 2020-01-02

Family

ID=64539533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073075 WO2020001014A1 (zh) 2018-06-28 2019-01-25 Image beautification method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN108986016B (zh)
WO (1) WO2020001014A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150387A (zh) * 2020-09-30 2020-12-29 广州光锥元信息科技有限公司 Method and device for enhancing the three-dimensional appearance of facial features in a portrait photo

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986016B (zh) * 2018-06-28 2021-04-20 北京微播视界科技有限公司 Image beautification method and apparatus, and electronic device
CN110211211B (zh) * 2019-04-25 2024-01-26 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110136054B (zh) * 2019-05-17 2024-01-09 北京字节跳动网络技术有限公司 Image processing method and apparatus
CN112132859A (zh) * 2019-06-25 2020-12-25 北京字节跳动网络技术有限公司 Sticker generation method and apparatus, medium, and electronic device
CN110766631A (zh) * 2019-10-21 2020-02-07 北京旷视科技有限公司 Face image retouching method and apparatus, electronic device, and computer-readable medium
CN111489311B (zh) * 2020-04-09 2023-08-08 北京百度网讯科技有限公司 Face beautification method and apparatus, electronic device, and storage medium
CN114095646B (zh) * 2020-08-24 2022-08-26 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114095647A (zh) * 2020-08-24 2022-02-25 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112365415B (zh) * 2020-11-09 2024-02-09 珠海市润鼎智能科技有限公司 Fast display conversion method for high-dynamic-range images
CN112347979B (zh) * 2020-11-24 2024-03-15 郑州阿帕斯科技有限公司 Eyeliner drawing method and apparatus

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102486868A (zh) * 2010-12-06 2012-06-06 华南理工大学 Beautiful face synthesis method based on an average face
CN103236066A (zh) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual makeup try-on method based on facial feature analysis
CN103325085A (zh) * 2012-03-02 2013-09-25 索尼公司 Automatic image alignment
CN104778712A (zh) * 2015-04-27 2015-07-15 厦门美图之家科技有限公司 Face mapping method and system based on affine transformation
CN106709931A (zh) * 2015-07-30 2017-05-24 中国艺术科技研究所 Method for mapping an opera facial mask onto a face, and facial-mask mapping device
CN108492247A (zh) * 2018-03-23 2018-09-04 成都品果科技有限公司 Eye-makeup mapping method based on mesh deformation
CN108986016A (zh) * 2018-06-28 2018-12-11 北京微播视界科技有限公司 Image beautification method and apparatus, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5953097B2 (ja) * 2012-04-24 2016-07-20 ゼネラル・エレクトリック・カンパニイ Optimal gradient pursuit for image alignment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102486868A (zh) * 2010-12-06 2012-06-06 华南理工大学 Beautiful face synthesis method based on an average face
CN103325085A (zh) * 2012-03-02 2013-09-25 索尼公司 Automatic image alignment
CN103236066A (zh) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual makeup try-on method based on facial feature analysis
CN104778712A (zh) * 2015-04-27 2015-07-15 厦门美图之家科技有限公司 Face mapping method and system based on affine transformation
CN106709931A (zh) * 2015-07-30 2017-05-24 中国艺术科技研究所 Method for mapping an opera facial mask onto a face, and facial-mask mapping device
CN108492247A (zh) * 2018-03-23 2018-09-04 成都品果科技有限公司 Eye-makeup mapping method based on mesh deformation
CN108986016A (zh) * 2018-06-28 2018-12-11 北京微播视界科技有限公司 Image beautification method and apparatus, and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150387A (zh) * 2020-09-30 2020-12-29 广州光锥元信息科技有限公司 Method and device for enhancing the three-dimensional appearance of facial features in a portrait photo
CN112150387B (zh) 2024-04-26 Method and device for enhancing the three-dimensional appearance of facial features in a portrait photo

Also Published As

Publication number Publication date
CN108986016B (zh) 2021-04-20
CN108986016A (zh) 2018-12-11

Similar Documents

Publication Publication Date Title
WO2020001014A1 (zh) Image beautification method and apparatus, and electronic device
US11017580B2 (en) Face image processing based on key point detection
WO2020019663A1 (zh) Face-based special effect generation method and apparatus, and electronic device
WO2020037863A1 (zh) Three-dimensional face image reconstruction method and apparatus, and computer-readable storage medium
WO2019242271A1 (zh) Image deformation method and apparatus, and electronic device
WO2018201551A1 (zh) Face image fusion method and apparatus, and computing device
WO2020029554A1 (zh) Augmented-reality multi-plane model animation interaction method, apparatus, device, and storage medium
WO2020019664A1 (zh) Face-based deformed image generation method and apparatus
WO2021213067A1 (zh) Item display method, apparatus, device, and storage medium
WO2019237747A1 (zh) Image cropping method and apparatus, electronic device, and computer-readable storage medium
CN110072046B (zh) Image synthesis method and apparatus
WO2020019666A1 (zh) Multi-face tracking method and apparatus for facial special effects, and electronic device
WO2020024569A1 (zh) Method and apparatus for dynamically generating a three-dimensional face model, and electronic device
WO2020019665A1 (zh) Face-based three-dimensional special effect generation method and apparatus, and electronic device
CN109064387A (zh) Image special effect generation method and apparatus, and electronic device
JP7383714B2 (ja) Image processing method and apparatus for animal faces
WO2020037924A1 (zh) Animation generation method and apparatus
CN108921798B (zh) Image processing method and apparatus, and electronic device
WO2019075656A1 (zh) Image processing method and apparatus, terminal, and storage medium
CN110378839A (zh) Face image processing method and apparatus, medium, and electronic device
WO2019237746A1 (zh) Image merging method and apparatus
EP2911115B1 (en) Electronic device and method for color extraction
WO2019237744A1 (zh) Method and apparatus for constructing image depth information
WO2020001015A1 (zh) Scene manipulation method and apparatus, and electronic device
CN109146770A (zh) Deformed image generation method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19827185

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19827185

Country of ref document: EP

Kind code of ref document: A1