WO2022012085A1 - Face image processing method and apparatus, storage medium, and electronic device - Google Patents

Face image processing method and apparatus, storage medium, and electronic device

Info

Publication number
WO2022012085A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
pixel
data
face image
Prior art date
Application number
PCT/CN2021/083651
Other languages
English (en)
French (fr)
Inventor
杨超
刘享军
齐坤鹏
杜峰
Original Assignee
北京沃东天骏信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京沃东天骏信息技术有限公司 and 北京京东世纪贸易有限公司
Publication of WO2022012085A1 publication Critical patent/WO2022012085A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually

Definitions

  • the embodiments of the present application relate to the technical field of image processing, and in particular, to a method, an apparatus, a storage medium, and an electronic device for processing a face image.
  • face image processing technology is used to identify and process face images in photos or videos to achieve effects such as beautifying, deforming, and changing faces.
  • Ordinary people can process the captured photos or videos by installing corresponding application programs on the mobile terminal, so that the photos or videos show different effects, which greatly enriches people's daily life.
  • the combination of face image processing technology with virtual reality, augmented reality and other technologies has also led to the rise, transformation or development of related industries.
  • Face image deformation is a face image processing technique that deforms key facial features or the facial contour. In the prior art, achieving a desired deformation effect requires four steps: image template screening, image preprocessing, obtaining deformation data, and image processing. When the deformation data is obtained, the pixel offsets are computed by applying a circular or elliptical attenuation offset centered on each key point determined during image template screening, so that offsets are calculated only for pixels in the region near each key point.
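  • For illustration only (this sketches the prior-art approach the application improves on; the function name and values are illustrative, not from the patent), the code below shows a radial attenuation offset centered on a key point. Note the per-pixel square root it requires, which is exactly the kind of GPU-heavy operation the present solution avoids.

```python
import numpy as np

def radial_attenuation_offset(xy, center, radius, strength):
    """Offset pixels radially around a key point, with the offset decaying
    to zero at `radius` (a sketch of the prior-art circular attenuation)."""
    d = xy - center                                       # vectors from the key point
    dist = np.sqrt((d ** 2).sum(axis=-1, keepdims=True))  # per-pixel square root
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)      # 1 at the center, 0 at the rim
    return xy + strength * falloff * d                    # push pixels radially

# Offsets for a 4x4 grid of pixel coordinates around a key point at (1.5, 1.5)
ys, xs = np.mgrid[0:4, 0:4]
pixels = np.stack([xs, ys], axis=-1).astype(float).reshape(-1, 2)
warped = radial_attenuation_offset(pixels, np.array([1.5, 1.5]), radius=2.0, strength=0.3)
```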
  • Embodiments of the present application provide a face image processing method, device, storage medium, and electronic device, so as to solve the problem of low user experience in the process of performing face image deformation processing in the prior art.
  • an embodiment of the present application provides a method for processing a face image, including: acquiring original face image data; generating a face mesh image according to the original face image data; determining, according to a normal texture map, an offset vector of each pixel on the face mesh image, the normal texture map being the deformation trend and deformation strength of the face image expressed in two-dimensional color data; performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data; and rendering the deformed face image data to obtain a deformed face image.
  • determining the offset vector of each pixel on the face mesh image according to the normal texture map includes: establishing a mapping relationship between the face mesh image and the normal texture map; sampling, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn; and determining, according to the two-dimensional color data, the offset vector of each pixel on the face mesh image.
  • establishing the mapping relationship between the face mesh image and the normal texture map includes: determining the texture coordinates of the face mesh image; establishing a coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map; and establishing, according to the coordinate mapping relationship, a pixel mapping relationship between the pixels of the face mesh image and the pixels of the normal texture map.
  • sampling, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn includes: sampling the two-dimensional color data of each pixel on the normal texture map in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
  • determining the offset vector of each pixel on the face mesh image according to the two-dimensional color data includes: converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a plane rectangular coordinate system in which the value ranges of both the x-axis and the y-axis are [-1, 1]; and determining the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
  • converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system includes:
  • the two-dimensional color data is converted into coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
  • performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain the deformed face image data includes: transforming the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device; and obtaining the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image.
  • rendering the deformed face image data to obtain the deformed face image includes: rendering the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
  • before determining the offset vector of each pixel on the face mesh image according to the normal texture map, the method further includes: determining the normal texture map to be used.
  • determining the normal texture map to be used includes: selecting, according to a face processing instruction input by the user, the normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps, the face processing instruction indicating the deformation effect required by the user; or determining the type of application currently used by the user and determining the normal texture map to be used according to the application type.
  • generating the face mesh image according to the original face image data includes: determining face key points from the original face image data according to a face key point detection algorithm; determining auxiliary points according to the distribution of the face key points; constructing triangle mesh data from the nearest face key points and auxiliary points according to the triangle stability principle; and rendering the original face image data to obtain a base image, and marking the base image with the triangle mesh data to obtain the face mesh image.
  • obtaining the original face image data includes: acquiring original face image data collected by a camera in real time; or acquiring original face image data stored in an image application.
  • the two-dimensional color data in the normal texture map is composed of the color value of the R channel and the color value of the G channel.
  • an embodiment of the present application provides a face image processing apparatus, including:
  • the acquisition module is used to acquire the original face image data
  • a processing module configured to: generate a face mesh image according to the original face image data; determine, according to a normal texture map, an offset vector of each pixel on the face mesh image, the normal texture map being the deformation trend and deformation strength of the face image expressed in two-dimensional color data; perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data; and render the deformed face image data to obtain a deformed face image.
  • the processing module is specifically configured to: establish a mapping relationship between the face mesh image and the normal texture map; sample, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn; and determine, according to the two-dimensional color data, the offset vector of each pixel on the face mesh image.
  • the processing module is specifically configured to: determine the texture coordinates of the face mesh image; establish a coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map; and establish, according to the coordinate mapping relationship, a pixel mapping relationship between the pixels of the face mesh image and the pixels of the normal texture map.
  • the processing module is specifically configured to: sample the two-dimensional color data of each pixel on the normal texture map in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
  • the processing module is specifically configured to: convert the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a plane rectangular coordinate system in which the value ranges of both the x-axis and the y-axis are [-1, 1]; and determine the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
  • the processing module is specifically configured to: convert the two-dimensional color data into the coordinate data in the two-dimensional coordinate system according to a coordinate reference value of the two-dimensional coordinate system.
  • the processing module is specifically configured to: transform the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device; and obtain the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image.
  • the processing module is specifically configured to: render the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
  • the processing module is further configured to: determine the normal texture map to be used.
  • the processing module is specifically configured to: select, according to a face processing instruction input by the user, the normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps, the face processing instruction indicating the deformation effect required by the user; or determine the type of application currently used by the user and determine the normal texture map to be used according to the application type.
  • the processing module is specifically configured to: determine face key points from the original face image data according to a face key point detection algorithm; determine auxiliary points according to the distribution of the face key points; construct triangle mesh data from the nearest face key points and auxiliary points according to the triangle stability principle; and render the original face image data to obtain a base image, and mark the base image with the triangle mesh data to obtain the face mesh image.
  • the acquisition module is specifically configured to: acquire original face image data collected by a camera in real time; or acquire original face image data stored in an image application.
  • the two-dimensional color data in the normal texture map consists of the color value of the R channel and the color value of the G channel.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is used to implement the above-mentioned method for processing a face image.
  • an embodiment of the present application provides an electronic device, including: a memory and a processor; the memory is used to store a computer program, and the processor executes the computer program to implement the above-mentioned method for processing a face image.
  • an embodiment of the present application provides a chip for running instructions, the chip including a memory and a processor, where code and data are stored in the memory, the memory is coupled to the processor, and the processor runs the code in the memory to enable the chip to perform the above-mentioned method for processing a face image.
  • an embodiment of the present application provides a program product including instructions, which, when the program product runs on a computer, causes the computer to execute the above-mentioned method for processing a face image.
  • an embodiment of the present application provides a computer program, which, when the computer program is executed by a processor, is used to execute the above-mentioned method for processing a face image.
  • The face image processing method and apparatus, storage medium, and electronic device provided by the embodiments of the present application generate a face mesh image based on the original face image data, store the normal of each pixel in a normal texture map, determine the offset vector of each pixel on the face mesh image according to the normal texture map, perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel to obtain deformed face image data, and render the deformed face image data to obtain the deformed face image. Because the normal texture map stores the normal of each pixel, that is, its deformation trend and deformation strength, in the form of two-dimensional color data, the offset vector of each pixel can be computed simply by reading the two-dimensional color data of the normal texture map. In this process, an offset vector is calculated for every pixel and no complex operations such as square roots are involved, which improves the accuracy and smoothness of face image processing and helps improve the user experience.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of Embodiment 1 of a method for processing a face image provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the distribution of key points and auxiliary points of a human face provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a face mesh image provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a normal texture map provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a processing process of a face image processing method provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of Embodiment 2 of a method for processing a face image provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a mapping relationship between a face mesh image and a normal texture map provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of Embodiment 3 of a method for processing a face image provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an operation for determining a normal texture map to be used according to an embodiment of the present application
  • FIG. 11 is a schematic diagram of an operation of a terminal device according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an operation interface for face image processing provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of an operation interface of yet another face image processing provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an embodiment of a face image processing apparatus provided by an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of an embodiment of an electronic device provided by an embodiment of the present application.
  • The present application provides a technical solution for face image processing, which processes a video or photo according to the user's requirements to achieve an expected or specific face deformation effect, so as to adapt to different application scenarios or enrich the user's shooting experience.
  • For example, virtual makeup try-on is a recently emerging online sales aid. During a virtual try-on, face image deformation technology is used to achieve facial deformation effects such as enlarged eyes or a slimmer face, and the user's face with the makeup applied is presented to the user. The user can thus experience the effects of different makeup products without being on site, which makes shopping more convenient.
  • the embodiments of the present application take a terminal device as an example to describe the technical solutions of the present application. It can be understood that the technical solutions of the present application can also be applied to electronic devices with image processing functions, such as computers, servers, or cameras.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application, taking the big-eye and thin-face deformation effects as an example. As shown in FIG. 1, the left picture is the original face image obtained by the terminal device; after the terminal device processes the original face image, the deformed face image shown in the right picture of FIG. 1 is obtained.
  • The main idea of the technical solution is as follows. The prior-art face image deformation solution includes four steps: image template screening, image preprocessing, obtaining deformation data, and image processing, where obtaining the deformation data means calculating the pixel offsets. In the prior art, a circular or elliptical attenuation offset is applied with each key point as the center, so that offsets are computed only for pixels in the region near each key point. On the one hand, because the offset of every individual pixel cannot be calculated precisely, the accuracy of the processed face image is low; on the other hand, the circular or elliptical attenuation involves complex operations such as square roots, which imposes a high overhead on the graphics processing unit (GPU), so the mobile terminal is prone to delays, stutters, and similar problems during deformation processing. The prior art therefore suffers from a poor user experience.
  • Based on the above technical problems in the prior art, the technical solution of the present application relies on a pre-built normal texture map corresponding to a specific deformation effect. When computing pixel offsets, a mapping relationship is established between the normal texture map and the face image to be processed, and the color value of the corresponding pixel in the normal texture map is sampled to determine the offset of each pixel on the face image to be processed. In this process, an offset vector is calculated for every pixel, and no complex operations such as square roots are involved, which not only improves the accuracy of the deformation processing but also reduces GPU consumption, helping to improve the user experience.
  • FIG. 2 is a schematic flowchart of Embodiment 1 of a face image processing method provided by an embodiment of the present application.
  • the execution body of this embodiment is a terminal device, such as a mobile phone or a tablet computer. As shown in FIG. 2,
  • the face image processing method in the embodiment of the present application includes:
  • the terminal device can identify faces from corresponding pictures or videos, and obtain corresponding face image data, and these pictures or videos can be acquired by the camera in real time or stored in the terminal device.
  • the camera of the terminal device when the camera of the terminal device is aimed at the face to take a picture, the original face image data is acquired.
  • the terminal device when the terminal device is in the process of recording a video, it captures one frame of the image to obtain the original face image data.
  • the terminal device reads the corresponding photos or videos stored in an image application, such as an album, to obtain original face image data.
  • a face mesh image is obtained by processing the original face image data obtained in S101.
  • Specifically, a face key point detection algorithm first identifies representative points of the facial contour and key facial features in the original face image as the face key points, and auxiliary points are determined according to the needs of constructing the face mesh and the distribution of the face key points.
  • FIG. 3 is a schematic diagram of the distribution of face key points and auxiliary points provided by an embodiment of the present application. As shown in FIG. 3, after 99 face key points P1-P99 are identified by the face key point detection algorithm, 13 auxiliary points S1-S13 are determined according to the distribution of those 99 key points.
  • the face key point detection algorithm is an algorithm that automatically identifies specific representative areas of the face.
  • the face key point detection algorithm can identify the key points of the eyes, nose, mouth, eyebrows, facial contours and other areas of the human face.
  • the embodiment of the present application can adopt any face key point detection algorithm, such as active shape models (ASM), active appearance models (AAM), or cascaded pose regression (CPR), to obtain the face key points.
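  • As a concrete illustration of this step (not part of the patent: dlib and its 68-point model are stand-ins for the 99-point detector described above, and the model path is a placeholder), face key points could be obtained as follows:

```python
import dlib  # one of several libraries providing face landmark detection

detector = dlib.get_frontal_face_detector()
# Placeholder path; the publicly available dlib model predicts 68 landmarks,
# standing in here for the 99 key points P1-P99 of the embodiment.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_key_points(gray_image):
    """Return (x, y) face key points for the first face found in a grayscale image."""
    faces = detector(gray_image)
    if not faces:
        return []
    shape = predictor(gray_image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```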
  • Auxiliary points are points that assist in positioning. An auxiliary point may or may not lie on the original face image, and the auxiliary points can be determined by a preset algorithm according to the distribution of the face key points; their purpose is to allow more stable triangles to be constructed.
  • The triangle stability principle means that, to make the constructed triangles more stable, the construction should ensure acute triangles as far as possible and avoid obtuse triangles. When constructing the triangle mesh data, the nearest face key points and auxiliary points should be used as vertices while keeping the triangles as stable as possible, which helps improve image quality. Triangle mesh data is then constructed according to this principle, the original face image data obtained in S101 is rendered as a base image, and the triangle mesh data is marked on the base image to obtain the face mesh image.
  • FIG. 4 is a schematic diagram of a face mesh image provided by an embodiment of the application.
  • As shown in FIG. 4, the face mesh image contains a series of triangles. The vertices of a triangle may all be auxiliary points, such as triangle A, composed of auxiliary points S5, S7, and S11; they may all be face key points, such as triangle B, composed of face key points P1, P59, and P68; or they may be a combination of face key points and auxiliary points, such as triangle C, composed of face key points P5 and P69 and auxiliary point S9. No strict limitation is imposed here, as long as the above triangle stability principle and the nearest-point principle are satisfied.
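  • The patent does not prescribe a specific triangulation algorithm. As a hedged sketch, a Delaunay triangulation (which maximizes the minimum angle over all triangles, and therefore leans toward the stable, acute triangles described above) could build the mesh from the key points and auxiliary points:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_face_mesh(key_points, auxiliary_points):
    """Triangulate face key points plus auxiliary points into mesh data:
    returns (N, 2) vertex coordinates and (M, 3) triangles as vertex indices."""
    vertices = np.vstack([key_points, auxiliary_points]).astype(float)
    return vertices, Delaunay(vertices).simplices

# Toy example: four "key points" and one central "auxiliary point"
kp = np.array([[10, 10], [90, 10], [90, 90], [10, 90]])
aux = np.array([[50, 50]])
vertices, triangles = build_face_mesh(kp, aux)  # 4 triangles sharing the center
```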
  • To achieve the final face deformation, the embodiments of the present application adopt normal texture mapping. After acquiring the face mesh image, the terminal device acquires the corresponding normal texture map and determines the offset vector of each pixel on the face mesh image according to it; that is, the offset vector of each pixel is determined by reading the two-dimensional color data on the normal texture map. This process involves no complex multiplication, division, or square-root operations, so the GPU overhead is small and the user gets a smooth experience.
  • the offset vector includes an offset direction and an offset size, and is used to determine which direction the corresponding pixel point should be offset in, and the size of the offset amount in this direction.
  • Since the technical solution of the present application deforms two-dimensional face images, two-dimensional color data is used to store the normals in the normal texture map, and each item of two-dimensional color data represents the deformation trend and deformation strength of the corresponding pixel. Accordingly, the two-dimensional color data in the normal texture map may be composed of the color data of any two of the RGB color channels, which is not limited here.
  • FIG. 5 is a schematic diagram of a normal texture map provided by an embodiment of the application.
  • As shown in FIG. 5, the two-dimensional color data in this normal texture map is composed of the color value of the R channel and the color value of the G channel.
  • In one possible implementation, the screen resolution of the terminal device is obtained, and deformed face image data matching the actual display screen is derived from it. Specifically, the offset vector of each pixel on the face mesh image is transformed into a target offset vector corresponding to the screen resolution of the terminal device, and the deformed face image data is obtained from the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image.
  • the target offset vector refers to a vector obtained by transforming the offset vector of each pixel on the face mesh image according to the screen resolution of the terminal device.
  • Since the deformed face image ultimately has to be presented to the user through the terminal device, the deformed face image data must match the resolution of the terminal device to ensure that the final deformed face image is presented without problems such as distorted proportions, and the resolution differs between the terminal devices of different users. In this implementation, by transforming the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device, and obtaining the deformed face image data from the target offset vectors and the pixel coordinates of the corresponding pixels, deformed face image data matched to the terminal device is finally obtained. This provides reliable data support for the subsequent rendering process, improves the reliability of face image processing, and improves the user experience.
  • the deformed face image is obtained by rendering the deformed face image data obtained in S104. It can be understood that, to improve the rendering effect, the deformed face image data can be rendered according to the face mesh data obtained in S102 to obtain the deformed face image.
  • FIG. 6 is a schematic diagram of a processing process of a method for processing a face image provided by an embodiment of the present application.
  • Taking the big-eye and thin-face deformation effects as an example, as shown in FIG. 6, the face key points and auxiliary points in panel (a) are determined from the original face image data, and a triangle network is constructed with those points as vertices to obtain the face mesh image in panel (b). The offset vector of each pixel in the face mesh image of panel (b) is then determined through the normal texture map in panel (c) (corresponding to the big-eye and thin-face deformation effects). Finally, each pixel in the face mesh image is offset according to the obtained offset vectors to obtain the deformed face image data, and the deformed face image data is rendered according to the constructed triangle mesh data to obtain the face image with the big-eye and thin-face effects.
  • In this embodiment, a face mesh image is generated based on the original face image data, the normal of each pixel is stored in a normal texture map, the offset vector of each pixel on the face mesh image is determined according to the normal texture map, each pixel coordinate on the face mesh image is offset according to the offset vector of each pixel to obtain the deformed face image data, and the deformed face image data is rendered to obtain the deformed face image. Because the normal texture map stores the normal of each pixel, that is, its deformation trend and deformation strength, in the form of two-dimensional color data, the offset vector of each pixel can be computed simply by reading the two-dimensional color data of the normal texture map. In this process, an offset vector is calculated for every pixel and no complex operations such as square roots are involved, which improves the accuracy and smoothness of face image processing and helps improve the user experience.
  • FIG. 7 is a schematic flowchart of the second embodiment of the face image processing method provided by the embodiment of the present application.
  • On the basis of Embodiment 1 above, in this embodiment, determining the offset vector of each pixel on the face mesh image according to the normal texture map includes:
  • Since the normal texture map and the face mesh image are independent of each other, in order to determine the offset vector of each pixel on the face mesh image according to the normal texture map, the mapping relationship between the face mesh image and the normal texture map must first be established, so that the normal texture map "fits" the face mesh image.
  • FIG. 8 is a schematic diagram of the mapping relationship between the face mesh image and the normal texture map provided by an embodiment of the present application. As shown in FIG. 8, the mapping relationship can be established through the following process:
  • (1) Define the texture coordinates of the lower-left vertex of the face mesh image as (0, 0), the upper-left vertex as (0, 1), the lower-right vertex as (1, 0), and the upper-right vertex as (1, 1), and determine the texture coordinates of the other pixels of the face mesh image in turn from these four corner vertices.
  • (2) Establish a coordinate mapping relationship between the texture coordinates of the above four vertices of the face mesh image and the texture coordinates of the four corners of the normal texture map, and from that determine the coordinate mapping relationship between the texture coordinates of the other points of the face mesh image and those of the normal texture map.
  • (3) According to the coordinate mapping relationship between the texture coordinates of the face mesh image and those of the normal texture map, determine the pixel mapping relationship between each pixel of the face mesh image and each pixel of the normal texture map.
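  • A minimal sketch of step (1), assuming image rows are stored top to bottom (the function name and the flip convention are illustrative, not taken from the patent):

```python
def to_texture_coords(px, py, width, height):
    """Normalize a pixel position on the face mesh image into texture coordinates,
    with (0, 0) at the lower-left corner and (1, 1) at the upper-right corner."""
    u = px / (width - 1)
    v = 1.0 - py / (height - 1)  # flip vertically: row 0 is the top of the image
    return u, v

# The top-left pixel of a 512x512 image maps to texture coordinates (0, 1)
assert to_texture_coords(0, 0, 512, 512) == (0.0, 1.0)
```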
  • According to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map is sampled in turn; that is, on the premise that the normal texture map "fits" the face mesh image, the two-dimensional color data of each pixel is sampled one by one.
  • The preset sampling order starts from a certain pixel and traverses all pixels in turn; for example, it may start from the vertex (0, 0) and proceed from left to right and from bottom to top.
  • For example, still referring to FIG. 8, Q(u, v) in the normal texture map is the two-dimensional color data corresponding to P(0.5, 0.69), where u and v are the color values of the corresponding color channels of that pixel.
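  • Sampling Q(u, v) could look like the following sketch (nearest-neighbor lookup; the array layout and names are assumptions):

```python
def sample_normal_map(tex, u, v):
    """Fetch the two-dimensional color data Q(u, v) from a normal texture map
    stored as an (H, W, 2) array with row 0 at the top of the image."""
    h, w = tex.shape[:2]
    x = min(int(round(u * (w - 1))), w - 1)
    y = min(int(round((1.0 - v) * (h - 1))), h - 1)  # v = 0 is the bottom row
    return tex[y, x]
```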
  • Since the two-dimensional color data in the normal texture map represents the deformation trend and deformation strength of the corresponding pixel on the face image, the offset vector of the corresponding pixel in the face mesh image can be determined from the sampled two-dimensional color data of a pixel on the normal texture map; by sampling the two-dimensional color data of each pixel on the normal texture map in turn, the offset vector of every pixel in the face mesh image can be determined.
  • In one possible implementation, the two-dimensional color data in the normal texture map consists of the color values of the R channel and the G channel, where the color value of the R channel represents the offset of the corresponding pixel of the face image along the x-axis and the color value of the G channel represents its offset along the y-axis, with RG (0.5, 0.5) as the reference value. When RG is (0.5, 0.5), the corresponding pixel is not offset at all. When the R value is less than 0.5, the pixel coordinates of the corresponding pixel on the face mesh image are shifted to the left; when the R value is greater than 0.5, they are shifted to the right. Similarly, when the G value is less than 0.5, the pixel coordinates of the corresponding pixel are shifted downward; when the G value is greater than 0.5, they are shifted upward.
  • To achieve a thin-face deformation effect, the normal texture map only needs to set the R value of the two-dimensional color data of the left cheek to be greater than 0.5 and the R value of the two-dimensional color data of the right cheek to be less than 0.5, with the two-dimensional color data of all other positions set to (0.5, 0.5). To achieve a big-eye deformation effect, the normal texture map only needs to set the R value of the two-dimensional color data at the left corner of each eye to be less than 0.5 and the R value at the right corner of each eye to be greater than 0.5, with the two-dimensional color data of all other positions set to (0.5, 0.5).
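  • Following the R/G convention above, a thin-face normal texture map could be authored as in the sketch below (the region boxes and the pull amount are illustrative values, not taken from the patent):

```python
import numpy as np

def make_thin_face_normal_map(h, w, left_cheek, right_cheek, pull=0.1):
    """Author an (H, W, 2) normal texture map (channel 0 = R, channel 1 = G):
    neutral (0.5, 0.5) everywhere, R > 0.5 on the left cheek (shift right)
    and R < 0.5 on the right cheek (shift left)."""
    tex = np.full((h, w, 2), 0.5, dtype=np.float32)
    y0, y1, x0, x1 = left_cheek
    tex[y0:y1, x0:x1, 0] = 0.5 + pull   # left cheek moves right, toward the face center
    y0, y1, x0, x1 = right_cheek
    tex[y0:y1, x0:x1, 0] = 0.5 - pull   # right cheek moves left, toward the face center
    return tex

tex = make_thin_face_normal_map(256, 256,
                                left_cheek=(120, 200, 20, 70),
                                right_cheek=(120, 200, 186, 236))
```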
  • In this implementation, because color data in an image cannot take values less than 0, for convenience the sampled two-dimensional color data needs further processing so that the offset direction and offset magnitude can be reflected more intuitively as coordinate data. Optionally, determining the offset vector of each pixel on the face mesh image according to the two-dimensional color data includes:
  • converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a plane rectangular coordinate system in which the value ranges of both the x-axis and the y-axis are [-1, 1], and determining the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
  • the two-dimensional color data is converted into coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
  • Specifically, the two-dimensional color data is converted into the coordinate data in the two-dimensional coordinate system according to the formula V_offset = C_offset × 2.0 − vec(1.0, 1.0), where C_offset denotes the two-dimensional color data on the normal texture map, V_offset denotes the converted coordinate data in the two-dimensional coordinate system, and vec(1.0, 1.0) denotes the coordinate reference value of the two-dimensional coordinate system.
  • In this implementation, the above formula converts the two-dimensional color data into coordinate data in a plane rectangular coordinate system whose x-axis and y-axis both range from -1 to 1. For example, when the sampled C_offset is (0.5, 0.5), V_offset is computed as (0, 0), indicating no offset in either the x-axis or the y-axis direction; when the sampled C_offset is (0.4, 0.8), V_offset is (-0.2, 0.6), indicating a shift of 0.2 pixels to the left along the x-axis and 0.6 pixels upward along the y-axis.
  • It can thus be seen that, by obtaining the coordinate data corresponding to each item of two-dimensional color data in turn and using its x-axis and y-axis coordinate values, the offset vector of each pixel on the face mesh image can be determined conveniently and intuitively.
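  • The conversion is a single multiply-add per pixel, as this sketch shows (reproducing the two examples above; names are illustrative):

```python
import numpy as np

def color_to_offset(c_offset):
    """V_offset = C_offset * 2.0 - vec(1.0, 1.0): map color values in [0, 1]
    to signed offset coordinates in [-1, 1]."""
    return np.asarray(c_offset, dtype=np.float64) * 2.0 - 1.0

print(color_to_offset([0.5, 0.5]))  # [0. 0.]    -> no offset
print(color_to_offset([0.4, 0.8]))  # [-0.2 0.6] -> 0.2 left, 0.6 up
```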
  • In this embodiment, a mapping relationship between the face mesh image and the normal texture map is established, the two-dimensional color data of each pixel on the normal texture map is sampled in turn according to the mapping relationship, and the offset vector of each pixel on the face mesh image is determined according to the two-dimensional color data. Establishing the mapping makes the face mesh image fit the normal texture map, which improves the accuracy of the determined offset vectors. Moreover, by converting the two-dimensional color data into signed coordinate data in a two-dimensional coordinate system and determining the offset vector of each pixel from that coordinate data, the calculation process is simplified, GPU consumption is reduced, the smoothness of image processing is improved, and the user experience is improved.
  • Optionally, obtaining the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image includes: obtaining the deformed face image data according to the formula T_Target = V_pos + V_offset / V_pixSize, where V_pos denotes the coordinate data of the corresponding pixel on the face mesh image, V_pixSize denotes the screen resolution, T_Target denotes the deformed face image data of the corresponding pixel, and V_offset / V_pixSize in the formula denotes the target offset vector.
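  • A sketch of this step (names are illustrative): dividing the signed offset by the screen resolution turns it into a resolution-independent displacement, so one normal map deforms consistently across devices.

```python
import numpy as np

def target_coord(v_pos, v_offset, v_pix_size):
    """T_Target = V_pos + V_offset / V_pixSize: scale the signed offset by the
    screen resolution before adding it to the pixel's coordinate data."""
    return (np.asarray(v_pos, np.float64)
            + np.asarray(v_offset, np.float64) / np.asarray(v_pix_size, np.float64))

# A pixel at position (0.3, 0.7) with offset (-0.2, 0.6) on a 1080x1920 screen
print(target_coord([0.3, 0.7], [-0.2, 0.6], [1080.0, 1920.0]))
```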
  • In one possible implementation, rendering the deformed face image data to obtain the deformed face image includes: rendering the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image. Specifically, the deformed face image data is rendered according to the formula C_fragColor = Texture2D(T_Camera, T_Target) to obtain the deformed face image, where Texture2D denotes the two-dimensional texture processing function, T_Camera denotes the original face image data of the corresponding pixel, and C_fragColor denotes the color output by the fragment (T_Camera, T_Target).
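  • Putting the pieces together, the sketch below emulates the whole shader step on the CPU. This is a hedged stand-in for the GPU pipeline: nearest-neighbor lookup replaces Texture2D's sampling, and the sign of the vertical offset assumes image rows are stored top to bottom.

```python
import numpy as np

def render_deformed(image, normal_map):
    """For every output pixel: sample the normal map, convert color to a signed
    offset, compute the target coordinate, and fetch the source color there.
    `image` is (H, W, 3) and `normal_map` is (H, W, 2), values in [0, 1]."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u = xs / (w - 1)
    v = ys / (h - 1)
    offset = normal_map * 2.0 - 1.0     # V_offset = C_offset * 2.0 - vec(1.0, 1.0)
    tu = u + offset[..., 0] / w         # T_Target = V_pos + V_offset / V_pixSize
    tv = v - offset[..., 1] / h         # minus: array rows grow downward
    sx = np.clip(np.rint(tu * (w - 1)), 0, w - 1).astype(int)
    sy = np.clip(np.rint(tv * (h - 1)), 0, h - 1).astype(int)
    return image[sy, sx]                # C_fragColor = Texture2D(T_Camera, T_Target)
```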
  • FIG. 9 is a schematic flowchart of the third embodiment of the face image processing method provided by the embodiment of the present application.
  • On the basis of Embodiments 1 and 2 above, this embodiment applies to a terminal device that can provide multiple normal texture maps, as shown in FIG. 9. The specific implementation of this embodiment is as follows: the terminal device obtains face image data, detects face key points and auxiliary points, constructs face mesh data from the key points and auxiliary points, and generates a face mesh image; the terminal device then determines which normal texture map to use.
  • the determination of the normal texture map to be used in this embodiment may be performed before the offset vector of each pixel on the face mesh image is determined according to the normal texture map.
  • the application type currently used by the user is determined, and according to the application type, the required normal texture map is determined.
  • The terminal device establishes the mapping relationship between the face mesh image and the normal texture map according to the generated face mesh image and the normal texture map to be used, determines the offset vector of each pixel on the face mesh image, offsets the pixels accordingly, and finally obtains the deformed face image through processing.
  • FIG. 10 is a schematic diagram of an operation for determining the normal texture map to be used, provided by an embodiment of the present application. As shown in FIG. 10, when the user selects an application for face image processing through a corresponding operation, for example one of a beauty camera, Meitu Xiuxiu, or a virtual makeup try-on, the terminal device identifies the type of application currently used by the user and determines the normal texture map to be used according to the identified application type. For example, when the terminal device determines from the user's operation that the selected application type is virtual makeup try-on, it determines that the normal texture maps to be used are those associated with virtual try-on.
  • Alternatively, a face processing instruction input by the user is received, and the normal texture map corresponding to the deformation effect is selected from a plurality of preset normal texture maps according to the face processing instruction, where the face processing instruction indicates the deformation effect required by the user.
  • FIG. 11 is a schematic diagram of the operation of the terminal device provided by this embodiment of the present application.
  • The terminal device can obtain the source image through a user operation (for example, taking a photo or video, or selecting a photo or video from an album); referring to the right figure in FIG. 11, the terminal device then receives the face image processing instruction input by the user and selects the normal texture map corresponding to the deformation effect from the preset normal texture maps.
  • In different application scenarios, the way the face image processing instruction is obtained may differ.
  • FIG. 12 is a schematic diagram of an operation interface for face image processing provided by this embodiment of the present application. As shown in FIG. 12, the operation interface displays a list of various beauty products, such as eyeliner, eyebrow pencil, blush, and foundation. When the user selects a beauty product, the terminal device obtains the face processing instruction corresponding to that product; for example, when the user selects the eyeliner, the terminal device receives the face processing instruction corresponding to the big-eye effect.
  • FIG. 13 is a schematic diagram of another operation interface for face image processing provided by an embodiment of the present application. As shown in FIG. 13, the operation interface displays a list of the names of various deformations, such as big eyes, thin face, forehead, and nose augmentation. When the user selects a deformation, the terminal device obtains the face processing instruction corresponding to that deformation; for example, when the user selects the big-eye deformation, the terminal device receives the face processing instruction corresponding to the big-eye effect.
  • FIG. 12 and FIG. 13 are only used to illustrate the implementation of this embodiment, but are not limited thereto, and are not repeated here.
  • In this embodiment, the normal texture map to be used is determined, the offset vector of each pixel on the face mesh image is obtained through the normal texture map, and the deformed face image is finally obtained through processing. In this way, different face image processing can be performed according to the scenario and the user's needs, and face images with different deformation effects can be obtained, which expands the application scenarios of the face image processing method, meets users' diverse needs, and improves the user experience.
  • FIG. 14 is a schematic structural diagram of an embodiment of a face image processing apparatus provided by an embodiment of the present application. As shown in FIG. 14 , the face image processing apparatus 10 in this embodiment includes:
  • the acquisition module 11 is used to acquire original face image data
  • the processing module 12 is configured to: generate a face mesh image according to the original face image data; determine, according to a normal texture map, an offset vector of each pixel on the face mesh image, the normal texture map being the deformation trend and deformation strength of the face image expressed in two-dimensional color data; perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data; and render the deformed face image data to obtain a deformed face image.
  • the processing module 12 is specifically configured to: establish a mapping relationship between the face mesh image and the normal texture map; sample, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn; and determine, according to the two-dimensional color data, the offset vector of each pixel on the face mesh image.
  • the processing module 12 is specifically configured to: determine the texture coordinates of the face mesh image; establish a coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map; and establish, according to the coordinate mapping relationship, a pixel mapping relationship between the pixels of the face mesh image and the pixels of the normal texture map.
  • the processing module 12 is specifically configured to: sample the two-dimensional color data of each pixel on the normal texture map in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
  • the processing module 12 is specifically configured to: convert the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a plane rectangular coordinate system in which the value ranges of both the x-axis and the y-axis are [-1, 1]; and determine the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
  • the processing module 12 is specifically configured to: convert the two-dimensional color data into the coordinate data in the two-dimensional coordinate system according to a coordinate reference value of the two-dimensional coordinate system.
  • the processing module 12 is specifically configured to: transform the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device; and obtain the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image.
  • the processing module 12 is specifically configured to: render the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
  • the processing module 12 is further configured to: determine the normal texture map to be used.
  • the processing module 12 is specifically configured to: select, according to a face processing instruction input by the user, the normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps, the face processing instruction indicating the deformation effect required by the user; or determine the type of application currently used by the user and determine the normal texture map to be used according to the application type.
  • the processing module 12 is specifically configured to: determine face key points from the original face image data according to a face key point detection algorithm; determine auxiliary points according to the distribution of the face key points; construct triangle mesh data from the nearest face key points and auxiliary points according to the triangle stability principle; and render the original face image data to obtain a base image, and mark the base image with the triangle mesh data to obtain the face mesh image.
  • the acquisition module 11 is specifically configured to: acquire original face image data collected by a camera in real time; or acquire original face image data stored in an image application.
  • the two-dimensional color data in the normal texture map consists of the color value of the R channel and the color value of the G channel.
  • each module of the above apparatus is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • these modules can all be implemented in the form of software calling through processing elements; they can also all be implemented in hardware; some modules can also be implemented in the form of calling software through processing elements, and some modules can be implemented in hardware.
  • all or part of these modules can be integrated together, and can also be implemented independently.
  • the processing element here may be an integrated circuit with signal processing capability.
  • each step of the above-mentioned method or each of the above-mentioned modules can be completed by an integrated logic circuit of hardware in the processor element or an instruction in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (central processing unit, CPU) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state disks (SSDs)), and the like.
  • FIG. 15 is a schematic structural diagram of an embodiment of an electronic device provided by an embodiment of the application.
  • As shown in FIG. 15, the electronic device 20 in this embodiment may include: a processor 21, a memory 22, a communication interface 23, and a system bus 24. The memory 22 and the communication interface 23 are connected to the processor 21 through the system bus 24 and communicate with each other through it; the memory 22 is used to store computer-executable instructions; the communication interface 23 is used to communicate with other devices; and when the processor 21 executes the computer program, the method of any of the foregoing method embodiments is implemented.
  • The above-mentioned processor 21 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • the memory 22 may include random access memory (RAM), may also include read-only memory (ROM), and may also include non-volatile memory, such as at least one disk memory.
  • the communication interface 23 is used to realize the communication between the database access device and other devices (eg client, read-write library and read-only library).
  • the system bus 24 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the system bus can be divided into address bus, data bus, control bus and so on. For ease of presentation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program runs on a computer, the computer is caused to execute the method of any of the above method embodiments.
  • an embodiment of the present application further provides a chip for running instructions, the chip including a memory and a processor, where code and data are stored in the memory, the memory is coupled to the processor, and the processor runs the code in the memory to enable the chip to perform the method of any of the above method embodiments.
  • An embodiment of the present application further provides a program product containing instructions, the program product includes a computer program, the computer program is stored in a computer-readable storage medium, and at least one processor can read the computer program from the computer-readable storage medium, When at least one processor executes the computer program, the method of any of the above-mentioned method embodiments can be implemented.
  • the embodiment of the present application further provides a computer program, which is used to execute the method of any of the foregoing method embodiments when the computer program is executed by a processor.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide a face image processing method and apparatus, a storage medium, and an electronic device. Original face image data is acquired, a face mesh image is generated from the original face image data, the offset vector of each pixel on the face mesh image is determined according to a normal texture map, each pixel coordinate on the face mesh image is offset according to the offset vector of each pixel to obtain deformed face image data, and the deformed face image data is rendered to obtain the deformed face image. In the process of obtaining the deformation data, an offset vector is computed for every pixel and no complex operations such as square roots are involved, which improves the accuracy and smoothness of face image processing and improves the user experience.

Description

Face image processing method and apparatus, storage medium, and electronic device
This application claims priority to Chinese patent application No. 202010681269.6, filed with the Chinese Patent Office on July 15, 2020 and entitled "Face image processing method and apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
Embodiments of the present application relate to the technical field of image processing, and in particular to a face image processing method and apparatus, a storage medium, and an electronic device.
BACKGROUND
With the popularity of mobile terminals and the rapid development of image processing technology, face image processing technology that recognizes and processes the face image in a captured photo or video to achieve effects such as beautification, deformation, and face swapping has been widely applied. By installing a corresponding application on a mobile terminal, ordinary users can process captured photos or videos so that they present different effects, which greatly enriches people's daily life. In addition, in recent years the combination of face image processing technology with technologies such as virtual reality and augmented reality has also driven the rise, transformation, and development of related industries.
Face image deformation is a face image processing technique that deforms key facial features or the facial contour. In the prior art, achieving a desired deformation effect requires four steps: image template screening, image preprocessing, obtaining deformation data, and image processing. When the deformation data is obtained and the pixel offsets are calculated, the prior art applies a circular or elliptical attenuation offset centered on each key point determined during image template screening, thereby computing the offsets of the pixels in the region near each key point.
In the above prior art, the deformation processing of a face image tends to make the mobile terminal stutter, and the accuracy of the processed face image is not high. Therefore, face image deformation using the prior art suffers from a poor user experience.
SUMMARY
Embodiments of the present application provide a face image processing method and apparatus, a storage medium, and an electronic device, to solve the prior-art problem of poor user experience during the deformation processing of face images.
In a first aspect, an embodiment of the present application provides a face image processing method, including:
acquiring original face image data;
generating a face mesh image according to the original face image data;
determining, according to a normal texture map, an offset vector of each pixel on the face mesh image, the normal texture map being the deformation trend and deformation strength of the face image expressed in two-dimensional color data;
performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image, to obtain deformed face image data; and
rendering the deformed face image data to obtain a deformed face image.
Optionally, determining, according to the normal texture map, the offset vector of each pixel on the face mesh image includes:
establishing a mapping relationship between the face mesh image and the normal texture map;
sampling, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn; and
determining, according to the two-dimensional color data, the offset vector of each pixel on the face mesh image.
Optionally, establishing the mapping relationship between the face mesh image and the normal texture map includes:
determining the texture coordinates of the face mesh image;
establishing a coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map; and
establishing, according to the coordinate mapping relationship, a pixel mapping relationship between the pixels of the face mesh image and the pixels of the normal texture map.
Optionally, sampling, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn includes:
sampling the two-dimensional color data of each pixel on the normal texture map in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
Optionally, determining, according to the two-dimensional color data, the offset vector of each pixel on the face mesh image includes:
converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a plane rectangular coordinate system in which the value ranges of both the x-axis and the y-axis are [-1, 1]; and
determining the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
Optionally, converting the two-dimensional color data into coordinate data in the two-dimensional coordinate system includes:
converting the two-dimensional color data into the coordinate data in the two-dimensional coordinate system according to a coordinate reference value of the two-dimensional coordinate system.
Optionally, performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain the deformed face image data includes:
transforming the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device; and
obtaining the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image.
Optionally, rendering the deformed face image data to obtain the deformed face image includes:
rendering the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
Optionally, before determining, according to the normal texture map, the offset vector of each pixel on the face mesh image, the method further includes:
determining the normal texture map to be used.
Optionally, determining the normal texture map to be used includes:
selecting, according to a face processing instruction input by the user, the normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps, the face processing instruction indicating the deformation effect required by the user;
or,
determining the type of application currently used by the user; and
determining, according to the application type, the normal texture map to be used.
Optionally, generating the face mesh image according to the original face image data includes:
determining face key points from the original face image data according to a face key point detection algorithm;
determining auxiliary points according to the distribution of the face key points;
constructing triangle mesh data from the nearest face key points and auxiliary points according to the triangle stability principle; and
rendering the original face image data to obtain a base image, and marking the base image with the triangle mesh data to obtain the face mesh image.
Optionally, acquiring the original face image data includes:
acquiring original face image data collected by a camera in real time;
or,
acquiring original face image data stored in an image application.
Optionally, the two-dimensional color data in the normal texture map consists of the color value of the R channel and the color value of the G channel.
In a second aspect, an embodiment of the present application provides a face image processing apparatus, including:
an acquisition module, configured to acquire original face image data; and
a processing module, configured to: generate a face mesh image according to the original face image data; determine, according to a normal texture map, an offset vector of each pixel on the face mesh image, the normal texture map being the deformation trend and deformation strength of the face image expressed in two-dimensional color data; perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data; and render the deformed face image data to obtain a deformed face image.
Optionally, the processing module is specifically configured to:
establish a mapping relationship between the face mesh image and the normal texture map;
sample, according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map in turn; and
determine, according to the two-dimensional color data, the offset vector of each pixel on the face mesh image.
Optionally, the processing module is specifically configured to:
determine the texture coordinates of the face mesh image;
establish a coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map; and
establish, according to the coordinate mapping relationship, a pixel mapping relationship between the pixels of the face mesh image and the pixels of the normal texture map.
Optionally, the processing module is specifically configured to:
sample the two-dimensional color data of each pixel on the normal texture map in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
Optionally, the processing module is specifically configured to:
convert the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a plane rectangular coordinate system in which the value ranges of both the x-axis and the y-axis are [-1, 1]; and
determine the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
Optionally, the processing module is specifically configured to:
convert the two-dimensional color data into the coordinate data in the two-dimensional coordinate system according to a coordinate reference value of the two-dimensional coordinate system.
Optionally, the processing module is specifically configured to:
transform the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device; and
obtain the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinates of the corresponding pixel on the face mesh image.
Optionally, the processing module is specifically configured to:
render the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
Optionally, the processing module is further configured to:
determine the normal texture map to be used.
Optionally, the processing module is specifically configured to:
select, according to a face processing instruction input by the user, the normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps, the face processing instruction indicating the deformation effect required by the user;
or,
determine the type of application currently used by the user, and determine, according to the application type, the normal texture map to be used.
Optionally, the processing module is specifically configured to:
determine face key points from the original face image data according to a face key point detection algorithm;
determine auxiliary points according to the distribution of the face key points;
construct triangle mesh data from the nearest face key points and auxiliary points according to the triangle stability principle; and
render the original face image data to obtain a base image, and mark the base image with the triangle mesh data to obtain the face mesh image.
Optionally, the acquisition module is specifically configured to:
acquire original face image data collected by a camera in real time;
or,
acquire original face image data stored in an image application.
Optionally, the two-dimensional color data in the normal texture map consists of the color value of the R channel and the color value of the G channel.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium for storing a computer program, the computer program being used to implement the face image processing method described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor; the memory is used to store a computer program, and the processor executes the computer program to implement the face image processing method described above.
In a fifth aspect, an embodiment of the present application provides a chip for running instructions, the chip including a memory and a processor, where code and data are stored in the memory, the memory is coupled to the processor, and the processor runs the code in the memory to enable the chip to perform the face image processing method described above.
In a sixth aspect, an embodiment of the present application provides a program product containing instructions which, when the program product runs on a computer, cause the computer to execute the face image processing method described above.
In a seventh aspect, an embodiment of the present application provides a computer program which, when executed by a processor, is used to perform the face image processing method described above.
The face image processing method and apparatus, storage medium, and electronic device provided by the embodiments of the present application generate a face mesh image based on the original face image data, store the normal of each pixel in a normal texture map, determine the offset vector of each pixel on the face mesh image according to the normal texture map, perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel to obtain deformed face image data, and render the deformed face image data to obtain the deformed face image. Because the normal texture map stores the normal of each pixel, that is, its deformation trend and deformation strength, in the form of two-dimensional color data, the offset vector of each pixel can be computed simply by reading the two-dimensional color data of the normal texture map. In this process, an offset vector is calculated for every pixel and no complex operations such as square roots are involved, which improves the accuracy and smoothness of face image processing and helps improve the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions of the present application or the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of Embodiment 1 of the face image processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the distribution of face key points and auxiliary points provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a face mesh image provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a normal texture map provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a processing flow of a face image processing method provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of Embodiment 2 of the face image processing method provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the mapping relationship between the face mesh image and the normal texture map provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of Embodiment 3 of the face image processing method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an operation for determining the normal texture map to be used provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of the operation of a terminal device provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an operation interface for face image processing provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of another operation interface for face image processing provided by an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an embodiment of the face image processing apparatus provided by an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an embodiment of the electronic device provided by an embodiment of the present application.
DETAILED DESCRIPTION
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The present application provides a technical solution for face image processing, which processes a video or photo according to the user's requirements so as to achieve an expected or specific face deformation effect, to adapt to different application scenarios or enrich the user's shooting experience. For example, virtual makeup try-on is a recently emerging online sales aid. During a virtual try-on, face image deformation technology is used to achieve facial deformation effects such as enlarged eyes or a slimmer face, and the user's face with the makeup applied is presented to the user; the user can thus experience the effects of different makeup products without being on site, which makes shopping more convenient.
The embodiments of the present application take a terminal device as an example to describe the technical solutions of the present application. It can be understood that the technical solutions of the present application can also be applied to electronic devices with image processing functions, such as computers, servers, or cameras.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application, taking the big-eye and thin-face deformation effects as an example. As shown in FIG. 1, the left picture is the original face image obtained by the terminal device; after the terminal device processes the original face image, the deformed face image shown in the right picture of FIG. 1 is obtained.
The main idea of the technical solution of the present application is as follows. The prior-art face image deformation solution includes four steps: image template screening, image preprocessing, obtaining deformation data, and image processing, where obtaining the deformation data means calculating the pixel offsets. In the prior art, the pixel offsets are calculated by applying a circular or elliptical attenuation offset centered on each key point, so that offsets are computed only for pixels in the region near each key point. On the one hand, because the offset of every individual pixel cannot be calculated precisely, the accuracy of face image processing is low; on the other hand, the circular or elliptical attenuation centered on the key points involves complex operations such as square roots, which imposes a high overhead on the graphics processing unit (GPU), so the mobile terminal is prone to delays, stutters, and similar problems during deformation processing. Therefore, the prior art suffers from a poor user experience when deforming face images.
Based on the above technical problems in the prior art, the technical solution of the present application relies on a pre-built normal texture map corresponding to a specific deformation effect. When computing pixel offsets, a mapping relationship is established between the normal texture map and the face image to be processed, and the color value of the corresponding pixel in the normal texture map is sampled to determine the offset of each pixel on the face image to be processed. In this process, an offset vector is calculated for every pixel, and no complex operations such as square roots are involved, which not only improves the accuracy of the deformation processing but also reduces GPU consumption, helping to improve the user experience.
FIG. 2 is a schematic flowchart of Embodiment 1 of the face image processing method provided by an embodiment of the present application. The execution body of this embodiment is a terminal device, such as a mobile phone or a tablet computer. As shown in FIG. 2, the face image processing method of this embodiment includes:
S101: Acquire original face image data.
In this step, in order to obtain a deformed face image, the original face image data corresponding to the face image before deformation must first be acquired as the basis for the image processing in subsequent steps. Specifically, the terminal device can recognize a face in a picture or video and acquire the corresponding face image data; the picture or video may be captured by the camera in real time or stored on the terminal device.
For example, when the camera of the terminal device is aimed at a face to take a picture, the original face image data is acquired.
For example, while the terminal device is recording a video, one frame of the video is grabbed to obtain the original face image data.
For example, the terminal device reads a photo or video stored in an image application, such as an album, to obtain the original face image data.
S102: Generate a face mesh image according to the original face image data.
In this step, the face mesh image is obtained by processing the original face image data acquired in S101. Specifically, a face key point detection algorithm first identifies representative points of the facial contour and key facial features in the original face image as the face key points, and auxiliary points are determined according to the needs of constructing the face mesh and the distribution of the face key points. FIG. 3 is a schematic diagram of the distribution of face key points and auxiliary points provided by an embodiment of the present application. As shown in FIG. 3, after 99 face key points P1-P99 are identified by the face key point detection algorithm, 13 auxiliary points S1-S13 are determined according to the distribution of those 99 key points.
The face key point detection algorithm is an algorithm that automatically identifies representative regions of a face; it can identify the key points of regions such as the eyes, nose, mouth, eyebrows, and facial contour. The embodiments of the present application can adopt any face key point detection algorithm, such as active shape models (ASM), active appearance models (AAM), or cascaded pose regression (CPR), to obtain the face key points.
Auxiliary points are points that assist in positioning. An auxiliary point may or may not lie on the original face image, and the auxiliary points can be determined by a preset algorithm according to the distribution of the face key points; their purpose is to allow more stable triangles to be constructed.
The triangle stability principle means that, to make the constructed triangles more stable, the construction should ensure acute triangles as far as possible and avoid obtuse triangles. In the embodiments of the present application, when constructing the triangle mesh data, the nearest face key points and auxiliary points should be used as vertices while keeping the constructed triangles as stable as possible, which helps improve image quality.
Thereafter, triangle mesh data is constructed according to the triangle stability principle with the nearest face key points and auxiliary points as vertices, the original face image data acquired in S101 is rendered as a base image in an ordinary rendering manner, and the triangle mesh data is marked on the base image, finally yielding the face mesh image. For example, FIG. 4 is a schematic diagram of a face mesh image provided by an embodiment of the present application. As shown in FIG. 4, the face mesh image contains a series of triangles. The vertices of a triangle may all be auxiliary points, such as triangle A, composed of auxiliary points S5, S7, and S11; they may all be face key points, such as triangle B, composed of face key points P1, P59, and P68; or they may be a combination of face key points and auxiliary points, such as triangle C, composed of face key points P5 and P69 and auxiliary point S9. No strict limitation is imposed here, as long as the triangle stability principle and the nearest-point principle are satisfied.
S103、根据法线纹理贴图,确定人脸网格图像上每个像素点的偏移向量。
本步骤中,为达到最终的人脸变形目的,本申请实施例中采用法线纹理贴图的方式,在获取到人脸网格图像后,终端设备获取相应的法线纹理贴图,并根据法线纹理贴图,确定人脸网格图像上每个像素点的偏移向量,即通过读取法线纹理贴图上的二维颜色数据的方式,确定人脸网格图像上每个像素点的偏移向量,该过程中不涉及复杂的乘除运算或者开方运算等,因此,对GPU的开销小,能给用户带来流畅的使用体验。
其中,偏移向量包括偏移方向和偏移大小,用于确定对应像素点该往哪个方向偏移,以及往该方向偏移的偏移量的大小。
由于本申请实例的技术方案用于对二维人脸图像进行变形处理,因此,在法线纹理贴图中采用二维颜色数据存储法线,每一个二维颜色数据表示表 示对应像素点的变形趋势和变形力度,相应地,法线纹理贴图中的二维颜色数据可以是以由RGB中任意两个颜色通道的颜色数据组成的,此处不作限定。
示例性地,图5为本申请实施例提供的一种法线纹理贴图的示意图,如图5所示,在该法线纹理图中的二维颜色数据由R通道的颜色值和G通道的颜色值组成。
S104、根据人脸网格图像上每个像素点的偏移向量对人脸网格图像上的各个像素坐标进行偏移处理,得到变形后的人脸图像数据。
本步骤中,为最终得到变形后的人脸图像,需要先获取用于图像渲染的变形后的人脸图像数据,具体地,根据S103中获取的人脸网格图像上每个像素点的偏移向量对人脸网格图像上的各个像素坐标进行偏移处理,得到变形后的人脸图像数据。
在一种可能的实现方式,获取终端设备的屏幕分辨率,并基于终端设备的屏幕分辨率得到与显示屏的实际情况相匹配的变形后的人脸图像数据,具体地,将人脸网格图像上每个像素点的偏移向量变换为与终端设备的屏幕分辨率对应的目标偏移向量,根据人脸网格图像上每个像素点的目标偏移向量与人脸网格图像上对应像素点的像素坐标,得到变形后的人脸图像数据。
其中,目标偏移向量是指根据终端设备的屏幕分辨率对人脸网格图像上每个像素的偏移向量进行变换处理后得到的向量。
由于变形后的人脸图像最终需要通过终端设备呈现给用户,为此变形后的人脸图像数据必须与终端设备的分辨率相匹配,才能保证最终得到的变形后的人脸图像通过终端设备呈现时不会出现比例失调等问题,而不同用户使用的终端设备的分辨率可能不同,本实现方式中,通过将人脸网格图像上每个像素点的偏移向量变换为与终端设备的屏幕分辨率对应的目标偏移向量,根据人脸网格图像上每个像素点的目标偏移向量与人脸网格图像上对应像素点的像素坐标,最终得到与终端设备匹配的变形后的人脸图像数据,为后续渲染过程提供了可靠的数据支持,提高了人脸图像处理的可靠性,提高了用户体验。
S105、对变形后的人脸图像数据进行渲染,得到变形后的人脸图像。
本步骤中,通过对S104得到的变形后的人脸图像数据进行渲染,得到变形后的人脸图像。可以理解的是,为提高渲染效果,在对变形后的人脸图像数据进行渲染时,可以根据S102中得到的人脸网格数据对变形后的人脸图像 数据进行渲染,得到变形后的人脸图像。
The implementation of this embodiment is illustrated below with a specific example:
FIG. 6 is a schematic diagram of a processing flow of the face image processing method provided by an embodiment of this application, taking the eye-enlarging and face-slimming effects as an example. As shown in FIG. 6, the face key points and auxiliary points in diagram (a) are determined from the original face image data; a triangle network is constructed with the key points and auxiliary points of diagram (a) as vertices, yielding the face mesh image in diagram (b); the offset vector of each pixel of the face mesh image in diagram (b) is then determined through the normal texture map in diagram (c), which corresponds to the eye-enlarging and face-slimming effects; finally, each pixel of the face mesh image is offset according to the obtained offset vectors to yield the deformed face image data, which is rendered according to the constructed triangle mesh data to obtain a face image with enlarged eyes and a slimmed face.
In this embodiment, a face mesh image is generated on the basis of the original face image data; a normal texture map stores the normal of each pixel, the offset vector of each pixel on the face mesh image is determined according to the normal texture map, each pixel coordinate on the face mesh image is offset according to its offset vector to obtain the deformed face image data, and the deformed face image data is rendered to obtain the deformed face image. Since the normal texture map stores the normal of every pixel, i.e., its deformation tendency and deformation strength, as two-dimensional color data, and the offset vector of every pixel can be computed simply by reading that data, an offset vector is computed for every pixel without complex operations such as square roots. This improves both the precision and the smoothness of the face image processing, which helps improve the user experience.
FIG. 7 is a schematic flowchart of Embodiment 2 of the face image processing method provided by an embodiment of this application. On the basis of Embodiment 1 above, in this embodiment, determining the offset vector of each pixel on the face mesh image according to the normal texture map includes:
S201: Establish a mapping relationship between the face mesh image and the normal texture map.
In this step, since the normal texture map and the face mesh image are inherently independent of each other, a mapping relationship between them must first be established so that the normal texture map "fits" onto the face mesh image; only then can the offset vector of each pixel on the face mesh image be determined from the normal texture map.
Exemplarily, FIG. 8 is a schematic diagram of the mapping relationship between the face mesh image and the normal texture map provided by an embodiment of this application. As shown in FIG. 8, the mapping relationship may be established through the following process (a minimal coordinate-mapping sketch follows the list):
(1) Define the texture coordinate of the bottom-left vertex as (0,0), that of the top-left vertex as (0,1), that of the bottom-right vertex as (1,0), and that of the top-right vertex as (1,1), and determine the texture coordinates of the remaining pixels of the face mesh image in turn from the texture coordinates of these four corner vertices.
(2) Establish a coordinate mapping relationship between the texture coordinates of the four corner vertices of the face mesh image and the texture coordinates of the four corners of the normal texture map, and from this determine the coordinate mapping relationship between the texture coordinates of the other vertices of the face mesh image and the texture coordinates of the normal texture map.
(3) According to the coordinate mapping relationship between the texture coordinates of the vertices of the face mesh image and the texture coordinates of the normal texture map, determine the pixel mapping relationship between every pixel of the face mesh image and every pixel of the normal texture map.
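A minimal sketch of step (1), under the assumption that the face mesh image is stored as a conventional raster with row 0 at the top, so the v coordinate must be flipped to satisfy the bottom-left-origin convention defined above:

```python
def pixel_to_texcoord(x, y, width, height):
    """Map pixel (x, y) (row 0 at the top of the image) to the texture
    coordinate system defined above: (0,0) bottom-left, (1,1) top-right."""
    u = x / (width - 1)
    v = 1.0 - y / (height - 1)
    return u, v

# Example: the four corners of a 512x512 image.
pixel_to_texcoord(0, 511, 512, 512)   # -> (0.0, 0.0), bottom-left
pixel_to_texcoord(511, 0, 512, 512)   # -> (1.0, 1.0), top-right
```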
S202: Sample the two-dimensional color data of each pixel on the normal texture map in sequence according to the mapping relationship.
In this step, with the normal texture map "fitted" onto the face mesh image, the two-dimensional color data of each pixel on the normal texture map is sampled in sequence.
In one possible implementation, the two-dimensional color data of each pixel on the normal texture map is sampled in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
The preset sampling order starts from a certain pixel and visits all pixels in turn; for example, it may start from the vertex (0,0) and proceed from left to right and from bottom to top.
Exemplarily, still referring to FIG. 8, Q(u,v) in the normal texture map is the two-dimensional color datum corresponding to P(0.5, 0.69), where u and v are the color values of the pixel's corresponding color channels.
S203: Determine the offset vector of each pixel on the face mesh image according to the two-dimensional color data.
In this step, since the two-dimensional color data in the normal texture map represents the deformation tendency and deformation strength of the corresponding pixel of the face image, the offset vector of the corresponding pixel of the face mesh image can be determined from the sampled two-dimensional color datum of a pixel on the normal texture map; by sampling the two-dimensional color data of every pixel on the normal texture map in turn, the offset vector of every pixel on the face mesh image is determined.
In one possible implementation, the two-dimensional color data in the normal texture map consists of the color values of the R channel and the G channel, where the R value represents the offset of the corresponding pixel of the face image along the x-axis and the G value represents its offset along the y-axis, with RG (0.5, 0.5) as the baseline. When RG is (0.5, 0.5), the corresponding pixel is not offset at all; when the R value is less than 0.5, the pixel coordinate of the corresponding pixel on the face mesh image is shifted left, and when the R value is greater than 0.5 it is shifted right; likewise, when the G value is less than 0.5 the pixel is shifted down, and when the G value is greater than 0.5 it is shifted up. To achieve the face-slimming deformation effect, the normal texture map only needs to set the R value of the two-dimensional color data on the left cheek to be greater than 0.5, set the R value on the right cheek to be less than 0.5, and set the two-dimensional color data everywhere else to (0.5, 0.5). To achieve the eye-enlarging deformation effect, the normal texture map only needs to set the R value at the left corner of each eye to be less than 0.5, set the R value at the right corner of each eye to be greater than 0.5, and set the two-dimensional color data everywhere else to (0.5, 0.5).
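To make this concrete, the following Python/NumPy sketch authors a toy face-slimming normal texture map along these lines; the cheek rectangles and offset strengths are illustrative placeholders, not coordinates taken from the embodiment:

```python
import numpy as np

def make_thin_face_normal_map(size=512):
    """Toy normal texture map for a face-slimming effect: neutral
    RG = (0.5, 0.5) everywhere, R > 0.5 on the left cheek (shift right)
    and R < 0.5 on the right cheek (shift left).

    Returns a float32 array of shape (size, size, 2) in [0, 1],
    channel 0 = R, channel 1 = G.
    """
    nm = np.full((size, size, 2), 0.5, dtype=np.float32)
    nm[280:400, 60:160, 0] = 0.7    # left cheek: R > 0.5 -> shift right
    nm[280:400, 352:452, 0] = 0.3   # right cheek: R < 0.5 -> shift left
    return nm
```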
In this implementation, since color data in an image cannot take values below 0, for ease of use the sampled two-dimensional color data is processed further so that the offset direction and offset magnitude can be expressed more intuitively as coordinate data. Optionally, determining the offset vector of each pixel on the face mesh image according to the two-dimensional color data includes:
converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a planar rectangular coordinate system in which both the x-axis and the y-axis range over [-1, 1], and determining the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
Optionally, the two-dimensional color data is converted into the coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
Specifically, the two-dimensional color data is converted into the coordinate data in the two-dimensional coordinate system according to the formula V_offset = C_offset × 2.0 - vec(1.0, 1.0);
where C_offset denotes the two-dimensional color data on the normal texture map, V_offset denotes the converted coordinate data in the two-dimensional coordinate system, and vec(1.0, 1.0) denotes the coordinate reference value of the two-dimensional coordinate system.
In this implementation, the above formula converts the two-dimensional color data into coordinate data in a planar rectangular coordinate system whose x-axis and y-axis both range from -1 to 1. For example, when the sampled C_offset is (0.5, 0.5), V_offset is computed to be (0, 0), indicating no offset along either the x-axis or the y-axis; when the sampled C_offset is (0.4, 0.8), V_offset is accordingly (-0.2, 0.6), indicating an offset of 0.2 pixels to the left along the x-axis and 0.6 pixels upward along the y-axis. Thus, by obtaining the coordinate datum corresponding to each two-dimensional color datum in turn, and using the x-axis and y-axis values of that coordinate datum, the offset vector of each pixel on the face mesh image can be determined conveniently and intuitively.
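The conversion is a one-liner in practice; a minimal sketch reproducing the two worked examples above:

```python
def decode_offset(c_offset):
    """V_offset = C_offset * 2.0 - (1.0, 1.0): map color data in [0,1]^2
    to a signed offset vector in [-1,1]^2."""
    return (c_offset[0] * 2.0 - 1.0, c_offset[1] * 2.0 - 1.0)

decode_offset((0.5, 0.5))  # -> (0.0, 0.0): no offset on either axis
decode_offset((0.4, 0.8))  # -> (-0.2, 0.6): 0.2 left, 0.6 up
```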
In this embodiment, a mapping relationship between the face mesh image and the normal texture map is established; according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map is sampled in turn, and the offset vector of each pixel on the face mesh image is determined according to the two-dimensional color data. Establishing the mapping makes the face mesh image and the normal texture map fit each other, which improves the accuracy of the determined offset vectors; furthermore, converting the two-dimensional color data into signed coordinate data in the two-dimensional coordinate system and determining the offset vectors from that coordinate data simplifies the computation, reduces GPU consumption, helps improve the smoothness of the image processing, and improves the user experience.
Optionally, obtaining the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinate of the corresponding pixel on the face mesh image includes:
obtaining the deformed face image data according to the formula T_Target = V_pos + V_offset / V_pixSize;
where V_pos denotes the coordinate data of the corresponding pixel on the face mesh image, V_pixSize denotes the screen resolution, and T_Target denotes the deformed face image data of the corresponding pixel; the term V_offset / V_pixSize in the formula is thus the target offset vector.
In one possible implementation of this embodiment, rendering the deformed face image data to obtain the deformed face image includes: rendering the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
Specifically, the deformed face image data is rendered according to the formula C_fragColor = texture2D(T_Camera, T_Target) to obtain the deformed face image;
where texture2D denotes the two-dimensional texture processing function, T_Camera denotes the original face image data of the corresponding pixel, and C_fragColor denotes the color output for the fragment (T_Camera, T_Target).
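Combining the two formulas above, the following CPU-side Python sketch approximates the fragment-shader computation with OpenCV's remap, which performs the bilinear resampling that texture2D would perform on the GPU; treating V_pixSize as the image size rather than the physical screen resolution, and the downward y-flip between texture space and image rows, are simplifying assumptions of this sketch:

```python
import cv2
import numpy as np

def warp_face(src_bgr, normal_map_rg):
    """Offset every pixel by the vector decoded from the normal texture
    map and resample the source image there; a CPU analogue of
    C_fragColor = texture2D(T_Camera, T_Target).

    normal_map_rg: float32 array in [0, 1], channels (R, G), e.g. the
    output of make_thin_face_normal_map() above.
    """
    h, w = src_bgr.shape[:2]
    nm = cv2.resize(normal_map_rg, (w, h), interpolation=cv2.INTER_LINEAR)
    off_x = nm[..., 0] * 2.0 - 1.0   # R channel -> x offset, in pixels
    off_y = nm[..., 1] * 2.0 - 1.0   # G channel -> y offset, in pixels
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # T_Target = V_pos + V_offset / V_pixSize, expressed here in pixel
    # units; G grows upward in texture space while image rows grow
    # downward, hence the minus sign on off_y.
    map_x = xs + off_x
    map_y = ys - off_y
    return cv2.remap(src_bgr, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```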
FIG. 9 is a schematic flowchart of Embodiment 3 of the face image processing method provided by an embodiment of this application, for a terminal device that can provide multiple normal texture maps. As shown in FIG. 9, on the basis of Embodiments 1 and 2 above, the specific implementation of this embodiment is as follows:
On the one hand, the terminal device acquires face image data, detects face key points and auxiliary points, constructs face mesh data according to the face key points and auxiliary points, and generates the face mesh image.
On the other hand, the terminal device determines the normal texture map to be used.
It should be understood that determining the normal texture map to be used in this embodiment may be performed before determining the offset vector of each pixel on the face mesh image according to the normal texture map.
In one possible implementation, the type of application currently used by the user is determined, and the normal texture map to be used is determined according to that application type.
On the basis of the above two aspects, i.e., the generated face mesh image and the determined normal texture map, the terminal device establishes the mapping relationship between the face mesh image and the normal texture map, determines the offset vector of each pixel on the face mesh image as well as the offset vector of each pixel on the screen, and finally processes and obtains the deformed face image.
The specific implementation of this embodiment of the application is explained as follows:
A terminal device may have different application programs installed, each providing its own distinctive face deformation functions. FIG. 10 is a schematic diagram of an operation for determining the normal texture map to be used, provided by an embodiment of this application. As shown in FIG. 10, when the user selects a face image processing application through the relevant operations, e.g., one of a beauty camera, a photo editor, or a virtual makeup try-on, the terminal device identifies the type of application currently used by the user and determines the normal texture map to be used according to the identified application type. For example, when the user's operation indicates that the selected application type is virtual makeup try-on, the terminal device determines that the normal texture map to be used is the normal texture map associated with virtual makeup try-on.
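A minimal sketch of this application-type lookup, assuming a hypothetical registry of preset map files (the application-type keys and file paths are illustrative only):

```python
# Hypothetical registry mapping application types to preset normal maps.
NORMAL_MAP_BY_APP_TYPE = {
    "virtual_makeup": "normal_maps/big_eye_thin_face.png",
    "beauty_camera":  "normal_maps/smooth_slim.png",
}

def select_normal_map(app_type):
    """Pick the normal texture map for the detected application type,
    falling back to a neutral (no-deformation) map."""
    return NORMAL_MAP_BY_APP_TYPE.get(app_type, "normal_maps/neutral.png")
```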
In another possible implementation, a face processing instruction input by the user is received, and a normal texture map corresponding to the desired deformation effect is selected from multiple preset normal texture maps according to the face processing instruction, where the face processing instruction indicates the deformation effect required by the user.
Exemplarily, FIG. 11 is a schematic diagram of operating the terminal device provided by an embodiment of this application. After the user enters a specific application by tapping and follows the corresponding operation guide, as shown in the left diagram of FIG. 11, the terminal device may obtain the source image according to the user's operation (e.g., taking a photo or video, or selecting a photo or video from the album); referring to the right diagram of FIG. 11, the terminal device then receives the face image processing instruction input by the user and selects the normal texture map corresponding to the deformation effect from the multiple preset normal texture maps. Different applications may obtain the face image processing instruction in different ways.
Exemplarily, FIG. 12 is a schematic diagram of an operation interface for face image processing provided by an embodiment of this application. As shown in FIG. 12, if the face image processing application used is virtual makeup try-on, the interface displays a list of beauty products, e.g., eyeliner, eyebrow pencil, blush, foundation. When the user selects a beauty product, the terminal device obtains the face processing instruction corresponding to that product; for example, when the user selects the eyeliner, the terminal device receives the face processing instruction corresponding to eye enlarging.
Exemplarily, FIG. 13 is a schematic diagram of another operation interface for face image processing provided by an embodiment of this application. As shown in FIG. 13, if the face image processing application used is a photo editing application, the interface displays a list of deformation names, e.g., enlarge eyes, slim eyes, widen forehead, heighten nose. When the user selects a deformation, the terminal device obtains the face processing instruction corresponding to that deformation; for example, when the user selects eye enlarging, the terminal device receives the face processing instruction corresponding to eye enlarging.
FIG. 12 and FIG. 13 merely illustrate implementations of this embodiment and are not limiting; details are not repeated here.
In this embodiment, before the offset vector of each pixel on the face mesh image is determined according to the normal texture map, the normal texture map to be used is determined; the offset vector of each pixel on the face mesh image is then obtained through that normal texture map, and the deformed face image is finally obtained. Different face image processing can thus be performed according to the scenario and the user's needs, producing face images with different deformation effects; this broadens the application scenarios of the face image processing method, satisfies users' diverse needs, and improves the user experience.
FIG. 14 is a schematic structural diagram of an embodiment of the face image processing apparatus provided by an embodiment of this application. As shown in FIG. 14, the face image processing apparatus 10 of this embodiment includes:
an acquisition module 11 and a processing module 12.
The acquisition module 11 is configured to acquire original face image data;
the processing module 12 is configured to: generate a face mesh image according to the original face image data; determine the offset vector of each pixel on the face mesh image according to a normal texture map, the normal texture map representing, as two-dimensional color data, the deformation tendency and deformation strength of the face image; perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data; and render the deformed face image data to obtain the deformed face image.
Optionally, the processing module 12 is specifically configured to:
establish a mapping relationship between the face mesh image and the normal texture map;
sample the two-dimensional color data of each pixel on the normal texture map in sequence according to the mapping relationship;
determine the offset vector of each pixel on the face mesh image according to the two-dimensional color data.
Optionally, the processing module 12 is specifically configured to:
determine texture coordinates of the face mesh image;
establish a coordinate mapping relationship between the texture coordinates of the face mesh image and texture coordinates of the normal texture map;
establish, according to the coordinate mapping relationship, a pixel mapping relationship between the pixels of the face mesh image and the pixels of the normal texture map.
Optionally, the processing module 12 is specifically configured to:
sample the two-dimensional color data of each pixel on the normal texture map in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map.
Optionally, the processing module 12 is specifically configured to:
convert the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a planar rectangular coordinate system in which both the x-axis and the y-axis range over [-1, 1];
determine the offset vector of each pixel on the face mesh image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
Optionally, the processing module 12 is specifically configured to:
convert the two-dimensional color data into the coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
Optionally, the processing module 12 is specifically configured to:
transform the offset vector of each pixel on the face mesh image into a target offset vector corresponding to the screen resolution of the terminal device;
obtain the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinate of the corresponding pixel on the face mesh image.
Optionally, the processing module 12 is specifically configured to:
render the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
Optionally, the processing module 12 is further configured to:
determine the normal texture map to be used.
Optionally, the processing module 12 is specifically configured to:
select, according to a face processing instruction input by the user, a normal texture map corresponding to the deformation effect from multiple preset normal texture maps, the face processing instruction indicating the deformation effect required by the user;
or,
determine the type of application currently used by the user, and determine the normal texture map to be used according to the application type.
Optionally, the processing module 12 is specifically configured to:
determine face key points from the original face image data according to a face key point detection algorithm;
determine auxiliary points according to the distribution of the face key points;
construct triangle mesh data with the nearest face key points and auxiliary points according to the triangle stability principle;
render the original face image data to obtain a base image, and mark the base image with the triangle mesh data to obtain the face mesh image.
Optionally, the acquisition module 11 is specifically configured to:
acquire original face image data collected by a camera in real time;
or,
acquire original face image data stored in an image application.
Optionally, the two-dimensional color data in the normal texture map consists of the color value of the R channel and the color value of the G channel.
The implementation principles and technical effects of this embodiment are similar to those of the method embodiments; for details, refer to the method embodiments, which are not repeated here.
It should be noted that the division of the modules of the above apparatus is merely a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity, or they may be physically separate. These modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. In addition, all or some of these modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capability. In the implementation process, the steps of the above method or the above modules may be completed by integrated logic circuits of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above method, e.g., one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA). As another example, when one of the above modules is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
The above embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)).
FIG. 15 is a schematic structural diagram of an embodiment of the electronic device provided by an embodiment of this application. As shown in FIG. 15, the electronic device 20 of this embodiment may include: a processor 21, a memory 22, a communication interface 23, and a system bus 24. The memory 22 and the communication interface 23 are connected to the processor 21 through the system bus 24 and communicate with each other; the memory 22 is configured to store computer-executable instructions, the communication interface 23 is configured to communicate with other devices, and the processor 21 implements the solution of any of the above method embodiments when executing the computer program.
In FIG. 15, the processor 21 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 22 may include random access memory (RAM), may also include read-only memory (ROM), and may further include non-volatile memory, e.g., at least one disk memory.
The communication interface 23 is used to implement communication between the database access apparatus and other devices (e.g., a client, a read-write library, and a read-only library).
The system bus 24 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean there is only one bus or one type of bus.
Optionally, an embodiment of this application further provides a computer-readable storage medium storing a computer program; when the computer program runs on a computer, the computer is caused to perform the method of any of the above method embodiments.
Optionally, an embodiment of this application further provides a chip for executing instructions. The chip includes a memory and a processor; the memory stores code and data, the memory is coupled to the processor, and the processor runs the code in the memory so that the chip performs the method of any of the above method embodiments.
An embodiment of this application further provides a program product containing instructions. The program product includes a computer program stored in a computer-readable storage medium; at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the method of any of the above method embodiments can be implemented.
An embodiment of this application further provides a computer program which, when executed by a processor, is used to perform the method of any of the above method embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of this application, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or some or all of the technical features thereof may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (19)

  1. A face image processing method, comprising:
    acquiring original face image data;
    generating a face mesh image according to the original face image data;
    determining an offset vector of each pixel on the face mesh image according to a normal texture map, the normal texture map representing, as two-dimensional color data, a deformation tendency and a deformation strength of the face image;
    performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data;
    rendering the deformed face image data to obtain a deformed face image.
  2. The method according to claim 1, wherein determining the offset vector of each pixel on the face mesh image according to the normal texture map comprises:
    establishing a mapping relationship between the face mesh image and the normal texture map;
    sampling, in sequence according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map;
    determining the offset vector of each pixel on the face mesh image according to the two-dimensional color data.
  3. The method according to claim 2, wherein establishing the mapping relationship between the face mesh image and the normal texture map comprises:
    determining texture coordinates of the face mesh image;
    establishing a coordinate mapping relationship between the texture coordinates of the face mesh image and texture coordinates of the normal texture map;
    establishing, according to the coordinate mapping relationship, a pixel mapping relationship between pixels of the face mesh image and pixels of the normal texture map.
  4. The method according to claim 3, wherein sampling, in sequence according to the mapping relationship, the two-dimensional color data of each pixel on the normal texture map comprises:
    sampling, in a preset sampling order according to the coordinate mapping relationship between the texture coordinates of the face mesh image and the texture coordinates of the normal texture map, the two-dimensional color data of each pixel on the normal texture map.
  5. The method according to claim 2, wherein determining the offset vector of each pixel on the face mesh image according to the two-dimensional color data comprises:
    converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, the two-dimensional coordinate system being a planar rectangular coordinate system in which both the x-axis and the y-axis range over [-1, 1];
    determining the offset vector of each pixel on the face mesh image according to an x-axis coordinate value and a y-axis coordinate value of the coordinate data.
  6. The method according to claim 5, wherein converting the two-dimensional color data into the coordinate data in the two-dimensional coordinate system comprises:
    converting the two-dimensional color data into the coordinate data in the two-dimensional coordinate system according to a coordinate reference value of the two-dimensional coordinate system.
  7. The method according to any one of claims 1 to 6, wherein performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain the deformed face image data comprises:
    transforming the offset vector of each pixel on the face mesh image into a target offset vector corresponding to a screen resolution of a terminal device;
    obtaining the deformed face image data according to the target offset vector of each pixel on the face mesh image and the pixel coordinate of the corresponding pixel on the face mesh image.
  8. The method according to claim 7, wherein rendering the deformed face image data to obtain the deformed face image comprises:
    rendering the deformed face image data through a two-dimensional texture processing function to obtain the deformed face image.
  9. The method according to any one of claims 1 to 6, wherein, before determining the offset vector of each pixel on the face mesh image according to the normal texture map, the method further comprises:
    determining the normal texture map to be used.
  10. The method according to claim 9, wherein determining the normal texture map to be used comprises:
    selecting, according to a face processing instruction input by a user, a normal texture map corresponding to a deformation effect from multiple preset normal texture maps, the face processing instruction indicating the deformation effect required by the user;
    or,
    determining a type of application currently used by the user, and determining the normal texture map to be used according to the application type.
  11. The method according to any one of claims 1 to 6, wherein generating the face mesh image according to the original face image data comprises:
    determining face key points from the original face image data according to a face key point detection algorithm;
    determining auxiliary points according to the distribution of the face key points;
    constructing triangle mesh data with the nearest face key points and auxiliary points according to a triangle stability principle;
    rendering the original face image data to obtain a base image, and marking the base image with the triangle mesh data to obtain the face mesh image.
  12. The method according to any one of claims 1 to 6, wherein acquiring the original face image data comprises:
    acquiring original face image data collected by a camera in real time;
    or,
    acquiring original face image data stored in an image application.
  13. The method according to any one of claims 1 to 6, wherein the two-dimensional color data in the normal texture map consists of a color value of an R channel and a color value of a G channel.
  14. A face image processing apparatus, comprising:
    an acquisition module, configured to acquire original face image data;
    a processing module, configured to: generate a face mesh image according to the original face image data; determine an offset vector of each pixel on the face mesh image according to a normal texture map, the normal texture map representing, as two-dimensional color data, a deformation tendency and a deformation strength of the face image; perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel on the face mesh image to obtain deformed face image data; and render the deformed face image data to obtain a deformed face image.
  15. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store a computer program, and the computer program is configured to implement the face image processing method according to any one of claims 1 to 13.
  16. An electronic device, comprising: a memory and a processor; wherein the memory is configured to store a computer program, and the processor executes the computer program to implement the face image processing method according to any one of claims 1 to 13.
  17. A chip for executing instructions, wherein the chip comprises a memory and a processor, the memory stores code and data, the memory is coupled to the processor, and the processor runs the code in the memory so that the chip performs the face image processing method according to any one of claims 1 to 13.
  18. A program product containing instructions, wherein, when the program product runs on a computer, the computer is caused to perform the face image processing method according to any one of claims 1 to 13.
  19. A computer program, wherein, when the computer program is executed by a processor, it is used to perform the face image processing method according to any one of claims 1 to 13.
PCT/CN2021/083651 2020-07-15 2021-03-29 Face image processing method and apparatus, storage medium and electronic device WO2022012085A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010681269.6A CN112241933A (zh) 2020-07-15 2020-07-15 Face image processing method and apparatus, storage medium and electronic device
CN202010681269.6 2020-07-15

Publications (1)

Publication Number Publication Date
WO2022012085A1 true WO2022012085A1 (zh) 2022-01-20

Family

ID=74170739

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083651 WO2022012085A1 (zh) 2020-07-15 2021-03-29 Face image processing method and apparatus, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN112241933A (zh)
WO (1) WO2022012085A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241933A (zh) 2020-07-15 2021-01-19 Face image processing method and apparatus, storage medium and electronic device
CN115908104A (zh) 2021-08-16 2023-04-04 Image processing method, apparatus, device, medium and program product
CN113538549B (zh) 2021-08-31 2023-12-22 Method and system for preserving image texture and grain during image processing


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090092473A (ko) 2008-02-27 2009-09-01 Method for three-dimensional face modeling based on a three-dimensional deformable shape model
JP5460499B2 (ja) 2010-07-12 2014-04-02 Image processing apparatus and computer program
US10796480B2 (en) * 2015-08-14 2020-10-06 Metail Limited Methods of generating personalized 3D head models or 3D body models
CN107808410B (zh) 2017-10-27 2021-04-27 Processing method and apparatus for shadow depth offset
CN108805090B (zh) 2018-06-14 2020-02-21 Virtual makeup try-on method based on a planar mesh model
CN109859097B (zh) 2019-01-08 2023-10-27 Face image processing method and device, image processing device, and medium
CN109859134A (zh) 2019-01-30 2019-06-07 Processing method and terminal for beauty makeup material
CN110136236B (zh) 2019-05-17 2022-11-29 Personalized face display method, apparatus, device and storage medium for a three-dimensional character
CN110675489B (zh) 2019-09-25 2024-01-23 Image processing method and apparatus, electronic device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765351A (zh) 2018-05-31 2018-11-06 Image processing method and apparatus, electronic device and storage medium
CN109035388A (zh) 2018-06-28 2018-12-18 Three-dimensional face model reconstruction method and apparatus
CN111292423A (zh) 2018-12-07 2020-06-16 Augmented-reality-based coloring method and apparatus, electronic device and storage medium
CN109829968A (zh) 2018-12-19 2019-05-31 Normal texture map generation method and apparatus, storage medium and electronic device
CN112241933A (zh) 2020-07-15 2021-01-19 Face image processing method and apparatus, storage medium and electronic device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445543A (zh) 2022-01-24 2022-05-06 Method and apparatus for processing texture images, electronic device and storage medium
CN114429523A (zh) 2022-02-10 2022-05-03 Method for controlling partitioned texture mapping of a three-dimensional model
CN114429523B (zh) 2022-02-10 2024-05-14 Method for controlling partitioned texture mapping of a three-dimensional model
CN114626979A (zh) 2022-03-18 2022-06-14 Face driving method and apparatus, electronic device and storage medium
CN114782600A (zh) 2022-03-24 2022-07-22 Auxiliary-mesh-based system and method for rendering specific regions of a video
CN115187822B (zh) 2022-07-28 2023-06-30 Face image dataset analysis method, and live-streaming face image processing method and apparatus
CN115187822A (zh) 2022-07-28 2022-10-14 Face image dataset analysis method, and live-streaming face image processing method and apparatus
CN116109891A (zh) 2023-02-08 2023-05-12 Image data augmentation method and apparatus, computing device and storage medium
CN116109891B (zh) 2023-02-08 2023-07-25 Image data augmentation method and apparatus, computing device and storage medium
CN116630510A (zh) 2023-05-24 2023-08-22 Method, device and medium for generating conical gradient textures
CN116630510B (zh) 2023-05-24 2024-01-26 Method, device and medium for generating conical gradient textures
CN117788720A (zh) 2024-02-26 2024-03-29 Method for generating a user face model, storage medium and terminal
CN117788720B (zh) 2024-02-26 2024-05-17 Method for generating a user face model, storage medium and terminal

Also Published As

Publication number Publication date
CN112241933A (zh) 2021-01-19

Similar Documents

Publication Publication Date Title
WO2022012085A1 (zh) Face image processing method and apparatus, storage medium and electronic device
WO2020207191A1 (zh) Method and apparatus for determining an occluded region of a virtual object, and terminal device
WO2020042720A1 (zh) Human body three-dimensional model reconstruction method and apparatus, and storage medium
WO2022068487A1 (zh) Style image generation method, model training method, apparatus, device and medium
US10818064B2 Estimating accurate face shape and texture from an image
EP3992919B1 Three-dimensional facial model generation method and apparatus, device, and medium
US9135678B2 Methods and apparatus for interfacing panoramic image stitching with post-processors
WO2022068451A1 (zh) Style image generation method, model training method, apparatus, device and medium
WO2020034785A1 (zh) Three-dimensional model processing method and apparatus
WO2021082801A1 (zh) Augmented reality processing method and apparatus, system, storage medium and electronic device
CN109584168B (zh) Image processing method and apparatus, electronic device and computer storage medium
WO2024007478A1 (zh) Method and system for collecting and reconstructing human body three-dimensional modeling data based on a single mobile phone
WO2023284713A1 (zh) Three-dimensional dynamic tracking method and apparatus, electronic device and storage medium
US20210390770A1 Object reconstruction with texture parsing
US11631154B2 Method, apparatus, device and storage medium for transforming hairstyle
CN110084154B (zh) Method and apparatus for rendering images, electronic device and computer-readable storage medium
WO2024098685A1 (zh) Virtual character face driving method and apparatus, terminal device and readable storage medium
JP2023517121A (ja) Image processing and image synthesis method, apparatus and computer program
WO2022135574A1 (zh) Skin color detection method and apparatus, mobile terminal and storage medium
WO2023066120A1 (zh) Image processing method and apparatus, electronic device and storage medium
CN112766215A (zh) Face fusion method and apparatus, electronic device and storage medium
WO2023207379A1 (zh) Image processing method and apparatus, device and storage medium
WO2022121653A1 (zh) Method and apparatus for determining transparency, electronic device and storage medium
CN114627244A (zh) Three-dimensional reconstruction method and apparatus, electronic device, computer-readable medium
WO2024055837A1 (zh) Image processing method and apparatus, device and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21841800; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21841800; Country of ref document: EP; Kind code of ref document: A1)