CN114742951A - Material generation method, image processing method, device, electronic device and storage medium - Google Patents

Material generation method, image processing method, device, electronic device and storage medium

Info

Publication number
CN114742951A
CN114742951A (application CN202210417761.1A)
Authority
CN
China
Prior art keywords
deformation
target
dimensional
human face
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210417761.1A
Other languages
Chinese (zh)
Inventor
李美昊
李园园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Datianmian White Sugar Technology Co ltd
Original Assignee
Beijing Datianmian White Sugar Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Datianmian White Sugar Technology Co ltd filed Critical Beijing Datianmian White Sugar Technology Co ltd
Priority to CN202210417761.1A
Publication of CN114742951A
Legal status: Pending

Classifications

    • G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a material generation method, an image processing method, an apparatus, an electronic device, and a storage medium. The method comprises: displaying a first three-dimensional face model in a first region of a graphical user interface; in response to acquiring a target three-dimensional deformation parameter, performing first three-dimensional deformation processing on the first three-dimensional face model based on the target three-dimensional deformation parameter to obtain a second three-dimensional face model, wherein the target three-dimensional deformation parameter characterizes at least one of the following for a target face part: a deformation position, a deformation mode, and a deformation amplitude; and in response to a face-deformation material generation instruction, generating, based on the target three-dimensional deformation parameter, a target material containing the target three-dimensional deformation parameter, wherein the target material is used for deforming the target face part of an image to be processed, and the deformation effect on the image to be processed matches the deformation effect of the second three-dimensional face model.

Description

Material generation method, image processing method, device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a material generation method, an image processing method, an apparatus, an electronic device, and a storage medium.
Background
Beautification of facial features typically relies on face key-point recognition: key points of the face in an image are detected to obtain their positions, and those positions are then adjusted to reshape the facial features. This beautification approach suffers from poor deformation quality when the face is deformed.
Disclosure of Invention
Embodiments of the present disclosure provide at least a material generation method, an image processing method, an apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a material generation method, including:
displaying a first three-dimensional face model in a first region of a graphical user interface;
in response to acquiring a target three-dimensional deformation parameter, performing first three-dimensional deformation processing on the first three-dimensional face model based on the target three-dimensional deformation parameter to obtain a second three-dimensional face model; wherein the target three-dimensional deformation parameter characterizes at least one of the following for a target face part: a deformation position, a deformation mode, and a deformation amplitude;
and in response to a face-deformation material generation instruction, generating, based on the target three-dimensional deformation parameter, a target material containing the target three-dimensional deformation parameter, wherein the target material is used for deforming the target face part of an image to be processed, and the deformation effect on the image to be processed matches the deformation effect of the second three-dimensional face model.
Therefore, when an image to be processed is processed with a target material generated from the target three-dimensional deformation parameter, the deformation effect on the image matches that of the second three-dimensional face model; as a result, when the same face is deformed across multiple face images, the deformation remains consistent even though the pose of the face differs between images. In addition, because the target three-dimensional deformation parameter carries depth-related information, deforming the face with it reduces unnatural distortion of the facial features and thereby improves the deformation effect.
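As a rough illustration of what "generating a target material containing the target three-dimensional deformation parameter" could look like, the sketch below serializes the parameters to a JSON file so that any image-processing pipeline loading the material can reproduce the same deformation. The field names and file layout are assumptions for illustration only, not the patent's actual material format.

```python
import json

def generate_target_material(deformation_params, path):
    """Serialize the target 3D deformation parameters as a material file.

    Each entry in deformation_params characterizes one face part's
    deformation position, deformation mode, and deformation amplitude.
    """
    material = {
        "version": 1,
        "deformations": deformation_params,  # one entry per face part
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(material, f, indent=2)
    return material

# Hypothetical parameters: push the nose tip forward, shrink the left eye socket.
params = [
    {"part": "nose", "position": "tip", "mode": "translate_z", "amplitude": 0.3},
    {"part": "eye_left", "position": "socket", "mode": "scale", "amplitude": 0.1},
]
material = generate_target_material(params, "face_deform.material.json")
```

Because the material stores only parameters rather than a baked mesh, the same file can be applied to faces with different poses, which is consistent with the pose-independent deformation the disclosure describes.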
In an alternative embodiment, the first three-dimensional face model comprises a plurality of vertices and position information of the vertices in a model coordinate system;
performing the first three-dimensional deformation processing on the first three-dimensional face model based on the target three-dimensional deformation parameter to obtain the second three-dimensional face model comprises:
determining, from the plurality of vertices of the first three-dimensional face model, target vertices corresponding to the deformation position indicated by the target three-dimensional deformation parameter;
and applying, to the target vertices, a position adjustment corresponding to the deformation mode and deformation amplitude indicated by the target three-dimensional deformation parameter, to obtain the second three-dimensional face model.
In this way, adjusting the positions of the target vertices with the target three-dimensional deformation parameter deforms the first three-dimensional face model into a second three-dimensional face model that conforms to the indicated deformation mode and amplitude. Since different face parts correspond to different vertices, individual face parts can be deformed in a targeted manner.
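The vertex-level deformation described above can be sketched as follows. This is a minimal, assumed illustration (not the patent's code): the target vertices are selected by index, and the deformation mode is reduced to a displacement direction scaled by the amplitude.

```python
import numpy as np

def deform_vertices(vertices, target_idx, direction, amplitude):
    """Move only the target vertices of a face mesh.

    vertices: (N, 3) vertex positions in the model coordinate system.
    target_idx: indices of the vertices at the deformation position.
    direction: (3,) vector encoding the deformation mode (here: translation).
    amplitude: scalar deformation amplitude.
    """
    deformed = vertices.copy()  # the first model is left untouched
    deformed[target_idx] += amplitude * np.asarray(direction, dtype=float)
    return deformed

# Toy 4-vertex "mesh": push vertices 1 and 2 outward along +z by 0.5.
verts = np.zeros((4, 3))
out = deform_vertices(verts, [1, 2], [0.0, 0.0, 1.0], 0.5)
```

Because the untouched vertices keep their positions, only the face part mapped to the selected indices deforms, which is what enables the targeted, per-part adjustment the text describes.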
In an optional embodiment, acquiring the target three-dimensional deformation parameter includes:
displaying a parameter setting panel in a third area of the graphical user interface; the parameter setting panel comprises first parameter setting controls for setting the three-dimensional deformation parameters of a plurality of face parts;
and in response to a setting operation on the first parameter setting control of any target face part, determining the target three-dimensional deformation parameter of that face part.
In an optional embodiment, the parameter setting panel further comprises a second parameter setting control for setting display parameters, the display parameters including at least one of: camera angle, background color, and three-dimensional model display size;
the method further comprises: obtaining target display parameters via the second parameter setting control, and displaying the second three-dimensional face model based on the target display parameters.
Thus, the deformation amplitude of the three-dimensional face model can be set through the parameter setting panel and the corresponding deformation effect displayed; parameters such as camera angle, background color, and display size can likewise be set through the panel to optimize how the three-dimensional face model is presented.
In an alternative embodiment, the first three-dimensional face model comprises a standard three-dimensional face model, or a model obtained by deforming the standard three-dimensional face model with a preset parameter configuration file.
In this way, previously generated face-deformation materials can be adjusted a second time, which makes the materials easier to reuse.
In an optional embodiment, the first three-dimensional face model is a model obtained by performing three-dimensional deformation processing on the standard three-dimensional face model with a preset parameter configuration file;
before the first three-dimensional face model is displayed in the first area of the graphical user interface, the method further comprises:
in response to the import of the parameter configuration file, parsing the file to obtain the three-dimensional deformation parameters it carries;
and performing the first three-dimensional deformation processing on the standard three-dimensional face model with those parameters, to obtain the first three-dimensional face model and a first preview image corresponding to it.
In this way, the three-dimensional deformation parameters parsed from the configuration file drive the first three-dimensional deformation of the standard model, producing the first three-dimensional face model and first preview image for display in the graphical user interface; the target three-dimensional deformation parameter is then applied to the first model to obtain the second three-dimensional face model, and the target material is generated to obtain a second preview image matching the second model.
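The import-and-parse step above can be sketched as follows. The JSON layout, field names, and helper functions are assumptions for illustration; the patent does not specify a concrete file format.

```python
import io
import json

def parse_config(file_obj):
    """Parse an imported parameter configuration file.

    Returns the three-dimensional deformation parameters it carries plus an
    (optional) adjustment string, assumed empty for a freshly generated file.
    """
    cfg = json.load(file_obj)
    return cfg["deformation_params"], cfg.get("adjust_string", "")

def apply_to_standard_model(standard_model, params):
    """Toy stand-in for the first 3D deformation of the standard face model."""
    model = dict(standard_model)
    for p in params:
        model.setdefault("applied", []).append((p["part"], p["amplitude"]))
    return model

# Simulate importing a configuration file that narrows the chin.
config_text = json.dumps({
    "deformation_params": [{"part": "chin", "mode": "narrow", "amplitude": 0.4}],
    "adjust_string": "",
})
params, adjust = parse_config(io.StringIO(config_text))
first_model = apply_to_standard_model({"name": "standard_face"}, params)
```

The point of the split is that the configuration file fully determines the first face model: re-importing the same file reproduces the same starting state for further editing.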
In an optional embodiment, the parameter configuration file comprises a first parameter configuration file generated with preset three-dimensional design software;
after parsing the parameter configuration file to obtain the original three-dimensional deformation parameters it carries, the method further comprises at least one of the following:
loading, based on the deformation mode represented by the original three-dimensional deformation parameters, a software module corresponding to that mode; the software module is used to perform the first three-dimensional deformation processing on the standard three-dimensional face model;
initializing the parameter setting panel based on the deformation position and deformation amplitude represented by the original three-dimensional deformation parameters;
and initializing a target character string; the target character string is used to store adjustment information recording the user's adjustments to the original three-dimensional deformation parameters.
Therefore, when the first parameter configuration file was generated with the preset three-dimensional design software, parsing it yields the original three-dimensional deformation parameters. Loading the software module that corresponds to the deformation mode these parameters represent makes it convenient to deform the first three-dimensional face model with that module, which speeds up processing. Initializing the parameter setting panel gives the user an entry point for adjusting the corresponding parameters and visually presents the current deformation state of the first three-dimensional face model. Initializing the target character string allows adjustment information to be stored separately from the original three-dimensional deformation parameters, so the user can conveniently make a second round of adjustments on top of the original parameters.
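Keeping the original parameters and the user's adjustments separate could look like the sketch below: the original three-dimensional deformation parameters stay untouched while a "target string" accumulates the adjustment information, so a second round of adjustment can always start again from the original values. The `key=delta` string format is an assumption for illustration.

```python
def apply_adjustments(original_params, target_string):
    """Overlay the adjustment info stored in the target string on the
    original parameters, without modifying the originals."""
    adjusted = dict(original_params)
    for entry in filter(None, target_string.split(";")):
        key, delta = entry.split("=")
        adjusted[key] = adjusted.get(key, 0.0) + float(delta)
    return adjusted

original = {"nose_amplitude": 0.3, "chin_amplitude": 0.2}
target_string = "nose_amplitude=0.1;chin_amplitude=-0.05"
adjusted = apply_adjustments(original, target_string)
```

Because only `target_string` changes between editing sessions, discarding it restores the exact state of the first parameter configuration file, which is what makes secondary adjustment cheap.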
In an optional implementation, the parameter configuration file comprises a second parameter configuration file obtained by adjusting the original three-dimensional deformation parameters in the first parameter configuration file;
after parsing the parameter configuration file to obtain the original three-dimensional deformation parameters it carries, the method further comprises:
parsing the target character string to obtain the adjustment information for the three-dimensional deformation parameters in the first parameter configuration file, and updating the parameter setting panel based on the parsed adjustment information.
Therefore, because the second parameter configuration file was produced by adjusting the original three-dimensional deformation parameters of the first file, a user can make a second round of adjustments to part of those parameters when the first file does not meet their needs. During such a second adjustment, the target character string can be parsed directly to obtain the corresponding adjustment information, and the current parameter setting panel updated with it, which simplifies the processing steps and allows the template to be reused.
In an optional implementation, performing the first three-dimensional deformation processing on the standard three-dimensional face model with the three-dimensional deformation parameters to obtain the first three-dimensional face model includes:
obtaining adjusted three-dimensional deformation parameters from the adjustment information and the three-dimensional deformation parameters;
and performing the first three-dimensional deformation processing on the standard three-dimensional face model with the adjusted parameters to obtain the first three-dimensional face model.
In an optional embodiment, the method further comprises:
displaying a first preview image corresponding to the first three-dimensional face model in a second area of the graphical user interface;
in response to acquiring the target three-dimensional deformation parameter, performing second three-dimensional deformation processing on the first preview image based on the target three-dimensional deformation parameter to obtain a second preview image;
and displaying the second preview image.
In this way, the second preview image can be obtained by applying the target material, which contains the target three-dimensional deformation parameter, to the first preview image. Because the second three-dimensional face model is produced by applying the same parameter to the first three-dimensional face model, the second preview image and the second three-dimensional face model exhibit the same deformation effect.
In an optional implementation, performing the second three-dimensional deformation processing on the first preview image based on the target three-dimensional deformation parameter to obtain the second preview image includes:
performing three-dimensional face reconstruction on the first preview image to obtain a third three-dimensional face model of the template face in the first preview image;
performing the first three-dimensional deformation processing on the third three-dimensional face model based on the target deformation coefficient to obtain a fourth three-dimensional face model;
and performing position transformation on the pixel points of the image to be processed based on the fourth three-dimensional face model to obtain the second preview image.
Therefore, the third three-dimensional face model is obtained by three-dimensional face reconstruction of the first preview image. Unlike a standard three-dimensional face model, it can be reconstructed directly from an image supplied by the user or taken from a video, so the user obtains a third model that matches the facial features of that image and can more intuitively see the deformation effect of the fourth three-dimensional face model derived from it.
In an alternative embodiment, the first preview image comprises an original preview image, or an image obtained by performing the first three-dimensional deformation processing on the original preview image with the parameter configuration file.
Therefore, the first preview image and the first three-dimensional face model share the same deformation effect; by comparing them, a user can see more intuitively how the same face-deformation parameters affect the face in the image and the face in the three-dimensional model.
In a second aspect, an embodiment of the present disclosure further provides an image processing method, including:
displaying an image to be processed on a graphical user interface; the image to be processed contains a target face;
in response to a loading operation on a target material, performing deformation processing on the image to be processed based on the target material to obtain a first target image; the target material is generated according to the material generation method of any one of the above embodiments;
and displaying the first target image.
Therefore, the loading operation on the target material allows the image to be processed with the target three-dimensional deformation parameters that the material stores for each face part, so the first target image can be generated directly from the target material. Deforming the face with three-dimensional deformation parameters ensures that, when the first target image spans multiple frames, the same face has the same three-dimensional deformation effect in every frame. Moreover, since the target three-dimensional deformation parameters include depth information, deforming the face with them reduces unnatural distortion of the facial features and improves the deformation effect.
In an optional implementation, the target material contains a target three-dimensional deformation parameter, and performing the three-dimensional deformation processing on the image to be processed based on the target material to obtain the first target image includes:
performing three-dimensional face reconstruction on the image to be processed to obtain a three-dimensional face model of the target face in the image;
performing the first three-dimensional deformation processing on that model based on the target deformation coefficient to obtain a target three-dimensional face model;
and performing position transformation on the pixel points of the image to be processed based on the target three-dimensional face model to obtain the first target image.
In this way, three-dimensional face reconstruction matches the feature information of the three-dimensional face model to that of the image to be processed, so the user can see the deformation effect intuitively; and because the first target image is produced by transforming pixel positions directly according to the target three-dimensional face model, the deformation looks more realistic.
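The three steps above (reconstruct, deform, warp pixels) can be sketched end to end with toy stand-ins. This is a hypothetical illustration only: real systems fit a parametric face model (e.g. a 3DMM) and use a dense renderer, whereas here "reconstruction" returns a fixed grid mesh over the image plane and warping uses nearest-vertex displacement.

```python
import numpy as np

def reconstruct_mesh(image):
    """Stand-in "3D face reconstruction": a 4x4 grid of vertices over the image."""
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.linspace(0, h - 1, 4), np.linspace(0, w - 1, 4),
                         indexing="ij")
    return np.stack([xs.ravel(), ys.ravel(), np.zeros(16)], axis=1)

def deform(mesh, target_idx, offset):
    """Apply the deformation parameter to selected vertices (image-plane shift)."""
    out = mesh.copy()
    out[target_idx, :2] += offset
    return out

def warp(image, mesh, deformed):
    """Move each pixel with the displacement of its nearest mesh vertex
    (inverse warp: each output pixel samples from its source position)."""
    h, w = image.shape[:2]
    result = np.zeros_like(image)
    yy, xx = np.mgrid[0:h, 0:w]
    pix = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    d2 = ((pix[:, None, :] - deformed[None, :, :2]) ** 2).sum(-1)
    nearest = d2.argmin(1)
    disp = deformed[nearest, :2] - mesh[nearest, :2]
    src = np.clip(np.rint(pix - disp), 0, [w - 1, h - 1]).astype(int)
    result[yy.ravel(), xx.ravel()] = image[src[:, 1], src[:, 0]]
    return result

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
mesh = reconstruct_mesh(img)
deformed = deform(mesh, [5], np.array([1.0, 0.0]))  # nudge one vertex right
warped = warp(img, mesh, deformed)
```

Because the deformation lives on the mesh rather than on 2D key points, the same mesh-level offsets can be re-applied to reconstructions of other frames, which is the mechanism behind the frame-consistent effect described above.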
In a third aspect, an embodiment of the present disclosure provides a material generation apparatus, including:
a display module configured to display a first three-dimensional face model in a first area of a graphical user interface;
a processing module configured to, in response to acquiring a target three-dimensional deformation parameter, perform first three-dimensional deformation processing on the first three-dimensional face model based on the parameter to obtain a second three-dimensional face model; the target three-dimensional deformation parameter characterizes at least one of the following for a target face part: a deformation position, a deformation mode, and a deformation amplitude;
and a generation module configured to, in response to a face-deformation material generation instruction, generate a target material containing the target three-dimensional deformation parameter based on that parameter.
In a fourth aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
a display module configured to display an image to be processed on a graphical user interface; the image to be processed contains a target face;
a response module configured to, in response to a loading operation on a target material, perform three-dimensional deformation processing on the image to be processed based on the target material to obtain a first target image; the target material is generated by the material generation method of any one of the first aspect;
the display module being further configured to display the first target image.
In a fifth aspect, this disclosure further provides an electronic device comprising a processor and a memory, the memory storing machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the processor performs the steps of any possible implementation of the first or second aspect.
In a sixth aspect, this disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program runs, it performs the steps of any possible implementation of the first or second aspect.
For the effects of the above material generation and image processing apparatuses, the electronic device, and the computer-readable storage medium, refer to the description of the corresponding methods; details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive further related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method for generating materials provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a deformation effect in a material generation method according to an embodiment of the disclosure;
fig. 3a is a schematic diagram illustrating a graphical user interface in a material generation method provided by an embodiment of the present disclosure;
fig. 3b is a schematic diagram illustrating an enlargement of a third area of a graphical user interface in a material generation method provided by the embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an operation of a mobile terminal in an image processing method according to an embodiment of the present disclosure;
FIG. 5 is a flow chart illustrating an image processing method provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a material generation apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of an image processing apparatus provided by an embodiment of the present disclosure;
fig. 8 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only a part, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated herein, may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the claimed scope of the disclosure but merely represents selected embodiments; all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the disclosure.
Research shows that current face-deformation approaches proceed as follows: from an image supplied by the user, face key points that determine the deformation effect are first obtained from the 2D information in the image; the positions of those key points are then adjusted, and the positions of the other pixels in the face image are adjusted based on the moved key points, yielding the deformed face image. On the one hand, with this approach the beautification effect can only be judged from the already-beautified image, so the user perceives the deformation effect poorly. On the other hand, the adjustment operates at the image level and can only handle a planar structure, i.e., it is a 2D adjustment: when multiple frames of face images in a video stream must be adjusted, differing face poses across frames cause visible inconsistencies between the adjusted faces and unnatural distortion of the facial features, so the deformation effect is poor.
Based on this research, the present disclosure provides a material generation method, an image processing method, an apparatus, an electronic device, and a storage medium. When a target material generated from the target three-dimensional deformation parameter is used to process an image, the deformation effect on the image matches that of the second three-dimensional face model, so deforming the same face across multiple face images yields a consistent effect even when the face's pose differs between images. Furthermore, the target three-dimensional deformation parameter includes depth-related information, so deforming the face with it reduces unnatural distortion of the facial features and improves the deformation effect.
Furthermore, because the target three-dimensional deformation parameters are used to apply three-dimensional deformation processing to the first face three-dimensional model and the first preview image in the 3D dimension, deforming the face three-dimensional model can adjust not only the planar information of the face image but also the depth information of the face, such as the depth of the eye sockets and the concavity and convexity of the face. This improves the fineness of the adjustment, makes the adjusted face image more natural and realistic, and better meets users' needs.
In the face deformation process provided by the embodiments of the present disclosure, the face is divided into a plurality of deformation parts, and when a given face part is adjusted, each of its deformation positions can be adjusted according to the deformation mode and deformation amplitude corresponding to that position. This enables fine, local adjustment of the facial features and improves the face deformation effect.
The drawbacks described above are the result of the inventors' practical and careful study; therefore, both the discovery of these problems and the solutions the present disclosure proposes for them should be regarded as the inventors' contribution in the course of arriving at the present disclosure.
It should be noted that like reference numbers and letters denote like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
To facilitate understanding of the present embodiments, the material generation method and the image processing method disclosed in the embodiments of the present disclosure are first described in detail. The execution subject of these methods is generally an electronic device with a certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a handheld device, a computer device, or a vehicle-mounted device. In some possible implementations, the material generation method and the image processing method may be implemented by a processor calling computer-readable instructions stored in a memory.
The following explains a material generation method provided by the embodiment of the present disclosure.
Referring to fig. 1, a flowchart of a material generation method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S103, where:
S101: A first face three-dimensional model is presented in a first region of a graphical user interface.
S102: In response to acquiring target three-dimensional deformation parameters, first three-dimensional deformation processing is performed on the first face three-dimensional model based on the target three-dimensional deformation parameters to obtain a second face three-dimensional model; the target three-dimensional deformation parameters characterize at least one of the following: the deformation position, deformation mode, and deformation amplitude for deforming a target face part.
S103: In response to a face deformation material generation instruction, a target material containing the target three-dimensional deformation parameters is generated based on those parameters. The target material is used to deform the target face part of an image to be processed, and the deformation effect on the image to be processed matches the deformation effect of the second face three-dimensional model.
The following describes each of the above-mentioned steps S101 to S103 in detail.
Regarding S101 above, the first face three-dimensional model includes, for example: a standard face three-dimensional model, or a three-dimensional model obtained by deforming the standard face three-dimensional model using a preset parameter configuration file.
The standard face three-dimensional model is a three-dimensional model that carries its original parameters and has not undergone secondary adjustment. If a designer wants to adjust the parameters of the standard face three-dimensional model, the model can be deformed through a preset parameter configuration file to obtain a processed three-dimensional model. The designer here may be a user who subsequently uses the model or the designer of the model.
The standard face three-dimensional model is a three-dimensional face model obtained by performing standard modeling on a human face, and includes, for example, a 3D mesh model (Mesh).
The mesh model generally includes a plurality of vertices and the three-dimensional coordinates of those vertices in a model coordinate system. The vertices are connected to one another to form a plurality of meshes that make up the surface of the face three-dimensional model.
In addition, different meshes may have a certain linkage relation; the linkage relation characterizes the change-amplitude information of the position changes of the other meshes adjacent to a mesh after a vertex in that mesh changes position.
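As a minimal sketch of the linkage relation described above (the vertex layout, the dictionary-based linkage representation, and all names are illustrative assumptions, not the patent's actual data format), a face mesh can be modeled as vertex coordinates plus per-vertex linkage weights that propagate a share of a displacement to neighboring vertices:

```python
# Hypothetical sketch of a face mesh with a linkage relation: moving one
# vertex propagates a weighted fraction of the displacement to linked
# neighbors, approximating the "change amplitude" of adjacent meshes.
class FaceMesh:
    def __init__(self, vertices, linkage):
        # vertices: {vertex_id: (x, y, z)} in the model coordinate system
        # linkage: {vertex_id: {neighbor_id: weight}}, weight in [0, 1]
        self.vertices = dict(vertices)
        self.linkage = linkage

    def move_vertex(self, vid, offset):
        """Displace one vertex and apply a weighted share of the same
        displacement to each linked neighbor."""
        dx, dy, dz = offset
        x, y, z = self.vertices[vid]
        self.vertices[vid] = (x + dx, y + dy, z + dz)
        for nid, w in self.linkage.get(vid, {}).items():
            nx, ny, nz = self.vertices[nid]
            self.vertices[nid] = (nx + w * dx, ny + w * dy, nz + w * dz)


mesh = FaceMesh(
    vertices={0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0)},
    linkage={0: {1: 0.5}},  # vertex 1 follows vertex 0 at half amplitude
)
mesh.move_vertex(0, (2.0, 0.0, 0.0))
```

With a linkage weight of 0.5, vertex 1 moves half as far as vertex 0, which is one simple way the surrounding surface could follow an adjusted deformation position smoothly.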
The parameter configuration file is, for example, a configuration file generated by preset three-dimensional design software when deforming the standard face model; the configuration file contains the three-dimensional deformation parameters for performing that deformation processing on the standard face model.
In one possible implementation, the preset three-dimensional design software may be, for example, software that generates target materials based on the material generation method provided by the embodiments of the present disclosure. In another possible implementation, it may be software that deforms the standard face three-dimensional model based on other three-dimensional model deformation methods, for example a method that deforms the three-dimensional model through bone transformation coefficients.
In addition, the parameter configuration file may also be produced by manually or randomly generating a set of three-dimensional deformation parameters and carrying the generated parameters in the file, so that the parameter configuration file can be used to deform the standard face three-dimensional model and generate the first face three-dimensional model.
In another embodiment of the present disclosure, the method further comprises: displaying, in a second area of the graphical user interface, a first preview image corresponding to the first face three-dimensional model. The first preview image is, for example, one frame of a face image.
The displayed first preview image corresponds to the deformation effect of the first face three-dimensional model. That is, when the first face three-dimensional model is the standard face three-dimensional model, the first preview image is, for example, an original face image obtained by photographing a face.
If the first face three-dimensional model is a three-dimensional model obtained by deforming the standard face three-dimensional model with a preset parameter configuration file, the first preview image is, for example, an image obtained by deforming an original face image with that same configuration file. In this case, the deformation effect on the face in the first preview image is the same as the deformation effect of the first face three-dimensional model relative to the standard face three-dimensional model.
For this case, before the first face three-dimensional model is presented in the first area of the graphical user interface, the method further includes: importing a parameter configuration file, and performing first three-dimensional deformation processing on the standard face three-dimensional model using the parameter configuration file to obtain the first face three-dimensional model.
In addition, when the first preview image corresponding to the first face three-dimensional model is displayed in the second area, the imported parameter configuration file can also be used to perform second three-dimensional deformation processing on the original face image to obtain the first preview image.
Specifically, in response to the import of a parameter configuration file, the file is parsed to obtain the three-dimensional deformation parameters it carries; first three-dimensional deformation processing is then performed on the standard face three-dimensional model using those parameters to obtain the first face three-dimensional model and the first preview image corresponding to it.
A: The parameter configuration file includes a first parameter configuration file generated with the preset three-dimensional design software. The three-dimensional deformation parameters carried in the file are the original three-dimensional deformation parameters as of when the first parameter configuration file was generated.
Illustratively, the first parameter configuration file may be in fbx format; an fbx file is a common 3D model file that supports the major 3D data elements, such as 3D image information. Parsing the parameter configuration file yields the original three-dimensional deformation parameters, where each original three-dimensional deformation parameter corresponds to one deformation item.
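As a minimal sketch of this parsing step (JSON stands in here for the fbx-based format, and the key names and clamping behavior are illustrative assumptions), a configuration file can be parsed into one original parameter per deformation item, clamped to the 0-100 range used by the parameter setting panel:

```python
# Hypothetical sketch: parse a parameter configuration file into the
# original three-dimensional deformation parameters, one per deformation
# item. JSON stands in for the fbx-based format; keys are illustrative.
import json


def parse_parameter_config(text):
    """Return {deformation_item: parameter}, clamping each parameter to
    the 0-100 range used by the parameter setting panel."""
    raw = json.loads(text)
    return {item: max(0, min(100, int(value)))
            for item, value in raw.get("deformation_items", {}).items()}


config_text = '{"deformation_items": {"nose_size": 40, "eye_scale": 120}}'
params = parse_parameter_config(config_text)
```

An out-of-range value such as 120 is clamped to 100, matching the panel's maximum deformation effect.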
In this case, after the parameter configuration file is parsed to obtain the original three-dimensional deformation parameters it carries, at least one of the following a1 to a3 is further performed:
a1: Loading, based on the deformation mode characterized by the original three-dimensional deformation parameters, the software module corresponding to that deformation mode; the software module is used to perform the first three-dimensional deformation processing on the standard face three-dimensional model.
a2: Initializing a parameter setting panel based on the deformation position and deformation amplitude characterized by the original three-dimensional deformation parameters.
Illustratively, taking a computer as the execution subject, a software module is displayed on the computer's display and shows a first area, a second area, and a third area: the first area displays a face three-dimensional model, the second area displays the preview image corresponding to that model, and the third area displays the parameter setting panel. The parameter setting panel shows the parameter setting information for performing the first deformation processing on the standard face model. For example, the face three-dimensional model includes a plurality of face parts, each face part corresponds to a deformation item, and each deformation item has a slider; the adjustment range of the original three-dimensional deformation parameter corresponding to each slider is 0-100, and the first deformation processing sets a corresponding deformation effect within that range, where 0 represents the minimum deformation position and amplitude relative to the original face three-dimensional model and 100 represents the maximum. After the deformation effect is set, the parameter setting panel is initialized so that it is displayed in the third area.
a3: Initializing and configuring a target character string; the target character string is used to store adjustment information describing the user's adjustments to the original three-dimensional deformation parameters.
Illustratively, the purpose of initializing and configuring the target character string is to generate a string that supports subsequent adjustment of the original three-dimensional deformation parameters. The generated target character string stores the corresponding adjustment information, which includes at least one of the three-dimensional deformation parameters. When the original three-dimensional deformation parameters are later adjusted, the target character string can be called directly to generate the target three-dimensional deformation parameters and configure them on the parameter setting panel, so that the corresponding three-dimensional deformation parameters can be seen visually on the panel and conveniently adjusted through it.
The target character string may be stored in a preset storage unit; when it needs to be called, it is retrieved from the preset storage unit as required.
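As a minimal sketch of the target character string (the JSON serialization and all field names are illustrative assumptions; the patent does not fix a string format), adjustment information can be appended to the string and parsed back out later:

```python
# Hypothetical sketch of the target character string: adjustment
# information (deformation position, mode, amplitude) is serialized into
# a string so it can be stored and re-parsed later. Format is assumed.
import json


def init_target_string():
    """Initialize an empty target character string."""
    return json.dumps({"adjustments": []})


def record_adjustment(target_string, position, mode, amplitude):
    """Append one adjustment record and return the updated string."""
    data = json.loads(target_string)
    data["adjustments"].append(
        {"position": position, "mode": mode, "amplitude": amplitude})
    return json.dumps(data)


s = init_target_string()
s = record_adjustment(s, "nose_tip", "width", 40)
adjustments = json.loads(s)["adjustments"]
```

Because the string round-trips through a standard serialization, it can be kept in a preset storage unit and re-parsed whenever the parameter setting panel needs to be repopulated.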
B: the parameter configuration file comprises a second parameter configuration file obtained by adjusting the original three-dimensional deformation parameters in the first parameter configuration file.
In this case, after the parameter configuration file is parsed to obtain the original three-dimensional deformation parameters it carries, the method further includes:
parsing the target character string to obtain the adjustment information for adjusting the three-dimensional deformation parameters in the first parameter configuration file; updating the parameter setting panel based on the parsed adjustment information; and performing first three-dimensional deformation processing on the standard face three-dimensional model using the adjusted three-dimensional deformation parameters to obtain the first face three-dimensional model.
Illustratively, for the second parameter configuration file, the target character string in the first parameter configuration file is first parsed to obtain the adjustment information for adjusting the three-dimensional deformation parameters in the first parameter configuration file. The adjustment information characterizes the deformation effects corresponding to different parts of the original face three-dimensional model, and those effects are expressed through the three-dimensional deformation parameters. For example, suppose that in a first parameter configuration file all key-point parts of the original face three-dimensional model have been adjusted, yielding the original three-dimensional deformation parameters and a target character string. If some part then needs further adjustment, it suffices to read the target character string corresponding to that part in the first parameter configuration file, obtain the adjusted three-dimensional deformation parameters for that part from the adjustment information and the part's original three-dimensional deformation parameters, and update the target character string, thereby obtaining the first face three-dimensional model.
Taking a computer as the device executing the material generation method as an example, a 3D micro-shaping layer may be added to the graphical user interface of the three-dimensional model creation software, and a parameter control panel is shown in the third area of the graphical user interface. The panel includes an import-model option, below which the supported model file formats are displayed. For example, the user may click the "import model" tab with the mouse, at which point the software loads the original face three-dimensional model file by default. Clicking the "import" tab supports importing an fbx file carrying blend shape (bs) information and reading that bs information; clicking the "delete" tab deletes the imported fbx file and restores the default original face three-dimensional model. Clicking the "import parameter template" tab pops up a resource template library; after a 3D model is selected in the library, the software loads at least 25 deformation items. Some deformation items are shown in fig. 2.
Regarding S102: the target three-dimensional deformation parameters characterize at least one of the following: the deformation position, deformation mode, and deformation amplitude for deforming the target face part.
The face may be divided into a plurality of deformation parts, and each deformation part may be further divided into a plurality of deformation positions.
When the face is divided into a plurality of deformation parts, the division may, for example, follow the positions of the facial features, yielding, for example: the eye part, eyebrow part, nose part, mouth part, forehead part, and face part. For the eye part, the deformation positions obtained by division include, for example: the whole eye, outer canthus, inner canthus, eyeball, lying silkworm (aegyo-sal), and so on. For the eyebrow part, they include, for example: the whole eyebrow, eyebrow head, middle eyebrow, brow arch, eyebrow tail, and so on. For the nose part: the whole nose, nose tip, nose wings, nose bridge, nose root, and so on. For the mouth part: the whole mouth, mouth corners, upper lip, lower lip, and so on. For the forehead part: the whole forehead, upper forehead, lower forehead, both sides of the forehead, and so on. For the face part: the cheeks, mandible, chin, and so on.
Different deformation positions correspond to different deformation modes, according to the characteristics of each position.
For example, for the eye part, if the deformation position is the whole eye, the deformation modes may include: overall adjustment of the eye proportion, the eye angle, and the distance between the two eyes. For the outer canthus, the deformation modes include, for example: adjusting the extension position of the outer canthus, its inclination angle, the opening-closing proportion between the upper and lower eyelids corresponding to the outer canthus, and so on. For the inner canthus, they include, for example: the depth of the inner canthus, its inclination angle, the opening-closing proportion between the corresponding upper and lower eyelids, and so on. For the lying silkworm, they include, for example: its position, width, length, and so on. For the eyeball, they include, for example: eyeball size, concavity, position, color, and so on.
Each deformation mode may correspond to a deformation amplitude. For example, when the deformation position is the lying silkworm and the deformation mode is adjusting its width, the corresponding deformation amplitude may be any value within a certain deformation range, for example 70%-150%; if the corresponding deformation amplitude is 90%, the width of the lying silkworm is adjusted to 90% of its original width.
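The amplitude example above can be sketched as follows (the function name and the use of the lying-silkworm width as the adjusted dimension are illustrative; the 70%-150% range comes from the example in the text):

```python
# Hypothetical sketch: apply a deformation amplitude expressed as a
# percentage (e.g. 90%) within an allowed deformation range (e.g.
# 70%-150%) to a feature dimension such as the lying-silkworm width.
def apply_amplitude(original_width, amplitude_percent, lo=70, hi=150):
    """Scale a dimension by an amplitude constrained to [lo, hi] percent."""
    if not lo <= amplitude_percent <= hi:
        raise ValueError("amplitude outside the allowed deformation range")
    return original_width * amplitude_percent / 100.0


new_width = apply_amplitude(10.0, 90)  # 90% of the original width
```

An amplitude of 90 scales a width of 10.0 down to 9.0, matching the "90% of the original width" example; values outside 70-150 are rejected.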
When acquiring the target three-dimensional deformation parameters, for example, the following method may be adopted:
through the first parameter setting control, a target three-dimensional deformation parameter can be input.
Displaying a parameter setting panel in a third area of the graphical user interface; the parameter setting panel includes: the first parameter setting control is used for setting the three-dimensional deformation parameters corresponding to the plurality of human face parts respectively;
and responding to the setting operation of the first parameter setting control corresponding to any target face part, and determining a target three-dimensional deformation parameter corresponding to the target face part.
Referring to fig. 3a and 3b, the embodiments of the present disclosure provide a specific example of presenting the first face three-dimensional model in the first area of the graphical user interface and presenting the first preview image in the second area. In this example, 31 denotes the first area, 32 the first face three-dimensional model, 33 the second area, and 34 the first preview image.
The examples shown in fig. 3a and 3b also give a specific example of presenting the parameter setting panel in the third area; there, 35 denotes the third area and 36 denotes the parameter setting panel. The first parameter setting control displayed in the parameter setting panel includes all deformation items, where each deformation item corresponds to one piece of bs information and each piece of bs information corresponds to one target three-dimensional deformation parameter. The interval of the target three-dimensional deformation parameter is set to 0-100; the user can drag the slider with the mouse or enter a value from the keyboard to change the deformation effect, and the deformation effect corresponding to the parameters 0-100 is dynamically controllable. The input window of a deformation item supports modifying the parameter of the deformation effect. The minimum value of the parameter is 0; setting the parameter to 0 can indicate that no deformation effect, or the minimum deformation effect, is set for the deformation item corresponding to the first face three-dimensional model, in which case the face part corresponding to the deformation item shows no difference, or the smallest difference, from the face part before deformation. The maximum value of the parameter is 100; setting the parameter to 100 can indicate that the deformation effect set for the deformation item corresponding to the first face three-dimensional model has reached its maximum, in which case the difference between the face part corresponding to the deformation item and the face part before deformation is greatest.
In addition, at least two deformation items can be combined into one group. According to the order in which the deformation items are selected, the deformation item selected first can be assigned the negative interval of the combined parameter adjustment, and the deformation item selected later the positive interval. The parameter interval of the combined deformation item is [-100, 100]: the interval corresponding to the first-selected deformation item is [-100, 0], and the interval corresponding to the later-selected deformation item is [0, 100]. The adjustment mode is the same as for a single deformation item. The positive and negative intervals can also be assigned the other way around, with the first-selected deformation item taking the positive interval and the later-selected one the negative interval; the specific setting is made according to the actual application, and no limitation is imposed here. It should be noted that in this case a parameter value of 0 indicates that the corresponding deformation item of the first face three-dimensional model is unchanged.
After a new combination is created, it is displayed below the first parameter setting control; combinations are named group1, group2, group3, and so on by default. Clicking the "cancel combination" tab disbands the combination, and the deformation items in it are restored to their positions before combination.
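The interval mapping for a combined deformation item can be sketched as follows (a minimal illustration, assuming the default assignment described above: the first-selected item on the negative interval and the later-selected item on the positive interval; the function name is illustrative):

```python
# Hypothetical sketch: one combined slider over [-100, 100] drives two
# grouped deformation items. The negative interval maps to the
# first-selected item and the positive interval to the later-selected
# item; 0 leaves both items unchanged.
def split_combined_parameter(value):
    """Return (first_item_param, second_item_param), each in [0, 100]."""
    if not -100 <= value <= 100:
        raise ValueError("combined parameter outside [-100, 100]")
    if value < 0:
        return (-value, 0)  # negative interval -> first grouped item
    return (0, value)       # positive interval -> second grouped item
```

For example, a combined value of -40 drives the first grouped item at 40 while leaving the second at 0, and 0 leaves both deformation items at their undeformed state, consistent with the note above.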
In another embodiment of the present disclosure, the parameter setting panel further includes a second parameter setting control for setting display parameters, where the display parameters include at least one of: the camera angle, the background color, and the display size of the three-dimensional model. The method further includes: acquiring target display parameters based on the second parameter setting control, and displaying the second face three-dimensional model based on the target display parameters.
As shown in fig. 3a and 3b, the second parameter setting control includes windows for adjusting position, rotation, and zoom. After selecting the corresponding window by clicking with the mouse, the user can adjust the position, rotation angle, zoom ratio, and so on of the first face three-dimensional model relative to the camera by entering the corresponding parameters from the keyboard.
Through the second parameter setting control, the user can adjust the pose of the first face three-dimensional model and/or the second face three-dimensional model in the model coordinate system. This lets the user view, from multiple angles, the specific deformation effect produced when the first three-dimensional deformation processing is performed on the first face three-dimensional model using the target three-dimensional deformation parameters, which helps the user select target three-dimensional deformation parameters that meet their needs.
Further regarding step S102: after the target three-dimensional deformation parameters are set, the first three-dimensional deformation processing is performed on the first face three-dimensional model according to those parameters to obtain the second face three-dimensional model.
When the first three-dimensional deformation processing is performed on the first three-dimensional face model by using the target three-dimensional deformation parameters, for example, the following method may be adopted:
determining, from the plurality of vertices included in the first face three-dimensional model, the target vertices corresponding to the deformation position indicated by the target three-dimensional deformation parameters; and
performing, on the target vertices, the position adjustment corresponding to the deformation mode and deformation amplitude indicated by the target three-dimensional deformation parameters.
Illustratively, the plurality of target vertices of the first face three-dimensional model are traversed, and according to the positions of the traversed target vertices in the coordinate system of the first face three-dimensional model, the face deformation positions containing those target vertices are delineated; the deformation mode and deformation amplitude of the face deformation position indicated by the target three-dimensional deformation parameters are then adjusted according to the acquired parameters. For example, in the first deformation processing of the nose part of the first face three-dimensional model, as shown in fig. 3a and 3b, the first parameter setting control subdivides the nose part in detail, including adjustments to its size, width, and so on. After the corresponding tab is selected, the target three-dimensional deformation parameter can be input, with a range of 0-100. Taking a target three-dimensional deformation parameter of 40 for the nose size as an example, after the parameter is set, the software adjusts the target vertices corresponding to the nose part according to the adjustment ratio corresponding to that parameter, thereby obtaining the second face three-dimensional model.
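The vertex adjustment described above can be sketched in a blend-shape style (an illustrative assumption: linear interpolation between the base vertex positions and fully deformed positions, scaled by the 0-100 parameter; the patent does not fix a concrete interpolation scheme):

```python
# Hypothetical blend-shape-style sketch of the first three-dimensional
# deformation processing: each target vertex of the selected deformation
# position moves toward its fully deformed position, scaled by the
# 0-100 deformation parameter.
def deform_vertices(base, deformed, target_ids, parameter):
    """base, deformed: {vertex_id: (x, y, z)}; parameter in [0, 100].

    Only vertices in target_ids (the selected deformation position)
    are moved; all other vertices keep their base coordinates.
    """
    t = max(0, min(100, parameter)) / 100.0
    result = dict(base)
    for vid in target_ids:
        bx, by, bz = base[vid]
        dx, dy, dz = deformed[vid]
        result[vid] = (bx + t * (dx - bx),
                       by + t * (dy - by),
                       bz + t * (dz - bz))
    return result


base = {0: (0.0, 0.0, 0.0), 1: (1.0, 1.0, 1.0)}
full = {0: (0.0, 0.0, 2.0), 1: (1.0, 1.0, 1.0)}
second_model = deform_vertices(base, full, [0], 40)  # parameter 40
```

With a parameter of 40, the target vertex moves 40% of the way toward its fully deformed position, while the untargeted vertex is unchanged; this mirrors the nose-size example, where a parameter of 40 adjusts only the nose-part vertices by the corresponding ratio.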
Regarding step S103, a target material is generated according to the target three-dimensional deformation parameters, where the target material is used to deform the target face part of an image to be processed. A first preview image corresponding to the first face three-dimensional model is displayed in the second region of the graphical user interface; position transformation processing is performed on the pixels in the first preview image based on the second face three-dimensional model to obtain a second preview image; and the second preview image is displayed.
The first preview image includes an original preview image, or an image obtained by performing the first three-dimensional deformation processing on the original preview image using the parameter configuration file.
Referring to fig. 5, which is a flowchart of an image processing method provided by the embodiments of the present disclosure, the method includes steps S501 to S503:
S501: Displaying an image to be processed on a graphical user interface, the image to be processed including a target face.
S502: In response to a loading operation of a target material, performing deformation processing on the image to be processed based on the target material to obtain a first target image.
S503: Displaying the first target image.
For example, when the image processing method is applied to a terminal device such as a mobile phone, an image containing the user's face may be acquired through the terminal device's camera, selected from the terminal device's album, or received from another application installed on the terminal device.
As another example, when the image processing method is applied to a live-streaming scene, a video frame image containing a face may be determined from the multiple video frame images included in the video stream acquired by the live-streaming device, and that video frame image is taken as the target image. Here the target image may comprise multiple frames, which may be obtained, for example, by sampling the video stream.
Specifically, three-dimensional face reconstruction is performed on the image to be processed to obtain a face three-dimensional model of the target face in the image; three-dimensional deformation processing is performed on that face three-dimensional model based on the target deformation coefficient to obtain a target face three-dimensional model; and position transformation processing is performed on the pixels in the image to be processed based on the target face three-dimensional model to obtain the first target image.
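The three stages above can be sketched as a pipeline skeleton (the stage implementations are deliberately stubbed and all names are illustrative, since the patent does not fix a concrete reconstruction, deformation, or pixel-remapping algorithm):

```python
# Hypothetical skeleton of the image-processing pipeline: 3D face
# reconstruction, deformation with the target coefficient, then pixel
# remapping. The three stages are passed in as callables; the toy
# stand-ins below only demonstrate the data flow.
def process_image(image, target_coefficient,
                  reconstruct, deform, remap_pixels):
    face_model = reconstruct(image)                      # 3D reconstruction
    target_model = deform(face_model, target_coefficient)
    return remap_pixels(image, target_model)             # first target image


result = process_image(
    image=[1, 2, 3],
    target_coefficient=0.5,
    reconstruct=lambda img: {"verts": len(img)},
    deform=lambda model, c: {"verts": model["verts"] * c},
    remap_pixels=lambda img, model: [p * model["verts"] for p in img],
)
```

Structuring the pipeline around interchangeable stages matches the text's separation of reconstruction, deformation, and position transformation, so any concrete algorithm can be slotted into each stage independently.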
Illustratively, taking a mobile phone as the execution subject, the number of deformation items and their corresponding effects are customized on the mobile terminal by parsing target materials generated by the desktop target software; the target software can deform the face in a face image using the image processing method provided by the embodiments of the present disclosure. For example, as shown in fig. 7, 1 to 200 deformation items may be customized according to actual needs; specifically, the number of deformation items is determined by the number of corresponding bs information entries in an fbs file, where each bs information entry corresponds to one deformation item. After the material is imported, the mobile terminal may retain or delete deformation items; each deformation item corresponds to a 3D micro-shaping option card in the mobile terminal software, and through tapping or sliding operations a user can deform the corresponding parts of a face in a picture or video captured in real time, or in a picture or video from the gallery, and save the result in real time.
The embodiments of the present disclosure further provide another image processing method, which includes: displaying a parameter setting panel and image data to be processed on the graphical user interface, wherein the image data comprises a target face, and the parameter setting panel includes a parameter setting control for setting the three-dimensional deformation parameters respectively corresponding to a plurality of human face parts; in response to an adjustment operation on the parameter setting control, acquiring a target three-dimensional deformation parameter corresponding to the adjustment operation, wherein the target three-dimensional deformation parameter is used to characterize at least one of the following: a deformation position, a deformation mode, and a deformation amplitude for deforming the target human face part; performing three-dimensional deformation processing on the image data to be processed based on the target three-dimensional deformation parameter to obtain a target image; and displaying the target image.
Illustratively, taking a mobile phone as the execution subject, as shown in fig. 4, the embodiments of the present disclosure further provide a standardized scheme in which the mobile terminal accesses the 3D micro-shaping tab of the target software to preview the 3D micro-shaping items and their effects, enabling real-time preview of the shooting effect. The standardized scheme provides at least 25 deformation items, each corresponding to a target three-dimensional deformation parameter with an adjustment range of [-100, 100]. As shown in fig. 2, taking the mouth width in the mouth region as an example, when the user adjusts the amplitude to -100, the mouth width is narrowed to its minimum deformation effect; when the user adjusts the amplitude to 100, the mouth width is widened to its maximum deformation effect.
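The [-100, 100] adjustment range described above maps naturally to a signed deformation weight; the following is a sketch of one plausible normalization (the exact mapping is not specified in the disclosure):

```python
def slider_to_weight(value, min_val=-100, max_val=100):
    # Map a UI slider value in [-100, 100] to a signed deformation
    # weight in [-1.0, 1.0]; negative weights shrink, positive widen.
    if not min_val <= value <= max_val:
        raise ValueError(f"slider value {value} out of range")
    return value / max_val

print(slider_to_weight(-100))  # -1.0: maximum narrowing effect
print(slider_to_weight(0))     #  0.0: no deformation
print(slider_to_weight(100))   #  1.0: maximum widening effect
```

The resulting weight would then scale the deformation item's blendshape offset, as in the pipeline sketch earlier in the description.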
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a material generating apparatus corresponding to the material generating method, and since the principle of the apparatus in the embodiment of the present disclosure for solving the problem is similar to the material generating method in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
Referring to fig. 6, which is a schematic diagram of a material generating apparatus provided in an embodiment of the present disclosure, the apparatus includes: a display module 61, a processing module 62 and a generating module 63, wherein:
a presentation module 61 for presenting a first three-dimensional model of a face in a first region of a graphical user interface;
the processing module 62 is configured to, in response to the acquisition of a target three-dimensional deformation parameter, perform first three-dimensional deformation processing on the first human face three-dimensional model based on the target three-dimensional deformation parameter to obtain a second human face three-dimensional model; wherein the target three-dimensional deformation parameter is used to characterize at least one of the following: a deformation position, a deformation mode, and a deformation amplitude for deforming the target human face part;
the display module 61 is further configured to display the second face three-dimensional model;
the generating module 63 is configured to, in response to a human face deformation material generation instruction, generate a target material containing the target three-dimensional deformation parameter based on the target three-dimensional deformation parameter.
In an alternative embodiment, the first three-dimensional model of a face comprises: a plurality of vertexes, and position information of the plurality of vertexes in a model coordinate system;
the processing module 62 is further configured to:
determining a target vertex corresponding to the deformation position indicated by the target three-dimensional deformation parameter from the plurality of vertexes included in the first human face three-dimensional model;
and executing the deformation mode indicated by the target three-dimensional deformation parameter and the position adjustment corresponding to the deformation amplitude on the target vertex to obtain the second human face three-dimensional model.
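The vertex-level deformation described above — selecting the target vertices for the indicated deformation position and adjusting their positions according to the deformation mode and amplitude — might look like this sketch, where the index set and offset directions are hypothetical placeholders:

```python
import numpy as np

def deform_vertices(vertices, region_indices, offsets, amplitude):
    # Select the target vertices for the indicated deformation position
    # and move them along per-vertex offset directions, scaled by the
    # deformation amplitude. Index sets and offsets would come from the
    # material's deformation definition in a real implementation.
    deformed = vertices.copy()
    deformed[region_indices] += amplitude * offsets
    return deformed

verts = np.zeros((5, 3))                      # toy 5-vertex model at the origin
mouth = np.array([1, 2])                      # hypothetical "mouth corner" indices
dirs = np.array([[1.0, 0, 0], [-1.0, 0, 0]])  # widen: move corners apart
out = deform_vertices(verts, mouth, dirs, amplitude=0.5)
print(out[1], out[2])
```

Only the selected vertices move; the rest of the mesh is untouched, which matches the per-part deformation items described in the disclosure.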
In an alternative embodiment, the display module 61 is further configured to:
displaying a parameter setting panel in a third area of the graphical user interface; the parameter setting panel includes: a first parameter setting control for setting the three-dimensional deformation parameters respectively corresponding to the plurality of human face parts;
and responding to the setting operation of the first parameter setting control corresponding to any target face part, and determining a target three-dimensional deformation parameter corresponding to the target face part.
In an optional embodiment, the parameter setting panel further comprises: a second parameter setting control for setting display parameters, wherein the display parameters include at least one of the following: camera angle, background color, and three-dimensional model display size;
the display module is further configured to:
acquiring target display parameters based on the second parameter setting control; and displaying the second human face three-dimensional model based on the target display parameters.
In an alternative embodiment, the first three-dimensional model of the face comprises: the method comprises the following steps of obtaining a standard human face three-dimensional model, or obtaining a three-dimensional model after carrying out deformation processing on the standard human face three-dimensional model by utilizing a preset parameter configuration file.
In an optional implementation manner, the first three-dimensional face model includes a three-dimensional model obtained by performing three-dimensional deformation processing on the standard three-dimensional face model by using a preset parameter configuration file;
the processing module 62 is further configured to:
analyzing the parameter configuration file in response to the importing of the parameter configuration file to obtain three-dimensional deformation parameters carried in the parameter configuration file;
and carrying out first three-dimensional deformation processing on the standard human face three-dimensional model by using the three-dimensional deformation parameters to obtain the first human face three-dimensional model and a first preview image corresponding to the first human face three-dimensional model.
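Parsing a parameter configuration file to recover the three-dimensional deformation parameters it carries could be sketched as follows; the JSON layout and the field names here are assumptions for illustration, since the disclosure does not define the file format:

```python
import json

# Hypothetical layout of a parameter configuration file.
config_text = '''
{
  "items": [
    {"position": "mouth_width", "mode": "blendshape", "amplitude": 40},
    {"position": "eye_size",    "mode": "blendshape", "amplitude": -20}
  ]
}
'''

def parse_config(text):
    # Parse the configuration file and return the deformation
    # parameters it carries as (position, mode, amplitude) triples.
    data = json.loads(text)
    return [(i["position"], i["mode"], i["amplitude"]) for i in data["items"]]

params = parse_config(config_text)
print(params)
```

Each recovered triple would then drive the first three-dimensional deformation processing on the standard human face three-dimensional model.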
In an optional embodiment, the parameter configuration file comprises a first parameter configuration file generated by using preset three-dimensional design software;
the processing module 62 is further configured to:
loading a software module corresponding to the deformation mode based on the deformation mode represented by the original three-dimensional deformation parameter; the software module is used for carrying out first three-dimensional deformation processing on the standard human face three-dimensional model;
initializing a parameter setting panel based on the deformation position and the deformation amplitude represented by the original three-dimensional deformation parameters;
initializing and configuring a target character string; the target character string is used to store adjustment information recording the user's adjustments to the original three-dimensional deformation parameters.
In an optional implementation manner, the parameter configuration file includes a second parameter configuration file obtained by adjusting an original three-dimensional deformation parameter in the first parameter configuration file;
the processing module 62 is further configured to:
analyzing the target character string to obtain adjustment information for adjusting the three-dimensional deformation parameters in the first parameter configuration file; and updating the parameter setting panel based on the adjustment information obtained by analysis.
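The target character string that stores the user's adjustment information could be parsed and regenerated as in this sketch; the `name:delta` comma-separated encoding is an assumed format, not one specified by the disclosure:

```python
def parse_adjustments(target_string):
    # Parse the target character string into a mapping from deformation
    # item name to the user's adjustment delta.
    adjustments = {}
    for entry in target_string.split(","):
        name, delta = entry.split(":")
        adjustments[name] = float(delta)
    return adjustments

def encode_adjustments(adjustments):
    # Serialize the adjustment mapping back into the target string.
    return ",".join(f"{k}:{v}" for k, v in adjustments.items())

s = "mouth_width:15,eye_size:-5"
adj = parse_adjustments(s)
print(adj)                      # adjustment info used to update the panel
print(encode_adjustments(adj))
```

The parsed adjustment information is what the parameter setting panel would be updated from when a second parameter configuration file is imported.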
In an optional embodiment, the processing module 62 is further configured to:
obtaining an adjusted three-dimensional deformation parameter by using the adjustment information and the three-dimensional deformation parameter;
and performing first three-dimensional deformation processing on the standard human face three-dimensional model by using the adjusted three-dimensional deformation parameters to obtain the first human face three-dimensional model.
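Combining the adjustment information with the original three-dimensional deformation parameters to obtain the adjusted parameters can be sketched as follows; an additive combination is assumed here, as the disclosure does not fix the combination rule:

```python
def apply_adjustments(original_params, adjustments):
    # Combine each original deformation parameter with the user's
    # adjustment delta; items without an adjustment are unchanged.
    return {name: value + adjustments.get(name, 0.0)
            for name, value in original_params.items()}

original = {"mouth_width": 40.0, "eye_size": -20.0}
adjust = {"mouth_width": 15.0}
print(apply_adjustments(original, adjust))
```

The adjusted parameters then feed the first three-dimensional deformation processing on the standard human face three-dimensional model to produce the first human face three-dimensional model.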
In an alternative embodiment, the display module 61 is further configured to:
displaying a first preview image corresponding to the first three-dimensional model of the face in a second area of the graphical user interface;
in response to the acquisition of the target three-dimensional deformation parameter, performing second three-dimensional deformation processing on the first preview image based on the target three-dimensional deformation parameter to obtain a second preview image;
and displaying the second preview image.
In an optional implementation, the processing module 62 is further configured to:
performing three-dimensional face reconstruction on the first preview image to obtain a third face three-dimensional model of the template face in the first preview image;
performing first three-dimensional deformation processing on the third face three-dimensional model based on the target deformation coefficient to obtain a fourth face three-dimensional model;
and performing position transformation processing on pixel points in the first preview image based on the fourth face three-dimensional model to obtain the second preview image.
In an alternative embodiment, the first preview image comprises an original preview image, or an image obtained by performing the first three-dimensional deformation processing on the original preview image using a parameter configuration file.
Based on the same inventive concept, an image processing apparatus corresponding to the image processing method is further provided in the embodiments of the present disclosure; as shown in fig. 7, which is a schematic diagram of the image processing apparatus provided in the embodiments of the present disclosure, the apparatus includes: a presentation module 71 and a response module 72, wherein:
a display module 71, configured to display the image to be processed on a graphical user interface; the image to be processed comprises a target face;
the response module 72 is configured to, in response to a loading operation of a target material, perform three-dimensional deformation processing on the image to be processed based on the target material to obtain a first target image; the target material is generated by the material generation method described in the foregoing embodiments;
the display module 71 is further configured to display the first target image.
Based on the same inventive concept, another image processing apparatus corresponding to the image processing method is also provided in the embodiments of the present disclosure, and the apparatus includes: a presentation module 71, a response module 72, a processing module 73, wherein:
a display module 71, configured to display a parameter setting panel on the graphical user interface and to display image data to be processed on the graphical user interface; the image data comprises a target face; the parameter setting panel includes: a parameter setting control for setting the three-dimensional deformation parameters respectively corresponding to a plurality of human face parts;
a response module 72, configured to, in response to an adjustment operation on the parameter setting control, acquire a target three-dimensional deformation parameter corresponding to the adjustment operation; the target three-dimensional deformation parameter is used to characterize at least one of the following: a deformation position, a deformation mode, and a deformation amplitude for deforming the target human face part;
the processing module 73 is configured to perform three-dimensional deformation processing on the image data to be processed based on the target three-dimensional deformation parameter to obtain a target image;
the display module 71 is further configured to display the target image.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 8, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes:
a processor 81 and a memory 82; the memory 82 stores machine-readable instructions executable by the processor 81, and when the machine-readable instructions are executed, the processor 81 performs the following steps:
displaying a first three-dimensional model of a face in a first region of a graphical user interface;
in response to the acquisition of a target three-dimensional deformation parameter, performing first three-dimensional deformation processing on the first human face three-dimensional model based on the target three-dimensional deformation parameter to obtain a second human face three-dimensional model; wherein the target three-dimensional deformation parameter is used to characterize at least one of the following: a deformation position, a deformation mode, and a deformation amplitude for deforming the target human face part;
and responding to a human face deformation material generation instruction, and generating a target material containing the target three-dimensional deformation parameter based on the target three-dimensional deformation parameter, wherein the target material is used for deforming a target human face part of an image to be processed, and the deformation effect of the image to be processed is matched with that of the second human face three-dimensional model.
Or performing the following steps:
displaying an image to be processed on a graphical user interface; the image to be processed comprises a target face;
in response to a loading operation of a target material, performing deformation processing on the image to be processed based on the target material to obtain a first target image; wherein the target material is generated using the material generation method described in the foregoing embodiments;
and displaying the first target image.
The memory 82 includes an internal memory 821 and an external memory 822; the internal memory 821 temporarily stores operation data for the processor 81 and data exchanged with the external memory 822 (such as a hard disk), and the processor 81 exchanges data with the external memory 822 through the internal memory 821.
For the specific execution process of the instruction, reference may be made to the steps of the material generation method and the image processing method described in the embodiments of the present disclosure, and details are not described here.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program executes the steps of the material generation method and the image processing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the material generation method and the image processing method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker, a marker associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present disclosure essentially, or the part thereof contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some of the technical features, within the technical scope of the disclosure; such modifications, changes and substitutions do not depart from the spirit and scope of the embodiments disclosed herein, and they should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (18)

1. A method for generating material, comprising:
displaying a first three-dimensional model of a face in a first region of a graphical user interface;
in response to the acquisition of a target three-dimensional deformation parameter, performing first three-dimensional deformation processing on a first human face three-dimensional model based on the target three-dimensional deformation parameter to obtain a second human face three-dimensional model; wherein the target three-dimensional deformation parameter is used to characterize at least one of the following: a deformation position, a deformation mode, and a deformation amplitude for deforming the target human face part;
and responding to a human face deformation material generation instruction, and generating a target material containing the target three-dimensional deformation parameters based on the target three-dimensional deformation parameters, wherein the target material is used for deforming the target human face part of the image to be processed, and the deformation effect of the image to be processed is matched with the deformation effect of the second human face three-dimensional model.
2. The method of claim 1, wherein the first three-dimensional model of the face comprises: a plurality of vertexes, and position information of the plurality of vertexes in a model coordinate system;
the performing a first three-dimensional deformation process on the first human face three-dimensional model based on the target three-dimensional deformation parameter to obtain a second human face three-dimensional model comprises the following steps:
determining a target vertex corresponding to the deformation position indicated by the target three-dimensional deformation parameter from the plurality of vertexes included in the first human face three-dimensional model;
and executing the deformation mode indicated by the target three-dimensional deformation parameter and the position adjustment corresponding to the deformation amplitude on the target vertex to obtain the second human face three-dimensional model.
3. The method according to claim 1 or 2, wherein the obtaining the target three-dimensional deformation parameter comprises:
displaying a parameter setting panel in a third area of the graphical user interface; the parameter setting panel includes: a first parameter setting control for setting the three-dimensional deformation parameters respectively corresponding to the plurality of human face parts;
and responding to the setting operation of the first parameter setting control corresponding to any target face part, and determining a target three-dimensional deformation parameter corresponding to the target face part.
4. The method of claim 3, wherein the parameter setting panel further comprises: a second parameter setting control for setting display parameters, wherein the display parameters include at least one of: camera angle, background color, and three-dimensional model display size;
the method further comprises the following steps: setting a control based on the second parameter to obtain target display parameters; and displaying the second human face three-dimensional model based on the target display parameters.
5. The method of claim 4, wherein the first three-dimensional model of the face comprises: the method comprises the following steps of obtaining a standard human face three-dimensional model, or obtaining a three-dimensional model after carrying out deformation processing on the standard human face three-dimensional model by utilizing a preset parameter configuration file.
6. The method according to claim 5, wherein the first three-dimensional model of the human face comprises a three-dimensional model obtained by performing three-dimensional deformation processing on the standard three-dimensional model of the human face by using a preset parameter configuration file;
before the first three-dimensional model of the face is displayed in the first area of the graphical user interface, the method further comprises the following steps:
analyzing the parameter configuration file in response to the importing of the parameter configuration file to obtain three-dimensional deformation parameters carried in the parameter configuration file;
and carrying out first three-dimensional deformation processing on the standard human face three-dimensional model by using the three-dimensional deformation parameters to obtain the first human face three-dimensional model and a first preview image corresponding to the first human face three-dimensional model.
7. The method of claim 6, wherein the parameter configuration file comprises a first parameter configuration file generated using pre-set three-dimensional design software;
after analyzing the parameter configuration file to obtain the original three-dimensional deformation parameters carried in the parameter configuration file, the method further comprises at least one of the following steps:
loading a software module corresponding to the deformation mode based on the deformation mode represented by the original three-dimensional deformation parameters; the software module is used for carrying out first three-dimensional deformation processing on the standard human face three-dimensional model;
initializing a parameter setting panel based on the deformation position and the deformation amplitude represented by the original three-dimensional deformation parameters;
initializing and configuring a target character string; the target character string is used to store adjustment information recording the user's adjustments to the original three-dimensional deformation parameters.
8. The method according to claim 7, wherein the parameter configuration file comprises a second parameter configuration file obtained by adjusting the original three-dimensional deformation parameters in the first parameter configuration file;
after analyzing the parameter configuration file to obtain the original three-dimensional deformation parameters carried in the parameter configuration file, the method further includes:
analyzing the target character string to obtain adjustment information for adjusting the three-dimensional deformation parameters in the first parameter configuration file; and updating the parameter setting panel based on the adjustment information obtained by analysis.
9. The method according to claim 8, wherein the performing a first three-dimensional deformation process on the standard three-dimensional model of the human face by using the three-dimensional deformation parameters to obtain the first three-dimensional model of the human face comprises:
obtaining an adjusted three-dimensional deformation parameter by using the adjustment information and the three-dimensional deformation parameter;
and performing first three-dimensional deformation processing on the standard human face three-dimensional model by using the adjusted three-dimensional deformation parameters to obtain the first human face three-dimensional model.
10. The method according to any one of claims 1-9, further comprising:
displaying a first preview image corresponding to the first three-dimensional model of the face in a second area of the graphical user interface;
in response to the acquisition of the target three-dimensional deformation parameter, performing second three-dimensional deformation processing on the first preview image based on the target three-dimensional deformation parameter to obtain a second preview image;
and displaying the second preview image.
11. The method according to claim 10, wherein performing a second three-dimensional deformation process on the first preview image based on the target three-dimensional deformation parameter to obtain the second preview image comprises:
performing three-dimensional face reconstruction on the first preview image to obtain a third face three-dimensional model of the template face in the first preview image;
performing first three-dimensional deformation processing on the third face three-dimensional model based on the target deformation coefficient to obtain a fourth face three-dimensional model;
and performing position transformation processing on pixel points in the first preview image based on the fourth face three-dimensional model to obtain the second preview image.
12. The method of claim 10, wherein the first preview image comprises an original preview image, or an image obtained by performing the first three-dimensional deformation processing on the original preview image using the parameter configuration file.
13. An image processing method, comprising:
displaying an image to be processed on a graphical user interface; the image to be processed comprises a target face;
responding to the loading operation of a target material, and performing deformation processing on the image to be processed based on the target material to obtain a first target image; wherein the target material is generated using the material generation method of any one of claims 1-10;
and displaying the first target image.
14. The method according to claim 13, wherein the target material includes a target three-dimensional deformation parameter, and the performing deformation processing on the image to be processed based on the target material to obtain a first target image comprises:
carrying out three-dimensional face reconstruction on the image to be processed to obtain a face three-dimensional model of a target face in the image to be processed;
performing first three-dimensional deformation processing on the human face three-dimensional model based on the target three-dimensional deformation parameter to obtain a target human face three-dimensional model;
and performing position transformation processing on pixel points in the image to be processed based on the target human face three-dimensional model to obtain the first target image.
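The "position transformation processing on pixel points" in claim 14 is, in effect, an image warp driven by the deformed model. The patent does not specify the warp; below is a deliberately simplified nearest-control-point backward warp, assuming the projected vertex positions before (`src_pts`) and after (`dst_pts`) deformation are available as `(x, y)` arrays — a real implementation would interpolate smoothly, e.g. with a triangulated mesh warp:

```python
import numpy as np

def warp_by_control_points(image, src_pts, dst_pts):
    """Backward warp: for every output pixel, take the displacement of the
    nearest deformed control point and sample the source image there."""
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (h*w, 2)
    # index of the nearest deformed (dst) control point for each pixel
    d2 = ((pix[:, None, :] - dst_pts[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    disp = src_pts - dst_pts                 # inverse displacement to sample from
    sample = pix + disp[idx]
    sx = np.clip(np.round(sample[:, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(sample[:, 1]).astype(int), 0, h - 1)
    return image[sy, sx].reshape(image.shape)
```

When the deformation is the identity (`src_pts == dst_pts`) the warp returns the input image unchanged, which is a quick sanity check for any implementation of this step.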
15. A material generation apparatus, characterized in that the apparatus comprises:
the first display module is used for displaying the first face three-dimensional model in a first area of the graphical user interface;
the processing module is used for responding to the acquired target three-dimensional deformation parameters and carrying out first three-dimensional deformation processing on the first human face three-dimensional model based on the target three-dimensional deformation parameters to obtain a second human face three-dimensional model; wherein the target three-dimensional deformation parameters are used for characterizing at least one of the following: a deformation position, a deformation mode and a deformation amplitude of the target human face part;
and the generating module is used for responding to a human face deformation material generating instruction and generating a target material containing the target three-dimensional deformation parameters based on the target three-dimensional deformation parameters, wherein the target material is used for deforming the target human face part of the image to be processed, and the deformation effect of the image to be processed is matched with the deformation effect of the second human face three-dimensional model.
16. An image processing apparatus characterized by comprising:
the second display module is used for displaying the image to be processed on the graphical user interface; the image to be processed comprises a target face;
the response module is used for responding to the loading operation of a target material and carrying out three-dimensional deformation processing on the image to be processed based on the target material to obtain a first target image; wherein the target material is generated using the material generation method of any one of claims 1-12;
the second display module is further used for displaying the first target image.
17. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the processor is configured to execute the machine-readable instructions stored in the memory, and the machine-readable instructions, when executed by the processor, cause the processor to perform the steps of the material generation method of any one of claims 1-12 or the steps of the image processing method of any one of claims 13-14.
18. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by an electronic device, performs the steps of the material generation method according to any one of claims 1-12, or the steps of the image processing method according to any one of claims 13-14.
CN202210417761.1A 2022-04-20 2022-04-20 Material generation method, image processing method, device, electronic device and storage medium Pending CN114742951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210417761.1A CN114742951A (en) 2022-04-20 2022-04-20 Material generation method, image processing method, device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114742951A true CN114742951A (en) 2022-07-12

Family

ID=82283516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210417761.1A Pending CN114742951A (en) 2022-04-20 2022-04-20 Material generation method, image processing method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114742951A (en)

Similar Documents

Publication Publication Date Title
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
US20200020173A1 (en) Methods and systems for constructing an animated 3d facial model from a 2d facial image
KR20210119438A (en) Systems and methods for face reproduction
CN101055646B (en) Method and device for processing image
US20180204052A1 (en) A method and apparatus for human face image processing
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN108986016B (en) Image beautifying method and device and electronic equipment
CN106021550B (en) Hair style design method and system
KR20200014280A (en) An image processing apparatus, an image processing system, and an image processing method, and a program
CN106682632A (en) Method and device for processing face images
JP2024500896A (en) Methods, systems and methods for generating 3D head deformation models
TWI780919B (en) Method and apparatus for processing face image, electronic device and storage medium
CN113628327A (en) Head three-dimensional reconstruction method and equipment
CN111066026A (en) Techniques for providing virtual light adjustments to image data
CN110322571B (en) Page processing method, device and medium
CN116997933A (en) Method and system for constructing facial position map
JP2024503794A (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
KR20230110787A (en) Methods and systems for forming personalized 3D head and face models
JPH1125253A (en) Method for drawing eyebrow
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
EP3871194A1 (en) Digital character blending and generation system and method
US12039675B2 (en) High quality AR cosmetics simulation via image filtering techniques
CN114742951A (en) Material generation method, image processing method, device, electronic device and storage medium
CN113223128B (en) Method and apparatus for generating image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination