CN111260754B - Face image editing method and device and storage medium - Google Patents

Face image editing method and device and storage medium

Info

Publication number
CN111260754B
CN111260754B (application CN202010341415.0A)
Authority
CN
China
Prior art keywords
face
image
attribute
hidden variable
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010341415.0A
Other languages
Chinese (zh)
Other versions
CN111260754A (en)
Inventor
朱飞达
邰颖
汪铖杰
李季檩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010341415.0A
Publication of CN111260754A
Application granted
Publication of CN111260754B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a face image editing method, a face image editing device, and a storage medium. The method can display an image editing page; in response to an attribute editing operation on the face image, determine the target face attribute to be edited in the face image and the editing information for that attribute; simulate the face image based on a first hidden variable to generate a simulated face image; and obtain an adjusted hidden variable from the first hidden variable based on the difference between the two images. The face image can therefore be accurately mapped into the hidden variable space, so that the adjusted hidden variable accurately represents the face image. By acquiring the association relationship between changes of the adjusted hidden variable and changes of the target face attribute, the adjusted hidden variable can be adjusted based on that relationship and the editing information to obtain a target hidden variable, and an edited face image can be generated from the target hidden variable. The target hidden variable accurately represents the edited face image in the hidden variable space, which helps ensure the editing effect on the face image.

Description

Face image editing method and device and storage medium
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a device for editing a face image and a storage medium.
Background
With the rapid development of computer vision technology and graphics, images have gradually become the main carrier of people's daily information exchange owing to their intuitive representation and convenient transmission. The human face is the part of the human body that carries the most differentiating information, and it is preferred by most users in information exchange.
At present, the related art includes schemes for editing a face image, that is, adjusting various elements in the face image, such as the nose, mouth, and eyes, to generate a new face image. However, these editing schemes have an insufficient understanding of face attributes and cannot adjust the face attributes well, especially for real faces.
Disclosure of Invention
The embodiment of the invention provides a face image editing method, a face image editing device and a storage medium, which can improve the accuracy of hidden variables of a face image in a hidden variable space and improve the face editing effect.
The embodiment of the invention provides a face image editing method, which comprises the following steps:
displaying an image editing page, wherein the image editing page comprises a face image to be edited;
in response to an attribute editing operation on the face image, determining a target face attribute to be edited in the face image and editing information of the target face attribute;
simulating the face image based on a first hidden variable to generate a simulated face image, wherein the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
adjusting the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
acquiring an association relationship between changes of the adjusted hidden variable in the hidden variable space and changes of the target face attribute, and adjusting the adjusted hidden variable based on the association relationship and the editing information to obtain a target hidden variable;
and generating an edited face image corresponding to the target face attribute based on the target hidden variable, and displaying the edited face image.
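The steps above amount to a two-stage latent-space pipeline: first invert the face image into the hidden variable space by minimizing the image difference, then move the recovered hidden variable along a direction associated with the target face attribute. A minimal, self-contained sketch of this idea (using a toy linear "generator" in place of the patent's trained face generator; all names, dimensions, the direction vector, and the learning rate are illustrative assumptions, not taken from the patent):

```python
import numpy as np

# Toy stand-in for the trained face generator G: latent -> image.
# In the patent's setting G would be a deep generative network; a fixed
# linear map keeps the sketch runnable end to end.
rng = np.random.default_rng(0)
G_W = rng.standard_normal((16, 4))          # "image" dim 16, latent dim 4

def generate(z):
    return G_W @ z                          # simulated face image for latent z

# Steps 1-2: start from an initial latent z0 (the "first hidden variable")
# and refine it by gradient descent on the image difference, so the
# adjusted hidden variable accurately represents the input face image.
def invert(target_img, z0, lr=0.05, steps=500):
    z = z0.copy()
    for _ in range(steps):
        diff = generate(z) - target_img     # image difference information
        grad = G_W.T @ diff                 # gradient of 0.5 * ||G(z) - x||^2
        z -= lr * grad
    return z

true_z = rng.standard_normal(4)
face_img = generate(true_z)                 # the face image to be edited
z_adj = invert(face_img, rng.standard_normal(4))

# Step 3: move the adjusted hidden variable along a direction vector d
# associated with the target face attribute (e.g. "smile"), scaled by the
# editing degree, then regenerate the edited face image.
d = rng.standard_normal(4)
d /= np.linalg.norm(d)                      # hypothetical attribute direction
degree = 1.5                                # editing information (change amount)
z_target = z_adj + degree * d               # target hidden variable
edited_img = generate(z_target)             # edited face image
```

With a linear generator the inversion recovers the image essentially exactly; with a real deep generator the same optimization loop would only approximately minimize the image difference, which is why the patent emphasizes the adjustment step.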
An embodiment of the present invention further provides a face image editing apparatus, where the face image editing apparatus includes:
the editing page display unit is used for displaying an image editing page, wherein the image editing page comprises a face image to be edited;
the acquiring unit is used for responding to attribute editing operation aiming at the face image and determining target face attributes to be edited in the face image and editing information of the target face attributes;
the simulation unit is used for simulating the face image based on a first hidden variable to generate a simulated face image, wherein the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
a first adjusting unit, configured to adjust the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
a second adjusting unit, configured to obtain an association relationship between a change of an adjusted hidden variable in the hidden variable space and a change of the target face attribute, and adjust the adjusted hidden variable based on the association relationship and the editing information to obtain a target hidden variable;
and the image display unit is used for generating an edited face image corresponding to the target face attribute based on the target hidden variable and displaying the edited face image.
An embodiment of the present invention further provides a storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above-mentioned face image editing method.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, wherein the processor implements the steps of the above facial image editing method when executing the computer program.
The embodiment of the invention provides a face image editing method, a face image editing device, computer equipment, and a storage medium, which can display an image editing page; in response to an attribute editing operation on the face image, determine the target face attribute to be edited in the face image and the editing information for that attribute; simulate the face in the face image based on a first hidden variable to generate a simulated face image, where the first hidden variable is an initial hidden variable of the face image in a hidden variable space; and adjust the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable. The face image can therefore be accurately mapped into the hidden variable space, improving how accurately the adjusted hidden variable expresses the face information of the face image; in particular, for a real face, the position in the hidden variable space of the hidden variable acquired by this method is more accurate. Then, by acquiring the association relationship between changes of the adjusted hidden variable in the hidden variable space and changes of the target face attribute, the adjusted hidden variable can be adjusted based on that relationship and the editing information to obtain the target hidden variable corresponding to the edited face image. This ensures the accuracy of the edited face image's representation in the hidden variable space, so that the edited face image generated from the target hidden variable adjusts the target face attribute more accurately and realistically, improving the editing effect.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a scene schematic diagram of a face image editing method provided by an embodiment of the present invention;
fig. 2 is a flowchart of a method for editing a face image according to an embodiment of the present invention;
FIG. 3a is a diagram illustrating editing of face attributes according to an embodiment of the present invention;
FIG. 3b is a diagram illustrating face attribute editing according to an embodiment of the present invention;
FIG. 3c is a diagram illustrating face attribute editing according to an embodiment of the present invention;
FIG. 3d is a diagram illustrating face attribute editing according to an embodiment of the present invention;
FIG. 3e is a schematic diagram of face attribute editing implemented by imitating a target imitation image in an embodiment of the present invention;
FIG. 4 is a block diagram of a generator in an embodiment of the invention;
fig. 5a is a schematic flowchart of a method for obtaining a direction vector of a hidden variable according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of determining a direction vector based on two classes according to an embodiment of the present invention;
FIG. 5c is a schematic diagram of different smiling faces resulting from setting different degrees of "smile" using a method of an embodiment of the present invention;
fig. 5d is a schematic diagram of an acquisition process of an edited face image based on a direction vector according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a face image editing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a face image editing method and device, computer equipment and a storage medium. Specifically, the present embodiment provides a face image editing method suitable for a face image editing apparatus, which may be integrated in a computer device.
The computer device may be a terminal, such as a mobile phone, a tablet computer, a notebook computer, or a desktop computer.
The computer device may also be a device such as a server, and the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform, but is not limited thereto.
The face image editing method of the embodiment can be realized by a terminal, and can also be realized by the terminal and a server together.
The following describes a face image editing method by taking an example in which the terminal and the server implement the face image editing method together.
Referring to fig. 1, a face image editing system provided by the embodiment of the present invention includes a terminal 10, a server 20, and the like; the terminal 10 and the server 20 are connected through a network, for example, a wired or wireless network connection, wherein the facial image editing apparatus on the terminal side can be integrated in the terminal in the form of a client.
The terminal 10 may be configured to display an image editing page, where the image editing page includes a face image to be edited; and in response to the attribute editing operation for the face image, determining a target face attribute to be edited in the face image and editing information of the target face attribute, sending the editing information and the face image to the server 20, triggering the server to obtain an edited face image corresponding to the face image based on the editing information and the face image, and sending the edited face image to the terminal.
The server 20 may be configured to receive the editing information and the face image, and simulate the face in the face image based on a first hidden variable to generate a simulated face image, where the first hidden variable is an initial hidden variable of the face image in a hidden variable space; adjust the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable; acquire the association relationship between changes of the adjusted hidden variable in the hidden variable space and changes of the target face attribute, and adjust the adjusted hidden variable based on the association relationship and the editing information to obtain a target hidden variable; and generate an edited face image corresponding to the target face attribute based on the target hidden variable, and send the edited face image to the terminal 10.
The terminal 10 may also be configured to receive the edited face image sent by the server, and display the edited face image.
In one example, the above scheme of the server 20 acquiring the edited face image may be completed by the terminal 10.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiment of the invention will be described from the perspective of a face image editing device, which can be specifically integrated in a terminal. For example in the form of a client or integrated in the terminal in the form of a service module of the client. The service module can be understood as a component formed by codes for implementing the face image editing method of the embodiment. The face image editing method of the present embodiment may be executed by a processor of a terminal. Or a part of the facial image editing device can be integrated in the terminal, and a part of the facial image editing device can be integrated in the server. For example, a part for obtaining the edited face image based on the editing information and the face image may be integrated in the server, and other parts, for example, a part for obtaining the face image, obtaining the editing information, and displaying the edited face image, may be integrated in the terminal.
The embodiment of the invention provides a face image editing method, which relates to the Face Recognition technology in Computer Vision (CV), in particular to the Face Attribute Recognition technology within face recognition.
Computer vision is a science that studies how to make machines "see"; it uses cameras and computers instead of human eyes to identify, track, and measure targets, and further processes the images into a form more suitable for human observation or for transmission to instruments for detection. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and the like, as well as common biometric technologies such as face recognition and fingerprint recognition. The face image editing method in this embodiment may edit the face attributes of a face image based on face recognition technology, particularly face attribute recognition technology.
As shown in fig. 2, the flow of the face image editing method of the present embodiment may be as follows:
201. displaying an image editing page, wherein the image editing page comprises a face image to be edited;
optionally, the image editing page of this embodiment may be an image editing page displayed by a client, or may also be an image editing page displayed by a web page of a web client, which is not limited in this embodiment, where the client may be any type of client, such as a video client and an instant messaging client.
The face image in the image editing page of the embodiment may be a generated virtual face image or a real face image. The real face image can be obtained by real-time shooting, can be obtained from an image set of a terminal, or can be downloaded from the internet. The image set may be an album or a set of cached images corresponding to the client, and the image set may be an image set locally stored by the terminal or an image set stored by a user of the terminal on the cloud storage platform, which is not limited in this embodiment.
It can be understood that the image editing page is triggered differently for different clients, in one example, the client is an instant messaging client, and the step "show image editing page" may include:
displaying a chat session page of the instant messaging client;
responding to an image sending triggering operation aiming at the chat conversation page, and displaying an image acquisition page;
and responding to the image acquisition operation aiming at the image acquisition page, and displaying an image editing page of the instant messaging client, wherein the face image to be edited in the image editing page is the image acquired through the image acquisition operation.
The chat session page may be a group chat page or a single chat page, which is not limited in this embodiment.
In one example, the chat session page includes an information sending trigger control, the image obtaining page is an image selection page, and the step "presenting the image obtaining page in response to an image sending trigger operation for the chat session page" may include:
displaying a sending information selection sub-page based on a triggering operation aiming at an information sending triggering control, wherein the sending information selection sub-page comprises an existing image selection control;
and displaying an image selection page of the terminal based on the triggering operation aiming at the existing image selection control, wherein the image selection page comprises candidate images from the image set of the terminal.
The triggering operation for the control can be a touch operation such as clicking, sliding, double clicking and the like.
Correspondingly, the step of "responding to the image acquisition operation for the image acquisition page, and presenting the image editing page of the instant messaging client" may include:
and responding to the selection operation of the candidate image in the image selection page, and displaying the image editing page of the instant messaging client, wherein the face image to be edited in the image editing page is the selected candidate image.
The selection operation for the candidate image may be a touch operation such as clicking, long-pressing, double-clicking, and finger joint clicking for the candidate image.
Optionally, in another example, the chat session page includes an information sending trigger control, the image obtaining page is an image capturing page, and the step "presenting the image obtaining page in response to an image sending trigger operation for the chat session page" may include:
displaying a sending information selection sub-page based on a triggering operation aiming at an information sending triggering control, wherein the sending information selection sub-page comprises an image shooting control;
and displaying an image shooting page of the terminal based on the triggering operation aiming at the image shooting control, wherein the image shooting page comprises the image shooting control.
The triggering operation for the control can be a touch operation such as clicking, sliding, double clicking and the like.
Correspondingly, the step of "responding to the image acquisition operation for the image acquisition page, and presenting the image editing page of the instant messaging client" may include:
and responding to the shooting operation aiming at the image shooting control, and displaying an image editing page of the instant messaging client, wherein the face image to be edited in the image editing page is an image shot based on the shooting operation.
The shooting operation may be a touch operation such as clicking on an image shooting control, and the like, which is not limited in this embodiment.
202. In response to an attribute editing operation on the face image, determining the target face attribute to be edited in the face image and the editing information of the target face attribute.
the face attributes in the embodiment can represent a series of biological characteristics of face features, have strong self-stability and individual difference, and identify the identity of a person. Facial attributes include, but are not limited to, gender, skin tone, age, expression, race, hair color, beard, and the like.
Different sub-attributes can be included under different attributes, for example, the expression attribute includes laughing, crying, happy, sad, angry, surprise, and the like. The in-person attribute may include yellow race, black race, white race, and the like, and may further include smaller attributes of chinese, korean, japanese, and the like. The target face attribute in this embodiment may be the above-mentioned attribute in a large range, or may be the attribute in a small range, and this embodiment does not limit this.
In this embodiment, the editing operation in step 202 may be an operation formed by combining a series of touch operations.
In this embodiment, through the editing operation, a target face attribute can be selected from the plurality of face attributes provided by the client, and editing information for the target face attribute can be set.
optionally, the image editing page includes a first attribute editing control, and the step "determining a target face attribute to be edited in the face image and editing information of the target face attribute in response to the attribute editing operation for the face image" may include:
displaying an attribute editing page of the face image based on the triggering operation on the first attribute editing control, wherein the attribute editing page comprises the face image and candidate face attributes;
displaying an editing control of the target face attribute based on the selection operation aiming at the target face attribute in the candidate face attributes;
and acquiring the editing information of the target face attribute in response to the setting operation aiming at the editing control.
The attribute editing page may be a page obtained by switching the display of the image editing page to show the candidate face attributes.
The candidate face attributes of the present embodiment include, but are not limited to, age, gender, expression, race, skin color, and the like.
Optionally, in this embodiment, for different face attributes, the content of the editing control and the corresponding setting operation may be different, for example, for expression attributes, the setting operation of the editing control may be a combination of a series of touch operations.
For example, referring to fig. 3a, 301 in fig. 3a is an image editing page that includes a self-photographed image of user A of the terminal (i.e., the face image to be edited), together with controls for performing different edits on the image, for example a first attribute editing control for editing the face attributes of the image, such as a control named "attribute editing". When a click operation on the "attribute editing" control in the page indicated by 301 is detected, the attribute editing page indicated by 302 in fig. 3a is presented; this page includes user A's self-photographed image and candidate face attributes such as expression, age, gender, and race.
When a selection operation for the "expression" attribute in the page indicated by 302 is detected, an editing control 3031 for the "expression" attribute is shown in the page indicated by 303. The editing control 3031 includes expression sub-controls 30311, together with a change degree selection control 30312 and a confirmation control 30313 corresponding to them; each expression sub-control displays the name of a sub-attribute of the expression attribute, such as "smile", "anger", or "sorrow" in the expression sub-controls 30311. As shown at 30312, the change degree selection control may take the form of a bar control, and a sliding operation by the user changes the position of the anchor point in the bar to determine the degree of change of the sub-expression. For example, when the user selects the "smile" sub-control in 30311, point a in 30312 indicates the initial attribute information of user A's photo on the "smile" attribute, for example, that the degree of "smile" is "not smiling". When a selection operation on the change degree selection control is detected, for example a sliding operation on the bar control, the change in the anchor point is determined, and with it the editing information of user A's photo on the "smile" attribute; for example, sliding from point a to point b in 30312 indicates that the attribute information on the "smile" attribute changes from "not smiling" to "smiling".
When the anchor point of 30312 is located at point b, if a trigger operation on the confirmation control named "finish" is detected in 3031, editing of user A's photo based on the editing information is started. After editing finishes, the page shown at 304 is displayed, in which the smile attribute of user A has changed from "not smiling" in 303 to "smiling". It can be understood that the degree of "smiling" in 304 varies with the position of point b on the progress bar; for example, if b is located at b1, the magnitude of "smiling" in 304 is greater.
The editing information of this embodiment includes information that can represent the change direction and change degree of the target face attribute before and after editing, and the change direction includes, but is not limited to, forward change and reverse change. For example, for the "smile" attribute, the changing direction includes an increasing direction and a decreasing direction of the "smile" magnitude, and the changing degree is the changing magnitude of the "smile".
For example, the editing information may include the original attribute information of the face image on the target face attribute, and the target attribute information on the target face attribute after editing. Together, the original and target attribute information reflect the change direction and change degree of the target face attribute before and after editing. Taking "smile" as an example again, the editing information may include the original attribute information "not smiling" of the "smile" attribute and the target attribute information "smiling".
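One way the editing information described here could be represented in code (a hedged sketch; the dictionary layout, function names, and the numeric degree scale are assumptions, not taken from the patent) is as a pair of original and target attribute values, whose difference encodes both the change direction and the change degree:

```python
# Editing information for one target face attribute: the original and
# target attribute values jointly encode the change direction (sign of
# the difference) and the change degree (its magnitude).
def make_edit_info(attribute, original_degree, target_degree):
    return {
        "attribute": attribute,        # target face attribute, e.g. "smile"
        "original": original_degree,   # attribute info before editing
        "target": target_degree,       # attribute info after editing
    }

def change_amount(edit_info):
    # Positive -> forward change (e.g. more "smile");
    # negative -> reverse change (e.g. less "smile").
    return edit_info["target"] - edit_info["original"]

# "Not smiling" (0.0) edited to "smiling" (0.8): a forward change.
smile_edit = make_edit_info("smile", 0.0, 0.8)
```

The signed `change_amount` is exactly the kind of scalar that would scale the attribute direction vector when adjusting the hidden variable.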
In addition to smiling, this embodiment may also edit other face attributes. For example, assuming user A's photo is the one in the first two pages of fig. 3b, if the target face attribute selected via the editing control 3031 in 305 is "anger", then c indicates that the photo's original attribute information on the target face attribute is "not angry" and c1 indicates that the edited target attribute information is "angry"; the edited face image may be as shown in page 306 of fig. 3b.
For example, referring to fig. 3c, if the target face attribute in the page indicated by 307 is "surprise", d indicates that the photo's target face attribute is "not surprised" and d1 indicates that after editing it is "surprised"; the edited face image may be as shown in page 308 of fig. 3c.
For example, referring to fig. 3d, if the target face attribute in the page shown at 309 is "beard", f indicates that there is little "beard" in the photo and f1 indicates that the "beard" is increased after editing; the edited face image may be as shown in page 310 of fig. 3d.
In order to avoid the portrait right problem, the example of the face images in fig. 3a to 3d in this embodiment uses a generated virtual face, but in actual use, the face image in the page may be a real face image such as a self-photograph taken by a user with a front-facing camera.
The face attribute editing method of this embodiment can also be applied to a face attribute imitation scene: a target imitation image of some imitation object, such as a celebrity, can be selected through the editing operation, and the terminal edits the face image taking certain face attributes of the target imitation image as the target face attributes, so that the user can obtain, based on his or her own face image, a face image that imitates the imitation object.
Optionally, the image editing page of this embodiment includes a second attribute editing control, and the step "determining a target face attribute to be edited in the face image and editing information of the target face attribute in response to the attribute editing operation for the face image" may include:
displaying a property editing page of the face image based on the triggering operation aiming at the second property editing control, wherein the property editing page comprises the face image and the imitation control;
displaying an imitation image selection page based on a triggering operation for the imitation control, wherein the imitation image selection page comprises candidate imitation images;
responding to the imitation trigger operation aiming at the target imitation image in the candidate imitation images, and acquiring the imitated face attribute of the target imitation image as the target face attribute of the face image;
acquiring attribute information of the target face attribute in the target imitation image, and determining editing information of the target face attribute in the face image based on the attribute information.
In this embodiment, the imitated face attribute may be a preset face attribute, such as an "expression" attribute, a "race" attribute, or an "age" attribute. There may be multiple imitated face attributes.
Attribute recognition technology can be used to perform attribute recognition on the face in the target imitation image to obtain the attribute information of the imitated face attribute.
In one example, the face attribute recognition may be performed on the target imitation image, attribute information of the target imitation image on a plurality of (e.g., 40) preset face attributes is obtained, and the imitated face attribute of the target imitation image is selected based on the attribute information.
For example, a selection condition based on attribute information may be set in advance for each preset face attribute; for instance, the selection condition for the "smile" attribute may be that the recognized attribute information is "smile", so that the attributes on which the target imitation image is prominent are selected as the imitated face attributes.
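The selection described above can be sketched as follows; the attribute names, recognized values, and conditions are illustrative stand-ins, not values defined by the patent:

```python
# Sketch of selecting "imitated face attributes" from recognized attribute
# information, assuming each preset face attribute has a preset selection
# condition on its recognized attribute information.

def select_imitated_attributes(attribute_info, selection_conditions):
    """Return the attributes whose recognized information satisfies the
    preset selection condition, i.e. attributes on which the target
    imitation image is prominent."""
    return [attr for attr, condition in selection_conditions.items()
            if condition(attribute_info.get(attr))]

# Recognized attribute information for a target imitation image (assumed values).
info = {"smile": "laugh", "beard": "none", "age": "young"}

# Preset conditions, e.g. the smile condition holds when the recognized
# information is "smile" or stronger.
conditions = {
    "smile": lambda v: v in ("smile", "laugh"),
    "beard": lambda v: v in ("light", "heavy"),
}

print(select_imitated_attributes(info, conditions))  # → ['smile']
```

Here only "smile" passes its condition, so it becomes the imitated face attribute used as the target face attribute of the face image.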
The candidate imitation images in this embodiment may be provided by the client, or may be stored in the image set in advance by the user, which is not limited in this embodiment.
The first property editing control and the second property editing control of the present embodiment may be the same control.
For example, referring to fig. 3e, 311 in fig. 3e is an image editing page. The page includes a self-photographed image of user B of the terminal (i.e., the face image to be edited), together with controls for performing different edits on the image, for example a second attribute editing control for editing the face attributes of an image, such as a control named "attribute editing". When a click operation on the "attribute editing" control in the page indicated at 311 is detected, an attribute editing page, indicated at 312 in fig. 3e, is presented; it includes the selfie of user B and an imitation control, such as the control named "face imitation" in 312. When a trigger operation on the "face imitation" control is detected, an imitation image selection page, indicated at 313, is displayed, which includes candidate imitation images. When an attribute imitation selection operation, such as a click on a target imitation image, is detected, the imitated face attribute of the target imitation image, for example "smile", is determined as the target face attribute of user B's selfie; attribute information of the target face attribute in the target imitation image is obtained, for example the degree of smiling being "laugh", and the editing information of the target face attribute in the face image is determined based on that attribute information, for example, adjusting the "smile" attribute of user B's selfie to the degree "laugh".
203. Simulating the face image based on a first hidden variable to generate a simulated face image, wherein the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
204. adjusting the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
The initial hidden variable of this embodiment may be understood as an initial description of a face image in the hidden variable space. In one example, different face images may use the same initial hidden variable. In another example, for a certain specific face attribute, such as gender, age, or race, a corresponding initial hidden variable may be set for face images that show different attribute information on that attribute. If the specific face attribute is gender, corresponding initial hidden variables may be set for male and female face images according to the attribute information; if the specific face attribute is age, corresponding initial hidden variables may be set for face images of different ages.
In this embodiment, the adjusted hidden variable may be obtained based on a Generative Adversarial Network (GAN). A generative adversarial network is an unsupervised learning method comprising at least a Generator Network and a Discriminator Network.
In this embodiment, the generator network may be configured to generate a simulated face image based on the hidden variable, and the discriminator network may be configured to determine a difference between the simulated face image and the face image to be edited in step 201.
The generative adversarial network used in this embodiment may be a network trained on high-definition face image data (e.g., face images with a resolution of 1024 × 1024), and the generator in the network may be regarded as a high-definition face generator. A generator trained on high-definition face image data can process high-definition faces, and the generated new expression preserves the resolution. When training the generative adversarial network, the generator network may generate a simulated image of a face from a random variable; the simulated image and a real face image are input into the discriminator network, which determines, based on the two types of input images, whether the face in the simulated image is real, and the generator network and the discriminator network are adjusted based on the discrimination result so as to train the generative adversarial network.
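The adversarial training described above can be sketched with toy one-layer stand-ins for the two networks (real implementations use deep convolutional networks and repeat these loss computations with parameter updates; all shapes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the generator and discriminator networks.
g_w = rng.normal(size=(4, 2))            # generator params: random variable z(2) -> image x(4)
d_w = rng.normal(size=4)                 # discriminator params: image x(4) -> realness score

def generator(z):
    return np.tanh(z @ g_w.T)            # simulated "face images"

def discriminator(x):
    return 1 / (1 + np.exp(-(x @ d_w)))  # probability that the input is real

real = rng.normal(loc=1.0, size=(8, 4))  # real face images (toy vectors)
z = rng.normal(size=(8, 2))              # random variables input to the generator
fake = generator(z)                      # simulated images

# The discriminator is trained to output 1 for real images and 0 for
# simulated ones; the generator is then adjusted so the discriminator
# outputs 1 on its simulated images.
d_loss = -np.mean(np.log(discriminator(real) + 1e-9)
                  + np.log(1 - discriminator(fake) + 1e-9))
g_loss = -np.mean(np.log(discriminator(fake) + 1e-9))
```

In full training, both losses are minimized alternately by back-propagation through the respective network.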
In this embodiment, the step of simulating the face image based on the first hidden variable to generate a simulated face image may include:
and generating a generator network in the countermeasure network, and simulating the face image based on the first hidden variable to generate a simulated face image.
Correspondingly, the step of adjusting the first hidden variable based on the image difference information between the simulated face image and the face image to obtain an adjusted hidden variable may include:
comparing the simulated face image with the face image through the discriminator network of the generative adversarial network, to obtain the image difference information between the simulated face image and the face image;
and adjusting the first hidden variable based on the image difference information and the face image to obtain an adjusted hidden variable.
As described above, in this embodiment a corresponding initial hidden variable may be set, for a certain specific face attribute, for face images showing different attribute information on that attribute. When the initial hidden variable needs to be obtained, the initial hidden variable corresponding to the attribute information that the face image to be edited shows on that specific face attribute can therefore be selected as its first hidden variable. This increases the speed of generating, from the first hidden variable, a simulated face image with high similarity to the real face image to be edited, and reduces the time required to obtain the adjusted hidden variable.
Optionally, before the step of simulating the face image based on the first hidden variable to generate the simulated face image, the method may further include: identifying actual attribute information of a face image to be edited, which is expressed on a specific face attribute, and selecting an initial hidden variable corresponding to the actual attribute information as a first hidden variable based on the actual attribute information.
For example, in consideration of the influence of the race attributes on the facial muscles, the initial hidden variables may be set based on the race attributes, and different initial hidden variables may be set for different races. Before generating the simulated face image, identifying real race attribute information of the face image to be edited, which is expressed on the race attributes, and selecting an initial hidden variable corresponding to the real race attribute information as a first hidden variable.
Of course, in other examples, the hidden variable input to the generator network may also be a random variable; in that case, the time required for the generator network to generate a simulated face image similar to the face image to be edited is somewhat longer.
In one example, the generative adversarial network may further include a Mapping Network connected to the generator network of this embodiment, configured to encode an input variable into an intermediate variable, i.e., the hidden variable of this embodiment, which is input into the generator network to generate the simulated face image. It can be understood that the hidden variable contains elements controlling different face attributes, and the adjustment of a face attribute of the face image to be edited is likewise implemented by adjusting the hidden variable. In practice, when a face attribute is adjusted, it is desirable that only the adjusted face attribute changes in the face image while the other face attributes remain unchanged; for example, when adjusting the hair color attribute, one does not want the eyebrow density attribute to change as well. Therefore, the lower the feature correlation between different face attributes in the hidden variable, i.e., the less the features of different attributes are entangled, the better; and the mapping network can be used to reduce the feature entanglement of different face attributes in the hidden variable.
The hidden variable in this embodiment may be an intermediate variable mapped by the mapping network from a random variable. The network structure used for the generative adversarial network in this embodiment is not limited, and may be CGAN (Conditional Generative Adversarial Network), PGAN (Progressive Generative Adversarial Network), StyleGAN, BigGAN, and the like.
Referring to fig. 4, fig. 4 shows the network framework of a generative network implemented based on the StyleGAN network structure, where the generated Image = Generator(w) and w = Mapping(z).
Here Generator denotes the generator network in fig. 4, Mapping denotes the mapping network in fig. 4, and z is the random variable input to the mapping network, which encodes the random variable z into the hidden variable w used to generate the image. Referring to fig. 4, the random variable z may be a random variable of a certain dimension (e.g., 512 dimensions).
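A minimal numeric sketch of such a mapping network, assuming an 8-layer fully connected structure over a 512-dimensional variable (the layer count, initialization scale, and nonlinearity are illustrative stand-ins for StyleGAN's actual mapping network):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512                            # dimension of the random variable z

# Illustrative weights for an 8-layer fully connected mapping network.
layers = [rng.normal(scale=0.02, size=(DIM, DIM)) for _ in range(8)]

def mapping(z):
    """Encode random variable z into hidden variable w (w = Mapping(z))."""
    w = z
    for W in layers:
        w = np.maximum(w @ W, 0.0)   # ReLU stand-in for the nonlinearity
    return w

z = rng.normal(size=DIM)             # random variable of dimension 512
w = mapping(z)                       # hidden variable fed into the generator
```

The generated image would then be Generator(w); the point of the mapping step is that attributes are less entangled in w than in z.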
In this embodiment, before the step "simulating the face image based on the first hidden variable to generate a simulated face image", the method may further include: mapping a random variable into a hidden variable based on the mapping network in the generative adversarial network, and taking the hidden variable as the first hidden variable corresponding to the face image to be edited.
In another example, some hidden variables for simulating face images may be obtained in advance based on the mapping network and stored in a specific storage space, for example in a shared ledger of the blockchain to which the face image editing apparatus belongs. When face attribute editing of the face image to be edited is triggered, a suitable hidden variable may be obtained from the pre-stored hidden variables as the initial hidden variable. For example, a hidden variable may be picked randomly from the pre-stored hidden variables; or the actual attribute information that the face image to be edited shows on the specific face attribute may be determined, and the hidden variable corresponding to that attribute information selected from the pre-stored hidden variables as the initial hidden variable of the face image to be edited.
In one example, when generating the simulated face image, the initial hidden variable may be input into the mapping network instead of the random variable, and the simulated face image is obtained through mapping of the mapping network and processing of the generator network.
In this embodiment, the lower the coupling between face attributes in the hidden variable, i.e., the lower the feature correlation of face attributes in the hidden variable, the better. Considering that in StyleGAN the various face attributes in the hidden variable w are well decoupled by the mapping network, the generator network of StyleGAN may be adopted as the generator network of the generative adversarial network in this embodiment.
In this embodiment, an optimization method may be used to solve the adjusted hidden variable w corresponding to the face image. The algorithm flow for solving the adjusted hidden variable w is described below, where I represents the face image to be edited, G represents the high-definition generator, w represents the adjusted hidden variable, and w0 represents the first hidden variable. The first hidden variable w0 may first be given an appropriate initialization, for example the mean hidden variable w̄.

The solving steps of the adjusted hidden variable w are as follows:

1. Input w0 into the high-definition generator to obtain the simulated face image G(w0);

2. Calculate the loss function Loss(G(w0), I);

3. Based on the loss function, use the BP (Error Back Propagation) algorithm to back-propagate the error calculated by the loss function through the generator network, and solve the adjustment amount Δw of the hidden variable by gradient descent, so as to obtain the adjusted hidden variable w = w0 + Δw;

4. Input the adjusted hidden variable w into the high-definition generator to obtain the simulated face image G(w);

5. Calculate the loss function Loss(G(w), I); if the loss function does not satisfy the convergence condition, continue with step 6; if the loss function satisfies the convergence condition, use the adjusted hidden variable w as the final adjusted hidden variable;

6. Based on the loss function, use the BP algorithm to back-propagate the error calculated by the loss function through the generator network, solve the adjustment amount Δw of the hidden variable by gradient descent, so as to obtain the adjusted hidden variable w = w + Δw, and return to step 4.
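The iterative solving steps above can be sketched with a toy stand-in for the high-definition generator: a fixed linear map in place of G, a mean-squared-error loss in place of the LPIPS loss mentioned later in this embodiment, and plain gradient descent on the hidden variable only (the generator weights stay frozen, as noted). All dimensions and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

A = rng.normal(size=(64, 16))   # frozen toy "high-definition generator": G(w) = A @ w
I = rng.normal(size=64)         # face image to be edited (toy vector)

def G(w):
    return A @ w

def loss(w):                    # MSE stand-in for Loss(G(w), I)
    return np.mean((G(w) - I) ** 2)

w = np.zeros(16)                # first hidden variable w0 (here: zero/mean init)
lr = 0.01                       # gradient-descent step size
threshold = 1e-4                # convergence condition on the loss
max_iters = 1000                # or stop after a fixed number of iterations

for step in range(max_iters):
    if loss(w) < threshold:     # convergence condition satisfied
        break
    # BP: gradient of the loss w.r.t. the hidden variable (generator frozen)
    grad = 2.0 / len(I) * A.T @ (G(w) - I)
    w = w - lr * grad           # delta_w = -lr * grad;  w <- w + delta_w
```

After the loop, w plays the role of the final adjusted hidden variable: G(w) is as close to I as this toy generator allows.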
The convergence condition may be that the loss function converges below a predetermined loss threshold, or that the number of times the simulated face image has been generated exceeds a preset iteration threshold.
In this embodiment, the dimensions of the first hidden variable, the adjusted hidden variable, and the target hidden variable are the same. In this embodiment, when the BP algorithm and the gradient descent algorithm are used to optimize the hidden variables, the weights of the generator network may not be adjusted.
Here, assuming that w is an n-dimensional variable, the n-dimensional variable w is represented as [w1, w2, w3, ..., wn], and the above adjustment amount Δw is also an n-dimensional variable.
In one example, the loss of the simulated face image can be used directly as the loss on the hidden-variable side for every dimension, i.e., loss1 = loss2 = ... = lossn = loss, where loss represents Loss(G(w0), I) or Loss(G(w), I).

In another example, after the loss is propagated to the hidden-variable side based on the BP algorithm, the loss corresponding to the element of each dimension in the hidden variable can be obtained, i.e., loss1 to lossn (the losses corresponding to w1 to wn) are obtained based on the BP algorithm; then Δw = -α · [loss1, loss2, ..., lossn], where α is an adjustment weight parameter that can be set to any value according to actual needs, for example, to any value between 0 and 1.
Through this scheme, the difference between the simulated face image G(w) generated from w and the face image I can be minimized.
In one example, the LPIPS (Learned Perceptual Image Patch Similarity) loss function may be used as the loss function to measure the difference between the simulated face image G(w) generated from w and the face image I, i.e., loss = LPIPS(G(w), I).
The loss value of the loss function can be used as the image difference information. It can be understood that the first hidden variable may be adjusted multiple times; after the adjustment of the first hidden variable is finished, the loss value between the simulated face image generated from the adjusted hidden variable and the face image is smaller than the preset loss threshold.
In practice, the number of iterative adjustments of the first hidden variable may be set to, for example, 1000 (or another value), and the hidden variable obtained after 1000 iterations is used as the final adjusted hidden variable.
205. Acquiring the change of the adjusted hidden variable in the hidden variable space and the incidence relation between the change of the target face attribute, and adjusting the adjusted hidden variable based on the incidence relation and the editing information to obtain a target hidden variable;
In this embodiment, the connection between the adjusted hidden variable and the face image is established through the high-definition generator, so the adjusted hidden variable can accurately represent the face image in the hidden variable space.
Optionally, the step of adjusting the adjusted hidden variable based on the association relationship and the editing information to obtain the target hidden variable may include:
determining attribute change information of the target face attribute of the face image before and after editing based on the editing information of the target face attribute in the face image;
determining the target variable quantity of the adjusted hidden variable in the hidden variable space based on the attribute change information and the association relation;
and obtaining the target hidden variable based on the target variable quantity and the adjusted hidden variable.
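The three steps above can be sketched numerically; the association table, the unit direction vector, and the change degrees below are illustrative assumptions:

```python
import numpy as np

DIM = 8                                       # toy hidden-variable dimension

# Illustrative association relationship for the "smile" attribute: a unit
# direction vector in hidden-variable space, plus the attribute change
# degree produced by one unit step along it.
association = {
    "smile": {"direction": np.eye(DIM)[0],    # unit direction vector n
              "unit_degree": 1.0},
}

def target_hidden_variable(w_adjusted, attribute, change_degree):
    """Attribute change info -> target variation -> target hidden variable."""
    rel = association[attribute]
    coefficient = change_degree / rel["unit_degree"]   # adjustment coefficient
    target_variation = coefficient * rel["direction"]  # variation in hidden space
    return w_adjusted + target_variation

w_adj = np.zeros(DIM)                                  # adjusted hidden variable
w_target = target_hidden_variable(w_adj, "smile", 6.0) # e.g. "smile" -> "laugh"
# A negative degree (e.g. -6.0) would move along -n and reduce the smile.
```

Feeding w_target back into the generator then yields the edited face image.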
The attribute change information of the present embodiment refers to change information that can describe attribute information of the target face attribute before and after editing, including but not limited to the attribute change direction and the attribute change degree of the target face attribute before and after editing.
In one example, the attribute change information may be included in the editing information, and determining the attribute change information of the target face attribute of the face image before and after editing based on the editing information includes directly extracting the attribute change information of the target face attribute before and after editing from the editing information. For example, the attribute change information may describe that the "smile" attribute changed from "smile" to "laugh".
In one example, the editing information may include the edited attribute information of the face image on the target face attribute; for example, the editing information may include the edited attribute information "laugh" of the face image on the "smile" attribute, and the like.
Through this embodiment, the user can freely set the editing information, realizing flexible adjustment of the change degree of the target face attribute.
Optionally, in an example, the association relationship between the change of the target face attribute and the change of the adjusted hidden variable in the hidden variable space includes: the change of the target face attribute and the incidence relation between the change direction and the change degree of the adjusted hidden variable in the hidden variable space;
the step of determining the target variable quantity of the adjusted hidden variable in the hidden variable space based on the attribute change information and the association relationship may include:
determining the target change direction and the target change degree of the adjusted hidden variables in the hidden variable space before and after the target face attribute is edited based on the attribute change information and the association relation;
and determining the target variable quantity of the adjusted hidden variable in the hidden variable space based on the target change direction and the target change degree of the adjusted hidden variable.
In this embodiment, the association relationship between the change of the target face attribute and the change direction and change degree of the adjusted hidden variable in the hidden variable space may include: the target direction vector of the adjusted hidden variable in the hidden variable space, and the correspondence between the change degree of the target face attribute and the change degree of the adjusted hidden variable along the direction indicated by the target direction vector. If the adjusted hidden variable changes along the direction indicated by the target direction vector, the face image changes on the target face attribute (that is, the attribute information of the target face attribute changes).
The step of determining a target change direction and a target change degree of the adjusted hidden variable in the hidden variable space before and after the target face attribute is edited based on the attribute change information and the association relationship may include:
determining the attribute change direction and the attribute change degree of the attribute of the target face based on the attribute change information;
acquiring a target direction vector of the adjusted hidden variable based on the attribute change direction and the association relation, wherein the direction indicated by the target direction vector is the target change direction of the adjusted hidden variable;
and acquiring the target change degree of the adjusted hidden variable along the target direction vector based on the attribute change degree and the incidence relation.
In this embodiment, the target direction vector may be a unit vector, and the target variation degree may be understood as a coefficient of the unit vector.
For example, take the target face attribute as the "smile" attribute. Suppose that in the association relationship the direction vector of the adjusted hidden variable is n, and that if the adjusted hidden variable is adjusted along the direction indicated by n, the amplitude of the smile increases. Suppose further that the attribute change information indicates that the "smile" attribute changes from "laugh" to "smile": the attribute change direction of the smile attribute is then the direction in which the smile amplitude decreases, and the attribute change degree is the degree of changing from "laugh" to "smile". Accordingly, the target direction vector of the adjusted hidden variable is -n, and the target change degree is the coefficient corresponding to the change from "laugh" to "smile".
In one example, the association relationship comprises the change of the target face attribute and the association relationship of the change of the adjusted hidden variable in the hidden variable space; the step of determining the target variable quantity of the adjusted hidden variable in the hidden variable space based on the attribute change information and the association relationship may include:
determining the variation corresponding to the attribute variation information based on the association relationship;
and taking the determined variable quantity as a target variable quantity of the adjusted hidden variable of the face image in a hidden variable space.
In this example, the association relationship may directly include the correspondence between a change of the target face attribute, such as the attribute change information of the target face attribute, and a variation of the adjusted hidden variable in the hidden variable space, where the correspondence may exist in the form of a table or key-value pairs. It will be appreciated that the target variation differs for different changes of the target face attribute.
A unit variation, and the unit change degree of the target face attribute corresponding to that unit variation, can be set. The actual change degree of the target face attribute is determined based on the attribute change information, the adjustment coefficient of the unit variation is determined based on the ratio of the actual change degree to the unit change degree, and the product of the adjustment coefficient and the unit variation is used as the target variation of the adjusted hidden variable of the face image in the hidden variable space.
For example, again taking the target face attribute as "smile": suppose the unit variation of the "smile" attribute is Δws and the corresponding unit change degree of the target face attribute is 1. If the attribute change information is a change from "smile" to "laugh" with a corresponding change degree of 6, the target variation corresponding to the attribute change information is 6Δws.
206. And generating an edited face image corresponding to the target face attribute based on the target hidden variable, and displaying the edited face image.
In this embodiment, the high definition generator may be used to generate an edited face image corresponding to the target face attribute based on the target hidden variable.
If the client is an instant messaging client and the user initiates the acquisition of the edited face image through a chat session page of the instant messaging client, the step of displaying the edited face image may include:
and sending the edited face image to a session object of the chat session page so as to display the edited face image on the chat session page.
For example, if user A in fig. 3a triggers the above acquisition of the edited face image through the chat page with user B on the instant messaging client, then after the edited face image is obtained, the edited face image sent by user A is displayed in the single-chat page of user A and user B.
In one embodiment, there are multiple target face attributes, the editing information further includes a combination order of the edited face images corresponding to the multiple target face attributes, and the step "displaying the edited face image" may include:
combining the edited face images based on the combination sequence to obtain an expression change dynamic image;
and displaying the expression change dynamic graph.
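Combining the edited face images into such an expression-change dynamic graph can be sketched with Pillow's animated-GIF support (the frame contents, size, and timing here are illustrative stand-ins for real edited face images):

```python
from PIL import Image

# Edited face images for increasing smile amplitude (toy solid-color frames
# standing in for real edited images), already in the combination order
# given by the editing information.
frames = [Image.new("RGB", (64, 64), (i * 40, 120, 120)) for i in range(5)]

# Combine the frames into an animated GIF: the expression-change dynamic graph.
frames[0].save(
    "expression_change.gif",
    save_all=True,
    append_images=frames[1:],   # remaining frames in combination order
    duration=120,               # milliseconds per frame
    loop=0,                     # loop forever
)
```

Displaying the expression-change dynamic graph then amounts to rendering expression_change.gif in the page.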
In one example, different attribute information may also be set for the same target face attribute, for example, setting multiple amplitudes of the user's smile.
In one example, in a scene simulating a human face image, the edited human face image can further improve the similarity with the target simulation image.
Optionally, the step of "displaying the edited face image" may include:
acquiring the face in the edited face image, and replacing the face in the target imitation image with the face to obtain a new edited face image;
and displaying the new edited face image.
In this embodiment, the high-definition generator establishes the connection between a hidden variable and an image, and the image has face attributes such as age, gender, and expression. If an accurate association relationship between the change of the hidden variable in the hidden variable space and the change of the target face attribute can be found, the face attribute can be edited by adjusting the hidden variable.
Optionally, in this embodiment, the step "before obtaining the association relationship between the change of the adjusted hidden variable in the hidden variable space and the change of the target face attribute" may further include:
acquiring an actual hidden variable of a face sample image and attribute information of the face sample image on the attribute of a target face, wherein the actual hidden variable is a hidden variable of the face sample image in a hidden variable space;
generating an attribute sample corresponding to the attribute of the target face based on the attribute information and the actual hidden variable, wherein the attribute sample contains the actual hidden variable, and a sample label of the attribute sample comprises the attribute information corresponding to the actual hidden variable;
and determining the incidence relation between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute based on the distribution of the attribute samples in the hidden variable space.
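One plausible way to realize the last determination step, assuming the positive and negative attribute samples are roughly linearly separable in the hidden variable space: fit a linear classifier to the labeled actual hidden variables and take its normalized weight vector as the direction along which the target face attribute changes. The synthetic latents and labels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, N = 8, 400

# Synthetic attribute samples: actual hidden variables whose sample label
# ("smile" present or not) is decided by their coordinate along a hidden
# true direction in the hidden variable space.
true_direction = np.eye(DIM)[0]
W = rng.normal(size=(N, DIM))                     # actual hidden variables
labels = (W @ true_direction > 0).astype(float)   # sample labels (1 = positive)

# Logistic regression by full-batch gradient descent; its weight vector is
# the normal of the separating boundary, i.e. the attribute-change direction.
weights = np.zeros(DIM)
for _ in range(500):
    p = 1 / (1 + np.exp(-(W @ weights)))          # predicted P(positive)
    weights -= 0.1 * (W.T @ (p - labels)) / N     # gradient step

direction = weights / np.linalg.norm(weights)     # unit attribute direction
cosine = float(direction @ true_direction)        # alignment with true axis
```

The per-unit change degree along this direction can then be estimated from the attribute samples, giving the change-degree part of the association relationship.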
In this embodiment, the actual hidden variable of the face sample image plays the same role as the adjusted hidden variable of the face image to be edited: each may be used to generate a simulated face image that is very similar to the face sample image or the face image to be edited, respectively.
The incidence relation between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute can be used as the incidence relation between the change of the adjusted hidden variable in the hidden variable space of the face image to be edited and the change of the target face attribute.
The face image sample of this embodiment may be a virtual face image generated by the generator, in which case the actual hidden variable may directly be the hidden variable used to generate that virtual face image. In one example, the face image sample may also be a real face image.
Optionally, the step of "obtaining an actual hidden variable of the face sample image" may include:
acquiring a face sample image, wherein the face sample image comprises a face;
simulating the face sample image based on a second hidden variable to generate a simulated sample image, wherein the second hidden variable is an initial hidden variable of the face sample image in a hidden variable space;
and adjusting the second hidden variable based on image difference information between the simulation sample image and the face sample image to obtain an adjusted hidden variable serving as an actual hidden variable of the face sample image.
In this embodiment, the face in the face sample image may be a virtual face or a real face.
Wherein, the generation of the simulated sample image may also be implemented on the basis of the generator. The simulated sample image may be generated by the generator network of a generative adversarial network (GAN), which simulates the face sample image based on the second hidden variable. The simulated sample image is compared with the face sample image through the discriminator network of the generative adversarial network to obtain image difference information between the simulated sample image and the face sample image; the second hidden variable is then adjusted based on the image difference information and the face sample image, and the adjusted hidden variable serves as the actual hidden variable of the face sample image. In this embodiment, the image difference between the simulated sample image generated based on the adjusted hidden variable and the corresponding face sample image is smaller than a certain degree. The image difference may be measured using a loss function, whose loss value is the image difference information; the loss value between the simulated sample image generated from the adjusted hidden variable and the corresponding face sample image is smaller than a preset loss threshold.
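The adjust-until-similar inversion loop described above can be sketched with a toy stand-in for the generator. Everything here is an illustrative assumption, not the patent's implementation: the generator is replaced by a fixed linear map, the discriminator-derived difference information by a squared-difference loss, and the dimensions are arbitrary; a real system would use a pretrained GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the generator network: a fixed linear map from the
# 4-dim hidden-variable space to an 8-dim "image" space (assumption).
G = rng.normal(size=(8, 4))

def generate(z):
    """Simulate an image from a hidden variable z."""
    return G @ z

def solve_latent(target_image, steps=3000, lr=0.05):
    """Start from an initial hidden variable and repeatedly adjust it
    based on the difference between the simulated image and the target
    image (squared difference stands in for the discriminator-based
    image difference information)."""
    z = np.zeros(4)                      # initial hidden variable
    for _ in range(steps):
        residual = generate(z) - target_image
        z -= lr * (G.T @ residual)       # gradient step on 0.5 * ||residual||^2
    return z

# A "face sample image" produced by a known hidden variable; inversion
# should recover a hidden variable that reproduces it almost exactly.
true_z = rng.normal(size=4)
sample_image = generate(true_z)
actual_z = solve_latent(sample_image)
recon_error = 0.5 * np.sum((generate(actual_z) - sample_image) ** 2)
```

The loop stops in practice once `recon_error` drops below the preset loss threshold mentioned above; here a fixed step count is used for simplicity.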
Optionally, in an example, the association relationship includes the relationship between the change of the target face attribute and the change direction and change degree of the actual hidden variable in the hidden variable space; the attribute samples include attribute positive samples and attribute negative samples, and the attribute information corresponding to the attribute positive samples differs from that of the attribute negative samples;
the step of determining the association relationship between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute based on the distribution of the attribute samples in the hidden variable space may include:
determining the distribution of actual hidden variables of different attribute information in a hidden variable space based on the distribution of the attribute positive samples and the attribute negative samples in the hidden variable space;
and determining the association relationship between the change of the target face attribute and the change direction and change degree of the actual hidden variable in the hidden variable space based on the distribution of the actual hidden variables of different attribute information in the hidden variable space.
The attribute information of the positive attribute sample and the negative attribute sample of the present embodiment may be set as needed, and the present embodiment does not limit this, for example, taking "smile" as an example, the attribute information of the negative attribute sample is "not smile", and the attribute information of the positive attribute sample is "smile". In one example, attribute information of the attribute negative sample may be used to describe that the attribute sample does not have the target face attribute.
In this embodiment, based on the distribution of the actual hidden variables of different attribute information in the hidden variable space, the change direction and change degree of the actual hidden variable in the hidden variable space when the attribute information changes from one value to another may be determined, thereby obtaining the association relationship between the change of the target face attribute and the change direction and change degree of the actual hidden variable in the hidden variable space.
In an example, the association relationship between the change of the target face attribute and the change direction and change degree of the adjusted hidden variable in the hidden variable space may include: the target direction vector of the adjusted hidden variable in the hidden variable space, and the corresponding relationship between the change degree of the target face attribute and the change degree of the adjusted hidden variable in the direction indicated by the target direction vector.
The following describes the scheme for acquiring the direction vector n, taking the target face attribute "smile" as an example.
Fig. 5a shows a flow chart of analyzing, from a statistical point of view, the association relationship between the change of the hidden variable and the change of the face attribute. A large amount of labeled real face data can be obtained by face recognition to serve as face sample images, where the labels include attribute information of face attributes, such as smiling, not smiling, crying, not crying, and the like.
Data pruning can then be performed to delete data with low confidence, after which the actual hidden variables of the remaining face sample images are solved.
Two types of samples are extracted from the face sample images: one type is "smile" attribute positive samples, and the other type is "not smile" attribute negative samples. Each hidden variable w in the attribute positive samples represents a smiling face, and each hidden variable w in the attribute negative samples represents a non-smiling face.
Referring to fig. 5b, the correlation between the change of the hidden variable and the change of the smiling face may be analyzed from a statistical point of view.
In one example, the direction vector of the hidden variable from "not smiling" to "smiling" may be determined by solving a binary classification problem.
Optionally, the binary classification problem may be solved using an SVM (Support Vector Machine) to obtain the normal vector n of the plane separating the positive and negative samples, which serves as the adjustment direction of the hidden variable. Fig. 5b takes the distribution diagram of the positive and negative samples in the hidden variable space as an example: the black dots represent negative samples, the white dots represent positive samples, the straight line represents the separating plane of the positive and negative samples, and the arrowed direction line represents the direction of the normal vector n.
In this embodiment, the SVM loss may be the standard hinge loss:

L = Σᵢ max(0, 1 − yᵢ · (n · wᵢ)) + λ‖n‖²

where i denotes the i-th sample, wᵢ is the hidden variable of sample i, yᵢ ∈ {+1, −1} is the label of sample i, and λ is a coefficient.
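As a sketch of the scheme above, the following pure-NumPy snippet minimises the hinge loss by subgradient descent on synthetic hidden variables. The 3-dimensional latents, the separation along the axis [1, 0, 0], and all numeric settings are illustrative assumptions rather than values from this document; the recovered unit normal n should align with the direction separating the "smile" and "not smile" samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hidden variables (dim 3): positive ("smile") samples are
# shifted along the assumed true direction [1, 0, 0]; negative
# ("not smile") samples are shifted the opposite way.
true_dir = np.array([1.0, 0.0, 0.0])
pos = rng.normal(scale=0.3, size=(100, 3)) + 2.0 * true_dir   # labels y = +1
neg = rng.normal(scale=0.3, size=(100, 3)) - 2.0 * true_dir   # labels y = -1
W = np.vstack([pos, neg])
y = np.concatenate([np.ones(100), -np.ones(100)])

def svm_direction(W, y, lam=0.01, lr=0.1, epochs=200):
    """Minimise  sum_i max(0, 1 - y_i * n.w_i) + lam * ||n||^2  by
    subgradient descent; the resulting n is the normal vector of the
    plane separating positive and negative samples."""
    n = np.zeros(W.shape[1])
    m = len(y)
    for _ in range(epochs):
        margins = y * (W @ n)
        active = margins < 1                   # samples violating the margin
        grad = -(y[active][:, None] * W[active]).sum(axis=0) / m + 2 * lam * n
        n -= lr * grad
    return n / np.linalg.norm(n)               # unit adjustment direction

direction = svm_direction(W, y)
# direction points (almost) along the "not smile" -> "smile" axis.
```

In practice a library SVM (e.g. a linear SVM solver) would be used instead of this hand-rolled loop; the point is only that the separating plane's normal vector supplies the adjustment direction.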
In this embodiment, after the direction vector n corresponding to the smile attribute is solved, the smile attribute of the input image can be adjusted by changing the hidden variable of the input image along the direction vector n:

w′ = w + α·n

wherein w is the adjusted hidden variable of the face image to be edited, w′ is the target hidden variable, n is the direction vector corresponding to the target face attribute "smile", and the coefficient α of n may be determined based on the editing information, i.e., the target degree of change. Inputting the adjusted target hidden variable w′ into the generator can generate a high-definition portrait with the adjusted attribute. As shown in fig. 5c, fig. 5c is a schematic diagram of the smiling-face changes obtained by setting different adjustment amplitudes for the "smile" direction vector n. It is understood that if the editing information indicates that the magnitude of the smile is to be decreased rather than increased, then α < 0.
With this scheme, the direction vector n corresponding to each face attribute can be obtained, and these direction vectors n may be pre-stored for later use. The method for obtaining the edited face image may refer to the flow shown in fig. 5d: after obtaining the face image to be edited, such as a user avatar, and the editing information, the hidden variable of the user avatar (i.e., the adjusted hidden variable in the above example) may be solved; the direction vector n of the target face attribute, the coefficient α of the vector, and the calculation symbol preceding α·n (assumed here to be +) may be determined based on the editing information; the target hidden variable is then obtained based on the formula w′ = w + α·n and input into the high-definition generator to obtain the edited face image.
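The editing step w′ = w + α·n can be sketched as follows. The dictionary of pre-stored direction vectors and the 4-dimensional latent values are hypothetical placeholders; in a real system the directions would come from the SVM analysis above and w′ would be fed to the generator.

```python
import numpy as np

# Hypothetical pre-stored unit direction vectors, one per face attribute,
# in a 4-dimensional hidden-variable space (illustrative values only).
directions = {
    "smile": np.array([1.0, 0.0, 0.0, 0.0]),
    "age":   np.array([0.0, 1.0, 0.0, 0.0]),
}

def edit_latent(w, attribute, alpha):
    """Apply w' = w + alpha * n.  A positive alpha strengthens the
    attribute and a negative alpha weakens it; |alpha| comes from the
    editing information (the target degree of change)."""
    n = directions[attribute]
    return w + alpha * n

w = np.array([0.5, -0.2, 0.1, 0.0])            # adjusted hidden variable
w_more_smile = edit_latent(w, "smile", 1.5)    # increase the smile
w_less_smile = edit_latent(w, "smile", -1.5)   # decrease the smile
# w_more_smile / w_less_smile would then be input into the generator.
```

Only the component of w along n changes, which is what keeps the other face attributes (ideally) untouched during editing.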
Optionally, in another example, the association relationship includes a corresponding relationship between the change of the target face attribute and the variation of the actual hidden variable in the hidden variable space, and there are at least two kinds of attribute information of the attribute samples; the step of determining the association relationship between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute based on the distribution of the attribute samples in the hidden variable space may include:
determining the distribution of the attribute samples corresponding to the attribute information in the hidden variable space based on the attribute information of the attribute samples;
respectively calculating the average hidden variables of the actual hidden variables for the attribute samples corresponding to the attribute information;
and determining the corresponding relation between the change of the target face attribute and the change of the actual hidden variable in the hidden variable space based on the change of the average hidden variable among the attribute information.
When determining the corresponding relationship, the average hidden variable of each kind of attribute information is used as a standard hidden vector of the face images corresponding to that attribute information, and the corresponding relationship between the change of the target face attribute and the change of the actual hidden variable in the hidden variable space can be determined based on how the standard hidden vector changes when the target face attribute changes among the different attribute information.
For example, taking "smile" as an example, the attribute information of the attribute samples includes "smile" and "not smile". Based on the "smile" and "not smile" attribute samples, the average hidden variable w1 of the actual hidden variables of the "smile" samples and the average hidden variable w2 of the actual hidden variables of the "not smile" samples are calculated respectively, and the variation of the average hidden variables between "smile" and "not smile" is calculated as Δw = w1 − w2; Δw is taken as the variation of the corresponding actual hidden variable in the hidden variable space when the target face attribute changes from "not smiling" to "smiling".
For fine adjustment, a corresponding relationship may be set between the degree of change of the target face attribute and the degree of change of the hidden variable. For example, a unit variation may be set based on Δw, and a unit change degree of the target face attribute corresponding to the unit variation may be set; the actual change degree of the target face attribute is then determined based on the attribute change information, an adjustment coefficient of the unit variation is determined based on the ratio of the actual change degree to the unit change degree, and the product of the adjustment coefficient and the unit variation is taken as the target variation of the adjusted hidden variable of the face image in the hidden variable space.
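A minimal sketch of this mean-difference scheme, with synthetic latents standing in for the solved actual hidden variables (the dimensions, offsets, and sample counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic actual hidden variables (dim 3) of the two kinds of
# attribute information, separated along the first axis (assumption).
smile_latents     = rng.normal(size=(50, 3)) + np.array([2.0, 0.0, 0.0])
not_smile_latents = rng.normal(size=(50, 3)) - np.array([2.0, 0.0, 0.0])

w1 = smile_latents.mean(axis=0)       # average hidden variable, "smile"
w2 = not_smile_latents.mean(axis=0)   # average hidden variable, "not smile"
dw = w1 - w2                          # variation for "not smile" -> "smile"

def target_variation(actual_degree, unit_degree=1.0):
    """Scale the unit variation dw by the adjustment coefficient, i.e.
    the ratio of the actual change degree (from the attribute change
    information) to the unit change degree."""
    coeff = actual_degree / unit_degree
    return coeff * dw

delta = target_variation(actual_degree=0.5)   # half a unit of "smile"
w_target = np.array([0.0, 0.0, 0.0]) + delta  # adjusted latent + variation
```

Here taking `dw` itself as one unit of change is a design choice for the sketch; any fixed fraction of Δw could serve as the unit variation instead.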
With the scheme of this embodiment, an image editing page can be displayed; in response to an attribute editing operation for the face image, the target face attribute to be edited in the face image and the editing information of the target face attribute are determined; the face image is simulated based on the first hidden variable to generate a simulated face image, and the adjusted hidden variable of the first hidden variable is obtained based on the difference between the two images, so that the face image can be accurately mapped into the hidden variable space and accurately expressed by the adjusted hidden variable. By acquiring the association relationship between the change of the adjusted hidden variable and the change of the target face attribute, the adjusted hidden variable can be adjusted based on the association relationship and the editing information to obtain a target hidden variable, and the edited face image is generated based on the target hidden variable. Since the target hidden variable accurately represents the editable face image in the hidden variable space, the editing effect of the face image can be ensured.
In order to better implement the above method, correspondingly, an embodiment of the present invention further provides a facial image editing apparatus, which may be specifically integrated in a terminal, for example, in a form of a client.
Referring to fig. 6, the face image editing apparatus includes:
an editing page display unit 601, configured to display an image editing page, where the image editing page includes a face image to be edited;
an obtaining unit 602, configured to determine, in response to an attribute editing operation for the face image, a target face attribute to be edited in the face image and editing information of the target face attribute;
a simulation unit 603, configured to simulate the face image based on a first hidden variable, so as to generate a simulated face image, where the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
a first adjusting unit 604, configured to adjust the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
a second adjusting unit 605, configured to obtain an association relationship between a change of an adjusted hidden variable in the hidden variable space and a change of the target face attribute, and adjust the adjusted hidden variable based on the association relationship and the editing information to obtain a target hidden variable;
an image display unit 606, configured to generate an edited face image corresponding to the target face attribute based on the target hidden variable, and display the edited face image.
Optionally, the image editing page includes a first property editing control, and the obtaining unit is configured to:
displaying a property editing page of the face image based on the triggering operation aiming at the first property editing control, wherein the property editing page comprises the face image and candidate face properties;
displaying an editing control of the target face attribute based on the selection operation aiming at the target face attribute in the candidate face attributes;
and acquiring the editing information of the target face attribute in response to the setting operation aiming at the editing control.
Optionally, the image editing page includes a second property editing control, and the obtaining unit is configured to:
displaying a property editing page of the face image based on the triggering operation aiming at the second property editing control, wherein the property editing page comprises the face image and the imitation control;
displaying an imitation image selection page based on a triggering operation for the imitation control, wherein the imitation image selection page comprises candidate imitation images;
in response to the imitation trigger operation aiming at the target imitation image in the candidate imitation images, the imitated face attribute of the target imitation image is obtained to serve as the target face attribute of the face image, the attribute information of the target face attribute in the target imitation image is obtained, and the editing information of the target face attribute in the face image is determined based on the attribute information.
Optionally, the simulation unit is configured to simulate the face image based on the first hidden variable through the generator network of a generative adversarial network, so as to generate a simulated face image;
the first adjusting unit is configured to compare the simulated face image with the face image through the discriminator network of the generative adversarial network to obtain image difference information between the simulated face image and the face image; and to adjust the first hidden variable based on the image difference information and the face image to obtain the adjusted hidden variable.
Optionally, the second adjusting unit is configured to:
determining attribute change information of the target face attribute of the face image before and after editing based on the editing information of the target face attribute in the face image;
determining the target variable quantity of the adjusted hidden variable in the hidden variable space based on the attribute change information and the association relation;
and obtaining the target hidden variable based on the target variable quantity and the adjusted hidden variable.
Optionally, the association relationship includes a change of the target face attribute and an association relationship between a change direction and a change degree of the adjusted hidden variable in the hidden variable space;
a second adjustment unit for:
determining the target change direction and the target change degree of the adjusted hidden variables in the hidden variable space before and after the target face attribute is edited based on the attribute change information and the association relation;
and determining the target variable quantity of the adjusted hidden variable in the hidden variable space based on the target change direction and the target change degree of the adjusted hidden variable.
Optionally, the association relationship includes an association relationship between a change of the target face attribute and a variation of the adjusted hidden variable in a hidden variable space;
a second adjustment unit for:
determining the variation corresponding to the attribute variation information based on the association relationship;
and taking the determined variable quantity as a target variable quantity of the adjusted hidden variable of the face image in a hidden variable space.
Optionally, the apparatus of this embodiment further includes an association relationship establishing module, configured to obtain an actual hidden variable of the face sample image and attribute information of the face sample image on the target face attribute before obtaining, by the second adjusting unit, an association relationship between a change of the hidden variable after adjustment in the hidden variable space and a change of the target face attribute, where the actual hidden variable is a hidden variable of the face sample image in the hidden variable space; generating an attribute sample corresponding to the attribute of the target face based on the attribute information and the actual hidden variable, wherein the attribute sample contains the actual hidden variable, and a sample label of the attribute sample comprises the attribute information corresponding to the actual hidden variable; and determining the incidence relation between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute based on the distribution of the attribute samples in the hidden variable space.
Optionally, the association relationship establishing module is configured to obtain a face sample image, where the face sample image includes a face; simulating the face sample image based on a second hidden variable to generate a simulated sample image, wherein the second hidden variable is an initial hidden variable of the face sample image in a hidden variable space; and adjusting the second hidden variable based on image difference information between the simulation sample image and the face sample image to obtain an adjusted hidden variable serving as an actual hidden variable of the face sample image.
Optionally, the association relationship includes a change of the target face attribute, and an association relationship between a change direction and a change degree of the actual hidden variable in the hidden variable space; the attribute samples comprise attribute positive samples and attribute negative samples, and the attribute information corresponding to the attribute positive samples and the attribute negative samples is different;
the incidence relation establishing module is used for determining the distribution of actual hidden variables of different attribute information in the hidden variable space based on the distribution of the attribute positive samples and the attribute negative samples in the hidden variable space; and determining the change of the target face attribute and the incidence relation between the change direction and the change degree of the actual hidden variable in the hidden variable space based on the distribution of the actual hidden variables of different attribute information in the hidden variable space.
Optionally, the association relationship includes a corresponding relationship between the change of the target face attribute and the variation of the actual hidden variable in the hidden variable space, and there are at least two kinds of attribute information of the attribute samples;
the incidence relation establishing module is used for determining the distribution of the attribute samples corresponding to the attribute information in the hidden variable space based on the attribute information of the attribute samples; respectively calculating the average hidden variables of the actual hidden variables for the attribute samples corresponding to the attribute information; and determining the corresponding relation between the change of the target face attribute and the change of the actual hidden variable in the hidden variable space based on the change of the average hidden variable among the attribute information.
Optionally, the client in this embodiment is an instant messaging client, and the edit page display unit is configured to:
displaying a chat session page of the instant messaging client;
responding to an image sending triggering operation aiming at the chat conversation page, and displaying an image acquisition page;
responding to image acquisition operation aiming at an image acquisition page, and displaying an image editing page of the instant messaging client, wherein a face image to be edited in the image editing page is an image acquired through the image acquisition operation;
an image presentation unit for:
and sending the edited face image to a session object of the chat session page so as to display the edited face image on the chat session page.
Optionally, in an example, the number of the target face attributes is multiple, the editing information further includes a combination sequence of the edited face images corresponding to the multiple target face attributes, and the image display unit is configured to: combining the edited face images based on the combination sequence to obtain an expression change dynamic image; and displaying the expression change dynamic graph.
With the scheme of this embodiment, an image editing page can be displayed; in response to an attribute editing operation for the face image, the target face attribute to be edited in the face image and the editing information of the target face attribute are determined; the face image is simulated based on the first hidden variable to generate a simulated face image, and the adjusted hidden variable of the first hidden variable is obtained based on the difference between the two images, so that the face image can be accurately mapped into the hidden variable space and accurately expressed by the adjusted hidden variable. By acquiring the association relationship between the change of the adjusted hidden variable and the change of the target face attribute, the adjusted hidden variable can be adjusted based on the association relationship and the editing information to obtain a target hidden variable, and the edited face image is generated based on the target hidden variable. Since the target hidden variable accurately represents the editable face image in the hidden variable space, the editing effect of the face image can be ensured.
In addition, an embodiment of the present invention further provides a computer device, where the computer device may be a terminal or a server, as shown in fig. 7, which shows a schematic structural diagram of the computer device according to the embodiment of the present invention, and specifically:
the computer device may include components such as a processor 701 of one or more processing cores, memory 702 of one or more computer-readable storage media, a power supply 703, and an input unit 704. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 7 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 701 is a control center of the computer apparatus, connects various parts of the entire computer apparatus using various interfaces and lines, and performs various functions of the computer apparatus and processes data by running or executing software programs and/or modules stored in the memory 702 and calling data stored in the memory 702, thereby monitoring the computer apparatus as a whole. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by operating the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 702 may also include a memory controller to provide the processor 701 with access to the memory 702.
The computer device further includes a power supply 703 for supplying power to the various components, and preferably, the power supply 703 is logically connected to the processor 701 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The power supply 703 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 704, the input unit 704 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 701 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and the processor 701 runs the application program stored in the memory 702, thereby implementing various functions as follows:
displaying an image editing page, wherein the image editing page comprises a face image to be edited;
responding to attribute editing operation aiming at the face image, and determining target face attributes to be edited in the face image and editing information of the target face attributes;
simulating the face image based on a first hidden variable to generate a simulated face image, wherein the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
adjusting the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
acquiring the change of the adjusted hidden variable in the hidden variable space and the incidence relation between the change of the target face attribute, and adjusting the adjusted hidden variable based on the incidence relation and the editing information to obtain a target hidden variable;
and generating an edited face image corresponding to the target face attribute based on the target hidden variable, and displaying the edited face image.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention further provides a storage medium, where a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the method for editing a face image according to the embodiment of the present invention.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The instructions stored in the storage medium may execute the steps in the face image editing method provided in the embodiment of the present invention, so that the beneficial effects that can be achieved by the face image editing method provided in the embodiment of the present invention may be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The face image editing method, apparatus, computer device and storage medium provided by the embodiments of the present invention are described in detail above, and a specific example is applied in this document to explain the principle and implementation of the present invention, and the description of the above embodiments is only used to help understanding the method and its core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. A face image editing method is characterized by comprising the following steps:
displaying an image editing page, wherein the image editing page comprises a face image to be edited;
responding to attribute editing operation aiming at the face image, and determining target face attributes to be edited in the face image and editing information of the target face attributes;
simulating the face image based on a first hidden variable to generate a simulated face image, wherein the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
adjusting the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
acquiring an association relationship between the change of the adjusted hidden variable in the hidden variable space and the change of the target face attribute, wherein the association relationship comprises a target direction vector corresponding to the target face attribute in the hidden variable space, and the target direction vector is the direction of change, in the hidden variable space, between the actual hidden variables of an attribute positive sample and an attribute negative sample of the target face attribute;
determining attribute change information of the target face attribute of the face image before and after editing based on the editing information of the target face attribute in the face image;
determining a target change degree of the target direction vector based on the attribute change information;
determining a target variation of the adjusted hidden variable in the hidden variable space based on the target direction vector and the target change degree;
adjusting the adjusted hidden variable based on the target variation to obtain a target hidden variable;
and generating an edited face image corresponding to the target face attribute based on the target hidden variable, and displaying the edited face image.
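As an illustrative sketch only (not the patented implementation), the final steps of claim 1 — moving the adjusted hidden variable along the target direction vector by the target change degree to obtain the target hidden variable — reduce to a single vector update in latent space. All names and dimensions below are hypothetical:

```python
import numpy as np

def edit_latent(z_adjusted, direction, change_degree):
    """Shift an adjusted hidden variable along an attribute direction vector.

    z_adjusted    : adjusted hidden variable in the hidden variable space
    direction     : target direction vector for the target face attribute
    change_degree : target change degree (signed scalar)
    """
    unit = direction / np.linalg.norm(direction)   # normalize so the degree sets the step size
    target_variation = change_degree * unit        # target variation in the hidden variable space
    return z_adjusted + target_variation           # target hidden variable

# Toy usage in a 4-dimensional hidden variable space.
z = np.array([0.5, -1.0, 0.0, 2.0])
n = np.array([2.0, 0.0, 0.0, 0.0])                 # hypothetical "smile" direction
z_target = edit_latent(z, n, change_degree=1.5)    # moves 1.5 units along the unit direction
```

Feeding `z_target` back through the generator would then yield the edited face image described in the claim.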
2. The facial image editing method according to claim 1, wherein the image editing page includes a first attribute editing control, and the determining of the target facial attribute to be edited in the facial image and the editing information of the target facial attribute in response to the attribute editing operation for the facial image includes:
displaying a property editing page of the face image based on the triggering operation aiming at the first property editing control, wherein the property editing page comprises the face image and candidate face properties;
displaying an editing control of the target face attribute based on the selection operation aiming at the target face attribute in the candidate face attributes;
and responding to the setting operation aiming at the editing control, and acquiring the editing information of the target face attribute.
3. The facial image editing method according to claim 1, wherein the image editing page includes a second property editing control, and the determining of the target facial property to be edited in the facial image and the editing information of the target facial property in response to the property editing operation for the facial image includes:
displaying a property editing page of the facial image based on triggering operation aiming at the second property editing control, wherein the property editing page comprises the facial image and an imitation control;
displaying an imitation image selection page based on a triggering operation for the imitation control, wherein the imitation image selection page comprises candidate imitation images;
in response to the imitation trigger operation aiming at a target imitation image in the candidate imitation images, acquiring the attribute of the imitated face of the target imitation image as the target face attribute of the face image, acquiring the attribute information of the target face attribute in the target imitation image, and determining the editing information of the target face attribute in the face image based on the attribute information.
4. The method for editing a face image according to claim 1, wherein the simulating the face image based on the first hidden variable to generate a simulated face image comprises:
simulating the face image based on a first hidden variable by generating a generator network in a countermeasure network to generate a simulated face image;
the adjusting the first hidden variable based on the image difference information between the simulated face image and the face image to obtain an adjusted hidden variable, includes:
comparing the simulated face image with the face image through a discriminator network in the generation countermeasure network to obtain image difference information of the simulated face image and the face image;
and adjusting the first hidden variable based on the image difference information and the face image to obtain an adjusted hidden variable.
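Claim 4's inversion step — generating a simulated image from a hidden variable and adjusting that variable based on the image difference — is, in GAN-inversion methods generally, an iterative optimization. The sketch below substitutes a toy linear "generator" for the real generator network so the loop is self-contained and runnable; it is an assumption-laden illustration, not the patent's discriminator-based procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))              # stand-in "generator": image = W @ z
generate = lambda z: W @ z

# The face image to be inverted (here: produced from a known latent).
x_real = generate(np.array([1.0, -2.0, 0.5, 3.0]))

z = np.zeros(4)                          # first hidden variable (initial guess)
lr = 0.03
for _ in range(1000):
    diff = generate(z) - x_real          # image difference information
    z -= lr * (W.T @ diff)               # gradient step on 0.5 * ||generate(z) - x_real||^2

# z is now the adjusted hidden variable; generate(z) reconstructs x_real.
reconstruction_error = float(np.linalg.norm(generate(z) - x_real))
```

In a real system the generator is a deep network and the gradient is obtained by backpropagation, but the structure of the loop — simulate, compare, adjust the latent — is the same.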
5. The method for editing a face image according to any one of claims 1 to 4, wherein before obtaining the association relationship between the change of the adjusted hidden variable in the hidden variable space and the change of the target face attribute, the method further comprises:
acquiring an actual hidden variable of a face sample image and attribute information of the face sample image on the target face attribute, wherein the actual hidden variable is a hidden variable of the face sample image in a hidden variable space;
generating an attribute sample corresponding to the target face attribute based on the attribute information and an actual hidden variable, wherein the attribute sample contains the actual hidden variable, and a sample label of the attribute sample comprises the attribute information corresponding to the actual hidden variable;
and determining the association relationship between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute based on the distribution of the attribute samples in the hidden variable space.
6. The face image editing method according to claim 5, wherein the acquiring an actual hidden variable of a face sample image comprises:
acquiring a face sample image, wherein the face sample image comprises a face;
simulating the face sample image based on a second hidden variable to generate a simulated sample image, wherein the second hidden variable is an initial hidden variable of the face sample image in the hidden variable space;
and adjusting the second hidden variable based on image difference information between the simulation sample image and the face sample image to obtain an adjusted hidden variable serving as an actual hidden variable of the face sample image.
7. The facial image editing method according to claim 5, wherein the attribute samples include attribute positive samples and attribute negative samples, and the attribute information corresponding to the attribute positive samples and the attribute negative samples are different;
the determining the association relationship between the change of the actual hidden variable in the hidden variable space and the change of the target face attribute based on the distribution of the attribute samples in the hidden variable space comprises:
determining the distribution of actual hidden variables of different attribute information in the hidden variable space based on the distribution of the attribute positive samples and the attribute negative samples in the hidden variable space;
determining the change direction between the actual hidden variables of the attribute positive sample and the attribute negative sample of the target face attribute based on the distribution of the actual hidden variables of different attribute information in the hidden variable space, obtaining a target direction vector indicating the change direction, and determining the corresponding relation between the change degree of the target face attribute and the change degree of the actual hidden variables in the change direction indicated by the target direction vector;
and determining, based on the target direction vector and the corresponding relationship, the association relationship between the change of the target face attribute and the change direction and change degree of the actual hidden variable in the hidden variable space.
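Claims 5 to 7 derive the target direction vector from how attribute positive and attribute negative samples are distributed in the hidden variable space. A minimal stand-in for that step (published latent-editing methods often fit a linear boundary, e.g. an SVM, and take its normal) is the normalized difference of the two clusters' means; the synthetic latents and the "smiling" attribute below are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16
axis = np.zeros(dim); axis[0] = 1.0                   # attribute truly varies along axis 0

# Hypothetical actual hidden variables of labeled face samples.
z_pos = rng.normal(size=(200, dim)) + 2.0 * axis      # attribute positive samples
z_neg = rng.normal(size=(200, dim)) - 2.0 * axis      # attribute negative samples

# Direction of change between the two sample groups in the hidden variable space.
n = z_pos.mean(axis=0) - z_neg.mean(axis=0)
n /= np.linalg.norm(n)                                # target direction vector

alignment = abs(float(n @ axis))                      # how well the known axis is recovered
```

The separation between the clusters along the recovered vector also gives a natural scale for the correspondence between attribute change degree and latent change degree mentioned in claim 7.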
8. The method for editing a face image according to claim 7, wherein the determining the target change degree of the target direction vector based on the attribute change information includes:
acquiring the target change degree of the adjusted hidden variable along the target direction vector based on the attribute change information and on the corresponding relationship, in the association relationship, between the change degree of the target face attribute and the change degree of the actual hidden variable in the change direction indicated by the target direction vector.
9. The face image editing method according to any one of claims 1 to 4, wherein the displaying the image editing page includes:
displaying a chat session page of the instant messaging client;
responding to an image sending triggering operation aiming at the chat conversation page, and displaying an image acquisition page;
responding to an image acquisition operation aiming at the image acquisition page, and displaying an image editing page of the instant messaging client, wherein a face image to be edited in the image editing page is an image acquired through the image acquisition operation;
the displaying the edited face image includes:
and sending the edited face image to a session object of the chat session page so as to display the edited face image on the chat session page.
10. The face image editing method according to any one of claims 1 to 4, wherein there are a plurality of target face attributes, the editing information further comprises a combination sequence of the edited face images corresponding to the plurality of target face attributes, and the displaying the edited face image comprises:
combining the edited face images based on the combination sequence to obtain an expression change dynamic image;
and displaying the expression change dynamic image.
11. A face image editing apparatus, comprising:
the editing page display unit is used for displaying an image editing page, wherein the image editing page comprises a face image to be edited;
the acquiring unit is used for responding to attribute editing operation aiming at the face image and determining target face attributes to be edited in the face image and editing information of the target face attributes;
the simulation unit is used for simulating the face image based on a first hidden variable to generate a simulated face image, wherein the first hidden variable is an initial hidden variable of the face image in a hidden variable space;
a first adjusting unit, configured to adjust the first hidden variable based on image difference information between the simulated face image and the face image to obtain an adjusted hidden variable;
a second adjusting unit, configured to acquire an association relationship between a change of the adjusted hidden variable in the hidden variable space and a change of the target face attribute, wherein the association relationship comprises a target direction vector corresponding to the target face attribute in the hidden variable space, and the target direction vector is the direction of change, in the hidden variable space, between the actual hidden variables of an attribute positive sample and an attribute negative sample of the target face attribute; determine attribute change information of the target face attribute of the face image before and after editing based on the editing information of the target face attribute in the face image; determine a target change degree of the target direction vector based on the attribute change information; determine a target variation of the adjusted hidden variable in the hidden variable space based on the target direction vector and the target change degree; and adjust the adjusted hidden variable based on the target variation to obtain a target hidden variable;
and the image display unit is used for generating an edited face image corresponding to the target face attribute based on the target hidden variable and displaying the edited face image.
12. A storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the steps of the method according to any of claims 1-10.
CN202010341415.0A 2020-04-27 2020-04-27 Face image editing method and device and storage medium Active CN111260754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010341415.0A CN111260754B (en) 2020-04-27 2020-04-27 Face image editing method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010341415.0A CN111260754B (en) 2020-04-27 2020-04-27 Face image editing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN111260754A CN111260754A (en) 2020-06-09
CN111260754B true CN111260754B (en) 2020-08-07

Family

ID=70951674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010341415.0A Active CN111260754B (en) 2020-04-27 2020-04-27 Face image editing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111260754B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861954A (en) * 2020-06-22 2020-10-30 北京百度网讯科技有限公司 Method and device for editing human face, electronic equipment and readable storage medium
CN111932444B (en) * 2020-07-16 2023-09-19 中国石油大学(华东) Face attribute editing method based on generation countermeasure network and information processing terminal
CN112184876B (en) * 2020-09-28 2021-04-27 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112613411B (en) * 2020-12-25 2022-05-27 浙江大学 Pedestrian re-recognition data set attitude data augmentation method based on generation of countermeasure network
CN113221794B (en) * 2021-05-24 2024-05-03 厦门美图之家科技有限公司 Training data set generation method, device, equipment and storage medium
CN113426128B (en) * 2021-06-24 2024-04-30 网易(杭州)网络有限公司 Method, device, terminal and storage medium for adjusting appearance of custom roles
CN113655999B (en) * 2021-08-05 2024-01-09 上海硬通网络科技有限公司 Page control rendering method, device, equipment and storage medium
CN113902671A (en) * 2021-08-31 2022-01-07 北京影谱科技股份有限公司 Image steganography method and system based on random texture
CN113793254B (en) * 2021-09-07 2024-05-10 中山大学 Face image attribute editing method, system, computer equipment and storage medium
CN116630147B (en) * 2023-07-24 2024-02-06 北京隐算科技有限公司 Face image editing method based on reinforcement learning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103996052A (en) * 2014-05-12 2014-08-20 深圳市唯特视科技有限公司 Three-dimensional face gender classification device and method based on three-dimensional point cloud
CN109584162A (en) * 2018-11-30 2019-04-05 江苏网进科技股份有限公司 A method of based on the image super-resolution reconstruct for generating network

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20140153832A1 (en) * 2012-12-04 2014-06-05 Vivek Kwatra Facial expression editing in images based on collections of images
CN110135226B (en) * 2018-02-09 2023-04-07 腾讯科技(深圳)有限公司 Expression animation data processing method and device, computer equipment and storage medium
CN108765261B (en) * 2018-04-13 2022-07-05 北京市商汤科技开发有限公司 Image transformation method and device, electronic equipment and computer storage medium
CN109377535A (en) * 2018-10-24 2019-02-22 电子科技大学 Facial attribute automatic edition system, method, storage medium and terminal


Also Published As

Publication number Publication date
CN111260754A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN111260754B (en) Face image editing method and device and storage medium
CN110798636B (en) Subtitle generating method and device and electronic equipment
CN111464834B (en) Video frame processing method and device, computing equipment and storage medium
CN110555896B (en) Image generation method and device and storage medium
CN111833236B (en) Method and device for generating three-dimensional face model for simulating user
CN111369428A (en) Virtual head portrait generation method and device
CN112633425B (en) Image classification method and device
CN114973349A (en) Face image processing method and training method of face image processing model
CN113344184A (en) User portrait prediction method, device, terminal and computer readable storage medium
Zheng et al. Facial expression recognition based on texture and shape
Tang et al. Memories are one-to-many mapping alleviators in talking face generation
Roy et al. Tips: Text-induced pose synthesis
CN111783587A (en) Interaction method, device and storage medium
US20210158565A1 (en) Pose selection and animation of characters using video data and training techniques
CN116310028A (en) Style migration method and system of three-dimensional face model
CN117011449A (en) Reconstruction method and device of three-dimensional face model, storage medium and electronic equipment
CN114373034A (en) Image processing method, image processing apparatus, image processing device, storage medium, and computer program
CN113569809A (en) Image processing method, device and computer readable storage medium
CN114943799A (en) Face image processing method and device and computer readable storage medium
CN113706399A (en) Face image beautifying method and device, electronic equipment and storage medium
CN116542846B (en) User account icon generation method and device, computer equipment and storage medium
Shen et al. Rethinking the Spatial Inconsistency in Classifier-Free Diffusion Guidance
US20240013500A1 (en) Method and apparatus for generating expression model, device, and medium
TWI779784B (en) Feature analysis system, method and computer readable medium thereof
WO2024066549A1 (en) Data processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40025232

Country of ref document: HK