CN112785670A - Image synthesis method, device, equipment and storage medium - Google Patents

Image synthesis method, device, equipment and storage medium

Info

Publication number
CN112785670A
CN112785670A (application number CN202110139541.2A)
Authority
CN
China
Prior art keywords: image, expression, parameter, picture, module
Prior art date
Legal status: Granted
Application number
CN202110139541.2A
Other languages
Chinese (zh)
Other versions
CN112785670B (en)
Inventor
焦少慧
张启军
王悦
崔越
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202110139541.2A
Publication of CN112785670A
Application granted
Publication of CN112785670B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T13/205 - 3D [Three Dimensional] animation driven by audio data
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/02 - Affine transformations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the disclosure discloses an image synthesis method, an image synthesis device, image synthesis equipment and a storage medium. The method comprises the following steps: acquiring image driving parameters and an image picture sample, inputting them into a pre-trained image synthesis network, and outputting a target picture by the pre-trained image synthesis network; the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module. According to the scheme, a three-dimensional model of the image in the image picture sample does not need to be constructed, which simplifies the image synthesis operation. When image synthesis is carried out, the affine transformation module first adjusts the head posture of the image in the image picture sample and the positions of the image key points according to the image driving parameters, the motion estimation module then adjusts the other pixel points according to the position change of the image key points, and on this basis the confrontation generation module corrects the obtained image, improving the accuracy of the image synthesis result.

Description

Image synthesis method, device, equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to an image synthesis method, an image synthesis device, image synthesis equipment and a storage medium.
Background
Webcast live streaming allows an anchor to perform online, talk with the audience, sing songs and the like in video form and publish the content in real time. However, most current live broadcasts are hosted by real people, and the amount of short-video or live content that real people can produce is limited. How to produce high-quality short video or live content at low cost is a current research hotspot.
At present, network live broadcast can also be performed with a virtual anchor, but a three-dimensional model and animation driving corresponding to the virtual anchor need to be constructed; the process is complex and the accuracy is low.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
The embodiment of the disclosure provides an image synthesis method, an image synthesis device and a storage medium, which can simplify the image synthesis process and improve the accuracy of an image synthesis result.
In a first aspect, an embodiment of the present disclosure provides an image synthesis method, including:
acquiring image driving parameters and image picture samples containing image key point detection results;
inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
In a second aspect, an embodiment of the present disclosure further provides an image synthesis apparatus, including:
the acquisition module is used for acquiring image driving parameters and image picture samples containing image key point detection results;
the image synthesis module is used for inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the countermeasure generation module is used for correcting the second image picture to obtain a target image picture.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, implement the character composition method of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image synthesis method according to the first aspect.
The embodiment of the disclosure provides an image synthesis method, an image synthesis device, image synthesis equipment and a storage medium, wherein image driving parameters and image picture samples containing image key point detection results are obtained; inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network; the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture. According to the scheme, a three-dimensional model of the image in the image picture sample is not required to be constructed, image synthesis operation is simplified, the position of the image key point is firstly adjusted based on the image driving parameter when image synthesis is carried out, then other pixel points are adjusted based on the position change of the image key point, the obtained image is corrected on the basis, and the accuracy of the image synthesis result is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of an image synthesis method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an image synthesis method provided in the second embodiment of the present disclosure;
fig. 3 is a schematic diagram of a process for determining expression parameters and head pose parameters according to a second embodiment of the present disclosure;
fig. 4 is a schematic diagram of a special expression template according to a second embodiment of the disclosure;
fig. 5 is a flowchart of an image synthesis method provided in the third embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a position change of an image key point in different modes according to a third embodiment of the disclosure;
fig. 7 is a structural diagram of an image synthesis apparatus according to a fourth embodiment of the present disclosure;
fig. 8 is a structural diagram of an electronic device according to a fifth embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in this disclosure are only used for distinguishing different objects, and are not used for limiting the order of the functions performed by the objects or the interdependence relationship.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Example one
Fig. 1 is a flowchart of an image synthesis method provided in an embodiment of the present disclosure, which is applicable to the case of synthesizing an image, so as to perform live webcasting or video generation according to the synthesized image. The method can be executed by an image synthesis device, which can be realized in a software and/or hardware manner and can be configured in an electronic device with a data processing function. As shown in fig. 1, the method may include the steps of:
s110, obtaining image driving parameters and image picture samples containing image key point detection results.
The image driving parameters are used for driving the currently selected image, so that the currently selected image generates the expression and the head posture corresponding to the image driving parameters. The currently selected image in this embodiment may be an image in an image picture sample, the image may be a real image or an avatar, the real image may be an image existing in reality, and the avatar may be an image not existing in reality, for example, an avatar or cartoon image applied in a television play, a cartoon, a game, and other works. Optionally, a picture containing an image may be obtained from a local picture library as an image picture sample, a picture containing an image may also be obtained on line through a web page as an image picture sample, an image frame containing a certain image may also be captured from a video as an image picture sample, and the image picture sample may also be obtained by a camera. The image picture sample can comprise an image or a plurality of images, and when the image picture sample comprises a plurality of images, one image can be selected as the image to be driven by the scheme according to the requirement.
The avatar-driving parameters may include expression parameters and head pose parameters. The expression parameters are used for driving the image in the image picture sample to display the expression corresponding to the expression parameters, for example, if the expression corresponding to the expression parameters is happy, the image in the image picture sample can be driven to display the happy expression. Optionally, the expression parameter may include position information of key points corresponding to the expression, for example, the position information of key points such as eyebrows, eyes, nose, and mouth may be included. The head posture parameter is used for driving the head posture of the image in the image picture sample to be consistent with the head posture corresponding to the head posture parameter, for example, the head posture corresponding to the head posture parameter is a low head, so that the head of the image in the image picture sample can be driven to present the low head posture. Optionally, the head posture parameter may include a rotation angle parameter of the head, for example, a pitch angle pitch, a yaw angle yaw, and a roll angle roll of the head. The embodiment does not limit the manner of acquiring the avatar driving parameters, and may acquire the avatar driving parameters through text or voice signals, video, or expression templates, for example. The image in the image picture sample is driven by the image driving parameters, so that the image in the image picture sample can generate various expressions and head gestures, the problem that a real person cannot imitate a certain expression or head gesture is effectively solved, particularly, when the synthetic image is used for network live broadcasting, rich expressions and head gestures can be displayed for a user, and the experience of the user is improved.
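For illustration only, the following sketch shows one possible in-memory representation of the image driving parameters described above (expression parameters as key point positions, head posture parameters as pitch, yaw and roll angles). The class name, field layout and the 68-landmark count are assumptions made for the example, not part of the disclosed embodiment.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AvatarDrivingParams:
    """Hypothetical container for the driving parameters described above."""
    # Expression parameters: target positions of facial key points
    # (eyebrows, eyes, nose, mouth, ...), one (x, y) row per key point.
    expression_keypoints: np.ndarray   # shape (num_keypoints, 2)
    # Head posture parameters: rotation angles of the head in degrees.
    pitch: float
    yaw: float
    roll: float

# Example: a lowered-head pose with neutral key points.
params = AvatarDrivingParams(
    expression_keypoints=np.zeros((68, 2)),  # 68 is a common landmark count (assumption)
    pitch=-20.0, yaw=0.0, roll=0.0)
```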
Avatar key point detection, also known as avatar key point localization or avatar alignment, refers to locating the key regions of the avatar face, which may include the eyebrows, eyes, nose, mouth, face contour, and the like. The embodiment does not limit the specific way of detecting the avatar key points; for example, a deep-learning-based method or a Cascaded Pose Regression (CPR) based method may be used to detect the avatar key points. The embodiment detects the key points of the image in the image picture sample, which can improve the accuracy of the image synthesis result in subsequent image synthesis, so that the synthesized image is more real.
S120, inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network.
The pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
The image synthesis network is used for driving the image in the image picture sample based on the image driving parameters and outputting the picture containing the image synthesis result. The structure of the image synthesis network can be set according to actual conditions, and in order to improve the accuracy of the synthesis result, the image synthesis network provided by the embodiment comprises an affine transformation module, a motion estimation module and a countermeasure generation module. The affine transformation module is used for adjusting the head posture of the image in the image picture sample and the position of the image key point according to the image driving parameter, so that the position change of the image key point can be determined, namely the position of the adjusted image key point is changed relative to the position of the image key point before adjustment, the position of the adjusted image key point is the position of the image key point in the first image picture, and the position of the image key point before adjustment is the position of the image key point in the image picture sample. The motion estimation module is configured to correspondingly adjust positions of other pixel points in the first image picture according to the position change of the image key point to obtain a rough expression synthesis result, that is, the second image picture according to the embodiment. The confrontation generation module is used for correcting the second image picture output by the motion estimation module to obtain an image synthesis result with higher accuracy. In the embodiment, when the image is synthesized, the affine transformation module is firstly utilized to perform affine transformation on the image key points in the image picture sample to obtain the position change of the image key points, then the motion estimation module is utilized to perform the same transformation on other pixel points of the image to obtain a rough result, and then the countermeasure generation module is utilized to correct the rough result, so that the accuracy of the image synthesis result is improved, the synthesized image is more real, and especially, when the synthesized image is utilized to perform network live broadcast or generate video, the experience of a user can be improved.
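As a reading aid, the following sketch outlines the three-stage flow just described, with the three modules treated as opaque callables. The function signatures are assumptions made for illustration; they are not the actual interfaces of the disclosed image synthesis network.

```python
def synthesize(sample_image, sample_keypoints, driving_params,
               affine_module, motion_module, adversarial_module):
    """Minimal sketch of the three-stage pipeline described above.

    The three callables stand in for the trained sub-networks; their
    interfaces here are assumptions, not the patent's API.
    """
    # 1. Affine transformation module: move the image key points (and head
    #    posture) to match the driving parameters -> first image picture.
    first_picture, new_keypoints = affine_module(sample_image,
                                                 sample_keypoints,
                                                 driving_params)
    # Position change of each key point relative to the image picture sample.
    keypoint_motion = new_keypoints - sample_keypoints

    # 2. Motion estimation module: propagate that motion to the remaining
    #    pixel points -> second (rough) image picture.
    second_picture = motion_module(first_picture, keypoint_motion)

    # 3. Confrontation (adversarial) generation module: correct the rough
    #    result -> target picture.
    target_picture = adversarial_module(second_picture)
    return target_picture
```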
In one example, the confrontation generation module may include a generator and a discriminator. The generator is used for generating a new image from the image in the second image picture output by the motion estimation module, so as to obtain a third image picture; the discriminator is used for judging whether the images in the second image picture and the third image picture are real or generated, that is, determining which of the two pictures contains the original image and which contains the image generated by the generator. Before application, the image synthesis network can be trained, and when the discriminator cannot determine whether the image generated by the generator is real, the training process is finished. Therefore, the image driving parameters and the image picture samples can be input into the trained image synthesis network, and the trained image synthesis network outputs the picture containing the image synthesis result. The synthesized image can be used to replace a real image for network live broadcast, batch generation of videos and the like, which effectively solves the problem that the amount of short video and live content that can be generated with a real person is limited. The embodiment does not limit the type of the video, which may be short videos about education, entertainment, talent shows, and the like.
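A minimal sketch of the adversarial training step implied by the generator/discriminator description above, assuming a sigmoid-output discriminator and binary cross-entropy losses; the patent does not specify the loss functions, optimizers or network definitions, so these are assumptions.

```python
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt, second_picture, real_picture):
    """One adversarial training step for the correction module (sketch only;
    network definitions and data loading are omitted assumptions)."""
    # Discriminator: real pictures -> 1, generated (third) pictures -> 0.
    third_picture = generator(second_picture).detach()
    d_real = discriminator(real_picture)          # probabilities in [0, 1]
    d_fake = discriminator(third_picture)
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator label its output as real.
    third_picture = generator(second_picture)
    g_loss = F.binary_cross_entropy(discriminator(third_picture),
                                    torch.ones_like(d_real))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```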
The embodiment of the disclosure provides an image synthesis method, which includes obtaining image driving parameters and image picture samples containing image key point detection results; inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network; the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture. According to the scheme, a three-dimensional model of the image in the image picture sample is not required to be constructed, image synthesis operation is simplified, the position of the image key point is firstly adjusted based on the image driving parameter when image synthesis is carried out, then other pixel points are adjusted based on the position change of the image key point, the obtained image is corrected on the basis, and the accuracy of the image synthesis result is improved.
Example two
Fig. 2 is a flowchart of an image synthesis method provided in the second embodiment of the present disclosure, in this embodiment, the image synthesis process is described by taking an image driving parameter obtained in a text manner as an example, and with reference to fig. 2, the method may include the following steps:
s210, obtaining a given text for the virtual main broadcasting performance, and converting the given text into an audio signal.
Wherein the image of the virtual anchor is the image in the image picture sample. Alternatively, the given text may be acquired from the network or locally, or a part may be selected from a script as the given text. The embodiment does not limit the language of the given text, which may be, for example, Chinese, English, or both. Considering that a text lacks expression information and cannot accurately express the expression and posture of a character, the embodiment converts the given text into an audio signal and generates the image driving parameters based on the audio signal. Alternatively, a TTS (Text To Speech) module may be used to convert the given text into an audio signal; specifically, the given text may first be converted into time-aligned features, such as a mel spectrum or a fundamental frequency, and the time-aligned features may then be converted into an audio signal using methods such as WaveNet, WaveGlow or Transformer-based models.
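A minimal sketch of the two-stage text-to-audio conversion described above. The acoustic_model and vocoder callables are placeholders for a text-to-mel model and a WaveNet/WaveGlow-style vocoder; their names and signatures are assumptions, not components named by the disclosure beyond the paragraph above.

```python
import numpy as np

def text_to_audio(given_text, acoustic_model, vocoder, sample_rate=16000):
    """Sketch of TTS conversion: text -> time-aligned features -> waveform."""
    # 1. Convert the given text into time-aligned acoustic features
    #    (e.g. a mel spectrogram and/or a fundamental frequency track).
    mel_spectrogram = acoustic_model(given_text)   # shape (n_frames, n_mels)

    # 2. Convert the time-aligned features into an audio waveform.
    waveform = vocoder(mel_spectrogram)            # shape (n_samples,)
    return np.asarray(waveform), sample_rate
```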
S220, generating first expression parameters and head posture parameters corresponding to the time points according to the voice features of the audio signals.
The given text of this embodiment may be a Chinese text, an English text, or a text containing both Chinese and English; accordingly, the audio signal is a Chinese audio signal, an English audio signal, or an audio signal containing both Chinese and English. The accuracy of the speech features directly influences the accuracy of the subsequent image driving parameters, and further influences the accuracy of the image synthesis result. Alternatively, the speech features of the audio signal may be determined using an end-to-end speech recognition network, which may include, but is not limited to, a DeepSpeech network. Taking the extraction of the speech features of the audio signal through the DeepSpeech network as an example, the dimension of the speech features output by the DeepSpeech network is FT × W × D, where F is the frame rate of the audio signal, which may be 30 fps or another frame rate, T is the sampling duration of the audio signal, W is the size of the time-domain window corresponding to the audio signal, and D is the size of the dictionary corresponding to the audio signal. For example, when the audio signal contains English, the corresponding dictionary may include 26 English letters and 3 markers, where the markers may represent silence, pause, and short stop; when the audio signal contains Chinese, the corresponding dictionary may include 62 phonemes, 5 tone symbols, and 3 markers, where the 62 phonemes may include 23 initials and 39 finals, and the 5 tone symbols may include the first (level) tone, the second (rising) tone, the third (falling-rising) tone, the fourth (falling) tone, and the neutral tone.
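To make the feature dimension concrete, a small sketch that computes the dictionary size D and the FT × W × D feature shape from the figures given above; the clip duration and window size used here are illustrative assumptions.

```python
# Dictionary sizes implied by the paragraph above.
D_english = 26 + 3        # 26 letters + 3 markers -> 29
D_chinese = 62 + 5 + 3    # 62 phonemes + 5 tone symbols + 3 markers -> 70

# Speech-feature dimension F*T x W x D for an illustrative clip.
F = 30                    # frame rate of the audio signal (fps)
T = 10                    # sampling duration in seconds (assumed example)
W = 16                    # time-domain window size (assumed example)
feature_shape = (F * T, W, D_chinese)
print(feature_shape)      # (300, 16, 70)
```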
In one example, referring to fig. 3, the audio signal may be input into the speech recognition network DeepSpeech to obtain a speech feature with dimension FT × W × D, the speech feature is input into a pre-trained parameter generation network, and the pre-trained parameter generation network outputs the expression parameters and head posture parameters corresponding to each time point. The structure of the parameter generation network may be set according to the actual situation. The parameter generation network shown in fig. 3 includes a convolution module and a deconvolution module; considering that the speech feature is a time-related feature, the convolution module may include a time convolution layer and a fully connected layer, where the time convolution layer is used to perform time convolution on the input speech feature to obtain a first feature, and the fully connected layer is used to determine the first expression parameters and head posture parameters corresponding to each time point according to the first feature. For example, when a single speech feature is input, the dimension of the speech feature input to the time convolution layer is FT × W × D, and the dimension of the first feature output by the time convolution layer is W × D, so that the influence of the time factor is eliminated. The embodiment does not limit the specific implementation of the time convolution layer and the fully connected layer. The deconvolution module corresponds to the convolution module, so that the expression parameters and head posture parameters corresponding to each time point can be obtained. In order to obtain the position information of other points and to show the expressions and head postures corresponding to each time point more intuitively, fig. 3 drives a neutral character with the expression parameters and head posture parameters, so that the neutral character displays the expression corresponding to the expression parameters and the head posture corresponding to the head posture parameters; optionally, the driven neutral character may be called a driving mesh, which provides a basis for the synthesis of subsequent images. The neutral character may be a character model with a neutral expression and an undeflected head. Accordingly, the offset/movement of each vertex of the driving mesh relative to the corresponding vertex of the neutral character may be regarded as the first expression parameter.
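The following sketch mirrors the convolution module / deconvolution module structure described above in PyTorch. Only the overall shape of the network (time convolution plus a fully connected layer, followed by deconvolution back to per-time-point expression and head posture parameters) follows the text; all layer sizes and the expression-parameter dimension are assumptions.

```python
import torch
import torch.nn as nn

class ParamGenNet(nn.Module):
    """Sketch of the parameter generation network described above."""
    def __init__(self, win=16, dict_size=70, n_expr=64, n_pose=3):
        super().__init__()
        self.n_out = n_expr + n_pose
        # Convolution module: temporal convolution over the F*T axis ...
        self.time_conv = nn.Sequential(
            nn.Conv1d(win * dict_size, 256, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=5, stride=2, padding=2),
            nn.ReLU())
        # ... followed by a fully connected layer producing the first feature.
        self.fc = nn.Linear(256, 256)
        # Deconvolution module: expand back to per-time-point parameters.
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(256, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, self.n_out, kernel_size=4, stride=2, padding=1))

    def forward(self, speech_feat):                              # (batch, FT, W, D)
        b, ft, w, d = speech_feat.shape
        x = speech_feat.reshape(b, ft, w * d).transpose(1, 2)    # (b, W*D, FT)
        x = self.time_conv(x)                                    # (b, 256, FT/4)
        x = torch.relu(self.fc(x.transpose(1, 2))).transpose(1, 2)
        x = self.deconv(x)                                       # (b, n_out, ~FT)
        return x[:, :-3], x[:, -3:]                              # expression / pitch-yaw-roll

# Example: 300 time steps (10 s at 30 fps), window 16, Chinese dictionary of 70.
net = ParamGenNet()
expr, pose = net(torch.randn(1, 300, 16, 70))
```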
And S230, judging, for each time point in the time points, whether corresponding expression script information exists at the current time point; if not, executing S240, and if so, executing S250.
It can be understood that the audio signal may not accurately reflect the expression at the time point, and at this time, the expression that is not included in the audio signal may be compensated by adding the expression script information, for example, when the expression script information exists at a certain time point, it is considered that a special expression exists at the time point, the special expression may be a happy, surprised, or angry expression, and the like, and for example, the expression script information may be a smile at x seconds.
S240, taking the first expression parameter corresponding to the current time point as an expression driving parameter.
When the expression script information does not exist at the current time point, the first expression parameter can be directly used as an expression driving parameter for driving the image in the image picture sample.
And S250, analyzing the corresponding expression script information, and acquiring an expression template corresponding to the corresponding expression script information to obtain a second expression parameter.
When the expression script information exists at the current time point, the expression script information can be analyzed, expression words corresponding to the expression script information are determined, and a special expression template is obtained based on the expression words. Optionally, the expression script information may be processed by a Natural Language Processing (NLP) module, so as to identify the expression words contained in the expression script information, and obtain the corresponding special expression template based on the expression words. Fig. 4 exemplarily provides several special expression templates corresponding to the expression words, and the special expression templates and the corresponding expression words may be stored in association, so that the corresponding special expression templates may be obtained based on the expression words. The second expression parameter may be an offset/motion amount of a vertex of the model corresponding to the special expression template with respect to a vertex of the model corresponding to the neutral expression template.
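A minimal sketch of the template lookup described above, assuming the NLP step is available as a callable that extracts the expression word; the template store and its contents are placeholders, not data from the disclosure.

```python
import numpy as np

# Hypothetical store of special expression templates, keyed by expression word.
# Each template holds second expression parameters (vertex offsets relative to
# the neutral expression template); the zero arrays are placeholders.
EXPRESSION_TEMPLATES = {
    "smile":    np.zeros((68, 2)),
    "surprise": np.zeros((68, 2)),
    "angry":    np.zeros((68, 2)),
}

def second_expression_params(script_info, extract_expression_word):
    """Parse the expression script info and fetch the matching template.

    `extract_expression_word` stands in for the NLP module mentioned above;
    its existence and signature are assumptions.
    """
    word = extract_expression_word(script_info)   # e.g. "smile at second 3" -> "smile"
    return EXPRESSION_TEMPLATES.get(word)         # None if no template matches
```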
And S260, fusing the first expression parameters and the second expression parameters to obtain expression driving parameters.
When the expression script information exists at the current time point, the first expression parameter and the second expression parameter can be fused, and the fusion result is used as the expression driving parameter, so that the accuracy of the expression driving parameter can be improved, and the accuracy of the subsequent image synthesis result is improved. Optionally, the time point at which the expression corresponding to the expression script information appears and the duration for which the corresponding expression exists may be determined; the weight of the second expression parameter at the current time point is determined according to the current time point, the time point at which the expression appears, and the duration for which the expression exists; the second expression parameter is weighted according to the weight, and the sum of the weighted second expression parameter and the first expression parameter at the current time point is used as the expression driving parameter. For example, the expression driving parameter at the current time point may be expressed as: Vdisplacement = α × δ × Vexpression + β × Vtalking(t'), where Vdisplacement is the expression driving parameter, α and β are coefficients whose specific values can be set according to the actual situation, for example both set to 0.5, δ is the weight, δ = 1 - 2|t' - t| / T, t' is the current time point, t is the time point at which the expression corresponding to the expression script information appears, T is the duration for which the expression corresponding to the expression script information lasts, t' and t are both time points between the beginning and the end of the duration, Vexpression is the second expression parameter, and Vtalking(t') is the first expression parameter at the current time point. For example, assuming that T is 2s, starting from the first second and ending at the end of the third second, and assuming that t is 2 and t' is 2, then δ is 1, which indicates that the expression corresponding to the expression script information has the greatest degree of change when the time point is 2s.
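The fusion formula can be sketched as follows; the 0.5/0.5 coefficients and the worked example reproduce the values given in the text.

```python
import numpy as np

def fuse_expression_params(v_talking, v_expression, t_now, t_peak, duration,
                           alpha=0.5, beta=0.5):
    """Weighted fusion of the first and second expression parameters,
    following the formula above."""
    delta = 1.0 - 2.0 * abs(t_now - t_peak) / duration   # weight of the template term
    return alpha * delta * np.asarray(v_expression) + beta * np.asarray(v_talking)

# Example from the text: a 2 s expression peaking at t = 2 s, evaluated at t' = 2 s,
# gives delta = 1, i.e. the scripted expression contributes at full strength.
v = fuse_expression_params(v_talking=np.zeros(3), v_expression=np.ones(3),
                           t_now=2.0, t_peak=2.0, duration=2.0)
print(v)   # [0.5 0.5 0.5]
```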
And S270, taking the expression driving parameters and the head posture parameters as image driving parameters.
S280, obtaining an image picture sample containing an image key point detection result.
The image driving parameters and the image picture samples can be obtained simultaneously, or the image driving parameters can be obtained first and then the image picture samples are obtained, or the image picture samples are obtained first and then the image driving parameters are obtained.
S290, inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network.
And S2100, adding a synthesis mark into the target picture.
Wherein the synthetic mark is used for representing that the image in the target picture is a synthetic image. The synthesis mark can be displayed in the target picture or hidden by adopting a certain technical means. The embodiment does not limit the form of the synthetic mark.
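One possible way to add a visible synthesis mark, sketched with PIL; the disclosure leaves the form of the mark (visible or hidden) open, so the text stamp and file names below are only assumptions.

```python
from PIL import Image, ImageDraw

def add_synthesis_mark(target_picture: Image.Image,
                       text: str = "synthesized image") -> Image.Image:
    """Stamp a visible synthesis mark in the corner of the target picture."""
    marked = target_picture.convert("RGB")           # work on an RGB copy
    draw = ImageDraw.Draw(marked)
    draw.text((10, 10), text, fill=(255, 255, 255))  # top-left corner stamp
    return marked

# Usage (hypothetical file names):
# marked = add_synthesis_mark(Image.open("target_picture.png"))
# marked.save("target_picture_marked.png")
```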
The second embodiment of the disclosure provides an image synthesis method, which includes acquiring image driving parameters through a text, and in the process of acquiring the image driving parameters, if expression script information does not exist at the current time point, directly using the expression parameters acquired through the text as expression driving parameters to drive an image picture sample, and if expression script information exists at the current time point, acquiring an expression template corresponding to the expression script information, and fusing the expression parameters acquired through the text and the expression parameters acquired through the expression template to acquire expression driving parameters to drive the image picture sample, so that the accuracy of the expression driving parameters is improved, and further the accuracy of an image synthesis result is improved.
EXAMPLE III
Fig. 5 is a flowchart of an image synthesis method provided in a third embodiment of the present disclosure, where this embodiment describes an image synthesis process by taking an image driving parameter obtained in a video manner as an example, and with reference to fig. 5, the method may include the following steps:
s310, collecting a video stream containing a sample image to obtain an image frame, and determining head posture parameters and key point positions of the sample image in the image frame.
Optionally, a video stream including a sample image may be acquired by an image acquisition device such as a camera, so as to obtain an image frame, and a key point detection is performed on the image frame, so as to obtain a position of a key point of the image in the current image frame and a head posture parameter.
S320, determining the mode of the sample image according to the position change of the key point in the set time.
The position change is the position change of the key points at each time point within the set time. Optionally, the position change of each key point over a period of time may be counted, and the mode in which the sample image is located may be determined based on the position changes of the key points. The positions of the key points differ between modes; for example, in the expression mode the position of each key point changes greatly, while in the speaking mode the position of each key point changes only slightly. Fig. 6 shows the position change of some key points in the two modes, where the abscissa represents the key point index and the ordinate represents the position change of each key point within the set time; the solid line represents the speaking mode and the dashed line represents the expression mode. If the position change of each key point within the set time is large, the sample image is considered to be in the expression mode; if the position change of each key point within the set time is small, the sample image is considered to be in the speaking mode. Optionally, when the sample image is determined to be in the expression mode, the expression may be further classified based on the positions of the key points in the expression mode; for example, when the difference between the position of each key point in the expression mode and the position of each key point in an angry expression is smaller than a set threshold, the sample image is considered to be currently in an angry mode.
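A minimal sketch of the mode decision described above, assuming key point tracks collected over the set time window; the displacement threshold is an assumed value, not taken from the disclosure.

```python
import numpy as np

def determine_mode(keypoint_tracks, threshold=5.0):
    """Classify speaking vs. expression mode from key point motion.

    keypoint_tracks: array of shape (num_time_points, num_keypoints, 2)
    covering the set time window. `threshold` is an assumed pixel value.
    """
    # Position change of each key point over the window: maximum displacement
    # from its initial position.
    displacement = np.linalg.norm(keypoint_tracks - keypoint_tracks[0], axis=-1)
    per_keypoint_change = displacement.max(axis=0)      # (num_keypoints,)

    # Large overall change -> expression mode, small change -> speaking mode.
    if per_keypoint_change.mean() > threshold:
        return "expression"
    return "speaking"
```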
S330, determining an expression base corresponding to the mode and the weight of the expression base.
The expression bases include the positions of the key points corresponding to the expressions. An expression base may be user-defined, or a pre-created expression base may be called directly. Different modes use different expression bases; for example, the expression bases corresponding to the speaking mode may include linear expression bases, and the expression bases corresponding to the expression mode may include linear expression bases and nonlinear expression bases. A linear expression base is a linearly decouplable expression base, while a nonlinear expression base is a non-decouplable expression base; for example, laughing is a nonlinear expression base, in which the mouth opens, the mouth corners change, and the eyes and eyebrows also change correspondingly.
Optionally, when the mode is a speaking mode, acquiring a first expression base corresponding to the speaking mode; determining the weight of the first expression base according to the position of the key point in the first expression base and the position of the key point in a sample image; when the mode is an expression mode, acquiring a second expression base corresponding to the expression mode; and determining the weight of the second expression base according to the position of the key point in the second expression base and the position of the key point in the sample image.
The first expression base may be a linear, decouplable expression base, and the second expression base may be a nonlinear, non-decouplable expression base. Assume that the sample image in the image frame includes m key points b1, b2, …, bm, whose corresponding positions are B1, B2, …, Bm. In the speaking mode, the positions of the key points b1, b2, …, bm in the expression base A1 are a11, a12, …, a1m respectively, the positions of the key points b1, b2, …, bm in the expression base A2 are a21, a22, …, a2m respectively, and so on, until the positions of the key points b1, b2, …, bm in the expression base An are an1, an2, …, anm respectively. A mapping relationship between the expression bases and the sample image in the image frame can then be established, that is, W1 × a11 + W2 × a21 + … + Wn × an1 = B1, W1 × a12 + W2 × a22 + … + Wn × an2 = B2, …, W1 × a1m + W2 × a2m + … + Wn × anm = Bm. Solving the above equation set gives the weight corresponding to each expression base, where W1 is the weight corresponding to the expression base A1, W2 is the weight corresponding to the expression base A2, and so on up to Wn, the weight corresponding to the expression base An. When in the expression mode, the determination process of the weight of each expression base is similar.
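The equation set above can be solved by least squares; the sketch below treats each key point position as a scalar for brevity (2-D coordinates can be flattened) and also shows the weighted combination used in step S340.

```python
import numpy as np

def solve_expression_weights(base_positions, sample_positions):
    """Solve W1..Wn from the linear system above by least squares.

    base_positions: array of shape (n_bases, m), where entry [i, j] is a_ij,
    the position of key point j in expression base A_i.
    sample_positions: array of shape (m,) holding B1..Bm.
    """
    # Equations: sum_i W_i * a_ij = B_j  for j = 1..m
    A = base_positions.T                               # (m, n_bases)
    weights, *_ = np.linalg.lstsq(A, sample_positions, rcond=None)
    return weights

def expression_driving_params(base_positions, weights):
    """Weighted combination of the expression bases (step S340)."""
    return weights @ base_positions                    # (m,)
```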
S340, determining expression driving parameters of the sample image in the mode according to the expression base and the weight of the expression base.
Still taking the speaking mode as an example, after the weight is determined, the weight is substituted into the left side of the equation, so that the position of each key point in the sample image can be obtained, and the expression driving parameter in the mode can be obtained.
And S350, taking the head posture parameters and the expression driving parameters as image driving parameters.
And S360, obtaining an image picture sample containing an image key point detection result.
S370, inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network.
And S380, adding a synthesis mark into the target picture.
The third embodiment of the disclosure provides an image synthesis method, which is based on the above embodiments, and may obtain an image driving parameter in a video mode, and when generating the image driving parameter, determine a mode where the image driving parameter is located based on a position change of a key point in a sample image, further obtain an expression base and a weight corresponding to the mode, and obtain an expression driving parameter based on the expression base and the corresponding weight, so that the generated expression driving parameter is smoother and more real, and accuracy of an image synthesis result is improved.
On the basis of the above embodiments, the image driving parameters can also be obtained directly from audio, which omits the text-to-speech conversion step. The scheme can drive the image in the image picture sample through text, speech, video and other modes, does not need to construct a three-dimensional model, simplifies the process, and achieves higher accuracy and realism in the synthesis result.
Example four
Fig. 7 is a structural diagram of an image synthesis apparatus according to a fourth embodiment of the present disclosure, which may perform the image synthesis method according to the foregoing embodiment, and with reference to fig. 7, the apparatus may include:
an obtaining module 41, configured to obtain an image driving parameter and an image picture sample including an image key point detection result;
an image synthesis module 42, for inputting the image driving parameters and image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
The fourth embodiment of the present disclosure provides an image synthesis apparatus, which inputs the image driving parameters and the image picture samples into a pre-trained image synthesis network by obtaining image driving parameters and image picture samples including image key point detection results, and outputs target pictures including image synthesis results from the pre-trained image synthesis network; the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture. According to the scheme, a three-dimensional model of the image in the image picture sample is not required to be constructed, image synthesis operation is simplified, the position of the image key point is firstly adjusted based on the image driving parameter when image synthesis is carried out, then other pixel points are adjusted based on the position change of the image key point, the obtained image is corrected on the basis, and the accuracy of the image synthesis result is improved.
On the basis of the above embodiment, the obtaining module 41 includes:
a given text acquisition unit, configured to acquire a given text for a virtual anchor performance, and convert the given text into an audio signal, where the image of the virtual anchor is the image in the image picture sample;
the head posture parameter determining unit is used for generating first expression parameters and head posture parameters corresponding to all time points according to the voice characteristics of the audio signals;
the expression driving parameter determining unit is used for determining, for each time point in the time points, if no corresponding expression script information exists at the current time point, the first expression parameter corresponding to the current time point as an expression driving parameter; if the corresponding expression script information exists at the current time point, analyzing the corresponding expression script information, and acquiring an expression template corresponding to the corresponding expression script information to obtain a second expression parameter; fusing the first expression parameter and the second expression parameter to obtain an expression driving parameter;
and the image driving parameter determining unit is used for taking the expression driving parameters and the head posture parameters as image driving parameters.
On the basis of the above embodiment, the head posture parameter determining unit is specifically configured to:
determining a speech characteristic of the audio signal;
inputting the voice features into a pre-trained parameter generation network, and outputting first expression parameters and head posture parameters corresponding to each time point by the pre-trained parameter generation network;
the pre-trained parameter generation network comprises a convolution module and a deconvolution module, wherein the convolution module comprises a time convolution layer and a fully connected layer; the time convolution layer is used for performing time convolution on the input speech features to obtain a first feature, and the fully connected layer is used for determining the first expression parameters and head posture parameters corresponding to each time point according to the first feature.
On the basis of the above embodiment, the dimension of the speech feature is FT × W × D, where F is the frame rate of the audio signal, T is the sampling duration, W is the size of the time-domain window corresponding to the audio signal, and D is the size of the dictionary corresponding to the audio signal;
the dictionary comprises a Chinese dictionary and/or an English dictionary, the English dictionary comprises English letters and markers, and the markers are used for representing silence, pause or short stop; the Chinese dictionary includes phonemes, tone symbols, and markers.
On the basis of the foregoing embodiment, the expression driving parameter determining unit is specifically configured to:
determining the time point of the appearance of the expression corresponding to the expression script and the duration of the existence of the corresponding expression;
determining the weight of the second expression parameter at the current time point according to the current time point, the time point of the appearance of the expression corresponding to the expression script and the duration of the existence of the corresponding expression, wherein the current time point is the time point between the beginning and the end of the duration;
and weighting the second expression parameter according to the weight, and taking the sum of the weighted second expression parameter and the first expression parameter at the current time point as an expression driving parameter.
On the basis of the above embodiment, the obtaining module 41 includes:
the image acquisition unit is used for acquiring a video stream containing a sample image to obtain an image frame and determining a head posture parameter of the sample image and the position of a key point in the image frame;
the mode determining unit is used for determining the mode of the sample image according to the position change of the key point within the set time, wherein the position change is the position change of the key point at each time point within the set time;
the weight determining unit is used for determining an expression base corresponding to the mode and the weight of the expression base, wherein the expression base comprises the position of a key point;
the expression driving parameter determining unit is used for determining expression driving parameters of the sample image in the mode according to the expression base and the weight of the expression base;
and the image driving parameter determining unit is used for taking the head posture parameter and the expression driving parameter as image driving parameters.
On the basis of the above embodiment, the mode includes a speaking mode or an expression mode;
a weight determination unit, specifically configured to:
when the mode is a speaking mode, acquiring a first expression base corresponding to the speaking mode; determining the weight of the first expression base according to the position of the key point in the first expression base and the position of the key point in a sample image;
when the mode is an expression mode, acquiring a second expression base corresponding to the expression mode; and determining the weight of the second expression base according to the position of the key point in the second expression base and the position of the key point in the sample image.
On the basis of the above embodiment, the apparatus may further include:
and the synthetic mark generation module is used for adding a synthetic mark into the target picture after the target picture containing the image synthetic result is output by the pre-trained image synthetic network, wherein the synthetic mark is used for indicating that the image in the target picture is a synthetic image.
The image synthesis device provided by the embodiment of the present disclosure and the image synthesis method provided by the above embodiment belong to the same inventive concept, and the technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as performing the image synthesis method.
EXAMPLE five
Referring now to FIG. 8, shown is a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
EXAMPLE six
The computer readable medium described above in this disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring image driving parameters and image picture samples containing image key point detection results; inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network; the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a module does not constitute a limitation of the module itself; for example, the obtaining module may also be described as a "module for acquiring image driving parameters and image picture samples containing image key point detection results".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In accordance with one or more embodiments of the present disclosure, there is provided an image synthesis method, including:
acquiring image driving parameters and image picture samples containing image key point detection results;
inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
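By way of illustration only, the data flow through the three modules described above may be sketched as follows. This is a minimal, non-limiting sketch assuming PyTorch; the three sub-modules are placeholder callables supplied by the caller, and only the ordering of the stages follows the description, with key points moved first, the remaining pixels warped second, and the adversarial correction applied last.

```python
# Illustrative sketch of the three-stage data flow described above (PyTorch assumed).
# The affine, motion-estimation and adversarial-generation sub-modules are placeholders
# supplied by the caller; their internals are not specified here.
import torch.nn as nn

class AvatarSynthesisNet(nn.Module):
    def __init__(self, affine_module, motion_estimation_module, generation_module):
        super().__init__()
        self.affine_module = affine_module                        # moves key points per driving params
        self.motion_estimation_module = motion_estimation_module  # warps the remaining pixels
        self.generation_module = generation_module                # adversarially trained corrector

    def forward(self, picture_sample, keypoints, driving_params):
        # 1) affine transformation: adjust key-point positions according to the driving parameters
        first_picture, moved_keypoints = self.affine_module(picture_sample, keypoints, driving_params)
        # 2) motion estimation: propagate the key-point displacement to the other pixels
        keypoint_motion = moved_keypoints - keypoints
        second_picture = self.motion_estimation_module(first_picture, keypoint_motion)
        # 3) adversarial generation: correct the warped picture to obtain the target picture
        target_picture = self.generation_module(second_picture)
        return target_picture
```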
According to one or more embodiments of the present disclosure, in the image synthesis method provided by the present disclosure, the acquiring of image driving parameters includes:
acquiring a given text for a virtual anchor performance, and converting the given text into an audio signal, wherein the image of the virtual anchor is the image in the image picture sample;
generating a first expression parameter and a head posture parameter corresponding to each time point according to the speech features of the audio signal;
for each time point in the time points, if the current time point does not have corresponding expression script information, taking the first expression parameter corresponding to the current time point as an expression driving parameter; if the corresponding expression script information exists at the current time point, analyzing the corresponding expression script information, and acquiring an expression template corresponding to the corresponding expression script information to obtain a second expression parameter; fusing the first expression parameter and the second expression parameter to obtain an expression driving parameter;
and taking the expression driving parameters and the head posture parameters as image driving parameters.
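For illustration, the per-time-point logic above can be outlined as follows; the helper names tts, param_net, expression_templates and fuse are assumptions introduced for this sketch and are not components named in this disclosure.

```python
# Hypothetical outline of deriving driving parameters from a given text; helper names are assumed.
def build_driving_params(given_text, tts, param_net, expression_templates, fuse):
    audio = tts(given_text)                           # convert the given text into an audio signal
    expr_seq, pose_seq = param_net(audio)             # first expression + head posture per time point
    driving_params = []
    for t, (first_expr, head_pose) in enumerate(zip(expr_seq, pose_seq)):
        if t in expression_templates:                 # expression script information exists at this time point
            second_expr = expression_templates[t]     # second expression parameter from the template
            expr = fuse(first_expr, second_expr, t)   # fuse the first and second expression parameters
        else:
            expr = first_expr                         # no script: use the first expression parameter directly
        driving_params.append((expr, head_pose))
    return driving_params
```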
According to one or more embodiments of the present disclosure, in the image synthesis method provided by the present disclosure, the generating of the first expression parameter and the head posture parameter corresponding to each time point according to the speech features of the audio signal includes:
determining the speech features of the audio signal;
inputting the speech features into a pre-trained parameter generation network, and outputting the first expression parameter and the head posture parameter corresponding to each time point by the pre-trained parameter generation network;
the pre-trained parameter generation network comprises a convolution module and a deconvolution module, wherein the convolution module comprises a convolution layer and a fully connected layer, the convolution layer is used for performing temporal convolution on the input speech features to obtain a first feature, and the fully connected layer is used for determining the first expression parameter and the head posture parameter corresponding to each time point according to the first feature.
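A minimal sketch of the convolution module described above, assuming PyTorch; the layer sizes, kernel width, and parameter dimensions are assumptions, and the deconvolution module is omitted because its internals are not detailed here.

```python
import torch
import torch.nn as nn

class SpeechParamConvModule(nn.Module):
    """Sketch of the convolution module: temporal convolution followed by a fully connected layer."""
    def __init__(self, feature_dim, expr_dim=64, pose_dim=6, hidden_dim=256):
        super().__init__()
        self.expr_dim, self.pose_dim = expr_dim, pose_dim
        # temporal (1-D) convolution over the speech-feature sequence: (batch, feature_dim, time)
        self.temporal_conv = nn.Conv1d(feature_dim, hidden_dim, kernel_size=5, padding=2)
        # fully connected layer mapping the first feature to expression + head-posture parameters
        self.fc = nn.Linear(hidden_dim, expr_dim + pose_dim)

    def forward(self, speech_features):
        first_feature = torch.relu(self.temporal_conv(speech_features))  # (batch, hidden_dim, time)
        per_time_point = first_feature.transpose(1, 2)                   # (batch, time, hidden_dim)
        params = self.fc(per_time_point)                                 # (batch, time, expr_dim + pose_dim)
        first_expression, head_pose = params.split([self.expr_dim, self.pose_dim], dim=-1)
        return first_expression, head_pose
```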
According to one or more embodiments of the present disclosure, in the image synthesis method provided by the present disclosure, a dimension of the speech feature is FT × W × D, F is a frame rate of the audio signal, T is a sampling duration, W is a size of a time domain window corresponding to the audio signal, and D is a dictionary corresponding to the audio signal;
the dictionary comprises a Chinese dictionary and/or an English dictionary, the English dictionary comprises English letters and markers, and the markers are used for representing silence, pause or short stop; the Chinese dictionary includes phonemes, tone symbols, and markers.
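As a rough illustration of the stated dimensionality (all numeric values below are assumptions, and D is read here as the number of dictionary entries):

```python
import numpy as np

F = 25    # frame rate of the audio signal, frames per second (assumed value)
T = 4     # sampling duration in seconds (assumed value)
W = 16    # size of the time-domain window per frame (assumed value)
D = 70    # number of dictionary entries, e.g. phonemes + tone symbols + markers (assumed value)

# one feature volume of shape (F*T, W, D): for each of the F*T frames,
# a W-step time window scored against the D dictionary entries
speech_feature = np.zeros((F * T, W, D), dtype=np.float32)
print(speech_feature.shape)   # (100, 16, 70)
```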
According to one or more embodiments of the present disclosure, in the image synthesis method provided by the present disclosure, the fusing the first expression parameter and the second expression parameter to obtain an expression driving parameter includes:
determining the time point of the appearance of the expression corresponding to the expression script information and the duration of the existence of the corresponding expression;
determining the weight of the second expression parameter at the current time point according to the current time point, the time point of the appearance of the expression corresponding to the expression script information and the duration of the existence of the corresponding expression, wherein the current time point is the time point between the beginning and the end of the duration;
and weighting the second expression parameter according to the weight, and taking the sum of the weighted second expression parameter and the first expression parameter at the current time point as an expression driving parameter.
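One possible weighting scheme consistent with the above is a ramp that blends the scripted expression in and out over its duration; the ramp shape below is an assumption, since the disclosure only requires the weight to depend on the current time point, the onset time, and the duration.

```python
# Hedged sketch: weight the second expression parameter by a ramp over the expression's duration,
# then add it to the first expression parameter at the current time point.
def fuse_expression_params(first_expr, second_expr, t, onset, duration, ramp=0.2):
    progress = (t - onset) / duration          # 0 at onset, 1 at the end of the duration
    progress = min(max(progress, 0.0), 1.0)
    if progress < ramp:                        # ramp in at the start of the duration
        weight = progress / ramp
    elif progress > 1.0 - ramp:                # ramp out near the end of the duration
        weight = (1.0 - progress) / ramp
    else:
        weight = 1.0
    # weighted second expression parameter plus the first expression parameter
    return [f + weight * s for f, s in zip(first_expr, second_expr)]
```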
According to one or more embodiments of the present disclosure, in the image synthesis method provided by the present disclosure, the acquiring of image driving parameters includes:
acquiring a video stream containing a sample image to obtain an image frame, and determining a head posture parameter and a key point position of the sample image in the image frame;
determining the mode of the sample image according to the position change of the key point within a set time, wherein the position change is the position change of the key point at each time point within the set time;
determining an expression base corresponding to the mode and the weight of the expression base, wherein the expression base comprises the position of a key point;
determining expression driving parameters of the sample image in the mode according to the expression base and the weight of the expression base;
and taking the head posture parameter and the expression driving parameter as image driving parameters.
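Illustratively, the video-driven branch can be outlined as below; every helper passed into the function is an assumed placeholder rather than a component named in this disclosure.

```python
# Hypothetical outline of deriving driving parameters from a video stream; all helpers are assumed.
def driving_params_from_video(frames, detect_keypoints, estimate_head_pose,
                              classify_mode, expression_bases_for, solve_base_weights):
    keypoint_track = [detect_keypoints(f) for f in frames]          # key-point positions per image frame
    head_poses = [estimate_head_pose(f) for f in frames]            # head posture parameter per image frame
    mode = classify_mode(keypoint_track)                            # e.g. "speaking" or "expression"
    bases = expression_bases_for(mode)                              # expression bases for that mode
    weights = solve_base_weights(bases, keypoint_track[-1])         # weights from base vs. observed key points
    expression_params = sum(w * b for w, b in zip(weights, bases))  # weighted combination of the bases
    return head_poses, expression_params
```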
According to one or more embodiments of the present disclosure, in the image synthesis method provided by the present disclosure, the mode includes a speaking mode or an expression mode;
the determining the expression base corresponding to the mode and the weight of the expression base includes:
when the mode is a speaking mode, acquiring a first expression base corresponding to the speaking mode; determining the weight of the first expression base according to the position of the key point in the first expression base and the position of the key point in a sample image;
when the mode is an expression mode, acquiring a second expression base corresponding to the expression mode; and determining the weight of the second expression base according to the position of the key point in the second expression base and the position of the key point in the sample image.
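One way to derive such weights, shown purely as an assumption, is a least-squares fit of the observed key-point positions against the key-point positions of the expression bases:

```python
import numpy as np

def expression_base_weights(bases, observed_keypoints):
    # bases: list of (K, 2) key-point arrays, one per expression base
    # observed_keypoints: (K, 2) key-point positions detected in the sample image
    A = np.stack([b.reshape(-1) for b in bases], axis=1)   # (2K, num_bases)
    y = observed_keypoints.reshape(-1)                     # (2K,)
    weights, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares weights over the bases
    return weights
```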
According to one or more embodiments of the present disclosure, the image synthesis method provided by the present disclosure further includes, after the target picture containing the image synthesis result is output by the pre-trained image synthesis network:
and adding a synthetic mark in the target picture, wherein the synthetic mark is used for indicating that the image in the target picture is a synthetic image.
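As a minimal sketch, assuming Pillow and a visible text mark (the disclosure does not specify whether the synthetic mark is visible or embedded as metadata; the function and label below are illustrative assumptions):

```python
from PIL import Image, ImageDraw

def add_synthesis_mark(target_picture: Image.Image, text: str = "synthesized image") -> Image.Image:
    # draw a small text mark indicating that the image in the target picture is synthetic
    marked = target_picture.copy()
    ImageDraw.Draw(marked).text((10, 10), text, fill=(255, 255, 255))
    return marked
```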
In accordance with one or more embodiments of the present disclosure, there is provided an image synthesis apparatus, including:
the acquisition module is used for acquiring image driving parameters and image picture samples containing image key point detection results;
the image synthesis module is used for inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, implement the image synthesis method according to any embodiment of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image synthesis method according to any embodiment of the present disclosure.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. An image synthesis method, comprising:
acquiring image driving parameters and image picture samples containing image key point detection results;
inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
2. The method of claim 1, wherein said acquiring image driving parameters comprises:
acquiring a given text for a virtual anchor performance, and converting the given text into an audio signal, wherein the image of the virtual anchor is the image in the image picture sample;
generating a first expression parameter and a head posture parameter corresponding to each time point according to the speech features of the audio signal;
for each time point in the time points, if the current time point does not have corresponding expression script information, taking the first expression parameter corresponding to the current time point as an expression driving parameter; if the corresponding expression script information exists at the current time point, analyzing the corresponding expression script information, and acquiring an expression template corresponding to the corresponding expression script information to obtain a second expression parameter; fusing the first expression parameter and the second expression parameter to obtain an expression driving parameter;
and taking the expression driving parameters and the head posture parameters as image driving parameters.
3. The method according to claim 2, wherein the generating of the first expression parameter and the head posture parameter corresponding to each time point according to the speech features of the audio signal comprises:
determining the speech features of the audio signal;
inputting the speech features into a pre-trained parameter generation network, and outputting the first expression parameter and the head posture parameter corresponding to each time point by the pre-trained parameter generation network;
wherein the pre-trained parameter generation network comprises a convolution module and a deconvolution module, the convolution module comprises a convolution layer and a fully connected layer, the convolution layer is used for performing temporal convolution on the input speech features to obtain a first feature, and the fully connected layer is used for determining the first expression parameter and the head posture parameter corresponding to each time point according to the first feature.
4. The method according to claim 3, wherein the dimension of the speech features is FT × W × D, F is a frame rate of the audio signal, T is a sampling duration, W is a size of a time domain window corresponding to the audio signal, and D is a dictionary corresponding to the audio signal;
the dictionary comprises a Chinese dictionary and/or an English dictionary, the English dictionary comprises English letters and markers, and the markers are used for representing silence, pause or short stop; the Chinese dictionary includes phonemes, tone symbols, and markers.
5. The method according to claim 2, wherein the fusing the first expression parameter and the second expression parameter to obtain an expression driving parameter comprises:
determining the time point of the appearance of the expression corresponding to the expression script information and the duration of the existence of the corresponding expression;
determining the weight of the second expression parameter at the current time point according to the current time point, the time point of the appearance of the expression corresponding to the expression script information and the duration of the existence of the corresponding expression, wherein the current time point is the time point between the beginning and the end of the duration;
and weighting the second expression parameter according to the weight, and taking the sum of the weighted second expression parameter and the first expression parameter at the current time point as an expression driving parameter.
6. The method of claim 1, wherein said acquiring image driving parameters comprises:
acquiring a video stream containing a sample image to obtain an image frame, and determining a head posture parameter and a key point position of the sample image in the image frame;
determining the mode of the sample image according to the position change of the key point within a set time, wherein the position change is the position change of the key point at each time point within the set time;
determining an expression base corresponding to the mode and the weight of the expression base, wherein the expression base comprises the position of a key point;
determining expression driving parameters of the sample image in the mode according to the expression base and the weight of the expression base;
and taking the head posture parameter and the expression driving parameter as image driving parameters.
7. The method of claim 6, wherein the pattern comprises a speaking pattern or an expression pattern;
the determining the expression base corresponding to the mode and the weight of the expression base includes:
when the mode is a speaking mode, acquiring a first expression base corresponding to the speaking mode; determining the weight of the first expression base according to the position of the key point in the first expression base and the position of the key point in a sample image;
when the mode is an expression mode, acquiring a second expression base corresponding to the expression mode; and determining the weight of the second expression base according to the position of the key point in the second expression base and the position of the key point in the sample image.
8. The method according to any one of claims 1-7, further comprising, after outputting, by the pre-trained image synthesis network, a target picture containing an image synthesis result:
and adding a synthetic mark in the target picture, wherein the synthetic mark is used for indicating that the image in the target picture is a synthetic image.
9. An image composition apparatus, comprising:
the acquisition module is used for acquiring image driving parameters and image picture samples containing image key point detection results;
the image synthesis module is used for inputting the image driving parameters and the image picture samples into a pre-trained image synthesis network, and outputting a target picture containing an image synthesis result by the pre-trained image synthesis network;
the pre-trained image synthesis network comprises an affine transformation module, a motion estimation module and a confrontation generation module; the affine transformation module is used for adjusting the positions of image key points in the image picture sample according to the image driving parameters to obtain a first image picture; the motion estimation module is used for adjusting the positions of other pixel points in the first image picture except the image key points according to the position change of the image key points to obtain a second image picture, wherein the position change of the image key points is the change of the positions of the image key points in the first image picture relative to the positions of the image key points in the image picture sample; and the confrontation generation module is used for correcting the second image picture to obtain a target picture.
10. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image synthesis method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image synthesis method according to any one of claims 1-8.
CN202110139541.2A 2021-02-01 2021-02-01 Image synthesis method, device, equipment and storage medium Active CN112785670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110139541.2A CN112785670B (en) 2021-02-01 2021-02-01 Image synthesis method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112785670A true CN112785670A (en) 2021-05-11
CN112785670B CN112785670B (en) 2024-05-28

Family

ID=75760386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110139541.2A Active CN112785670B (en) 2021-02-01 2021-02-01 Image synthesis method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112785670B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403446A (en) * 2016-05-18 2017-11-28 西门子保健有限责任公司 Method and system for the image registration using intelligent human agents
US20200394392A1 (en) * 2019-06-14 2020-12-17 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for detecting face image
CN111325851A (en) * 2020-02-28 2020-06-23 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111354079A (en) * 2020-03-11 2020-06-30 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction network training and virtual face image generation method and device
CN111968203A (en) * 2020-06-30 2020-11-20 北京百度网讯科技有限公司 Animation driving method, animation driving device, electronic device, and storage medium
CN112150638A (en) * 2020-09-14 2020-12-29 北京百度网讯科技有限公司 Virtual object image synthesis method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU HAITAO; YANG XIAOSHAN; XU CHANGSHENG: "Adversarial video generation method based on multimodal input", Journal of Computer Research and Development, no. 07 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188649A (en) * 2023-04-27 2023-05-30 科大讯飞股份有限公司 Three-dimensional face model driving method based on voice and related device
CN116188649B (en) * 2023-04-27 2023-10-13 科大讯飞股份有限公司 Three-dimensional face model driving method based on voice and related device
CN117078811A (en) * 2023-08-31 2023-11-17 华院计算技术(上海)股份有限公司 Model training method, image generating method, animation generating method and system
CN116996695A (en) * 2023-09-27 2023-11-03 深圳大学 Panoramic image compression method, device, equipment and medium
CN116996695B (en) * 2023-09-27 2024-04-05 深圳大学 Panoramic image compression method, device, equipment and medium

Also Published As

Publication number Publication date
CN112785670B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN112967212A (en) Virtual character synthesis method, device, equipment and storage medium
CN112785670B (en) Image synthesis method, device, equipment and storage medium
CN111476871B (en) Method and device for generating video
US11308671B2 (en) Method and apparatus for controlling mouth shape changes of three-dimensional virtual portrait
CN110446066B (en) Method and apparatus for generating video
CN109348277B (en) Motion pixel video special effect adding method and device, terminal equipment and storage medium
CN110880198A (en) Animation generation method and device
CN114331820A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111897976A (en) Virtual image synthesis method and device, electronic equipment and storage medium
EP4379508A1 (en) Data processing method and apparatus, electronic device and storage medium
US20230401764A1 (en) Image processing method and apparatus, electronic device and computer readable medium
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN111967397A (en) Face image processing method and device, storage medium and electronic equipment
WO2022171114A1 (en) Image processing method and apparatus, and device and medium
CN113223555A (en) Video generation method and device, storage medium and electronic equipment
CN114429658A (en) Face key point information acquisition method, and method and device for generating face animation
CN111327960B (en) Article processing method and device, electronic equipment and computer storage medium
CN112785667A (en) Video generation method, device, medium and electronic equipment
WO2023065963A1 (en) Interactive display method and apparatus, electronic device, and storage medium
CN111916050A (en) Speech synthesis method, speech synthesis device, storage medium and electronic equipment
CN113593527B (en) Method and device for generating acoustic features, training voice model and recognizing voice
CN112487937B (en) Video identification method and device, storage medium and electronic equipment
CN113850716A (en) Model training method, image processing method, device, electronic device and medium
CN113079328A (en) Video generation method and device, storage medium and electronic equipment
CN110197230B (en) Method and apparatus for training a model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant