CN112734889A - Mouth shape animation real-time driving method and system for 2D character - Google Patents
Mouth shape animation real-time driving method and system for 2D character
- Publication number
- CN112734889A (application number CN202110188571.2A)
- Authority
- CN
- China
- Prior art keywords
- mouth shape
- animation
- phoneme
- mouth
- duration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method and system for driving the mouth shape animation of a 2D character in real time. The method comprises the following steps: defining a basic mouth shape action set for the 2D character; defining a weight change curve for each element in the basic mouth shape action set; designing the mouth shape animations corresponding to different phonemes; acquiring the phoneme sequence corresponding to an input audio stream; mapping each phoneme in the phoneme sequence to its corresponding mouth shape animation; and splicing the mouth shape animations to obtain the complete mouth shape animation of the 2D character driven by the audio stream. Through the predefined basic mouth shape action set and the weight change curve of each element in that set, the invention maps the phoneme sequence of the input audio stream onto mouth shape animation segments, splices these segments together, and finally outputs a 2D character mouth shape animation that is driven by the audio stream in real time with high fidelity and naturalness.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a mouth shape animation real-time driving method and system for a 2D character.
Background
The mouth shape is a key element of character facial animation: whether the mouth shape animation is lifelike directly determines how realistic and natural the whole facial animation appears. Producing convincing mouth shape animation therefore plays an important role in films, games, virtual reality, and other human-computer interaction media.
Driving mouth shape animation from speech is currently one of the main methods of generating it. Speech-driven mouth shape animation takes a segment of speech signal as input and generates a segment of mouth shape animation synchronized with that signal. The principle is that the speech signal is first converted into a sequence of pronunciation units (phonemes), each phoneme is then expressed as a visual appearance of the mouth (a viseme), and the visual appearances are finally spliced into the mouth shape animation. However, existing speech-driven methods suffer from unnatural phoneme transitions and, most importantly, from a complex generation process that introduces delay between the animation and the audio input, so the input audio stream cannot be converted into mouth shape animation in real time and the result is neither lifelike nor natural.
Disclosure of Invention
The invention aims to provide a real-time mouth shape animation driving method for a 2D character, which maps the phoneme sequence corresponding to an input audio stream onto mouth shape animation segments through a predefined basic mouth shape action set and the weight change curve of each element in that set, splices the segments together, and finally outputs a 2D character mouth shape animation that is driven by the audio stream in real time with high fidelity and naturalness.
To achieve this aim, the invention adopts the following technical scheme:
A method for driving the mouth shape animation of a 2D character in real time is provided, comprising the following specific steps:
1) defining a basic mouth shape action set for the 2D character;
2) defining a weight change curve for each element in the basic mouth shape action set;
3) designing the mouth shape animations corresponding to different phonemes;
4) acquiring the phoneme sequence corresponding to an input audio stream;
5) mapping each phoneme in the phoneme sequence to the corresponding mouth shape animation;
6) splicing the mouth shape animations to obtain the complete mouth shape animation of the 2D character driven by the audio stream.
As a preferred scheme of the present invention, in step 1), six mouth shape actions preset in the FaceGen face-modeling software are selected as the basic mouth shape actions of the 2D character to form the basic mouth shape action set.
As a preferred scheme of the present invention, in step 2), the weight change curve corresponding to each element in the defined basic mouth shape action set can be expressed by the following formula:

A(a, b) = { w_i^{(a,b)}(t) | i = 1, 2, …, 6 }

in the above formula, i represents the i-th element in the basic mouth shape action set;
"6" represents the number of elements;
w_i^{(a,b)}(t) represents the weight change curve corresponding to the i-th basic mouth shape action at time t;
the phoneme b is the successor of the current phoneme a, and the phoneme pair (a, b) corresponds to one segment of mouth shape animation.
As a preferred scheme of the invention, the duration of each segment of mouth shape animation is 100 ms.
As a preferred scheme of the present invention, in step 5), the method for mapping the phoneme sequence onto the mouth shape animation specifically comprises:
5.1) associating each phoneme in the phoneme sequence with the previously defined mouth shape animation;
5.2) judging whether the duration of the current phoneme in the pronunciation change stage is greater than or equal to a preset duration threshold;
if so, scaling the stable-stage animation of the associated mouth shape animation according to the duration of the current phoneme in the stable pronunciation stage, and filling the change-stage animation of the associated mouth shape animation into the pronunciation change stage of the current phoneme;
if not, directly filling the change-stage animation of the associated mouth shape animation into the target position, so that the filled animation segment and the predecessor phoneme of the current phoneme overlap on the time axis.
The invention also provides a mouth shape animation real-time driving system for the 2D character, capable of implementing the above real-time driving method. The system comprises:
a mouth shape action definition module, used for providing a designer with a means of defining the basic mouth shape actions of the 2D character and forming the basic mouth shape action set;
a weight change curve definition module, used for providing the designer with a means of defining the weight change curve of each element in the basic mouth shape action set;
a mouth shape animation design module, used for providing the designer with a means of designing the mouth shape animations corresponding to different phonemes;
an audio stream acquisition module, used for acquiring an input audio stream in real time;
an audio stream conversion module, connected to the audio stream acquisition module and used for converting the audio stream into the corresponding phoneme sequence;
a mouth shape animation mapping module, connected to the audio stream conversion module and the mouth shape animation design module respectively, and used for mapping the phoneme sequence onto the corresponding mouth shape animation;
and a mouth shape animation splicing module, connected to the mouth shape animation mapping module and used for splicing the mouth shape animations and outputting the 2D character mouth shape animation driven by the audio stream.
As a preferred scheme of the present invention, the mouth shape animation mapping module comprises:
a mouth shape animation matching unit, used for matching each phoneme in the phoneme sequence with the previously defined mouth shape animation;
a duration threshold setting unit, used for providing the designer with a preset duration threshold;
a phoneme pronunciation change duration calculating unit, used for calculating the duration of the current phoneme in the pronunciation change stage;
a duration judging unit, connected to the duration threshold setting unit and the phoneme pronunciation change duration calculating unit respectively, and used for judging whether the duration of the current phoneme in the pronunciation change stage is greater than or equal to the preset duration threshold;
a scaling unit, connected to the duration judging unit and used for scaling the stable-stage animation of the mouth shape animation matched with the current phoneme according to the duration of the current phoneme in the stable pronunciation stage, when the duration of the current phoneme in the pronunciation change stage is judged to be greater than or equal to the preset duration threshold;
and an animation filling unit, connected to the scaling unit and used for filling the scaled and unscaled animations into the corresponding target positions, so that the filled animation segments and the predecessor phoneme of the current phoneme overlap on the time axis.
Through the predefined basic mouth shape action set and the weight change curve of each element in that set, the invention maps the phoneme sequence of the input audio stream onto mouth shape animation segments, splices these segments together, and finally outputs a 2D character mouth shape animation that is driven by the audio stream in real time with high fidelity and naturalness.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a diagram illustrating implementation steps of a method for driving a mouth-shape animation of a 2D character in real time according to an embodiment of the present invention;
FIG. 2 is a diagram of method steps for mapping a phoneme sequence onto a mouth animation;
FIG. 3 is a schematic structural diagram of a mouth-shape animation real-time driving system for a 2D character according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the internal structure of a mouth-shape animation mapping module in the mouth-shape animation real-time driving system;
FIG. 5 is a schematic diagram of the method for implementing real-time driving of the mouth shape animation.
Detailed Description
The technical scheme of the invention is further explained below through specific embodiments in combination with the drawings.
The drawings are for illustration only, are not drawn to actual scale, and are not to be construed as limiting the present patent. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, terms such as "upper", "lower", "left", "right", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only and are not to be construed as limiting the present patent; their specific meanings can be understood by those skilled in the art according to the specific situation.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" and the like, where it indicates a connection relationship between components, is to be understood broadly: the connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, one or more other components, or an interactive relationship between the parts. The specific meanings of the above terms in the present invention can be understood by those skilled in the art in specific cases.
An embodiment of the present invention provides a method for driving the mouth shape animation of a 2D character in real time, as shown in FIGS. 1 and 5, specifically comprising:
step 1) defining a basic mouth shape action set for the 2D character;
step 2) defining a weight change curve for each element (basic mouth shape action) in the basic mouth shape action set, i.e. the rule by which the weight of each basic mouth shape action changes over time;
step 3) designing the mouth shape animations corresponding to different phonemes (a speech signal can be converted into a sequence of pronunciation units, which are called phonemes);
step 4) acquiring the phoneme sequence corresponding to the input audio stream;
step 5) mapping each phoneme in the phoneme sequence to the corresponding mouth shape animation;
step 6) splicing the mouth shape animations to obtain the complete mouth shape animation of the 2D character driven by the audio stream.
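The steps above can be sketched minimally as follows. The data layout is an assumption made for the sketch, not something stated in the patent: a phoneme is a label string, a clip is a list of frames, and the clip library from step 3 is a plain dictionary.

```python
# Minimal sketch of steps 4-6: phoneme sequence -> clips -> spliced animation.
# A phoneme is a label string, a clip is a list of frames, and the clip
# library (built in step 3) maps phoneme labels to pre-designed clips.

def splice(clips):
    """Step 6: concatenate the mapped clips along the time axis."""
    animation = []
    for clip in clips:
        animation.extend(clip)
    return animation

def drive_mouth_animation(phoneme_sequence, clip_library):
    """Step 5 (simplified) and step 6: look up each phoneme's clip, then splice."""
    clips = [clip_library[p] for p in phoneme_sequence]
    return splice(clips)

# Hypothetical two-phoneme library; frames are stand-in labels.
library = {"a": ["a1", "a2"], "b": ["b1", "b2"]}
animation = drive_mouth_animation(["a", "b"], library)
```

A real implementation would replace the direct lookup with the stage-aware mapping of step 5.2 below; this sketch only shows the overall data flow.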
The present invention first classifies phonemes before defining the basic mouth shape action set of the 2D character. The TIMIT speech database is commonly used in current acoustic-phonetic research; it contains broadband recordings of 630 speakers covering eight major dialects of American English, with 10 phonetically rich sentences per speaker, and it defines 46 different phonemes. When designing the mouth shape animation for each phoneme, a designer has to consider every possible successor phoneme, so the 46 phonemes would require roughly 2,000 different mouth shape animation segments, which is undoubtedly a very heavy workload. To reduce the number of animations to design, the invention groups phonemes with similar mouth shape actions into one class, reducing the 46 phonemes to 16 phoneme classes; the designer then only needs to design mouth shape animations for these 16 classes. This greatly reduces the workload, speeds up the subsequent mouth shape animation mapping, and thus greatly improves the generation speed of the complete 2D character mouth shape animation.
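The workload argument can be checked with a quick sketch. The actual 46-to-16 class assignments are not given in this text, so the groups below are invented examples of the stated principle that phonemes with similar mouth actions share a class:

```python
# Hypothetical phoneme-to-class table illustrating the 46 -> 16 reduction;
# the real class assignments are not specified in this text.
PHONEME_CLASS = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",  # lips closed
    "f": "labiodental", "v": "labiodental",             # lower lip to teeth
    "aa": "open", "ah": "open",                         # jaw open
}

def design_workload(inventory_size):
    """Each clip depends on a (current, successor) phoneme pair, so the
    number of clips to design scales with the square of the inventory."""
    return inventory_size * inventory_size

full = design_workload(46)      # 2116 clips -- the "roughly 2,000 segments" above
reduced = design_workload(16)   # 256 clips after classification
```

The quadratic count is why the classification step matters: it shrinks the design space by nearly an order of magnitude.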
In step 1), six mouth shape actions preset in the FaceGen face-modeling software are selected as the basic mouth shape actions of the 2D character to form the basic mouth shape action set.
A segment of mouth shape animation is determined by two phonemes: if the phoneme being pronounced in the segment is a and its successor is b, the segment can be denoted A(a, b). Each segment consists of six weight change curves, each representing how one of the six basic mouth shape actions varies along the time axis within that segment. The weight change curve corresponding to each element of the basic mouth shape action set can be expressed by the following formula:

A(a, b) = { w_i^{(a,b)}(t) | i = 1, 2, …, 6 }

in the above formula, i represents the i-th element in the basic mouth shape action set;
"6" represents the number of elements;
w_i^{(a,b)}(t) represents the weight change curve corresponding to the i-th basic mouth shape action at time t;
the phoneme b is the successor of the current phoneme a, and the phoneme pair (a, b) corresponds to one segment of mouth shape animation.
To guarantee the fluency of the 2D character mouth shape animation and improve its realism, the duration of each mouth shape animation segment is preferably 100 ms.
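Under the weight-curve representation described above, a clip A(a, b) can be stored as six curves sampled over its 100 ms duration, with each output frame a weighted blend of the six basic actions. In the sketch below, the curve shapes and the scalar stand-ins for mouth poses are invented for illustration:

```python
# Sketch: a frame of A(a, b) at time t is the six basic mouth actions
# blended by their weight curves w_i(t). Here a "pose" is a plain number
# standing in for a real 2D mouth shape, and the curves are placeholders.

def frame_at(weight_curves, basic_actions, t):
    """Blend the six basic actions with their weights at time t (in ms)."""
    return sum(w(t) * pose for w, pose in zip(weight_curves, basic_actions))

# Hypothetical clip: action 0 ramps up linearly over the 100 ms segment,
# while the other five actions stay at zero weight.
curves = [lambda t: t / 100.0] + [lambda t: 0.0] * 5
poses = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]

start = frame_at(curves, poses, 0)
middle = frame_at(curves, poses, 50)
end = frame_at(curves, poses, 100)
```

In practice each pose would be a full mouth shape (e.g. a set of control points or a morph target from FaceGen), and the blend would be applied per component.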
When a person produces a sound, the lips first move quickly into the corresponding position and then hold that position while the sound is produced. Based on this characteristic, the invention divides the articulation process of the 2D character into two stages: a stable stage and a change stage. In the stable stage, the lips hold the action corresponding to the phoneme currently being pronounced. In the change stage, the lip action transitions quickly to the action of the next phoneme. In general, the duration of the stable pronunciation stage is influenced by factors such as speaking rate, while the duration of the change stage is approximately constant.
Each mouth shape animation segment therefore comprises a stable pronunciation stage and a pronunciation change stage. The stable stage represents the mouth shape action of the current phoneme, and the change stage represents the transition from the current phoneme to its successor. Because the durations of the stable pronunciation stages are usually inconsistent, the mouth shape animation needs to be scaled to match the actual pronunciation durations so that the 2D character's mouth shape animation looks natural and lifelike. Accordingly, as shown in FIG. 2, the method of mapping the phoneme sequence onto the mouth shape animation in step 5) specifically comprises:
step 5.1) associating each phoneme in the phoneme sequence with the previously defined mouth shape animation;
step 5.2) judging whether the duration of the current phoneme in the pronunciation change stage is greater than or equal to a preset duration threshold (preferably 30-50 ms);
if so, scaling the stable-stage animation of the associated mouth shape animation according to the duration of the current phoneme in the stable pronunciation stage, and filling the change-stage animation of the associated mouth shape animation into the pronunciation change stage of the current phoneme;
if not, directly filling the change-stage animation of the associated mouth shape animation into the target position, so that the filled animation segment and the predecessor phoneme of the current phoneme overlap on the time axis.
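The branching logic of step 5.2 can be sketched as follows, under an assumed data layout: the clip's two stages are lists of frames, durations are in milliseconds, and nearest-frame resampling stands in for whatever interpolation the method actually uses.

```python
# Sketch of step 5.2: compare the phoneme's change-stage duration with the
# 30-50 ms threshold; either scale the stable stage to the phoneme's
# stable-pronunciation duration, or overlap the change stage with the
# predecessor phoneme on the time axis.

def map_phoneme_to_clip(stable_ms, change_ms, clip_stable, clip_change,
                        threshold_ms=40, clip_ms=100):
    """Returns (frames, overlap_ms); overlap_ms > 0 means the returned
    frames should overlap the predecessor phoneme on the time axis."""
    if change_ms >= threshold_ms:
        # Scale the stable stage to the phoneme's stable-pronunciation
        # duration (nearest-frame resampling as a simple stand-in).
        n = max(1, round(len(clip_stable) * stable_ms / clip_ms))
        scaled = [clip_stable[i * len(clip_stable) // n] for i in range(n)]
        return scaled + clip_change, 0
    # Change stage shorter than the threshold: fill only the change-stage
    # animation and let it overlap the predecessor phoneme.
    return list(clip_change), change_ms

frames, overlap = map_phoneme_to_clip(200, 60, ["s1", "s2"], ["c1"])
short, ov = map_phoneme_to_clip(200, 20, ["s1", "s2"], ["c1"])
```

In the first call the stable stage is stretched from two to four frames before the change stage is appended; in the second, only the change stage is used and it overlaps the predecessor by its 20 ms duration.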
In summary, through the predefined basic mouth shape action set and the weight change curve of each element in that set, the invention maps the phoneme sequence of the input audio stream onto mouth shape animation segments, splices these segments together, and finally outputs a 2D character mouth shape animation that is driven by the audio stream in real time with high fidelity and naturalness.
The invention also provides a mouth shape animation real-time driving system for the 2D character, capable of implementing the above real-time driving method. As shown in FIG. 3, the system comprises:
a mouth shape action definition module, used for providing a designer with a means of defining the basic mouth shape actions of the 2D character and forming the basic mouth shape action set; the six mouth shape actions preset in the FaceGen face-modeling software are preferably selected as the basic mouth shape actions of the 2D character;
a weight change curve definition module, used for providing the designer with a means of defining the weight change curve of each element in the basic mouth shape action set; the calculation and function of the weight change curve are described above in the real-time driving method and are not repeated here;
a mouth shape animation design module, used for providing the designer with a means of designing the mouth shape animations corresponding to different phonemes;
an audio stream acquisition module, used for acquiring an input audio stream in real time;
an audio stream conversion module, connected to the audio stream acquisition module and used for converting the input audio stream into the corresponding phoneme sequence;
a mouth shape animation mapping module, connected to the audio stream conversion module and the mouth shape animation design module respectively, and used for mapping the phoneme sequence onto the corresponding mouth shape animation;
and a mouth shape animation splicing module, connected to the mouth shape animation mapping module and used for splicing all the mouth shape animations and outputting the 2D character mouth shape animation driven by the audio stream.
As shown in FIG. 4, the mouth shape animation mapping module specifically comprises:
a mouth shape animation matching unit, used for matching each phoneme in the phoneme sequence with the previously defined mouth shape animation;
a duration threshold setting unit, used for providing the designer with a preset duration threshold;
a phoneme pronunciation change duration calculating unit, used for calculating the duration of the current phoneme in the pronunciation change stage;
a duration judging unit, connected to the duration threshold setting unit and the phoneme pronunciation change duration calculating unit respectively, and used for judging whether the duration of the current phoneme in the pronunciation change stage is greater than or equal to the preset duration threshold;
a scaling unit, connected to the duration judging unit and used for scaling the stable-stage animation of the mouth shape animation matched with the current phoneme according to the duration of the current phoneme in the stable pronunciation stage, when the duration of the current phoneme in the pronunciation change stage is judged to be greater than or equal to the preset duration threshold;
and an animation filling unit, connected to the scaling unit and used for filling the scaled and unscaled animations into the target positions, so that the filled animation segments and the predecessor phoneme of the current phoneme overlap on the time axis.
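The module decomposition above can be wired together as in the following sketch, where each module is reduced to a callable or a lookup. All names are illustrative rather than taken from the patent:

```python
# Sketch of the system wiring: the audio stream conversion module is a
# callable, the design module's output is a clip library, and the mapping
# and splicing modules are folded into one pass over the phoneme sequence.

class MouthAnimationSystem:
    def __init__(self, to_phonemes, clip_library):
        self.to_phonemes = to_phonemes      # audio stream conversion module
        self.clip_library = clip_library    # mouth shape animation design module

    def run(self, audio_stream):
        phonemes = self.to_phonemes(audio_stream)          # conversion module
        clips = [self.clip_library[p] for p in phonemes]   # mapping module
        return [f for clip in clips for f in clip]         # splicing module

# Toy stand-ins: "conversion" just splits a string into characters.
system = MouthAnimationSystem(lambda audio: list(audio),
                              {"a": ["A1", "A2"], "b": ["B1"]})
output = system.run("ab")
```

A real conversion module would wrap a speech recognizer emitting timed phonemes, and the mapping pass would apply the stage-aware logic of FIG. 4 rather than a direct lookup.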
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and illustrations of the technical principles applied. Those skilled in the art can make various modifications, equivalents, and changes to the present invention, and such variations fall within the scope of the invention as long as they do not depart from its spirit. In addition, certain terms used in the specification and claims of the present application are not limiting but are used merely for convenience of description.
Claims (7)
1. A mouth shape animation real-time driving method for a 2D character, characterized by comprising the following specific steps:
1) defining a basic mouth shape action set for the 2D character;
2) defining a weight change curve for each element in the basic mouth shape action set;
3) designing the mouth shape animations corresponding to different phonemes;
4) acquiring the phoneme sequence corresponding to an input audio stream;
5) mapping each phoneme in the phoneme sequence to the corresponding mouth shape animation;
6) splicing the mouth shape animations to obtain the complete mouth shape animation of the 2D character driven by the audio stream.
2. The mouth shape animation real-time driving method according to claim 1, wherein in step 1), six mouth shape actions preset in the FaceGen face-modeling software are selected as the basic mouth shape actions of the 2D character to form the basic mouth shape action set.
3. The mouth shape animation real-time driving method according to claim 1, wherein the weight change curve corresponding to each element in the basic mouth shape action set defined in step 2) can be expressed by the following formula:

A(a, b) = { w_i^{(a,b)}(t) | i = 1, 2, …, 6 }

in the above formula, i represents the i-th element in the basic mouth shape action set;
"6" represents the number of elements;
w_i^{(a,b)}(t) represents the weight change curve corresponding to the i-th basic mouth shape action at time t;
the phoneme b is the successor of the current phoneme a, and the phoneme pair (a, b) corresponds to one segment of mouth shape animation.
4. The method for driving mouth-shape animation in real time according to claim 3, wherein the duration of each segment of mouth-shape animation is 100 ms.
5. The mouth shape animation real-time driving method according to claim 1, wherein the method of mapping the phoneme sequence onto the mouth shape animation in step 5) specifically comprises:
5.1) associating each phoneme in the phoneme sequence with the previously defined mouth shape animation;
5.2) judging whether the duration of the current phoneme in the pronunciation change stage is greater than or equal to a preset duration threshold;
if so, scaling the stable-stage animation of the associated mouth shape animation according to the duration of the current phoneme in the stable pronunciation stage, and filling the change-stage animation of the associated mouth shape animation into the pronunciation change stage of the current phoneme;
if not, directly filling the change-stage animation of the associated mouth shape animation into the target position, so that the filled animation segment and the predecessor phoneme of the current phoneme overlap on the time axis.
6. A mouth shape animation real-time driving system for a 2D character, capable of implementing the mouth shape animation real-time driving method of any one of claims 1-5, characterized in that the system comprises:
a mouth shape action definition module, used for providing a designer with a means of defining the basic mouth shape actions of the 2D character and forming the basic mouth shape action set;
a weight change curve definition module, used for providing the designer with a means of defining the weight change curve of each element in the basic mouth shape action set;
a mouth shape animation design module, used for providing the designer with a means of designing the mouth shape animations corresponding to different phonemes;
an audio stream acquisition module, used for acquiring an input audio stream in real time;
an audio stream conversion module, connected to the audio stream acquisition module and used for converting the audio stream into the corresponding phoneme sequence;
a mouth shape animation mapping module, connected to the audio stream conversion module and the mouth shape animation design module respectively, and used for mapping the phoneme sequence onto the corresponding mouth shape animation;
and a mouth shape animation splicing module, connected to the mouth shape animation mapping module and used for splicing the mouth shape animations and outputting the 2D character mouth shape animation driven by the audio stream.
7. The mouth shape animation real-time driving system according to claim 6, wherein the mouth shape animation mapping module comprises:
a mouth shape animation matching unit, used for matching each phoneme in the phoneme sequence with the previously defined mouth shape animation;
a duration threshold setting unit, used for providing the designer with a preset duration threshold;
a phoneme pronunciation change duration calculating unit, used for calculating the duration of the current phoneme in the pronunciation change stage;
a duration judging unit, connected to the duration threshold setting unit and the phoneme pronunciation change duration calculating unit respectively, and used for judging whether the duration of the current phoneme in the pronunciation change stage is greater than or equal to the preset duration threshold;
a scaling unit, connected to the duration judging unit and used for scaling the stable-stage animation of the mouth shape animation matched with the current phoneme according to the duration of the current phoneme in the stable pronunciation stage, when the duration of the current phoneme in the pronunciation change stage is judged to be greater than or equal to the preset duration threshold;
and an animation filling unit, connected to the scaling unit and used for filling the scaled and unscaled animations into the corresponding target positions, so that the filled animation segments and the predecessor phoneme of the current phoneme overlap on the time axis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110188571.2A CN112734889A (en) | 2021-02-19 | 2021-02-19 | Mouth shape animation real-time driving method and system for 2D character |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112734889A true CN112734889A (en) | 2021-04-30 |
Family
ID=75596697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110188571.2A Pending CN112734889A (en) | 2021-02-19 | 2021-02-19 | Mouth shape animation real-time driving method and system for 2D character |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112734889A (en) |
2021-02-19: Application filed in China as CN202110188571.2A, published as CN112734889A; status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190027129A1 (en) * | 2017-07-18 | 2019-01-24 | Baidu Online Network Technology (Beijing) Co., Ltd | Method, apparatus, device and storage medium for switching voice role |
CN110853614A (en) * | 2018-08-03 | 2020-02-28 | Tcl集团股份有限公司 | Virtual object mouth shape driving method and device and terminal equipment |
CN109377540A (en) * | 2018-09-30 | 2019-02-22 | 网易(杭州)网络有限公司 | Synthetic method, device, storage medium, processor and the terminal of FA Facial Animation |
CN111260761A (en) * | 2020-01-15 | 2020-06-09 | 北京猿力未来科技有限公司 | Method and device for generating mouth shape of animation character |
CN111915707A (en) * | 2020-07-01 | 2020-11-10 | 天津洪恩完美未来教育科技有限公司 | Mouth shape animation display method and device based on audio information and storage medium |
Non-Patent Citations (1)
Title |
---|
范鑫鑫 (FAN Xinxin) et al.: "Speech-Driven Mouth Shape Synchronization Algorithm", Journal of Donghua University (Natural Science Edition) (《东华大学学报 自然科学版》) * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113539240A (en) * | 2021-07-19 | 2021-10-22 | 北京沃东天骏信息技术有限公司 | Animation generation method and device, electronic equipment and storage medium |
CN113870396A (en) * | 2021-10-11 | 2021-12-31 | 北京字跳网络技术有限公司 | Mouth-shaped animation generation method and device, computer equipment and storage medium |
CN113870396B (en) * | 2021-10-11 | 2023-08-15 | 北京字跳网络技术有限公司 | Mouth shape animation generation method and device, computer equipment and storage medium |
CN114359450A (en) * | 2022-01-17 | 2022-04-15 | 小哆智能科技(北京)有限公司 | Method and device for simulating virtual character speaking |
WO2024027307A1 (en) * | 2022-08-04 | 2024-02-08 | 腾讯科技(深圳)有限公司 | Method and apparatus for generating mouth-shape animation, device, and medium |
CN116863046A (en) * | 2023-07-07 | 2023-10-10 | 广东明星创意动画有限公司 | Virtual mouth shape generation method, device, equipment and storage medium |
CN116863046B (en) * | 2023-07-07 | 2024-03-19 | 广东明星创意动画有限公司 | Virtual mouth shape generation method, device, equipment and storage medium |
CN116721191A (en) * | 2023-08-09 | 2023-09-08 | 腾讯科技(深圳)有限公司 | Method, device and storage medium for processing mouth-shaped animation |
CN116721191B (en) * | 2023-08-09 | 2024-02-02 | 腾讯科技(深圳)有限公司 | Method, device and storage medium for processing mouth-shaped animation |
CN116912376A (en) * | 2023-09-14 | 2023-10-20 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for generating mouth-shape cartoon |
CN116912376B (en) * | 2023-09-14 | 2023-12-22 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for generating mouth-shape cartoon |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112734889A (en) | Mouth shape animation real-time driving method and system for 2D character | |
CN108447474B (en) | Modeling and control method for synchronizing virtual character voice and mouth shape | |
CN111145322B (en) | Method, apparatus, and computer-readable storage medium for driving avatar | |
US8224652B2 (en) | Speech and text driven HMM-based body animation synthesis | |
JP3664474B2 (en) | Language-transparent synthesis of visual speech | |
US20020024519A1 (en) | System and method for producing three-dimensional moving picture authoring tool supporting synthesis of motion, facial expression, lip synchronizing and lip synchronized voice of three-dimensional character | |
EP1269465B1 (en) | Character animation | |
KR102116309B1 (en) | Synchronization animation output system of virtual characters and text | |
JP2518683B2 (en) | Image combining method and apparatus thereof | |
CN110880315A (en) | Personalized voice and video generation system based on phoneme posterior probability | |
KR20120130627A (en) | Apparatus and method for generating animation using avatar | |
GB2516965A (en) | Synthetic audiovisual storyteller | |
CN113538641A (en) | Animation generation method and device, storage medium and electronic equipment | |
CN111145777A (en) | Virtual image display method and device, electronic equipment and storage medium | |
CN114895817B (en) | Interactive information processing method, network model training method and device | |
CN113383384A (en) | Real-time generation of speech animation | |
CN113077537A (en) | Video generation method, storage medium and equipment | |
KR20110081364A (en) | Method and system for providing a speech and expression of emotion in 3d charactor | |
CN113609255A (en) | Method, system and storage medium for generating facial animation | |
CN112002301A (en) | Text-based automatic video generation method | |
CN114255737B (en) | Voice generation method and device and electronic equipment | |
Tang et al. | Humanoid audio–visual avatar with emotive text-to-speech synthesis | |
Tang et al. | Real-time conversion from a single 2D face image to a 3D text-driven emotive audio-visual avatar | |
JP2003058908A (en) | Method and device for controlling face image, computer program and recording medium | |
Pan et al. | Vocal: Vowel and consonant layering for expressive animator-centric singing animation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210430 ||