CN110796718A - Mouth-type switching rendering method, system, device and storage medium - Google Patents

Mouth-type switching rendering method, system, device and storage medium

Info

Publication number
CN110796718A
Authority
CN
China
Prior art keywords
phoneme
curve
pinyin
mouth
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910846231.7A
Other languages
Chinese (zh)
Inventor
呼伦夫
陈炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lajin Zhongbo Technology Co., Ltd.
Original Assignee
Tianmai Juyuan (Hangzhou) Media Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmai Juyuan (Hangzhou) Media Technology Co., Ltd.
Priority to CN201910846231.7A
Publication of CN110796718A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/205 3D [Three Dimensional] animation driven by audio data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The invention discloses a mouth-type switching rendering method, system, device and storage medium. The method comprises the following steps: acquiring a Chinese text and parsing it to obtain the pinyin of each Chinese character in the text; disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin; fusing the phoneme arrays by using preset fusion curves to obtain a mixed curve; and driving the change of the mouth shape according to the mixed curve, thereby rendering different mouth shapes. The invention disassembles the pinyin of Chinese characters into phoneme arrays, combines the phoneme arrays with fusion curves to generate a mixed curve that is continuous in time, and uses the mixed curve to drive the mouth-shape change, so that mouth-shape switching is smoother. The invention can be widely applied in the field of facial expression animation research.

Description

Mouth-type switching rendering method, system, device and storage medium
Technical Field
The invention relates to the field of facial expression animation research, and in particular to a mouth-type switching rendering method, system, device and storage medium.
Background
With the rapid development of computer and animation technology, virtual characters are used more and more widely. Beyond the traditional animation and film industry, the news broadcasting industry now also uses virtual animated characters, for example to broadcast news through a virtual anchor. Because the development of Chinese mouth-shape animation is still relatively coarse, when a current virtual character "speaks", the mouth shapes are mainly rendered one by one in a fixed pattern: the mouth shape cannot be controlled smoothly and effectively according to the speech being played, and different pronunciations may even produce the same mouth shape. This gives viewers a poor watching experience and increasingly fails to meet the requirements placed on virtual characters' mouth shapes.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a mouth-type switching rendering method, system, apparatus, and storage medium that control mouth-shape switching rendering more smoothly and efficiently.
The first technical scheme adopted by the invention is as follows:
a mouth-type switching rendering method comprises the following steps:
acquiring a Chinese text, and analyzing the Chinese text to obtain the pinyin of each Chinese character in the Chinese text;
disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
fusing the phoneme arrays by adopting a preset fusion curve, and obtaining a mixed curve;
and driving the change of the mouth shape according to the mixed curve so as to render different mouth shapes.
Further, the phoneme array comprises an initial and a final, and the step of disassembling the pinyin of each Chinese character to obtain the phoneme array corresponding to the pinyin specifically comprises the following steps:
sequentially disassembling the pinyin of each Chinese character into an initial and a final, and assigning time weights to the initial and the final to obtain a phoneme array corresponding to the pinyin;
and ordering the phoneme arrays according to the order of the Chinese characters to obtain a phoneme sequence.
Further, the step of sequentially disassembling the pinyin of each Chinese character into initials and finals specifically comprises the following steps:
sequentially acquiring first letters of pinyin, and detecting whether initial consonants exist or not by combining the first letters and a preset initial consonant table;
if an initial exists, traversing the initial table according to the pinyin to obtain the initial and its type, and the final and its type;
if no initial exists, obtaining the final and its type according to the pinyin.
Further, the step of fusing the phoneme array by using a preset fusion curve and obtaining a mixing curve specifically includes the following steps:
acquiring an initial curve according to the type of the initial and acquiring a final curve according to the type of the final;
fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
and fusing the phoneme curves of the adjacent phoneme arrays on the phoneme sequence to obtain a mixed curve.
Further, the step of driving the change of the mouth shape according to the mixed curve so as to render different mouth shapes specifically comprises the following steps:
analyzing the mixed curve to obtain a continuous driving value;
and sequentially combining the driving values with a preset mouth-shape model to drive the change of the mouth shape, thereby rendering different mouth shapes.
The second technical scheme adopted by the invention is as follows:
a mouth-switching rendering system, comprising:
the pinyin conversion module is used for acquiring a Chinese text, and analyzing the Chinese text to acquire the pinyin of each Chinese character in the Chinese text;
the pinyin disassembling module is used for disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
the phoneme mixing module is used for fusing the phoneme arrays by adopting a preset fusion curve and obtaining a mixing curve;
and the mouth shape driving module is used for driving the change of the mouth shape according to the mixed curve so as to render different mouth shapes.
Further, the pinyin disassembling module comprises a pinyin disassembling unit and a phoneme combining unit;
the pinyin disassembling unit is used for sequentially disassembling the pinyin of each Chinese character into initials and finals, and distributing time weights to the initials and the finals to obtain a phoneme array corresponding to the pinyin;
the phoneme combination unit is used for sequencing the phoneme arrays according to the sequence of the Chinese characters to obtain phoneme sequences.
Further, the phoneme mixing module comprises a curve obtaining unit, a curve fusing unit and a phoneme fusing unit;
the curve acquisition unit is used for acquiring an initial curve according to the type of the initial and acquiring a final curve according to the type of the final;
the curve fusion unit is used for fusing the initial curve and the final curve of the same phoneme array and obtaining a phoneme curve;
the phoneme fusion unit is used for fusing phoneme curves of adjacent phoneme arrays on the phoneme sequence to obtain a mixed curve.
The third technical scheme adopted by the invention is as follows:
a mouth-type switching rendering apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The fourth technical scheme adopted by the invention is as follows:
a storage medium having stored therein processor-executable instructions for performing the method as described above when executed by a processor.
The invention has the beneficial effects that: the invention disassembles the pinyin of Chinese characters into phoneme arrays, combines the phoneme arrays with fusion curves to generate a mixed curve that is continuous in time, and uses the mixed curve to drive the mouth-shape change, so that mouth-shape switching is smoother.
Drawings
FIG. 1 is a flowchart illustrating steps of a mouth-switching rendering method according to the present invention;
FIG. 2 is a block diagram of a mouth-switching rendering system according to the present invention;
FIG. 3 is a diagram illustrating rendering with continuous switching of the mouth shape in an exemplary embodiment;
FIG. 4 is a graph of the phonemes of a single Pinyin in an exemplary embodiment;
FIG. 5 is a diagram illustrating the fusion of phoneme curves in an exemplary embodiment.
Detailed Description
As shown in fig. 1, the present embodiment provides a mouth-type switching rendering method, including the following steps:
s1, obtaining a Chinese text, and analyzing the Chinese text to obtain the pinyin of each Chinese character in the Chinese text;
s2, disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
s3, fusing the phoneme arrays by adopting a preset fusion curve, and obtaining a mixed curve;
and S4, driving the change of the mouth shape according to the mixed curve, thereby rendering different mouth shapes.
When people speak, mouth shapes are connected in many cases. For example, when saying "I love you" (我爱你), the mouth stays open throughout and the lips never close: the pronunciation of "爱" (love) is produced starting from the mouth shape used to pronounce "我" (I). Therefore, when Chinese characters are spoken continuously, the change of the mouth shape is related not only to the pronunciation of the current Chinese character but also to the pronunciation of the previous one. Patent application No. 201410712164.7 discloses a mouth-shape animation synthesis method, which describes in detail how to generate corresponding mouth shapes under pinyin control, but that method cannot effectively solve the problem of smooth transitions between mouth shapes. The present embodiment therefore provides a method that controls the mouth shape more smoothly and effectively.
In this embodiment, a Chinese text is input and parsed to obtain the pinyin of each Chinese character. This step can be implemented with existing technology; specifically, the pinyin of each Chinese character may be obtained with an existing pinyin library. After the pinyin of each Chinese character is obtained, each syllable is disassembled to obtain the phoneme array of that character. For example, if the pinyin of "你好" (hello) is "NIHAO", the character "你" yields the phoneme array ("N", "I") and the character "好" yields ("H", "AO") or ("H", "A", "O"). Adjacent phoneme arrays are then fused with preset fusion curves; a fusion curve is a pre-designed curve, with one curve for each phoneme, whose abscissa is time and whose ordinate corresponds to different mouth shapes. After all phoneme arrays are fused, a continuous mixed curve is obtained, i.e. the mixed curve is the result of blending the fusion curves of the individual phoneme arrays over time. The mouth shape is then driven to change according to the mixed curve, so that different mouth shapes are rendered and the transitions are smoother; the driving of the rendering engine can be implemented with existing technology. Referring to fig. 3, the mouth shape changes smoothly, which avoids transitions that are too abrupt, or the unnatural effect of every Chinese character being rendered as one mouth opening and one mouth closing, both of which would reduce the audience's viewing experience.
Wherein the step S2 includes steps S11-S12:
S11, sequentially disassembling the pinyin of each Chinese character into an initial and a final, and assigning time weights to the initial and the final to obtain a phoneme array corresponding to the pinyin;
and S12, ordering the phoneme arrays according to the order of the Chinese characters to obtain the phoneme sequence.
In this embodiment, the pinyin is divided into two parts, an initial and a final. In Chinese pronunciation the initial and the final take different amounts of time: the initial is pronounced before the final, and the initial's pronunciation is shorter than the final's. Different time weights therefore need to be assigned to the initial and the final, finally yielding the phoneme array of each Chinese character. The phoneme arrays are then ordered according to the order of the Chinese characters in the Chinese text to obtain the phoneme sequence.
The step of sequentially disassembling the pinyin of each Chinese character into initials and finals in step S11 specifically comprises steps A1-A3:
A1, sequentially acquiring the first letter of each pinyin syllable, and detecting whether an initial exists by checking the first letter against a preset initial table;
A2, if an initial exists, traversing the initial table according to the pinyin to obtain the initial and its type, and the final and its type;
A3, if no initial exists, obtaining the final and its type according to the pinyin.
In this embodiment, the initials may be divided into 7 types, "bmp", "f", "dtnl", "gkh", "jqx", "zcs", and "zhchshr", and the finals may be divided mainly into 5 types, "a", "o", "e", "i", and "u". The splitting itself can be done with existing technology, for example by referring to the method disclosed in patent application No. 201410712164.7, which is not repeated here.
Referring to fig. 4 and 5, the step S3 specifically includes steps S31 to S33:
s31, acquiring an initial curve according to the type of the initial and acquiring a final curve according to the type of the final;
s32, fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
and S33, fusing the phoneme curves of the adjacent phoneme arrays on the phoneme sequence to obtain a mixed curve.
Curve information for each type of initial and final is designed in advance, and once the corresponding types have been detected, the corresponding initial curve and final curve are obtained from a database. Referring to fig. 4, the initial curve and the final curve are fused into a phoneme curve in combination with the time weights of the initial and the final.
Referring to fig. 5, after the phoneme curve of each Chinese character is obtained, the phoneme curves of adjacent Chinese characters are blended, so that the mouth shape of the current Chinese character is controlled both by the mouth shape of the previous Chinese character and by the pronunciation of the current character. Specifically, the final curve of the previous Chinese character is blended with the initial curve of the current Chinese character; in this embodiment, a Lerp (linear interpolation) function is used to fuse adjacent phoneme curves.
Wherein the step S4 specifically includes steps S41 to S42:
s41, analyzing the mixed curve to obtain a continuous driving value;
and S42, sequentially combining the driving values and the preset mouth shape model to drive the change of the mouth shape, thereby rendering different mouth shapes.
In this embodiment, the mixed curve is combined with a rendering tool. A preset mouth-shape model database is provided in the rendering tool, and the mouth-shape model carries MorphTarget states; specifically, the mouth-shape model is driven to change through the MorphTargets, thereby rendering different mouth shapes.
As shown in fig. 2, the present embodiment further provides a mouth-type switching rendering system, including:
the pinyin conversion module is used for acquiring a Chinese text, and analyzing the Chinese text to acquire the pinyin of each Chinese character in the Chinese text;
the pinyin disassembling module is used for disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
the phoneme mixing module is used for fusing the phoneme arrays by adopting a preset fusion curve and obtaining a mixing curve;
and the mouth shape driving module is used for driving the change of the mouth shape according to the mixed curve so as to render different mouth shapes.
Further as a preferred embodiment, the pinyin disassembling module comprises a pinyin disassembling unit and a phoneme combining unit;
the pinyin disassembling unit is used for sequentially disassembling the pinyin of each Chinese character into initials and finals, and distributing time weights to the initials and the finals to obtain a phoneme array corresponding to the pinyin;
the phoneme combination unit is used for sequencing the phoneme arrays according to the sequence of the Chinese characters to obtain phoneme sequences.
Further as a preferred embodiment, the phoneme mixing module includes a curve obtaining unit, a curve fusing unit and a phoneme fusing unit;
the curve acquisition unit is used for acquiring an initial curve according to the type of the initial and acquiring a final curve according to the type of the final;
the curve fusion unit is used for fusing the initial curve and the final curve of the same phoneme array and obtaining a phoneme curve;
the phoneme fusion unit is used for fusing phoneme curves of adjacent phoneme arrays on the phoneme sequence to obtain a mixed curve.
The mouth-type switching rendering system of the embodiment can execute the mouth-type switching rendering method provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
The present embodiment further provides a mouth-type switching rendering apparatus, including:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The mouth-type switching rendering device of this embodiment can execute the mouth-type switching rendering method provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has the corresponding functions and beneficial effects of the method.
The present embodiments also provide a storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method as described above.
The storage medium of this embodiment can execute the mouth-type switching rendering method provided by the method embodiment of the present invention, can execute any combination of the implementation steps of the method embodiment, and has the corresponding functions and advantages of the method.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A mouth-type switching rendering method is characterized by comprising the following steps:
acquiring a Chinese text, and analyzing the Chinese text to obtain the pinyin of each Chinese character in the Chinese text;
disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
fusing the phoneme arrays by adopting a preset fusion curve, and obtaining a mixed curve;
and driving the change of the mouth shape according to the mixed curve so as to render different mouth shapes.
2. The mouth-type switching rendering method of claim 1, wherein the phoneme array comprises an initial and a final, and the step of disassembling the pinyin of each Chinese character to obtain the phoneme array corresponding to the pinyin specifically comprises the following steps:
sequentially disassembling the pinyin of each Chinese character into an initial and a final, and assigning time weights to the initial and the final to obtain a phoneme array corresponding to the pinyin;
and sequencing the phoneme arrays according to the sequence of the Chinese characters to obtain a phoneme sequence.
3. The mouth-type switching rendering method of claim 2, wherein the step of sequentially splitting pinyin of each Chinese character into initials and finals specifically comprises the following steps:
sequentially acquiring first letters of pinyin, and detecting whether initial consonants exist or not by combining the first letters and a preset initial consonant table;
if an initial exists, traversing the initial table according to the pinyin to obtain the initial and its type, and the final and its type;
if no initial exists, obtaining the final and its type according to the pinyin.
4. The mouth-type switching rendering method according to claim 3, wherein the step of fusing the phoneme arrays by using a preset fusion curve and obtaining a mixing curve specifically comprises the following steps:
acquiring an initial curve according to the type of the initial and acquiring a final curve according to the type of the final;
fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
and fusing the phoneme curves of the adjacent phoneme arrays on the phoneme sequence to obtain a mixed curve.
5. The mouth-type switching rendering method according to claim 4, wherein the step of driving the mouth-type change according to the mixing curve to render different mouth-types comprises the following steps:
analyzing the mixed curve to obtain a continuous driving value;
and sequentially combining the driving values with a preset mouth-shape model to drive the change of the mouth shape, thereby rendering different mouth shapes.
6. A mouth-switching rendering system, comprising:
the pinyin conversion module is used for acquiring a Chinese text, and analyzing the Chinese text to acquire the pinyin of each Chinese character in the Chinese text;
the pinyin disassembling module is used for disassembling the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
the phoneme mixing module is used for fusing the phoneme arrays by adopting a preset fusion curve and obtaining a mixing curve;
and the mouth shape driving module is used for driving the change of the mouth shape according to the mixed curve so as to render different mouth shapes.
7. The mouth-switching rendering system of claim 6, wherein the pinyin disassembly module comprises a pinyin disassembly unit and a phoneme combination unit;
the pinyin disassembling unit is used for sequentially disassembling the pinyin of each Chinese character into initials and finals, and distributing time weights to the initials and the finals to obtain a phoneme array corresponding to the pinyin;
the phoneme combination unit is used for sequencing the phoneme arrays according to the sequence of the Chinese characters to obtain phoneme sequences.
8. The mouth-switched rendering system of claim 7, wherein the phoneme mixing module comprises a curve obtaining unit, a curve fusing unit and a phoneme fusing unit;
the curve acquisition unit is used for acquiring an initial curve according to the type of the initial and acquiring a final curve according to the type of the final;
the curve fusion unit is used for fusing the initial curve and the final curve of the same phoneme array and obtaining a phoneme curve;
the phoneme fusion unit is used for fusing phoneme curves of adjacent phoneme arrays on the phoneme sequence to obtain a mixed curve.
9. A mouth-type switching rendering apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the mouth-type switching rendering method as claimed in any one of claims 1-5.
10. A storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method of any one of claims 1-5.
CN201910846231.7A 2019-09-09 2019-09-09 Mouth-type switching rendering method, system, device and storage medium Pending CN110796718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846231.7A CN110796718A (en) 2019-09-09 2019-09-09 Mouth-type switching rendering method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910846231.7A CN110796718A (en) 2019-09-09 2019-09-09 Mouth-type switching rendering method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN110796718A (en) 2020-02-14

Family

ID=69427506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846231.7A Pending CN110796718A (en) 2019-09-09 2019-09-09 Mouth-type switching rendering method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN110796718A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348932A (en) * 2020-11-13 2021-02-09 广州博冠信息科技有限公司 Mouth shape animation recording method and device, electronic equipment and storage medium
CN113112575A (en) * 2021-04-08 2021-07-13 深圳市山水原创动漫文化有限公司 Mouth shape generation method and device, computer equipment and storage medium
CN113112575B (en) * 2021-04-08 2024-04-30 深圳市山水原创动漫文化有限公司 Mouth shape generating method and device, computer equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220929

Address after: Room 1602, 16th Floor, Building 18, Yard 6, Wenhuayuan West Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Applicant after: Beijing Lajin Zhongbo Technology Co.,Ltd.

Address before: 310000 room 650, building 3, No. 16, Zhuantang science and technology economic block, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Tianmai Juyuan (Hangzhou) Media Technology Co.,Ltd.