CN110782511A - Method, system, apparatus and storage medium for dynamically changing avatar - Google Patents

Method, system, apparatus and storage medium for dynamically changing avatar

Info

Publication number
CN110782511A
CN110782511A
Authority
CN
China
Prior art keywords
acquiring, rendering, information, clothing, curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910846147.5A
Other languages
Chinese (zh)
Inventor
呼伦夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lajin Zhongbo Technology Co ltd
Original Assignee
Tianmai Juyuan (hangzhou) Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmai Juyuan (Hangzhou) Media Technology Co., Ltd.
Priority to CN201910846147.5A
Publication of CN110782511A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/205 3D [Three Dimensional] animation driven by audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, a system, an apparatus, and a storage medium for dynamically changing the avatar of a virtual anchor, wherein the method comprises the following steps: acquiring input selection information, and obtaining a virtual anchor model according to the selection information; after manuscript information is obtained, parsing the manuscript information to obtain text features; combining the text features with a preset clothing database to obtain and store a plurality of pieces of clothing rendering data; and dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances. During video playback, the virtual anchor's clothing, appearance, and the like are changed dynamically according to the content of the manuscript, so that the virtual anchor's image matches and adapts to the manuscript content, giving viewers a sense of immersion and improving their viewing experience. The invention can be widely applied in the technical field of animation rendering.

Description

Method, system, apparatus and storage medium for dynamically changing avatar
Technical Field
The present invention relates to the field of animation rendering technologies, and in particular, to a method, a system, an apparatus, and a storage medium for dynamically changing an avatar.
Background
With the development of Internet technology and self-media, many video platforms and corresponding video applications have emerged, such as Toutiao, Xigua Video, and Douyin, along with many online influencers and self-media bloggers. Bloggers attract clicks and followers by producing videos and publishing them on these applications, for example film commentary videos or current-affairs commentary videos. To help bloggers produce videos, corresponding software products have appeared; for example, some automatically download matching pictures or video clips from the network according to a written manuscript, which saves users a large amount of time collecting picture or video material and improves video production efficiency. News can also be presented by a virtual host, but the prior art provides only a relatively monotonous virtual host image that remains unchanged throughout video playback, which easily reduces the viewing experience of the audience.
Disclosure of Invention
In order to solve the above technical problems, it is an object of the present invention to provide a method, system, apparatus, and storage medium capable of dynamically changing an avatar during video playback.
The first technical scheme adopted by the invention is as follows:
a method of dynamically changing an avatar, comprising the steps of:
acquiring input selection information, and obtaining a virtual anchor model according to the selection information;
after manuscript information is obtained, parsing the manuscript information to obtain text features;
combining the text features with a preset clothing database to obtain and store a plurality of pieces of clothing rendering data;
and dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances.
Further, the step of parsing the manuscript information and obtaining the text features after the manuscript information is obtained specifically includes the following steps:
identifying noun words in the manuscript information, and counting the number of occurrences of each noun word;
and obtaining a plurality of keywords according to the occurrence counts, matching and filtering the keywords against a preset word database, and taking the filtered keywords as the text features.
Further, the step of obtaining and storing a plurality of pieces of clothing rendering data by combining the text features with a preset clothing database specifically includes the following steps:
obtaining a corresponding clothing rendering template from the preset clothing database in turn according to each keyword in the text features;
and acquiring input selection information, and obtaining and storing the clothing rendering data by combining the clothing rendering template with the selection information.
Further, the step of dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances specifically includes the following steps:
after voice information corresponding to the manuscript information is obtained, sequentially recognizing the words in the voice information;
when a recognized word is detected to be a keyword in the text features, obtaining the clothing rendering data corresponding to that keyword;
and rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances as the video plays.
Further, the method further comprises a step of switching the rendered mouth shape of the virtual anchor, which specifically comprises the following steps:
parsing the manuscript information to obtain the pinyin of each Chinese character in the manuscript information;
decomposing the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
fusing the phoneme arrays by using preset fusion curves to obtain a mixed curve;
and after voice information is generated from the manuscript information, combining the voice information with the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes.
Further, each phoneme array comprises an initial and a final, and the step of fusing the phoneme arrays by using preset fusion curves to obtain a mixed curve specifically comprises the following steps:
obtaining an initial curve according to the type of the initial, and obtaining a final curve according to the type of the final;
fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
and fusing the phoneme curves of adjacent phoneme arrays to obtain the mixed curve.
Further, the step of combining the voice information with the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes, specifically comprises the following steps:
parsing the mixed curve to obtain continuous driving values;
recognizing the words in the voice information, and matching the recognized words with the driving values;
and sequentially combining the driving values with a preset mouth-shape model to drive changes of the mouth shape, thereby rendering different mouth shapes.
The second technical scheme adopted by the invention is as follows:
a system for dynamically changing an avatar, comprising:
a selection input module for acquiring input selection information and obtaining a virtual anchor model according to the selection information;
a feature acquisition module for parsing the manuscript information after it is obtained and obtaining text features;
a clothing acquisition module for obtaining and storing a plurality of pieces of clothing rendering data by combining the text features with a preset clothing database;
and a clothing rendering module for dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances.
The third technical scheme adopted by the invention is as follows:
an apparatus for dynamically changing an avatar, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The fourth technical scheme adopted by the invention is as follows:
a storage medium having stored therein processor-executable instructions for performing the method as described above when executed by a processor.
The invention has the following beneficial effects: during video playback, the virtual anchor's clothing, appearance, and the like are changed dynamically according to the content of the manuscript, so that the virtual anchor's image matches and adapts to the manuscript content, giving viewers a sense of immersion and improving their viewing experience.
Drawings
FIG. 1 is a flow chart of the steps of a method of dynamically changing an avatar of the present invention;
FIG. 2 is a block diagram of the architecture of a system for dynamically changing an avatar of the present invention;
FIG. 3 is a schematic diagram of an avatar in an embodiment;
FIG. 4 is another illustration of an avatar in an embodiment;
FIG. 5 is a graph of the phonemes of a single Pinyin in an exemplary embodiment;
FIG. 6 is a diagram illustrating the merging of phoneme curves in an exemplary embodiment;
FIG. 7 is a diagram illustrating rendering with continuous switching of mouth shapes in an embodiment.
Detailed Description
As shown in fig. 1, the present embodiment provides a method for dynamically changing an avatar, comprising the steps of:
S1, acquiring input selection information, and obtaining a virtual anchor model according to the selection information;
S2, after manuscript information is obtained, parsing the manuscript information to obtain text features;
S3, obtaining and storing a plurality of pieces of clothing rendering data by combining the text features with a preset clothing database;
and S4, dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances.
In this embodiment, a user who is producing a video selects a virtual anchor for the video, for example a male or a female virtual anchor, and a preset virtual anchor model is obtained according to the selection. The virtual anchor model is an initial model; for example, no hair or outer clothing has been rendered onto it yet. The manuscript information is a plain-text file, which can be downloaded from the network (for example, news articles from major news websites such as Xinhuanet, People's Daily Online, ifeng.com, and Toutiao) or written by the user. After the manuscript information is input, it is parsed automatically and the text features in it are obtained; the text features can be keywords in the manuscript, key data items, and the like, and are identified with existing text recognition technology. After the text features are obtained, a plurality of pieces of clothing rendering data are obtained and stored by combining the text features with a preset clothing database. The clothing rendering data includes rendering data for hair, rendering data for clothes, and the like, and the clothing database stores preset hairstyle models, clothing models, and so on in different styles. Once the clothing rendering data has been obtained, the clothing, hairstyle, and so on of the virtual anchor are changed dynamically during subsequent video playback, so that the virtual anchor's image changes and the viewing experience improves. For example, when the virtual anchor reports sports news, its clothing is changed to a basketball kit while a basketball item is being reported; when a swimming item is mentioned, the virtual anchor's clothing automatically becomes a swimsuit together with a swimming cap and so on. This quickly gives viewers a sense of immersion and lets them absorb information more effectively, greatly improving their viewing experience. The dynamically changing avatar may also change according to set time points; for example, when a preset time point is detected, a change of the avatar is triggered.
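To make the four-step flow concrete, here is a minimal, self-contained Python sketch of S1 to S4. Every name (AnchorModel, extract_text_features, build_outfits, play) and data shape is an illustrative assumption rather than the patent's actual implementation, and the stubs are fleshed out in the per-step sketches that follow.

# Illustrative sketch of the S1-S4 flow; all names are assumptions.
from dataclasses import dataclass, field


@dataclass
class AnchorModel:
    # S1: initial virtual anchor model chosen by the user; no hair or outer
    # clothing has been rendered onto it yet.
    gender: str
    outfit: dict = field(default_factory=dict)


def extract_text_features(manuscript: str) -> list[str]:
    # S2 stub: parse the manuscript and return keyword features
    # (see the keyword-extraction sketch below).
    return []


def build_outfits(features: list[str], clothing_db: dict[str, dict]) -> dict[str, dict]:
    # S3: pre-compute and store clothing rendering data for each keyword
    # that the preset clothing database knows about.
    return {keyword: clothing_db[keyword] for keyword in features if keyword in clothing_db}


def play(model: AnchorModel, outfits: dict[str, dict], spoken_words: list[str]) -> None:
    # S4: as the manuscript is spoken, re-dress the anchor whenever a keyword occurs.
    for word in spoken_words:
        if word in outfits:
            model.outfit = outfits[word]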
The step S2 specifically includes steps S21 to S22:
S21, identifying noun words in the manuscript information, and counting the number of occurrences of each noun word;
and S22, obtaining a plurality of keywords according to the occurrence counts, matching and filtering the keywords against a preset word database, and taking the filtered keywords as the text features.
The noun words are the nouns appearing in the manuscript, and the word database stores a large number of general nouns, such as book, basketball, stone, tomato, military, circus, and so on. The number of occurrences of each noun word is counted, and the noun words that occur most often are taken as keywords; how many keywords are taken can be decided according to the actual situation. After the keywords are obtained, they are matched against the word database and filtered, much like checking lottery numbers against a winning list: when the word database contains the same word as a keyword, the keyword is kept, otherwise it is filtered out. For example, some proper nouns (such as names of people) are filtered out directly. The keywords that remain are taken as the text features. For a military manuscript, for instance, when nouns such as airplane and tank appear among the text features, corresponding air force uniforms, army uniforms, and the like are obtained. Referring to fig. 3, when the keywords of the manuscript information are mainly about 'movie', the anchor is switched to a more relaxed and fashionable image, with trendier clothing; referring to fig. 4, when the keywords of the manuscript information are mainly about 'weather', the anchor is switched to a more formal image, with more traditional and formal clothing.
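As a rough illustration of steps S21 and S22, the sketch below counts noun occurrences with a part-of-speech tagger and then filters against a preset noun vocabulary. jieba is used only as an example Chinese tokenizer, and the sample vocabulary entries are assumptions; the patent names neither a specific tool nor the contents of the word database.

from collections import Counter

import jieba.posseg as pseg  # example POS tagger; any noun recognizer would do

# Preset word database of general nouns (example entries only).
GENERAL_NOUNS = {"篮球", "游泳", "军事", "电影", "天气", "飞机", "坦克"}


def extract_text_features(manuscript: str, top_k: int = 5) -> list[str]:
    # S21: identify noun words and count how often each occurs.
    nouns = [pair.word for pair in pseg.cut(manuscript) if pair.flag.startswith("n")]
    most_common = [word for word, _ in Counter(nouns).most_common(top_k)]
    # S22: keep a keyword only if the preset word database contains it; this
    # drops proper nouns such as person names, as described above.
    return [word for word in most_common if word in GENERAL_NOUNS]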
The step S3 specifically includes steps S31 to S32:
S31, obtaining a corresponding clothing rendering template from the preset clothing database in turn according to each keyword in the text features;
and S32, acquiring the input selection information, and obtaining and storing the clothing rendering data by combining the clothing rendering template with the selection information.
Because some keywords correspond to several clothing rendering templates (for example, the noun 'airplane' corresponds to outfits such as a flight attendant uniform, a pilot uniform, and an air force uniform, and the noun 'detective' corresponds to outfits such as Sherlock Holmes style, Detective Conan style, and the like), only one outfit needs to be chosen, so the user makes a manual selection. A rendering template is a model image wearing the outfit; the user simply clicks the corresponding model image, and the corresponding clothing rendering data is then obtained and stored.
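Steps S31 and S32 then reduce to a keyword-to-template lookup followed by a manual pick. The sketch below assumes a dictionary-shaped clothing database and an index-based user selection; both the database layout and the template names are invented for illustration.

# Hypothetical clothing database: one keyword can map to several templates.
CLOTHING_DB: dict[str, list[str]] = {
    "飞机": ["flight_attendant_uniform", "pilot_uniform", "air_force_uniform"],
    "侦探": ["holmes_outfit", "conan_outfit"],
}


def choose_outfits(features: list[str], user_choice: dict[str, int]) -> dict[str, str]:
    # S31: fetch the candidate templates for each keyword in the text features.
    # S32: user_choice records which model image the user clicked per keyword;
    # the chosen template id stands in for the stored clothing rendering data.
    selected: dict[str, str] = {}
    for keyword in features:
        templates = CLOTHING_DB.get(keyword)
        if templates:
            selected[keyword] = templates[user_choice.get(keyword, 0)]
    return selected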
The step S4 specifically includes steps S41 to S43:
S41, after the voice information corresponding to the manuscript information is obtained, sequentially recognizing the words in the voice information;
S42, when a recognized word is detected to be a keyword in the text features, obtaining the clothing rendering data corresponding to that keyword;
and S43, rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances as the video plays.
During video playback, the words in the voice information are recognized; the speech-to-text recognition can be implemented with existing technology. When a recognized word is detected to be a keyword in the text features, the clothing rendering data corresponding to that keyword is obtained from the stored clothing rendering data. For example, when the keyword 'football' is recognized, the rendering data for a football kit is fetched from storage and rendered onto the virtual anchor model so that the virtual anchor appears as a football cheerleader; when the keyword 'swimming' is recognized, the rendering data for a swimsuit is fetched from storage and the virtual anchor's clothing rendering data is replaced, so that the virtual anchor appears as a swimmer. By watching the virtual anchor's changing image, viewers can quickly get into the news story, which greatly improves the quality of the news broadcast and the viewing experience of the audience.
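The playback behaviour of steps S41 to S43 can be sketched as a simple loop over the recognized words. Here recognized_words stands in for the output of whatever speech-recognition engine is used and render_outfit for the rendering backend, since the patent specifies neither.

from typing import Callable, Iterable


def drive_outfit_changes(
    recognized_words: Iterable[str],        # words streamed from speech recognition (S41)
    stored_outfits: dict[str, str],         # keyword -> stored clothing rendering data
    render_outfit: Callable[[str], None],   # pushes the data onto the virtual anchor model
) -> None:
    for word in recognized_words:
        # S42/S43: a keyword triggers re-rendering with the matching clothing data,
        # e.g. "足球" (football) switches the anchor into a football kit.
        if word in stored_outfits:
            render_outfit(stored_outfits[word])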
Further, as a preferred embodiment, the method further includes a step of switching the rendered mouth shape of the virtual anchor, which specifically includes steps A1 to A4:
A1, parsing the manuscript information to obtain the pinyin of each Chinese character in the manuscript information;
A2, decomposing the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
A3, fusing the phoneme arrays by using preset fusion curves to obtain a mixed curve;
and A4, after voice information is generated from the manuscript information, combining the voice information with the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes.
Each phoneme array comprises an initial and a final, and the step A3 specifically comprises steps A31 to A33:
A31, obtaining an initial curve according to the type of the initial, and obtaining a final curve according to the type of the final;
A32, fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
and A33, fusing the phoneme curves of adjacent phoneme arrays to obtain the mixed curve.
The step A4 specifically comprises steps A41 to A43:
A41, parsing the mixed curve to obtain continuous driving values;
A42, after the voice information is generated from the manuscript information, recognizing the words in the voice information and matching the recognized words with the driving values;
and A43, sequentially combining the driving values with a preset mouth-shape model to drive changes of the mouth shape, thereby rendering different mouth shapes.
Existing mouth-shape driving techniques for virtual characters mainly detect the time period in which speech occurs and drive the character's mouth to open and close during that period, so the mouth shape cannot match the words actually spoken, which reduces the viewing experience. In this embodiment, the pinyin of the Chinese characters in the manuscript is parsed, and a mouth shape matching the pinyin is obtained for each character. Because the mouth movement while speaking is also related to the pronunciation of the preceding character, this embodiment fuses adjacent characters by means of curve blending to make the mouth movement smoother, and the mouth shape for the character being pronounced is obtained from the fused curve. The technique for parsing the pinyin of Chinese characters can be implemented with existing technology; see, for example, the patent with application number 201410712164.7.
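Steps A1 and A2 (looking up each character's pinyin and splitting it into an initial and a final) might look like the sketch below; pypinyin is used only as one possible library for the decomposition, which the patent does not prescribe.

from pypinyin import Style, pinyin  # example pinyin library


def to_phoneme_arrays(manuscript: str) -> list[tuple[str, str]]:
    # A1/A2: for each Chinese character, return its (initial, final) phoneme pair.
    initials = pinyin(manuscript, style=Style.INITIALS, strict=False, errors="ignore")
    finals = pinyin(manuscript, style=Style.FINALS, strict=False, errors="ignore")
    # Characters such as 啊 have no initial; the empty string is kept for them.
    return [(ini[0], fin[0]) for ini, fin in zip(initials, finals)]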
Referring to fig. 5 and 6, the mouth shapes of adjacent Chinese characters are fused as follows. The initial is pronounced before the final, but its duration is shorter, so the initial and the final are assigned different time weights, and the corresponding initial curve and final curve are obtained from the database. Referring to fig. 5, the initial curve and the final curve are fused into a phoneme curve according to the time weights of the initial and the final. Referring to fig. 6, after the phoneme curve of each Chinese character is obtained, the phoneme curves of adjacent characters are blended, so that the mouth shape of the current character is controlled both by the mouth shape of the previous character and by the pronunciation of the current character. Specifically, the final curve of the previous character is blended with the initial curve of the current character, and in this embodiment a Lerp function is used to fuse adjacent phoneme curves. Referring to fig. 7, this makes the mouth shape change more smoothly, avoiding transitions that are too abrupt or an unnatural pattern in which every character is simply an open mouth followed by a closed mouth, and improves the viewing experience of the audience.
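The fusion itself (steps A31 to A33) could look roughly like the following sketch: each character's initial and final curves are joined with a time weight that favours the final, and the tail of the previous character's curve is cross-faded into the head of the next one with a Lerp. The curve representation, the weights, and the overlap length are all assumptions for illustration; the patent only requires that suitable preset curves exist per phoneme type.

import numpy as np


def lerp(a: float, b: float, t: float) -> float:
    # Linear interpolation, the "Lerp function" mentioned above.
    return a + (b - a) * t


def fuse_phoneme_curve(initial_curve: np.ndarray, final_curve: np.ndarray,
                       initial_weight: float = 0.3) -> np.ndarray:
    # A31/A32: the initial sounds first but is shorter, so it gets the smaller
    # time weight; here that is modelled by truncating its curve before
    # concatenating it with the final curve.
    n_initial = max(1, int(round(len(initial_curve) * initial_weight)))
    return np.concatenate([initial_curve[:n_initial], final_curve])


def blend_adjacent(prev_curve: np.ndarray, next_curve: np.ndarray,
                   overlap: int = 4) -> np.ndarray:
    # A33: cross-fade the previous character's tail into the next character's
    # head so the mouth shape transitions smoothly (both curves are assumed
    # to be at least overlap samples long).
    faded = np.array([
        lerp(prev_curve[-overlap + i], next_curve[i], (i + 1) / overlap)
        for i in range(overlap)
    ])
    return np.concatenate([prev_curve[:-overlap], faded, next_curve[overlap:]])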
As shown in fig. 2, the present embodiment further provides a system for dynamically changing an avatar, including:
a selection input module for acquiring input selection information and obtaining a virtual anchor model according to the selection information;
a feature acquisition module for parsing the manuscript information after it is obtained and obtaining text features;
a clothing acquisition module for obtaining and storing a plurality of pieces of clothing rendering data by combining the text features with a preset clothing database;
and a clothing rendering module for dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances.
The system for dynamically changing the avatar of a virtual anchor in this embodiment can execute the method for dynamically changing the avatar of a virtual anchor provided by the method embodiments of the present invention, can execute any combination of the implementation steps of the method embodiments, and has the corresponding functions and beneficial effects of the method.
The present embodiment further provides an apparatus for dynamically changing an avatar, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The apparatus for dynamically changing the avatar of a virtual anchor in this embodiment can execute the method for dynamically changing the avatar of a virtual anchor provided by the method embodiments of the present invention, can execute any combination of the implementation steps of the method embodiments, and has the corresponding functions and beneficial effects of the method.
The present embodiments also provide a storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method as described above.
The storage medium of this embodiment can execute the method for dynamically changing the avatar provided by the method embodiment of the present invention, can execute any combination of the implementation steps of the method embodiment, and has the corresponding functions and advantages of the method.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for dynamically changing an avatar, comprising the steps of:
acquiring input selection information, and obtaining a virtual anchor model according to the selection information;
after manuscript information is obtained, parsing the manuscript information to obtain text features;
combining the text features with a preset clothing database to obtain and store a plurality of pieces of clothing rendering data;
and dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances.
2. The method of claim 1, wherein the step of parsing the manuscript information and obtaining the text features after the manuscript information is obtained specifically comprises the following steps:
identifying noun words in the manuscript information, and counting the number of occurrences of each noun word;
and obtaining a plurality of keywords according to the occurrence counts, matching and filtering the keywords against a preset word database, and taking the filtered keywords as the text features.
3. The method of claim 2, wherein the step of obtaining and storing a plurality of pieces of clothing rendering data by combining the text features with a preset clothing database specifically comprises the following steps:
obtaining a corresponding clothing rendering template from the preset clothing database in turn according to each keyword in the text features;
and acquiring input selection information, and obtaining and storing the clothing rendering data by combining the clothing rendering template with the selection information.
4. The method of claim 3, wherein the step of dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances specifically comprises the following steps:
after voice information corresponding to the manuscript information is obtained, sequentially recognizing the words in the voice information;
when a recognized word is detected to be a keyword in the text features, obtaining the clothing rendering data corresponding to that keyword;
and rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances as the video plays.
5. The method of claim 1, further comprising a step of switching the rendered mouth shape of the virtual anchor, said step specifically comprising the following steps:
parsing the manuscript information to obtain the pinyin of each Chinese character in the manuscript information;
decomposing the pinyin of each Chinese character to obtain a phoneme array corresponding to the pinyin;
fusing the phoneme arrays by using preset fusion curves to obtain a mixed curve;
and after voice information is generated from the manuscript information, combining the voice information with the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes.
6. The method of claim 5, wherein each phoneme array comprises an initial and a final, and the step of fusing the phoneme arrays by using preset fusion curves to obtain a mixed curve specifically comprises the following steps:
obtaining an initial curve according to the type of the initial, and obtaining a final curve according to the type of the final;
fusing the initial curve and the final curve of the same phoneme array to obtain a phoneme curve;
and fusing the phoneme curves of adjacent phoneme arrays to obtain the mixed curve.
7. The method of claim 6, wherein the step of combining the voice information with the mixed curve to drive changes of the mouth shape, thereby rendering different mouth shapes, specifically comprises the following steps:
parsing the mixed curve to obtain continuous driving values;
recognizing the words in the voice information, and matching the recognized words with the driving values;
and sequentially combining the driving values with a preset mouth-shape model to drive changes of the mouth shape, thereby rendering different mouth shapes.
8. A system for dynamically changing an avatar, comprising:
a selection input module for acquiring input selection information and obtaining a virtual anchor model according to the selection information;
a feature acquisition module for parsing the manuscript information after it is obtained and obtaining text features;
a clothing acquisition module for obtaining and storing a plurality of pieces of clothing rendering data by combining the text features with a preset clothing database;
and a clothing rendering module for dynamically rendering the obtained clothing rendering data onto the virtual anchor model so that the virtual anchor presents different appearances.
9. An apparatus for dynamically changing an avatar, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method for dynamically changing an avatar according to any one of claims 1-7.
10. A storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method of any one of claims 1-7.
CN201910846147.5A 2019-09-09 2019-09-09 Method, system, apparatus and storage medium for dynamically changing avatar Pending CN110782511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846147.5A CN110782511A (en) 2019-09-09 2019-09-09 Method, system, apparatus and storage medium for dynamically changing avatar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910846147.5A CN110782511A (en) 2019-09-09 2019-09-09 Method, system, apparatus and storage medium for dynamically changing avatar

Publications (1)

Publication Number Publication Date
CN110782511A (en) 2020-02-11

Family

ID=69383399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846147.5A Pending CN110782511A (en) 2019-09-09 2019-09-09 Method, system, apparatus and storage medium for dynamically changing avatar

Country Status (1)

Country Link
CN (1) CN110782511A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112002005A (en) * 2020-08-25 2020-11-27 成都威爱新经济技术研究院有限公司 Cloud-based remote virtual collaborative host method
CN112601098A (en) * 2020-11-09 2021-04-02 北京达佳互联信息技术有限公司 Live broadcast interaction method and content recommendation method and device
CN112348932A (en) * 2020-11-13 2021-02-09 广州博冠信息科技有限公司 Mouth shape animation recording method and device, electronic equipment and storage medium
CN114170354A (en) * 2021-11-03 2022-03-11 完美世界(北京)软件科技发展有限公司 Virtual character clothing manufacturing method, device, equipment, program and readable medium
CN114170354B (en) * 2021-11-03 2022-08-26 完美世界(北京)软件科技发展有限公司 Virtual character clothing manufacturing method, device, equipment, program and readable medium
CN114979683A (en) * 2022-04-21 2022-08-30 澳克多普有限公司 Application method and system of multi-platform intelligent anchor
CN114974046A (en) * 2022-05-26 2022-08-30 河北三川科技有限公司 Method and system for realizing advertisement playing by utilizing virtual digital image
CN115129205A (en) * 2022-08-05 2022-09-30 华中师范大学 Course interaction method, system, server and storage medium based on virtual teacher

Similar Documents

Publication Publication Date Title
CN110782511A (en) Method, system, apparatus and storage medium for dynamically changing avatar
WO2021109652A1 (en) Method and apparatus for giving character virtual gift, device, and storage medium
CN109729426B (en) Method and device for generating video cover image
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
CN110830852B (en) Video content processing method and device
CN108182232B (en) Personage's methods of exhibiting, electronic equipment and computer storage media based on e-book
CN111241340B (en) Video tag determining method, device, terminal and storage medium
WO2022134698A1 (en) Video processing method and device
CN110784662A (en) Method, system, device and storage medium for replacing video background
CN105512178B (en) A kind of entity recommended method and device
CN112188304A (en) Video generation method, device, terminal and storage medium
CN110781346A (en) News production method, system, device and storage medium based on virtual image
CN108563731A (en) A kind of sensibility classification method and device
CN112102157A (en) Video face changing method, electronic device and computer readable storage medium
US10120539B2 (en) Method and device for setting user interface
CN110781327B (en) Image searching method and device, terminal equipment and storage medium
WO2021184153A1 (en) Summary video generation method and device, and server
CN106909547B (en) Picture loading method and device based on browser
KR101804679B1 (en) Apparatus and method of developing multimedia contents based on story
CN117177025A (en) Video generation method, device, equipment and storage medium
KR101576094B1 (en) System and method for adding caption using animation
Khan et al. A framework for creating natural language descriptions of video streams
CN110796718A (en) Mouth-type switching rendering method, system, device and storage medium
US20220383907A1 (en) Method for processing video, method for playing video, and electronic device
CN111160051B (en) Data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221009

Address after: Room 1602, 16th Floor, Building 18, Yard 6, Wenhuayuan West Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Applicant after: Beijing Lajin Zhongbo Technology Co.,Ltd.

Address before: 310000 room 650, building 3, No. 16, Zhuantang science and technology economic block, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Tianmai Juyuan (Hangzhou) Media Technology Co.,Ltd.