CN114201043A - Content interaction method, device, equipment and medium - Google Patents

Content interaction method, device, equipment and medium

Info

Publication number
CN114201043A
Authority
CN
China
Prior art keywords
input content
user input
voice file
text
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111498671.1A
Other languages
Chinese (zh)
Inventor
付钰
李鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111498671.1A priority Critical patent/CN114201043A/en
Publication of CN114201043A publication Critical patent/CN114201043A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/08Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A content interaction method, a content interaction device, a computer device, a storage medium and a computer program product relate to the field of artificial intelligence, in particular to the fields of augmented/virtual reality technology and human-computer interaction technology. The method comprises the following steps: sending the acquired user input content to a server; determining at least one base expression identifier associated with the user input content; sending the at least one base expression identifier to the server; receiving, from the server, a voice file associated with the user input content and at least one set of control coefficients corresponding to the at least one base expression identifier; and generating, with a rendering engine, a virtual character based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.

Description

Content interaction method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computer technologies, in particular to artificial intelligence (AI), and more particularly to a content interaction method, apparatus, computer device, storage medium, and computer program product.
Background
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning), covering technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph, and augmented reality technologies.
In recent years, with the rapid development of artificial intelligence technology, human-computer interaction technology has been widely applied in daily life. Most human-computer interaction is based on key presses, touch, and voice input, with responses presented as images, text, or virtual characters on a display screen. However, existing virtual characters rely on rendering schemes at the mobile terminal or in the cloud, while intelligent customer-service dialogue systems on the web side offer only a dialogue function and lack a virtual character to interact with the user, so the service mode is formulaic and inflexible, and the user experience is poor.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product for content interaction.
According to an aspect of the present disclosure, there is provided a method of content interaction, the method including: sending the acquired user input content to a server; determining at least one base expression identifier associated with the user input content, the at least one base expression identifier identifying at least one base expression of the virtual character; sending the at least one base expression identifier to the server; receiving a voice file associated with the user input content from the server and at least one set of control coefficients corresponding to the at least one base expression identifier, the voice file being generated by performing text-to-speech conversion by the server, the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file by the server for controlling facial motion of the virtual character on the basis of the at least one base expression; and generating, with a rendering engine, a virtual character based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to another aspect of the present disclosure, there is provided a method for enabling content interaction at a client, including: receiving user input content acquired by the client; receiving at least one base expression identifier associated with the user input content determined by the client, the at least one base expression identifier identifying at least one base expression of the virtual character; generating a voice file associated with the user input content and at least one set of control coefficients corresponding to the at least one base expression identifier, the voice file being generated by performing text-to-speech conversion, the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file for controlling facial motion of the virtual character on the basis of the at least one base expression; and sending the voice file and the at least one set of control coefficients to the client to enable the client to generate a virtual character with the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to another aspect of the present disclosure, there is provided an apparatus for content interaction, the apparatus including: a first unit configured to send the acquired user input content to a server; a second unit configured to determine at least one base expression identifier associated with the user input content, the at least one base expression identifier identifying at least one base expression of the virtual character; a third unit configured to send the at least one base expression identifier to the server; a fourth unit configured to send the response text and an animation component name corresponding to the response text to the server; a fifth unit configured to receive a voice file associated with the user input content from the server, the voice file being generated by performing text-to-speech conversion by the server, and at least one set of control coefficients corresponding to the at least one base expression identifier, the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file by the server for controlling a facial motion of the virtual character on the basis of the at least one base expression; and a sixth unit configured to generate a virtual character using the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to another aspect of the present disclosure, there is provided an apparatus for enabling content interaction at a client, the apparatus comprising: a seventh unit configured to receive the user input content acquired by the client; an eighth unit configured to receive at least one base expression identifier associated with the user input content determined by the client, the at least one base expression identifier identifying at least one base expression of the virtual character; a ninth unit configured to generate a voice file associated with the user input content, the voice file being generated by performing text-to-speech conversion, and at least one set of control coefficients corresponding to the at least one base expression identifier, the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file for controlling a facial motion of the virtual character on the basis of the at least one base expression; and a tenth unit configured to transmit the voice file and the at least one set of control coefficients to the client to enable the client to generate a virtual character using the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method of content interaction or a method of enabling content interaction at a client as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method of content interaction or a method of enabling a client to perform content interaction.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described method of content interaction or a method of enabling content interaction at a client.
The method and the device of the embodiment of the disclosure are suitable for various operating systems and various browsers, do not need downloading and installation, are light in weight and easy to use, and improve the experience of users.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of those embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a method of content interaction, in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram of a process for rendering a virtual character using a rendering engine in the method of FIG. 2, in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of a method of enabling content interaction by a client in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a flow diagram of a method of enabling content interaction by a client in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates a flow diagram of a method of enabling content interaction by a client in accordance with an embodiment of the present disclosure;
FIG. 7 shows a flow diagram of an interaction process between a client and a server according to an embodiment of the present disclosure;
FIG. 8 shows a flow diagram of an interaction process between a client and a server according to an embodiment of the present disclosure;
FIG. 9 shows a block diagram of an apparatus for content interaction, according to an embodiment of the present disclosure;
FIG. 10 shows a block diagram of an apparatus for enabling content interaction at a client, in accordance with an embodiment of the present disclosure;
FIG. 11 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
The man-machine interaction mode is mostly based on key pressing, touch and voice input, with responses presented as images, text, or virtual characters on a display screen. In the prior art, rendering is performed using the native application engines of the Android and iOS systems; these engines are limited by the operating system and must be downloaded and installed, and the installation process imposes a high usage cost on users.
To solve the above problems, the inventors propose a virtual character rendering scheme. The scheme does not need to use a rendering engine of a native application to render the virtual character, is suitable for various operating systems and various browsers, and does not need downloading and installation. By way of example, such a scheme may be applied in, for example, a web-side or applet, but is not limited to a web-side or applet.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In an embodiment of the present disclosure, the server 120 may run one or more services or software applications that enable execution of a method of analyzing input information of a user and generating information used to render virtual character images and actions based on the input information of the user.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use the client device 101, 102, 103, 104, 105, and/or 106 to receive the user's input and forward it to the server 120, receive the audio information sent by the server and the information defining how to render the virtual character after the user's input is processed by the server 120, and render based on this information, outputting the virtual character and its actions to the user. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptops), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various Mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system and addresses the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store data to, update data in, and retrieve data from the database in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good customs.
According to an aspect of the present disclosure, a method 200 of content interaction is provided, and the method 200 may be applied to any of the client devices 101, 102, 103, 104, 105, and 106 in fig. 1. As shown in fig. 2, the method 200 may include the steps of:
in step 201, the acquired user input content is sent to a server.
In one example, a user enters a desired question on a web page or applet user interface, and the client device forwards the question to the server 120 via a network protocol after receiving the question queried by the user.
In one example, the user input content may be content input by a user into a client device. In the client device, a broadcast function is executed. The broadcast function will be described later in detail with reference to fig. 7.
In another example, after the user inputs the content into the client device, the client device forwards the content input by the user to the server 120 for processing, and the server 120 feeds back the response text to the user device. In the client device, a voice input dialog function or a text input dialog function is performed. The voice input dialog function or the text input dialog function will be described later in detail with reference to fig. 8.
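To make these two client-side entry points concrete, the following TypeScript sketch shows how a web client might forward user input to the server over a WebSocket connection (the protocol mentioned later in connection with fig. 7). The message shape, the field names `mode` and `payload`, and the mode labels are illustrative assumptions, not part of the disclosure.

```typescript
// Hypothetical client-side dispatch between the broadcast function and the
// dialog functions; none of these names come from the patent itself.
type InteractionMode = "broadcast" | "textDialog" | "voiceDialog";

interface UserInputMessage {
  mode: InteractionMode;
  payload: string; // raw text, or base64-encoded audio for the voice dialog function
}

function sendUserInput(socket: WebSocket, mode: InteractionMode, payload: string): void {
  const message: UserInputMessage = { mode, payload };
  // Step 201: forward the acquired user input content to the server.
  socket.send(JSON.stringify(message));
}

// Example usage:
// sendUserInput(socket, "textDialog", "What day of the week is it today?");
```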
In step 202, at least one base expression identifier associated with the user input content is determined, the at least one base expression identifier identifying at least one base expression of the virtual character.
According to some embodiments, user input content is analyzed to determine at least one base expression identifier corresponding to the user input content.
Taking an expressive face as an example, the face can be decomposed into two parts, one of which is the base expressions (the other being the control coefficients applied to them, described below). A base expression captures the essence of a facial expression and is what distinguishes one facial expression from another. In one example, a base expression may be blinking, mouth opening, and so on.
According to some embodiments, the user input content is text input content.
According to some embodiments, response text is received from the server, the response text being generated by the server by determining response content for the text input content.
According to some embodiments, at least one base expression identifier corresponding to the answer text is determined.
According to some embodiments, the user input content is speech input content.
According to some embodiments, response text is received from a server, the response text is generated by the server by converting speech input content to text input content, and by determining response content for the text input content.
According to some embodiments, at least one base expression identifier corresponding to the answer text is determined.
In one example, the user's input is analyzed to determine which base expressions it requires, and the identifiers of those base expressions are collected.
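As an illustration of step 202, the sketch below maps a piece of text to base expression identifiers with a simple keyword table. The table contents and identifier names are hypothetical; the disclosure does not specify how this association is computed.

```typescript
// Purely illustrative keyword-to-identifier table; a real system could use any
// analysis of the text (or of the response text) to pick base expressions.
const BASE_EXPRESSION_KEYWORDS: Record<string, string[]> = {
  smile: ["glad", "great", "welcome"],
  blink: ["hello", "hi"],
  mouthOpen: ["wow", "really"],
};

function determineBaseExpressionIds(text: string): string[] {
  const lower = text.toLowerCase();
  const ids = Object.entries(BASE_EXPRESSION_KEYWORDS)
    .filter(([, keywords]) => keywords.some((keyword) => lower.includes(keyword)))
    .map(([id]) => id);
  // Fall back to a mouth-opening base expression so speech can still drive the lips.
  return ids.length > 0 ? ids : ["mouthOpen"];
}
```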
In step 203, at least one base expression identifier is sent to the server.
In step 204, a voice file associated with the user input content and at least one set of control coefficients corresponding to at least one base expression identifier are received from the server, the voice file is generated by performing text-to-speech conversion by the server, and the at least one set of control coefficients is generated by performing speech-to-motion conversion on the voice file by the server for controlling the facial motion of the virtual character based on the at least one base expression.
Continuing with the example of an expressive face, multiple sets of control coefficients (e.g., blend shape coefficients) can be used to control the expression of the virtual character. A person's face takes on various expressions and changes constantly. Thus, the face at a given moment can be regarded as a superposition of the base expressions, each weighted by its control coefficient at that moment.
In one example, each control coefficient in a set may take a value from 0 to 1, representing the state of its base expression at that time. For a mouth-opening base expression, a coefficient of 0 means the mouth is fully closed, while a coefficient of 1 means the mouth is opened to its maximum height.
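In a web rendering engine such as three.js, base expressions map naturally onto blend shapes (morph targets), and the control coefficients map onto morph target influences. The sketch below assumes the face mesh exposes one morph target per base expression, named after the base expression identifier; this is an assumption about the asset, not a statement from the disclosure.

```typescript
import * as THREE from "three";

// A minimal sketch: apply one set of control coefficients to a face mesh whose
// morph targets are named after the base expression identifiers.
function applyControlCoefficients(
  face: THREE.Mesh,
  coefficients: Record<string, number>, // base expression identifier -> coefficient in [0, 1]
): void {
  const dictionary = face.morphTargetDictionary;
  const influences = face.morphTargetInfluences;
  if (!dictionary || !influences) return;
  for (const [baseExpressionId, value] of Object.entries(coefficients)) {
    const index = dictionary[baseExpressionId];
    if (index !== undefined) {
      // 0 leaves the base expression relaxed (e.g. mouth closed); 1 applies it
      // fully (e.g. mouth opened to its maximum height).
      influences[index] = THREE.MathUtils.clamp(value, 0, 1);
    }
  }
}
```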
According to some embodiments, the voice file associated with the user input content is a voice file corresponding to the answer text.
In another example, the client device performs a voice input dialog function, where the user input content is voice input content.
According to some embodiments, the voice file associated with the user input content is a voice file corresponding to the answer text.
In step 205, a virtual character is generated using a rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to some embodiments, the virtual character is rendered with a rendering engine based on the at least one base expression identifier and the at least one set of control coefficients.
According to some embodiments, the rendering engine includes an animation driving module, a facial action driving module, and a model rendering library in which pre-rendered virtual character configurations other than facial actions are stored.
Referring to fig. 3, in step 301, a base expression of a virtual character is rendered using an animation driver module based on at least one base expression identifier.
In one example, the client device may be a browser that first invokes the animation driving module to render at least one base expression on the face based on the base expression identifier.
In step 302, the facial motion of the virtual character is rendered using the facial motion driver module based on at least one set of control coefficients.
In one example, the browser first loads the mouth-opening base expression, and then calls the facial motion driving module in the rendering engine to render the complete motion based on the control coefficients corresponding to that base expression; the motion may be, for example, the mouth opening from closed to its maximum, or the mouth opening from closed to halfway.
In step 303, a virtual character is generated based on the base expression of the virtual character, the facial movements of the virtual character, and the virtual character image configuration stored in the model rendering library.
The browser may also invoke a pre-rendered character model, body actions of the character, or clothing of the character from the model rendering library in the rendering engine. In one example, the rendering engine in the browser combines the rendered character model, the body movements of the character, the clothing of the character, and the movements of the various facial base expressions within a certain time range to obtain the final virtual character.
In one example, when fusing a limb action with the actions of multiple facial base expressions within a certain time range, the timing of the limb action may be placed before or after the timing of the facial base expression action; for example, the virtual character may wave a hand and then blink.
In another example, when a limb action and the actions of multiple facial base expressions are fused within a certain time range, the timing of the limb action and the timing of the facial base expression action may overlap; for example, the virtual character may blink while waving a hand.
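The sketch below shows one way such a driving loop could be organized with three.js: a pre-rendered body clip plays through an AnimationMixer while the face is updated each frame from the control coefficients, so the limb action and the facial base expression actions can overlap in time. The clip name "wave", the 25 fps packing of coefficient frames, and the `applyFaceFrame` helper (standing in for the morph-target update shown earlier) are assumptions.

```typescript
import * as THREE from "three";

// Sketch of steps 301-303 under assumed asset names and frame rate.
function playCharacterResponse(
  character: THREE.Group,
  coefficientFrames: Record<string, number>[], // one coefficient set per voice frame
  bodyClips: THREE.AnimationClip[],
  applyFaceFrame: (frame: Record<string, number>) => void,
  frameRate = 25,
): void {
  const mixer = new THREE.AnimationMixer(character);
  const wave = THREE.AnimationClip.findByName(bodyClips, "wave");
  if (wave) mixer.clipAction(wave).play(); // the limb action may overlap the facial action

  const clock = new THREE.Clock();
  let elapsed = 0;
  const tick = (): void => {
    const delta = clock.getDelta();
    elapsed += delta;
    mixer.update(delta); // body animation from the model rendering library clip
    const index = Math.min(Math.floor(elapsed * frameRate), coefficientFrames.length - 1);
    applyFaceFrame(coefficientFrames[index]); // facial action from the control coefficients
    if (elapsed * frameRate < coefficientFrames.length) requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```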
According to some embodiments, speech corresponding to the voice file is output.
In one example, the browser outputs to the user not only the virtual character but also the voice file associated with the user input content received from the server 120 in step 204. The voice file and the facial actions of the virtual character correspond in time; for example, the voice output by the browser and the lip movements of the facial expression are in one-to-one correspondence.
In another example, the browser can also output text associated with the user input content to the user, where the text, the voice file, and the facial actions of the virtual character are synchronized in time; for example, the voice output by the browser, the lip movements of the facial expression, and the subtitles output by the browser are in one-to-one correspondence.
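A simple way to keep the voice, the lip movements, and the subtitles in one-to-one correspondence is to drive the coefficient frame index from the audio element's playback position, as in the sketch below. The 25 fps frame packing and the `subtitle` element id are assumptions used only for illustration.

```typescript
// Hedged sketch: play the received voice file and keep the face in step with it.
function playVoiceWithLipSync(
  voiceFileUrl: string,
  subtitleText: string,
  coefficientFrames: Record<string, number>[],
  applyFaceFrame: (frame: Record<string, number>) => void, // e.g. the morph-target helper above
  frameRate = 25,
): void {
  const subtitle = document.getElementById("subtitle");
  if (subtitle) subtitle.textContent = subtitleText;

  const audio = new Audio(voiceFileUrl);
  const update = (): void => {
    // Deriving the frame index from currentTime keeps lips aligned with speech
    // even if rendering momentarily stalls.
    const index = Math.min(Math.floor(audio.currentTime * frameRate), coefficientFrames.length - 1);
    applyFaceFrame(coefficientFrames[index]);
    if (!audio.ended) requestAnimationFrame(update);
  };
  audio.addEventListener("play", () => requestAnimationFrame(update));
  void audio.play();
}
```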
In the prior art, when a client performs three-dimensional model rendering, the system's native application rendering engine is generally used; however, the native application rendering engine is limited by the operating system and must be downloaded and installed before use, which is inconvenient for users.
In some embodiments, the rendering engine is a rendering engine of a client-side non-native application, such as a three.js rendering engine. In one example, the browser calls the web version of the three.js rendering engine to render the 3D model. In another example, the applet side may call the applet version of the three.js rendering engine.
Rendering the model with the three.js web-level rendering engine requires no download or installation, is lightweight and easy to use, has a low cost, and can be applied to various operating systems and various browsers.
Through the above steps and the use of the three.js web-level rendering engine in the client device, virtual characters with different actions can be rendered to interact with the user, which increases user enjoyment and improves the user experience, while the cost of providing the service is low for enterprises.
According to another aspect of the present disclosure, there is also provided a method 400 for enabling content interaction for a client, where the method 400 may be applied to the server 120 in fig. 1. As shown in fig. 4, the method 400 may include the steps of:
in step 401, user input content acquired by a client is received.
In step 402, at least one base expression identifier associated with the user input content determined by the client is received, the at least one base expression identifier identifying at least one base expression of the virtual character.
In step 403, a voice file associated with the user input content is generated by performing text-to-speech conversion, and at least one set of control coefficients corresponding to at least one base expression identifier is generated by performing speech-to-motion conversion on the voice file for controlling the facial motion of the virtual character on the basis of the at least one base expression.
In one example, the server 120 includes a functional module such as TTS (Text-to-Speech), which can convert the user input content into a voice file associated with the user input content. A VTA (Voice-to-Animation) functional module in the server then converts the voice file into at least one set of control coefficients respectively corresponding to the at least one base expression identifier.
In one example, combining the control coefficients of all the base expressions together yields the full set of expressive actions on the face.
In one example, all the control coefficients are finally aligned with the voice file frame by voice frame, so that the control coefficients correspond one-to-one to each frame of the voice file.
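Server-side, steps 403 and 404 can be summarized as the sketch below, in which `tts` and `vta` stand in for the TTS and VTA modules; their signatures are assumed, since the disclosure does not specify them. The check makes the frame-by-frame alignment explicit: each voice frame must have exactly one coefficient set.

```typescript
// Assumed shape of the TTS output: encoded audio plus its number of voice frames.
interface SpeechResult {
  audio: Uint8Array;
  frameCount: number;
}

async function buildResponsePayload(
  responseText: string,
  baseExpressionIds: string[],
  tts: (text: string) => Promise<SpeechResult>,
  vta: (speech: SpeechResult, ids: string[]) => Promise<Record<string, number>[]>,
): Promise<{ voiceFile: Uint8Array; controlCoefficients: Record<string, number>[] }> {
  const speech = await tts(responseText);          // text-to-speech conversion
  const frames = await vta(speech, baseExpressionIds); // speech-to-motion conversion
  if (frames.length !== speech.frameCount) {
    // Each voice frame needs exactly one coefficient set so the client can drive
    // the face in lockstep with audio playback.
    throw new Error("control coefficients are not aligned with the voice frames");
  }
  return { voiceFile: speech.audio, controlCoefficients: frames };
}
```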
In step 404, the voice file and the at least one set of control coefficients are sent to the client to enable the client to generate a virtual character with the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to some embodiments, the rendering engine is a three.js rendering engine.
Referring now to FIG. 5, FIG. 5 illustrates a flow chart of a method 500 of content interaction when a client performs a text input dialog function.
According to some embodiments, the user input content is text input content.
In step 501, user input content acquired by a client is received.
In step 502, the response text is generated by determining response content for the text input content.
In one example, a natural language processing module is disposed in the server 120, and the natural language processing module may analyze the content of the text input to obtain a corresponding response text.
In this example, the text input may be "What day of the week is it today?"; after processing by the natural language processing module, a response text such as "Today is Saturday" is obtained.
In step 503, the answer text is sent to the client to enable the client to determine at least one base expression identifier corresponding to the answer text.
In step 504, at least one base expression identifier associated with the user input content determined by the client is received, the at least one base expression identifier identifying at least one base expression of the virtual character.
In step 505, a voice file associated with the user input content is generated by performing text-to-speech conversion, and at least one set of control coefficients corresponding to at least one base expression identifier is generated by performing speech-to-motion conversion on the voice file for controlling the facial motion of the virtual character on the basis of the at least one base expression.
According to some embodiments, the voice file associated with the user input content contains a voice expressing the user input content.
In step 506, the voice file and the at least one set of control coefficients are sent to the client to enable the client to generate a virtual character with the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to some embodiments, the voice file associated with the user input content is a voice file corresponding to the answer text.
Step 501, step 504, step 505, and step 506 in fig. 5 are similar to step 401, step 402, step 403, and step 404 in fig. 4, respectively, and for brevity, are not repeated herein.
Referring now to FIG. 6, FIG. 6 illustrates a flow chart of a method 600 of content interaction when a client performs a voice input dialog function.
In step 601, user input content acquired by a client is received.
In step 602, the speech input content is converted to text input content.
In one example, the server 120 also includes an ASR (Automatic Speech Recognition) module that can convert speech into text.
In step 603, a response text is generated by determining response content for the text input content.
In step 604, the answer text is sent to the client to enable the client to determine at least one base expression identifier corresponding to the answer text.
In step 605, at least one base expression identifier associated with the user input content determined by the client is received, the at least one base expression identifier identifying at least one base expression of the virtual character.
In step 606, a voice file associated with the user input content is generated by performing text-to-speech conversion, and at least one set of control coefficients corresponding to the at least one base expression identifier is generated by performing speech-to-motion conversion on the voice file for controlling the facial motion of the virtual character on the basis of the at least one base expression.
According to some embodiments, the voice file associated with the user input content is a voice file corresponding to the answer text.
In step 607, the voice file and the at least one set of control coefficients are transmitted to the client to enable the client to generate a virtual character using the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
Step 601, step 605, step 606, and step 607 in fig. 6 correspond to step 401, step 402, step 403, and step 404 in fig. 4, respectively, and step 603 and step 604 in fig. 6 correspond to steps 502 and 503 in fig. 5, which are not repeated herein for brevity.
According to some embodiments, the voice file associated with the user input content contains a voice expressing the user input content.
In one example, the server 120 includes a functional module such as TTS (Text-to-Speech), which can convert the user input content into a voice file associated with the user input content. For example, the user may input "I want to learn well" in the browser; the client then performs the broadcast function, and the server 120 finally outputs a voice file containing the speech "I want to learn well".
Referring now to fig. 7, fig. 7 shows a flow chart of an interaction process 700 between a client and a server when the client performs the broadcast function.
The client may include any of client devices 101, 102, 103, 104, 105, and 106, and in one example, the client may communicate with server 120 via a websocket network protocol.
In step 701, the client sends the text information to the server 120 after receiving the text input of the user.
The client also analyzes the text to determine the base expression identifiers.
In step 702, the determined base expression identifier is sent to the server 120.
The server 120 processes the text information and the base expression identifiers to obtain a voice file associated with the text information and at least one set of control coefficients corresponding to the at least one base expression identifier.
In step 703, the server 120 sends the voice file associated with the text information and the at least one set of control coefficients corresponding to the at least one base expression identifier to the client.
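The broadcast exchange of steps 701-703 could be wired up over the websocket connection roughly as follows. The JSON message types and field names are assumptions; the disclosure only specifies which pieces of data are exchanged, and the `onReply` callback stands in for the playback helper sketched earlier.

```typescript
// Hedged sketch of the broadcast interaction over a WebSocket connection.
function runBroadcast(
  socket: WebSocket,
  text: string,
  baseExpressionIds: string[],
  onReply: (voiceFileUrl: string, coefficientFrames: Record<string, number>[]) => void,
): void {
  // Step 701: send the text to be broadcast.
  socket.send(JSON.stringify({ type: "broadcastText", text }));
  // Step 702: send the base expression identifiers determined from the text.
  socket.send(JSON.stringify({ type: "baseExpressions", ids: baseExpressionIds }));
  // Step 703: receive the voice file and the control coefficients from the server.
  socket.addEventListener("message", (event: MessageEvent) => {
    const reply = JSON.parse(event.data as string);
    if (reply.type === "broadcastReply") {
      onReply(reply.voiceFileUrl, reply.coefficientFrames);
    }
  });
}
```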
Referring now to fig. 8, fig. 8 shows a flow diagram of an interaction process 800 between a client and a server when the client performs a text input dialog function or a voice input dialog function.
In step 801, if the client executes a voice input dialog function, the client sends voice information to the server.
The server 120 converts the voice information into text information using an ASR (Automatic Speech Recognition) module, and analyzes the text information using a natural language processing module to determine a response text.
In step 802, the response text is sent to the client.
In step 801, if the client executes a text input dialog function, the client sends text information to the server 120.
The server 120 analyzes the text message using a natural language processing module to determine a response text.
In step 802, the response text is sent to the client.
The client analyzes these response texts to determine the base expression identifiers.
In step 803, the base expression identifier is sent to the server 120.
The server 120 processes the response texts and the base expression identifiers to obtain the voice files associated with the response texts and at least one set of control coefficients corresponding to at least one base expression identifier.
In step 804, the server 120 transmits the voice file associated with the answer text and at least one set of control coefficients corresponding to the at least one base expression identifier to the client.
According to another aspect of the present disclosure, there is also provided an apparatus for content interaction. As shown in fig. 9, the apparatus 900 includes: a first unit 901 configured to send the acquired user input content to a server; a second unit 902 configured to determine at least one base expression identifier associated with the user input content, the at least one base expression identifier identifying at least one base expression of the virtual character; a third unit 903 configured to send the at least one base expression identifier to the server; a fourth unit 904 configured to receive a voice file associated with the user input content from the server, the voice file being generated by performing text-to-speech conversion by the server, and at least one set of control coefficients corresponding to the at least one base expression identifier, the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file by the server for controlling the facial motion of the virtual character on the basis of the at least one base expression; and a fifth unit 905 configured to generate a virtual character using the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to another aspect of the disclosure, an apparatus for enabling content interaction by a client is also provided. As shown in fig. 10, the apparatus 1000 includes: a sixth unit 1001 configured to receive user input content acquired by the client; a seventh unit 1002 configured to receive at least one base expression identifier associated with the user input content determined by the client, the at least one base expression identifier identifying at least one base expression of the virtual character; an eighth unit 1003 configured to generate a voice file associated with the user input content, the voice file being generated by performing text-to-speech conversion, and at least one set of control coefficients corresponding to the at least one base expression identifier, the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file for controlling a facial motion of the virtual character on the basis of the at least one base expression; and a ninth unit 1004 configured to transmit the voice file and the at least one set of control coefficients to the client to enable the client to generate a virtual character using the rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 11, a block diagram of the structure of an electronic device 1100, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the device 1100 comprises a computing unit 1101, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the device 1100 may also be stored. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in device 1100 connect to I/O interface 1105, including: an input unit 1106, an output unit 1107, a storage unit 1108, and a communication unit 1109. The input unit 1106 may be any type of device capable of inputting information to the device 1100, and the input unit 1106 may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. Output unit 1107 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. Storage unit 1108 may include, but is not limited to, a magnetic or optical disk. The communication unit 1109 allows the device 1100 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1101 can be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the various methods and processes described above, such as the method 200 or the method 400. For example, in some embodiments, method 200 or method 400 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1100 via ROM 1102 and/or communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the method 200 or the method 400 described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the method 200 or the method 400 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (17)

1. A method of content interaction, comprising:
sending the acquired user input content to a server;
determining at least one base expression identifier associated with the user input content, the at least one base expression identifier identifying at least one base expression of a virtual character;
sending the at least one base expression identifier to the server;
receiving, from the server, a voice file associated with the user input content and at least one set of control coefficients corresponding to the at least one base expression identifier, the voice file being generated by the server by performing text-to-speech conversion, and the at least one set of control coefficients being generated by the server by performing speech-to-motion conversion on the voice file and being used to control facial motion of the virtual character on the basis of the at least one base expression; and
generating, with a rendering engine, the virtual character based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
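Purely as an illustration of the client-side sequence recited in claim 1, the TypeScript sketch below strings the claimed steps together. The endpoint paths, JSON payload shapes, and the renderVirtualCharacter stub are assumptions made for the example and are not defined by the claims.

```typescript
// Hypothetical response payload: the server returns a voice file reference
// and one set of control coefficients per base expression identifier.
interface InteractionResponse {
  voiceFileUrl: string;
  controlCoefficients: number[][][];
}

async function contentInteraction(userInputContent: string): Promise<void> {
  // Step 1: send the acquired user input content to the server.
  await fetch("/api/interaction/input", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content: userInputContent }),
  });

  // Step 2: determine base expression identifiers associated with the input
  // (a trivial stand-in here; claims 2-4 describe several ways to do this).
  const baseExpressionIds = userInputContent.endsWith("?") ? ["thinking"] : ["smile"];

  // Step 3: send the identifiers to the server, and
  // Step 4: receive the voice file (text-to-speech) and the control
  // coefficient sets (speech-to-motion) generated on the server.
  const res = await fetch("/api/interaction/expressions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ baseExpressionIds }),
  });
  const { voiceFileUrl, controlCoefficients }: InteractionResponse = await res.json();

  // Step 5: hand everything to the rendering engine (see the sketch after claim 7).
  renderVirtualCharacter(baseExpressionIds, voiceFileUrl, controlCoefficients);
}

// Stub for the rendering-engine entry point used above.
function renderVirtualCharacter(
  ids: string[],
  voiceFileUrl: string,
  coefficients: number[][][],
): void {
  console.log("render", ids, voiceFileUrl, coefficients.length);
}
```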
2. The method of claim 1, the user input content being text input content, the determining at least one base expression identifier associated with the user input content comprising:
receiving a response text from the server, the response text being generated by the server by determining response content for the text input content; and
determining at least one base expression identifier corresponding to the response text,
wherein the voice file associated with the user input content is a voice file corresponding to the response text.
3. The method of claim 1, the user input content being voice input content, the determining at least one base expression identifier associated with the user input content comprising:
receiving a response text from the server, the response text being generated by the server by converting the voice input content into text input content and by determining response content for the text input content; and
determining at least one base expression identifier corresponding to the response text,
wherein the voice file associated with the user input content is a voice file corresponding to the response text.
4. The method of claim 1, the determining at least one base expression identifier associated with the user input content comprising:
analyzing the user input content and determining at least one base expression identifier corresponding to the user input content.
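As a toy illustration of the local analysis recited in claim 4, the following sketch maps keywords in the user input content to base expression identifiers. The keyword table and identifier names are hypothetical; any sentiment or intent model could play the same role.

```typescript
// Hypothetical keyword-to-expression table; a real client could use a
// sentiment or intent classifier instead of simple substring matching.
const EXPRESSION_KEYWORDS: Record<string, string[]> = {
  happy: ["thanks", "great", "good"],
  surprised: ["wow", "really"],
  thinking: ["why", "how", "?"],
};

function determineBaseExpressionIds(userInputContent: string): string[] {
  const text = userInputContent.toLowerCase();
  const ids = Object.entries(EXPRESSION_KEYWORDS)
    .filter(([, keywords]) => keywords.some((k) => text.includes(k)))
    .map(([id]) => id);
  // Fall back to a neutral base expression when nothing matches.
  return ids.length > 0 ? ids : ["neutral"];
}

console.log(determineBaseExpressionIds("Wow, how does that work?"));
// -> ["surprised", "thinking"]
```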
5. The method of any of claims 1 to 4, the generating the virtual character with the rendering engine comprising:
rendering the virtual character with the rendering engine based on the at least one base expression identifier and the at least one set of control coefficients; and
outputting the voice corresponding to the voice file.
6. The method of any of claims 1 to 4, the rendering engine comprising an animation driving module, a facial action driving module, and a model rendering library having stored therein a pre-rendered virtual character image configuration other than the facial action, and the generating the virtual character with the rendering engine comprising:
rendering the base expression of the virtual character with the animation driver module based on the at least one base expression identifier;
rendering the facial motion of the virtual character with the facial motion driver module based on the at least one set of control coefficients; and
generating the virtual character based on the base expression of the virtual character, the facial action of the virtual character, and the virtual character image configuration stored in the model rendering library.
7. The method of any of claims 1 to 4, the rendering engine being a three.js rendering engine.
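Claims 6 and 7 describe the rendering engine as a three.js engine built from an animation driving module, a facial action driving module, and a model rendering library. The sketch below, assuming a glTF character model whose face uses blend shapes (morph targets), shows one plausible form of the facial-action-driving part: each frame of a set of control coefficients is copied into the mesh's morph-target weights while the voice file plays. The model URL, frame rate, and coefficient layout are assumptions, not details fixed by the patent.

```typescript
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

// One set of control coefficients: a sequence of per-frame blend-shape
// weight vectors produced by the server-side speech-to-motion conversion.
type ControlCoefficients = number[][];

// Hypothetical asset and timing assumptions (not specified by the patent).
const MODEL_URL = "/models/virtual-character.glb";
const FPS = 25;

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1.5, 2);
scene.add(new THREE.AmbientLight(0xffffff, 1));

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

function playVirtualCharacter(voiceFileUrl: string, coefficients: ControlCoefficients): void {
  new GLTFLoader().load(MODEL_URL, (gltf) => {
    scene.add(gltf.scene);

    // Model rendering library: locate a mesh that carries morph targets (the face).
    const meshes: THREE.Mesh[] = [];
    gltf.scene.traverse((obj) => {
      const mesh = obj as THREE.Mesh;
      if (mesh.isMesh && mesh.morphTargetInfluences) meshes.push(mesh);
    });
    const faceMesh = meshes[0];
    if (!faceMesh) return;

    // Output the voice corresponding to the voice file while animating.
    const audio = new Audio(voiceFileUrl);
    void audio.play();

    // Facial action driving: copy the coefficient frame matching the current
    // audio position into the mesh's blend-shape weights on every render.
    renderer.setAnimationLoop(() => {
      const frame = Math.min(Math.floor(audio.currentTime * FPS), coefficients.length - 1);
      const weights = coefficients[frame] ?? [];
      const influences = faceMesh.morphTargetInfluences ?? [];
      for (let i = 0; i < weights.length && i < influences.length; i++) {
        influences[i] = weights[i];
      }
      renderer.render(scene, camera);
    });
  });
}
```

Morph-target weights are a natural fit for per-frame control coefficients, since each coefficient can drive one facial blend shape directly; this is one reasonable reading of "controlling facial motion on the basis of the base expression", not the only possible one.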
8. A method of enabling content interaction at a client, comprising:
receiving user input content acquired by the client;
receiving at least one base expression identifier associated with the user input content determined by the client, the at least one base expression identifier identifying at least one base expression of a virtual character;
generating a voice file associated with the user input content and at least one set of control coefficients corresponding to the at least one base expression identifier, the voice file being generated by performing text-to-speech conversion, and the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file and being used to control facial motion of the virtual character on the basis of the at least one base expression; and
sending the voice file and the at least one set of control coefficients to the client to enable the client to generate the virtual character with a rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
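Claim 8 mirrors the client-side method from the server's perspective. The sketch below outlines that flow in TypeScript; textToSpeech and speechToMotion are placeholders for whatever synthesis and speech-to-motion models a server would actually use, and the input/output shapes are assumptions.

```typescript
// Hypothetical result types; the claim only requires a voice file and
// one set of control coefficients per base expression identifier.
interface VoiceFile { url: string; durationMs: number; }
type ControlCoefficients = number[][]; // per-frame facial weight vectors

// Placeholder: a real server would run a text-to-speech model here.
async function textToSpeech(text: string): Promise<VoiceFile> {
  return { url: `/voices/${Date.now()}.wav`, durationMs: 80 * text.length };
}

// Placeholder: a real server would run speech-to-motion inference on the
// voice file; here we return a tiny dummy coefficient sequence.
async function speechToMotion(voice: VoiceFile, _baseExpressionId: string): Promise<ControlCoefficients> {
  const frames = Math.max(1, Math.round(voice.durationMs / 40));
  return Array.from({ length: frames }, () => [0.2, 0.1]);
}

// Server-side handling of one interaction, following claim 8: receive the
// input content and identifiers, synthesize, convert, and reply to the client.
async function handleInteraction(
  userInputContent: string,
  baseExpressionIds: string[],
): Promise<{ voiceFile: VoiceFile; controlCoefficients: ControlCoefficients[] }> {
  // Text-to-speech conversion produces the voice file associated with the input.
  const voiceFile = await textToSpeech(userInputContent);

  // Speech-to-motion conversion produces one coefficient set per identifier,
  // later used by the client to drive facial motion on the base expression.
  const controlCoefficients = await Promise.all(
    baseExpressionIds.map((id) => speechToMotion(voiceFile, id)),
  );

  return { voiceFile, controlCoefficients };
}
```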
9. The method of claim 8, the user input content being text input content, the method further comprising:
generating a response text by determining response content for the text input content; and
sending the response text to the client to enable the client to determine the at least one base expression identifier corresponding to the response text,
wherein the voice file associated with the user input content is a voice file corresponding to the response text.
10. The method of claim 8, the user input content being speech input content, the method further comprising:
converting the voice input content into text input content;
generating a response text by determining response content for the text input content; and
sending the response text to the client to enable the client to determine the at least one base expression identifier corresponding to the response text,
wherein the voice file associated with the user input content is a voice file corresponding to the response text.
11. The method of claim 8, wherein the voice file associated with the user input content comprises voice expressing the user input content.
12. The method of any of claims 8 to 11, the rendering engine being a three.js rendering engine.
13. An apparatus for content interaction, comprising:
a first unit configured to send the acquired user input content to a server;
a second unit configured to determine at least one base expression identifier associated with the user input content, the at least one base expression identifier identifying at least one base expression of a virtual character;
a third unit configured to send the at least one base expression identifier to the server;
a fourth unit configured to send the response text and an animation component name corresponding to at least one of the response text to the server;
a fifth unit configured to receive, from the server, a voice file associated with the user input content and at least one set of control coefficients corresponding to the at least one base expression identifier, the voice file being generated by the server by performing text-to-speech conversion, and the at least one set of control coefficients being generated by the server by performing speech-to-motion conversion on the voice file and being used to control facial motion of the virtual character on the basis of the at least one base expression; and
a sixth unit configured to generate the virtual character with a rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
14. An apparatus for enabling content interaction at a client, comprising:
a seventh unit configured to receive the user input content acquired by the client;
an eighth unit configured to receive at least one base expression identifier associated with the user input content determined by the client, the at least one base expression identifier identifying at least one base expression of a virtual character;
a ninth unit configured to generate a voice file associated with the user input content and at least one set of control coefficients corresponding to the at least one base expression identifier, the voice file being generated by performing text-to-speech conversion, and the at least one set of control coefficients being generated by performing speech-to-motion conversion on the voice file and being used to control facial motion of the virtual character on the basis of the at least one base expression; and
a tenth unit configured to transmit the voice file and the at least one set of control coefficients to the client to enable the client to generate the virtual character with a rendering engine based on the at least one base expression identifier, the voice file, and the at least one set of control coefficients.
15. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory, wherein the processor is configured to execute the computer program to implement the method of any one of claims 1 to 12.
16. A non-transitory computer readable storage medium having stored thereon computer instructions that, when executed by a computer, cause the computer to perform the method of any of claims 1-12.
17. A computer program product comprising a computer program which, when executed by a computer, causes the computer to perform the method of any one of claims 1-12.
CN202111498671.1A 2021-12-09 2021-12-09 Content interaction method, device, equipment and medium Pending CN114201043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111498671.1A CN114201043A (en) 2021-12-09 2021-12-09 Content interaction method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111498671.1A CN114201043A (en) 2021-12-09 2021-12-09 Content interaction method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114201043A 2022-03-18

Family

ID=80651664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111498671.1A Pending CN114201043A (en) 2021-12-09 2021-12-09 Content interaction method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114201043A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833418A (en) * 2020-07-14 2020-10-27 北京百度网讯科技有限公司 Animation interaction method, device, equipment and storage medium
CN112286366A (en) * 2020-12-30 2021-01-29 北京百度网讯科技有限公司 Method, apparatus, device and medium for human-computer interaction
CN113392201A (en) * 2021-06-18 2021-09-14 中国工商银行股份有限公司 Information interaction method, information interaction device, electronic equipment, medium and program product
CN113538641A (en) * 2021-07-14 2021-10-22 北京沃东天骏信息技术有限公司 Animation generation method and device, storage medium and electronic equipment
CN113643413A (en) * 2021-08-30 2021-11-12 北京沃东天骏信息技术有限公司 Animation processing method, animation processing device, animation processing medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168134A (en) * 2022-12-28 2023-05-26 北京百度网讯科技有限公司 Digital person control method, digital person control device, electronic equipment and storage medium
CN116168134B (en) * 2022-12-28 2024-01-02 北京百度网讯科技有限公司 Digital person control method, digital person control device, electronic equipment and storage medium

Similar Documents

Publication Title
CN107704169B (en) Virtual human state management method and system
CN113807440A (en) Method, apparatus, and medium for processing multimodal data using neural networks
CN116303962B (en) Dialogue generation method, training method, device and equipment for deep learning model
CN113194350B (en) Method and device for pushing data to be broadcasted and method and device for broadcasting data
CN110288683B (en) Method and device for generating information
CN115470381A (en) Information interaction method, device, equipment and medium
CN116821684A (en) Training method, device, equipment and medium for large language model
CN114201043A (en) Content interaction method, device, equipment and medium
CN114119935A (en) Image processing method and device
CN115879469B (en) Text data processing method, model training method, device and medium
US20230245643A1 (en) Data processing method
CN115761855B (en) Face key point information generation, neural network training and three-dimensional face reconstruction method
CN116843795A (en) Image generation method and device, electronic equipment and storage medium
CN116361547A (en) Information display method, device, equipment and medium
CN114510308B (en) Method, device, equipment and medium for storing application page by mobile terminal
CN113590782B (en) Training method of reasoning model, reasoning method and device
CN114120448B (en) Image processing method and device
CN115631251A (en) Method, apparatus, electronic device, and medium for generating image based on text
CN115050396A (en) Test method and device, electronic device and medium
CN115601555A (en) Image processing method and apparatus, device and medium
CN114880580A (en) Information recommendation method and device, electronic equipment and medium
CN114881235A (en) Inference service calling method and device, electronic equipment and storage medium
CN114429678A (en) Model training method and device, electronic device and medium
CN116842156B (en) Data generation method, device, equipment and medium
CN114327718B (en) Interface display method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220318)