WO2018107918A1 - Method, terminal and system for interaction between avatars - Google Patents

Method, terminal and system for interaction between avatars

Info

Publication number
WO2018107918A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
user
data
type
avatar
Prior art date
Application number
PCT/CN2017/109468
Other languages
English (en)
French (fr)
Inventor
李斌
陈晓波
陈郁
易薇
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2018107918A1 publication Critical patent/WO2018107918A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40 Network security protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/046 Interoperability with other network applications or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/75 Media network packet handling
    • H04L 65/764 Media network packet handling at the destination

Definitions

  • The embodiments of the present application relate to the field of communications technologies, and in particular to a method, terminal, and system for interaction between avatars.
  • The embodiments of the present application provide a method, terminal, and system for interaction between avatars, which can realize interaction between avatars.
  • When the first user corresponding to the first terminal conducts a real-time voice call with the second user corresponding to the second terminal, the first terminal acquires an interaction scene;
  • the first terminal renders the first avatar corresponding to the first user into the interaction scene for display;
  • the first terminal acquires real-time chat data and behavior characteristic data of the first user, the first user being a user of the first terminal;
  • the first terminal applies the behavior characteristic data of the first user to the first avatar displayed by the first terminal;
  • the first terminal sends the real-time chat data and behavior characteristic data of the first user to the second terminal through a server, so that the second terminal plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal; the second terminal acquires real-time chat data and behavior characteristic data of the second user, sends the real-time chat data of the second user to the first terminal for playing, and applies the behavior characteristic data of the second user to the second avatar corresponding to the second user displayed by the second terminal, so as to realize the interaction between the first avatar and the second avatar.
  • A terminal provided by an embodiment of the present application includes a processor and a memory connected to the processor, the memory storing a machine readable instruction unit, the machine readable instruction unit including:
  • a first acquiring unit, configured to acquire an interaction scene when the first user corresponding to the first terminal conducts a real-time voice call with the second user corresponding to the second terminal;
  • a rendering unit, configured to render the first avatar corresponding to the first user into the interaction scene for display;
  • a second acquiring unit, configured to acquire real-time chat data and behavior characteristic data of the first user, the first user being a user of the terminal;
  • a processing unit, configured to apply the behavior characteristic data of the first user to the first avatar displayed by the terminal;
  • a sending unit, configured to send the real-time chat data and behavior characteristic data of the first user to the second terminal through a server, so that the second terminal plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal, wherein the second terminal acquires real-time chat data and behavior characteristic data of the second user, sends the real-time chat data of the second user to the first terminal for playing, and applies the behavior characteristic data of the second user to the second avatar corresponding to the second user displayed by the second terminal, so as to realize the interaction between the first avatar and the second avatar.
  • Embodiments of the present application also provide a non-transitory computer readable storage medium, where the storage medium stores machine readable instructions that can be executed by a processor to perform the method described above.
  • FIG. 1 is a schematic diagram of a scenario of a method for interaction between avatars provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for interaction between avatars provided by an embodiment of the present application
  • FIG. 3 is another schematic flowchart of a method for interaction between avatars provided by an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of a method for interaction between avatars provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a system for interaction between avatars provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of voice interaction signaling provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of behavior interaction signaling provided by an embodiment of the present application;
  • FIGS. 10a to 10c are schematic diagrams of interaction interfaces between avatars in an embodiment of the present application.
  • The embodiments of the present application provide a method, terminal, and system for interaction between avatars, which can realize interaction between avatars.
  • A specific implementation scenario of the method for interaction between avatars in the embodiments of the present application may be as shown in FIG. 1, and includes a terminal 101 and a server 102.
  • There may be multiple terminals 101; for example, the terminal 101 may include a first terminal 101a and a second terminal 101b. Initially, the user of each terminal 101 can create an avatar on the corresponding terminal 101, and the first terminal 101a and the second terminal 101b each establish a connection with the server.
  • When the avatar (first avatar) created by the user (first user) of the first terminal 101a wants to interact with the avatar (second avatar) created by the user (second user) of the second terminal 101b, the first terminal 101a may initiate an interaction request to the second terminal 101b through the server 102. After the server 102 establishes a communication channel for the first terminal 101a and the second terminal 101b, the first terminal 101a may acquire an interaction scene, render the avatars that need to interact (the first avatar and the second avatar) into the acquired interaction scene for display, and then acquire the real-time chat data and behavior characteristic data of the first user. The first terminal 101a applies the behavior characteristic data of the first user to the avatars displayed by the first terminal 101a, and then sends the real-time chat data and behavior characteristic data of the first user to the second terminal 101b through the server 102, so that the second terminal 101b plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal 101b, thereby realizing real-time chat and real-time behavior interaction between the avatars.
  • the method in this embodiment includes the following steps:
  • Step 201 The first terminal acquires an interaction scenario.
  • the user of each terminal can establish an avatar on the terminal in advance.
  • the user can create an avatar as follows:
  • First, the terminal's face scanning system is used to scan the user's face to obtain facial feature data and a facial map; the facial feature data may include feature data of the mouth, nose, eyes, eyebrows, face, chin, and other parts. The acquired facial feature data and facial map are then merged into the face of a preset avatar model. Finally, a dress-up is selected from the dress-up interface provided by the terminal, and the selected dress-up is merged into the corresponding part of the preset avatar model, thereby completing the creation of the avatar.
  • The dress-ups provided in the dress-up interface include, but are not limited to, hairstyles, clothes, pants, shoes, and the like.
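  • The avatar-creation flow described above can be pictured with the following minimal TypeScript sketch. All type and function names (FacialFeatureData, DressUpItem, createAvatar, and so on) are illustrative assumptions, not the patent's actual data structures.

```typescript
// Assumed data shapes for the avatar-creation flow; not the patent's actual structures.
interface FacialFeatureData {
  mouth: number[];
  nose: number[];
  eyes: number[];
  eyebrows: number[];
  faceContour: number[];
  chin: number[];
}

interface DressUpItem {
  slot: "hairstyle" | "clothes" | "pants" | "shoes";
  assetId: string;
}

interface AvatarModel {
  baseModelId: string;            // the preset avatar model
  facialFeatures: FacialFeatureData;
  faceTextureUrl: string;         // facial map produced by the face-scanning step
  dressUp: DressUpItem[];
}

// Merge the scan output and the selected dress-ups into the preset model.
function createAvatar(
  scan: { features: FacialFeatureData; faceTextureUrl: string },
  selectedDressUp: DressUpItem[],
  baseModelId = "default-humanoid"
): AvatarModel {
  return {
    baseModelId,
    facialFeatures: scan.features,
    faceTextureUrl: scan.faceTextureUrl,
    dressUp: selectedDressUp,
  };
}
```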
  • For ease of description, in this embodiment the user of the first terminal may be referred to as the first user, the avatar created by the first user as the first avatar, the user of the second terminal as the second user, and the avatar created by the second user as the second avatar.
  • When the first avatar wants to interact with the second avatar, the first terminal may initiate an interaction request to the second terminal through the server, and after the server establishes a communication channel for the first terminal and the second terminal, the first terminal may acquire the interaction scene.
  • In some embodiments, the first terminal can obtain the interaction scene in one of the following ways:
  • First, the first terminal may send preset location information to the server to obtain a street view image of the preset location from the server, and use the street view image as the interaction scene. The preset location may be the location of the first avatar, which is also the location of the first terminal, and may be represented by latitude and longitude values, geographic coordinate values, and the like.
  • Second, the first terminal constructs a virtual scene image from preset elements in advance and stores it; when interaction is required, the virtual scene image constructed from the preset elements is read from storage and used as the interaction scene. The preset elements include, but are not limited to, three-dimensionally constructed streets, buildings, trees, rivers, and the like.
  • Third, the first terminal collects a real-scene image through its camera and uses the real-scene image as the interaction scene.
  • Further, the first terminal may also provide a scene selection interface, so that the first user can select any one of the above three scenes, and the first terminal can switch between different scenes according to the first user's selection.
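  • As a rough illustration of the three scene sources and the user-driven switching described above, the following TypeScript sketch models them behind a single function; fetchStreetView, loadPrebuiltScene, and captureCameraFrame are assumed helpers, not APIs from the document.

```typescript
// Assumed helpers standing in for the server request, the stored virtual scene, and the camera.
declare function fetchStreetView(location: { lat: number; lng: number }): Promise<string>;
declare function loadPrebuiltScene(): string[];
declare function captureCameraFrame(): Promise<ImageBitmap>;

type InteractionScene =
  | { kind: "street-view"; imageUrl: string }
  | { kind: "virtual"; elements: string[] }     // 3D streets, buildings, trees, rivers, ...
  | { kind: "camera"; frame: ImageBitmap };

async function getInteractionScene(
  choice: "street-view" | "virtual" | "camera",
  location?: { lat: number; lng: number }
): Promise<InteractionScene> {
  switch (choice) {
    case "street-view":
      // Send the terminal's (i.e. the first avatar's) location and use the returned street view.
      return { kind: "street-view", imageUrl: await fetchStreetView(location!) };
    case "virtual":
      // Reuse a scene built in advance from preset 3D elements.
      return { kind: "virtual", elements: loadPrebuiltScene() };
    case "camera":
      // Overlay the avatars on a live real-scene image from the camera.
      return { kind: "camera", frame: await captureCameraFrame() };
    default:
      throw new Error("unknown scene choice");
  }
}
```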
  • Step 202 The first terminal renders the avatars that need to interact into the interaction scene for display.
  • In some embodiments, the avatars that need to interact include the first avatar and the second avatar; that is, the first terminal may merge the first avatar and the second avatar into the interaction scene selected by the first user for display, thereby presenting a combined virtual-and-real effect.
  • Step 203 The first terminal acquires real-time chat data and behavior characteristic data of the first user, where the first user is a user of the first terminal;
  • the real-time chat data of the first user may include voice data, video data, text data, and the like input by the first user, which are not specifically limited herein.
  • Real-time chat data can be collected in real time through the microphone of the terminal, data acquisition interface, and the like.
  • The behavior characteristic data of the first user may include facial expression data, independent limb motion data, and interactive limb motion data.
  • Facial expression data includes, for example, expression data for frowning, opening the mouth, smiling, wrinkling the nose, and so on;
  • independent limb motion data includes, for example, motion data for walking, running, waving, shaking the head, nodding, and so on;
  • interactive limb motion data includes, for example, motion data for hugging, shaking hands, kissing, and so on.
  • In some embodiments, facial expression data can be obtained in two ways. The first is real-time data acquisition: for example, the terminal can scan in real time to recognize the user's real face, extract the expression features of the real face, calculate the currently likely expression (such as frowning, opening the mouth, smiling, or wrinkling the nose) through an expression feature matching algorithm, and then obtain the expression data corresponding to that expression. The second is acquisition according to the user's selection: for example, the user can select an expression from a preset expression list, and the terminal obtains the expression data corresponding to the selected expression.
  • Independent limb motion data can also be obtained in two ways. Independent limb motions such as walking and running can be obtained through real-time data acquisition: for example, the motion detection function provided by the system can be used to detect whether the user is walking or running, and the corresponding motion data is then obtained. Independent limb motions such as waving, shaking the head, and nodding can be obtained according to the user's selection: for example, the user can select an action from a preset independent limb motion list, and the terminal obtains the motion data corresponding to the selected action.
  • Interactive limb motion data may be obtained according to the user's selection; for example, the user may select an action from the preset interactive limb motion list, and the terminal obtains the motion data corresponding to the selected action.
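  • The three categories of behavior characteristic data and their acquisition sources can be summarized in a small TypeScript sketch; the concrete expression and action lists come from the examples above, while the type names and the source field are assumptions.

```typescript
// Assumed type names; the concrete expression/action lists come from the examples above.
type FacialExpression = "frown" | "open-mouth" | "smile" | "wrinkle-nose";
type IndependentAction = "walk" | "run" | "wave" | "shake-head" | "nod";
type InteractiveAction = "hug" | "handshake" | "kiss";

type BehaviorFeatureData =
  | { category: "facial-expression"; type: FacialExpression; source: "detected" | "selected" }
  | { category: "independent-action"; type: IndependentAction; source: "detected" | "selected" }
  | { category: "interactive-action"; type: InteractiveAction; source: "selected" };

// Walking/running would typically come from motion detection, a hug from the preset action list.
const detectedRun: BehaviorFeatureData = { category: "independent-action", type: "run", source: "detected" };
const selectedHug: BehaviorFeatureData = { category: "interactive-action", type: "hug", source: "selected" };
```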
  • Step 204 The first terminal applies the behavior characteristic data of the first user to the avatar displayed by the first terminal.
  • the avatar displayed by the first terminal includes a first avatar and a second avatar.
  • For the real-time chat data, the first terminal may directly apply the real-time chat data of the first user to the first avatar displayed by the first terminal, so as to present the effect that the first avatar is chatting in real time with the second avatar.
  • For the behavior characteristic data, the handling depends on the specific data type, as follows:
  • When the behavior characteristic data of the first user is facial expression data, the first terminal may apply the facial expression data to the first avatar displayed by the first terminal. That is, on the first terminal side, the facial expression data is applied to the corresponding facial position of the avatar model corresponding to the first user, so as to present the effect that the first avatar is having an expression interaction with the second avatar.
  • When the behavior characteristic data of the first user is independent limb motion data, the first terminal may apply the independent limb motion data to the first avatar displayed by the first terminal. That is, on the first terminal side, the independent limb motion data is applied to the corresponding limb position of the avatar model corresponding to the first user, so as to present the effect that the first avatar is performing an independent limb motion interaction with the second avatar.
  • When the behavior characteristic data of the first user is interactive limb motion data, the first terminal may apply the interactive limb motion data to both the first avatar and the second avatar displayed by the first terminal. That is, on the first terminal side, the interactive limb motion data is applied to the corresponding limb position of the avatar model corresponding to the first user and, at the same time, to the corresponding limb position of the avatar model corresponding to the second user, so as to present the effect that the first avatar is performing an interactive limb motion with the second avatar.
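  • Continuing the BehaviorFeatureData sketch above, the per-type handling on the sender side might look as follows; AvatarView is an assumed stand-in for whatever rendering handle the terminal uses for a displayed avatar model.

```typescript
// Continues the BehaviorFeatureData type above; AvatarView is an assumed rendering handle.
interface AvatarView {
  setFaceExpression(expression: string): void; // drives the face region of the avatar model
  playLimbAction(action: string): void;        // drives the limb region of the avatar model
}

function applyBehavior(
  data: BehaviorFeatureData,
  firstAvatar: AvatarView,   // avatar of the local (first) user
  secondAvatar: AvatarView   // avatar of the remote (second) user
): void {
  switch (data.category) {
    case "facial-expression":
      firstAvatar.setFaceExpression(data.type);  // only the sender's face is driven
      break;
    case "independent-action":
      firstAvatar.playLimbAction(data.type);     // only the sender's limbs are driven
      break;
    case "interactive-action":
      firstAvatar.playLimbAction(data.type);     // both avatars are driven,
      secondAvatar.playLimbAction(data.type);    // e.g. a hug or a handshake
      break;
  }
}
```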
  • Step 205 The first terminal sends the real-time chat data and behavior characteristic data of the first user to the second terminal through the server, so that the second terminal applies the real-time chat data and behavior characteristic data of the first user to the avatars displayed by the second terminal, thereby realizing interaction between the avatars.
  • After the second terminal receives the interaction request initiated by the first terminal, the second terminal also acquires an interaction scene (in the same way as the first terminal) and likewise renders the avatars that need to interact into that interaction scene for display. The avatars displayed by the second terminal include the first avatar and the second avatar.
  • For the real-time chat data, the second terminal may directly apply the real-time chat data of the first user to the first avatar displayed by the second terminal, so as to present the scene in which the first avatar is chatting in real time with the second avatar.
  • For the behavior characteristic data, the handling again depends on the data type:
  • When the behavior characteristic data of the first user is facial expression data, the second terminal may apply the facial expression data to the first avatar displayed by the second terminal; that is, on the second terminal side, the facial expression data is applied to the corresponding facial position of the avatar model corresponding to the first user.
  • When the behavior characteristic data of the first user is independent limb motion data, the second terminal may apply the independent limb motion data to the first avatar displayed by the second terminal; that is, on the second terminal side, the independent limb motion data is applied to the corresponding limb position of the avatar model corresponding to the first user.
  • When the behavior characteristic data of the first user is interactive limb motion data, the second terminal may apply the interactive limb motion data to both the first avatar and the second avatar displayed by the second terminal; that is, on the second terminal side, the interactive limb motion data is applied to the corresponding limb positions of the avatar models corresponding to the first user and the second user.
  • In this embodiment, the first terminal acquires an interaction scene, renders the avatars that need to interact into the interaction scene for display, acquires the real-time chat data and behavior characteristic data of the first user (the user of the first terminal), applies them to the avatars displayed by the first terminal, and finally sends them to the second terminal through the server, so that the second terminal applies the real-time chat data and behavior characteristic data of the first user to the avatars displayed by the second terminal. This realizes real-time chat (such as real-time voice and text chat) and real-time behavior (such as real-time expressions and actions) interaction between the avatars.
  • The method described in the embodiment shown in FIG. 2 is further illustrated below by way of example. As shown in FIG. 3, the method of this embodiment includes:
  • Step 301 The first terminal acquires an interaction scenario.
  • a user of each terminal can establish an avatar on its terminal in advance.
  • For ease of description, the user of the first terminal may be referred to as the first user, the avatar created by the first user as the first avatar, the user of the second terminal as the second user, and the avatar created by the second user as the second avatar.
  • When the first avatar wants to interact with the second avatar, the first terminal may initiate an interaction request to the second terminal through the server, and after the server establishes a communication channel for the first terminal and the second terminal, the first terminal may acquire the interaction scene.
  • The process of obtaining the interaction scene in step 301 is similar to that in step 201 and is not repeated here.
  • Step 302 The first terminal renders the avatars that need to interact into the interaction scene for display.
  • In some embodiments, the avatars that need to interact include the first avatar and the second avatar; that is, the first terminal may merge the first avatar and the second avatar into the interaction scene selected by the first user for display, thereby presenting a combined virtual-and-real effect.
  • Step 303 The first terminal acquires real-time chat data and behavior characteristic data of the first user, where the first user is a user of the first terminal;
  • the real-time chat data of the first user may include voice data, video data, text data, and the like input by the first user, which are not specifically limited herein.
  • Real-time chat data can be collected in real time through the microphone of the terminal, data acquisition interface, and the like.
  • The behavior characteristic data of the first user may include facial expression data, independent limb motion data, and interactive limb motion data.
  • Facial expression data includes, for example, expression data for frowning, opening the mouth, smiling, wrinkling the nose, and so on;
  • independent limb motion data includes, for example, motion data for walking, running, waving, shaking the head, nodding, and so on;
  • interactive limb motion data includes, for example, motion data for hugging, shaking hands, kissing, and so on.
  • Step 304 The first terminal applies the real-time chat data and behavior characteristic data of the first user to the avatars displayed by the first terminal.
  • Step 305 The first terminal sends the real-time chat data and behavior characteristic data of the first user to the second terminal through the server, so that the second terminal applies the real-time chat data and behavior characteristic data of the first user to the avatars displayed by the second terminal.
  • For the specific processing of steps 304 and 305, refer to the corresponding processing of steps 204 and 205, which is not repeated here.
  • Step 306 The first terminal receives, through the server, the real-time chat data and behavior characteristic data of the second user sent by the second terminal.
  • During the interaction, the second terminal may also acquire the real-time chat data and behavior characteristic data of the second user. After acquiring them, the second terminal may first apply the real-time chat data and behavior characteristic data of the second user to the avatars displayed by the second terminal, as follows:
  • For the real-time chat data, the second terminal may directly apply the real-time chat data of the second user to the second avatar displayed by the second terminal, so as to present the scene in which the second avatar is chatting in real time with the first avatar.
  • When the behavior characteristic data of the second user is facial expression data, the second terminal may apply the facial expression data to the second avatar displayed by the second terminal; that is, on the second terminal side, the facial expression data is applied to the corresponding facial position of the avatar model corresponding to the second user.
  • When the behavior characteristic data of the second user is independent limb motion data, the second terminal may apply the independent limb motion data to the second avatar displayed by the second terminal; that is, on the second terminal side, the independent limb motion data is applied to the corresponding limb position of the avatar model corresponding to the second user.
  • When the behavior characteristic data of the second user is interactive limb motion data, the second terminal may apply the interactive limb motion data to both the first avatar and the second avatar displayed by the second terminal; that is, on the second terminal side, the interactive limb motion data is applied to the corresponding limb positions of the avatar models corresponding to the first user and the second user.
  • The second terminal then sends the real-time chat data and behavior characteristic data of the second user to the first terminal through the server.
  • Step 307 The first terminal applies the real-time chat data and behavior characteristic data of the second user to the avatars displayed by the first terminal.
  • For the real-time chat data, the first terminal may directly apply the real-time chat data of the second user to the second avatar displayed by the first terminal, so as to present the scene in which the second avatar is chatting in real time with the first avatar.
  • When the behavior characteristic data of the second user is facial expression data, the first terminal may apply the facial expression data to the second avatar displayed by the first terminal; that is, on the first terminal side, the facial expression data is applied to the corresponding facial position of the avatar model corresponding to the second user.
  • When the behavior characteristic data of the second user is independent limb motion data, the first terminal may apply the independent limb motion data to the second avatar displayed by the first terminal; that is, on the first terminal side, the independent limb motion data is applied to the corresponding limb position of the avatar model corresponding to the second user.
  • When the behavior characteristic data of the second user is interactive limb motion data, the first terminal may apply the interactive limb motion data to both the first avatar and the second avatar displayed by the first terminal; that is, on the first terminal side, the interactive limb motion data is applied to the corresponding limb positions of the avatar models corresponding to the first user and the second user.
  • In this embodiment, the first terminal acquires an interaction scene, renders the avatars that need to interact into the interaction scene for display, acquires the real-time chat data and behavior characteristic data of the first user (the user of the first terminal), applies them to the avatars displayed by the first terminal, and sends them to the second terminal through the server, so that the second terminal applies the real-time chat data and behavior characteristic data of the first user to the avatars displayed by the second terminal, thereby realizing real-time chat (such as real-time voice and text chat) and real-time behavior (such as real-time expressions and actions) interaction between the avatars.
  • A method for interaction between avatars is also provided in an embodiment of the present application, as shown in FIG. 4.
  • the method of this embodiment includes the following steps:
  • Step 401 When the first user corresponding to the first terminal performs a real-time voice call with the second user corresponding to the second terminal, the first terminal acquires an interaction scenario.
  • Step 402 The first terminal renders the first virtual image corresponding to the first user to the interactive scene for display.
  • Step 403 The first terminal acquires real-time chat data and behavior feature data of the first user.
  • In some embodiments, the behavior characteristic data of the first user includes one or more of facial expression data, independent limb motion data, and interactive limb motion data, where the independent limb motion data of the first user is limb motion data acting on the first avatar, and the interactive limb motion data of the first user is limb motion data acting on both the first avatar and the second avatar.
  • the facial expression data includes a facial expression type; the independent limb motion data includes an independent limb motion type; and the interactive limb motion data includes an interactive limb motion type.
  • In some embodiments, the acquiring, by the first terminal, of the behavior characteristic data of the first user includes at least one of the following:
  • the first terminal acquires the facial expression type of the first user through expression detection or expression selection;
  • the first terminal acquires the independent limb action type of the first user through independent action detection or independent action selection;
  • the first terminal acquires the interactive limb action type of the first user through interactive action selection.
  • Step 404 The first terminal applies behavior characteristic data of the first user to the first virtual image displayed by the first terminal.
  • Step 405 The first terminal sends the real-time chat data and behavior characteristic data of the first user to the second terminal through the server, so that the second terminal plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal.
  • The second terminal acquires the real-time chat data and behavior characteristic data of the second user, sends the real-time chat data of the second user to the first terminal for playing, and sends the behavior characteristic data of the second user to the first terminal so that the first terminal controls the second avatar displayed by the first terminal accordingly; the behavior characteristic data of the second user is also applied to the second avatar corresponding to the second user displayed by the second terminal, so as to achieve the interaction between the first avatar and the second avatar.
  • In some embodiments, the sending, by the first terminal, of the behavior characteristic data of the first user to the second terminal includes at least one of the following:
  • the first terminal sends an expression type request message to the server, the expression type request message carrying an identifier of the facial expression type of the first user, so that the server forwards the expression type request message to the second terminal, and the second terminal determines the corresponding facial expression type according to the identifier carried in the message and controls, according to that facial expression type, the facial expression of the first avatar displayed by the second terminal;
  • the first terminal sends an independent action type request message to the server, the independent action type request message carrying an identifier of the independent limb action type of the first user, so that the server forwards the independent action type request message to the second terminal, and the second terminal determines the corresponding independent limb action type according to the identifier carried in the message and controls, according to that independent limb action type, the independent limb action of the first avatar displayed by the second terminal;
  • the first terminal sends an interactive action type request message to the server, the interactive action type request message carrying an identifier of the interactive limb action type of the first user, so that the server forwards the interactive action type request message to the second terminal, and the second terminal determines the corresponding interactive limb action type according to the identifier carried in the message and controls, according to that interactive limb action type, the interactive limb action of the first avatar and the second avatar displayed by the second terminal.
  • In this embodiment, while the first user and the second user conduct a real-time voice call, the server only forwards the real-time chat data and behavior characteristic data between the first terminal and the second terminal, and the real-time chat data consists of voice data with a small data volume. The behavior characteristic data may include only a facial expression type, an independent limb action type, and an interactive limb action type, so the server only needs to forward this type information between the first terminal and the second terminal and notify the corresponding terminal to control the displayed avatars according to the corresponding type. This further reduces the amount of data that needs to be forwarded, improves processing efficiency, and reduces the processing pressure on the server.
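  • A possible shape for the compact, identifier-only request messages described above is sketched below in TypeScript; the field names, the identifier table, and the sendToServer helper are assumptions used for illustration.

```typescript
// Assumed shape of the identifier-only messages forwarded by the server.
type InteractionTypeRequest =
  | { kind: "expression"; from: string; to: string; expressionTypeId: number }
  | { kind: "independent-action"; from: string; to: string; actionTypeId: number }
  | { kind: "interactive-action"; from: string; to: string; actionTypeId: number };

declare function sendToServer(msg: InteractionTypeRequest): void; // assumed transport helper

// Terminal A forwards only the identifier after applying the expression locally.
function sendExpressionType(from: string, to: string, expressionTypeId: number): void {
  sendToServer({ kind: "expression", from, to, expressionTypeId });
}

// Terminal B maps the identifier back to a concrete expression and drives its own
// copy of the first avatar; no raw expression or motion data crosses the network.
const EXPRESSION_BY_ID: Record<number, string> = { 1: "smile", 2: "frown", 3: "open-mouth" };

function onExpressionTypeRequest(msg: Extract<InteractionTypeRequest, { kind: "expression" }>): string {
  return EXPRESSION_BY_ID[msg.expressionTypeId] ?? "neutral";
}
```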
  • the embodiment of the present application further provides a terminal.
  • the terminal in this embodiment includes: a first acquiring unit 501, a rendering unit 502, a second obtaining unit 503, a processing unit 504, and a sending unit 505, as follows:
  • the first obtaining unit 501 is configured to acquire an interaction scenario.
  • the user of each terminal can pre-create an avatar on its terminal.
  • For ease of description, the user of the first terminal may be referred to as the first user, the avatar created by the first user as the first avatar, the user of the second terminal as the second user, and the avatar created by the second user as the second avatar.
  • When the first avatar wants to interact with the second avatar, the first terminal may initiate an interaction request to the second terminal through the server, and after the server establishes a communication channel for the first terminal and the second terminal, the first terminal may acquire the interaction scene.
  • The rendering unit 502 is configured to render the avatars that need to interact into the interaction scene for display.
  • In some embodiments, the avatars that need to interact include the first avatar and the second avatar; that is, the rendering unit 502 can merge the first avatar and the second avatar into the interaction scene selected by the first user for display, thereby presenting a combined virtual-and-real effect.
  • the second obtaining unit 503 is configured to acquire real-time chat data and behavior characteristic data of the first user, where the first user is a user of the terminal.
  • the real-time chat data of the first user may include voice data, video data, text data, and the like input by the first user, which are not specifically limited herein.
  • Real-time chat data can be collected in real time through the microphone of the terminal, data acquisition interface, and the like.
  • the first user's behavioral feature data may include facial expression data, independent limb motion data, and interactive limb motion data.
  • For the process of acquiring the facial expression data, the independent limb motion data, and the interactive limb motion data, refer to the description in the foregoing method embodiments, which is not repeated here.
  • the processing unit 504 is configured to apply behavior characteristic data of the first user to the avatar displayed by the terminal.
  • the avatar displayed by the first terminal includes a first avatar and a second avatar.
  • For the specific operation of the processing unit 504, refer to the description of the foregoing method embodiments, which is not repeated here.
  • The sending unit 505 is configured to send, through the server, the real-time chat data and behavior characteristic data of the first user to the second terminal, so that the second terminal applies the behavior characteristic data of the first user to the avatar displayed by the second terminal, thereby realizing interaction between the avatars.
  • After the second terminal receives the interaction request initiated by the first terminal, the second terminal also acquires an interaction scene (in the same way as the first terminal) and likewise renders the avatars that need to interact into that scene for display. The avatars displayed by the second terminal include the first avatar and the second avatar.
  • In some embodiments, the terminal may further include a receiving unit 506, configured to receive, through the server, the real-time chat data and behavior characteristic data of the second user sent by the second terminal. The processing unit 504 is further configured to play the real-time chat data of the second user and to apply the behavior characteristic data of the second user to the avatar displayed by the terminal.
  • It should be noted that, when the terminal provided by the foregoing embodiment implements the interaction between avatars, the division into the above functional modules is merely used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the terminal provided by the foregoing embodiment and the embodiments of the method for interaction between avatars belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
  • In this embodiment, the terminal acquires an interaction scene, renders the avatars that need to interact into the interaction scene for display, acquires the real-time chat data and behavior characteristic data of the first user (the user of the terminal), applies the behavior characteristic data of the first user to the avatar displayed by the terminal, and finally sends the real-time chat data and behavior characteristic data of the first user to the other terminal through the server, so that the other terminal applies the behavior characteristic data of the first user to the avatar displayed by that terminal, thereby realizing real-time chat (such as real-time voice and text chat) and real-time behavior (such as real-time expressions and actions) interaction between the avatars.
  • the embodiment of the present application further provides a terminal, as shown in FIG. 6, which shows a schematic structural diagram of a terminal involved in the embodiment of the present application, specifically:
  • The terminal may include a radio frequency (RF) circuit 601, a memory 602 including one or more computer readable storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a wireless fidelity (WiFi) module 607, a processor 608 having one or more processing cores, a power supply 609, and the like. Those skilled in the art will understand that the terminal structure shown in FIG. 6 does not constitute a limitation on the terminal; the terminal may include more or fewer components than those illustrated, combine certain components, or use a different arrangement of components. Specifically:
  • the RF circuit 601 can be used for transmitting and receiving information or receiving and transmitting signals during a call.
  • the memory 602 can be used to store software programs and modules, and the processor 608 executes various functional applications and data processing by running software programs and modules stored in the memory 602.
  • the memory 602 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the terminal (such as audio data, phone book, etc.).
  • The memory 602 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 602 may further include a memory controller to provide the processor 608 and the input unit 603 with access to the memory 602.
  • the input unit 603 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • Display unit 604 can be used to display information entered by the user or information provided to the user, as well as various graphical user interfaces of the terminal, which can be composed of graphics, text, icons, video, and any combination thereof.
  • The terminal may also include at least one type of sensor 605, such as a light sensor, a motion sensor, and other sensors.
  • Audio circuitry 606 can provide an audio interface between the user and the terminal.
  • WiFi is a short-range wireless transmission technology that provides users with wireless broadband Internet access.
  • Although FIG. 6 shows the WiFi module 607, it can be understood that the WiFi module 607 is not a necessary component of the terminal and can be omitted as needed without changing the essence of the application.
  • The processor 608 is the control center of the terminal, and connects the various parts of the entire terminal through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 602 and calling the data stored in the memory 602, it executes the various functions of the terminal and processes data, thereby monitoring the terminal as a whole.
  • the processor 608 can include one or more processing cores; in some embodiments, the processor 608 can integrate an application processor and a modem processor, wherein the application processor primarily processes the operating system, the user Interfaces, applications, etc., the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 608.
  • The terminal also includes a power supply 609 (such as a battery) that supplies power to the various components.
  • the terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • Specifically, in this embodiment, the processor 608 in the terminal loads the executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and runs the application programs stored in the memory 602 to implement various functions: acquiring an interaction scene when the first user corresponding to the first terminal conducts a real-time voice call with the second user corresponding to the second terminal; rendering the first avatar corresponding to the first user into the interaction scene for display; acquiring the real-time chat data and behavior characteristic data of the first user; applying the behavior characteristic data of the first user to the first avatar displayed by the first terminal; and sending the real-time chat data and behavior characteristic data of the first user to the second terminal through the server, so that the second terminal plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal, wherein the second terminal acquires the real-time chat data and behavior characteristic data of the second user, sends the real-time chat data of the second user to the first terminal for playing, and applies the behavior characteristic data of the second user to the second avatar corresponding to the second user displayed by the second terminal, so as to realize the interaction between the avatars.
  • the embodiment of the present application further provides a system for interaction between avatars.
  • the system includes a terminal 710 and a server 720.
  • the terminal 710 can include a call module 711, a scene management module 712, and an interaction module 713, as follows:
  • the call module 711 is mainly used for implementing channel establishment, state management, device management, and audio data transmission and reception of a voice call.
  • the scene management module 712 is mainly used to implement display and rendering of different interactive scenes
  • the interaction module 713 is mainly used for realizing interactions such as expressions, independent actions, and interactions between virtual images based on interactive scenes.
  • Server 720 can include an interaction management module 721, a notification center module 722, a voice signaling module 723, a voice data module 724, a message center module 725, and a state center module 726.
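  • The client and server modules listed above might be modeled with interfaces roughly like the following; the method names are assumptions, intended only to show how responsibilities could be split.

```typescript
// Assumed interface sketch of the modules named above; method names are illustrative.
interface CallModule {             // channel establishment, state/device management, audio I/O
  establishChannel(peerId: string): Promise<void>;
  sendAudio(frame: ArrayBuffer): void;
  onAudio(handler: (frame: ArrayBuffer) => void): void;
}

interface SceneManagementModule {  // display and rendering of the different interaction scenes
  showScene(kind: "street-view" | "virtual" | "camera"): void;
  renderAvatars(avatarIds: string[]): void;
}

interface InteractionModule {      // expressions, independent actions, interactive actions
  sendExpression(typeId: number): void;
  sendIndependentAction(typeId: number): void;
  sendInteractiveAction(typeId: number): void;
}

interface ServerModules {          // server 720, mirrored as thin interfaces
  interactionManagement: { forward(msg: unknown, to: string): void };
  notificationCenter: { notify(terminalId: string, event: string): void };
  voiceSignaling: { checkOnline(terminalId: string): Promise<boolean> };
  voiceData: { relay(from: string, to: string, frame: ArrayBuffer): void };
  messageCenter: { push(to: string, msg: unknown): void };
  stateCenter: { isOnline(terminalId: string): boolean };
}
```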
  • In some embodiments, the terminal 710 may include a terminal A and a terminal B. The user of terminal A may be referred to as the first user, and the avatar created by the user of terminal A may be referred to as the first avatar; the user of terminal B may be referred to as the second user, and the avatar created by the user of terminal B may be referred to as the second avatar.
  • FIG. 8 mainly shows the signaling interaction when the avatars perform voice interaction, and FIG. 9 mainly shows the signaling interaction when the avatars perform behavior interaction; in practice, voice interaction and behavior interaction can be carried out simultaneously. Referring first to FIG. 8, the details are as follows:
  • Step 801: establishing a long connection.
  • Terminal A and terminal B each maintain a long connection based on the Transmission Control Protocol (TCP) to keep themselves online, and the state center module maintains the online status of each terminal.
  • Step 802: initiating an interaction request.
  • After terminal A initiates an interaction request towards B to the voice signaling module, the voice signaling module first checks B's online status; the call is valid only when B is online, otherwise the call fails.
  • Step 803: notifying of the interaction request.
  • After the voice signaling module confirms through the state center module that the conditions for initiating the interaction request are satisfied, it returns success to A's request and notifies the called party B through the notification center module.
  • Steps 804-805: establishing a data channel.
  • Terminals A and B start to establish a voice data channel based on the User Datagram Protocol (UDP). Once the channel is successfully established, each terminal starts its own audio device, begins collecting audio data, applies the audio data to its own user's avatar, and then sends the audio data to the voice data module.
  • Step 806: sending and receiving audio data.
  • The voice data module forwards the voice data to the other party: after receiving the forwarded data, terminal A applies the voice data to the second avatar displayed by terminal A, and terminal B applies the voice data to the first avatar displayed by terminal B, so as to present the effect of voice interaction between the avatars.
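  • The call flow of steps 801-806 can be condensed into the following sketch, with the TCP signaling connection and the UDP voice channel reduced to plain asynchronous calls; all names are assumptions.

```typescript
// Condensed sketch of steps 801-806; every name here is an assumption.
interface SignalingServer {
  isOnline(terminalId: string): boolean;                          // state center module
  notifyCallee(calleeId: string, callerId: string): void;         // notification center module
  relayAudio(from: string, to: string, frame: ArrayBuffer): void; // voice data module
}

async function startVoiceInteraction(
  server: SignalingServer,
  callerId: string,
  calleeId: string,
  openUdpChannel: () => Promise<void>,
  startMicrophone: (onFrame: (f: ArrayBuffer) => void) => void
): Promise<void> {
  // Steps 802-803: the request is only valid while the callee is online.
  if (!server.isOnline(calleeId)) throw new Error("callee offline, call fails");
  server.notifyCallee(calleeId, callerId);

  // Steps 804-805: open the UDP voice channel and start audio capture.
  await openUdpChannel();
  startMicrophone((frame) => {
    // The sender applies its audio to its own avatar first (e.g. lip movement),
    // then hands the frame to the voice data module for forwarding (step 806).
    server.relayAudio(callerId, calleeId, frame);
  });
}
```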
  • Referring next to FIG. 9: in step 911, terminal A may obtain the facial expression data of the first user through expression detection or expression selection, and apply the facial expression data of the first user to the first avatar displayed by terminal A; in step 912, terminal A sends the facial expression data of the first user to the interaction management module of the server.
  • In steps 913 and 914, the interaction management module and the message and notification center modules of the server send the facial expression data of the first user to terminal B; in step 915, terminal B applies the facial expression data of the first user to the first avatar displayed by terminal B, so as to present the effect of expression interaction between the avatars.
  • In step 921, terminal B may obtain the independent limb motion data of the second user through independent action detection or independent action selection, and apply the independent limb motion data of the second user to the second avatar displayed by terminal B; in step 922, terminal B sends the independent limb motion data of the second user to the interaction management module of the server.
  • In steps 923 and 924, the interaction management module and the message and notification center modules of the server send the independent limb motion data of the second user to terminal A; in step 925, terminal A applies the independent limb motion data of the second user to the second avatar displayed by terminal A, so as to present the effect of independent action interaction between the avatars.
  • In step 931, terminal A may obtain the interactive limb motion data of the first user through interactive action selection, and apply the interactive limb motion data of the first user to the first avatar and the second avatar displayed by terminal A; in step 932, terminal A sends the interactive limb motion data of the first user to the interaction management module of the server.
  • In steps 933 and 934, the interaction management module and the message and notification center modules of the server send the interactive limb motion data of the first user to terminal B; in step 935, terminal B applies the interactive limb motion data of the first user to the first avatar and the second avatar displayed by terminal B, so as to present the effect of interactive limb motion between the avatars.
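  • The symmetric behavior-interaction flow of steps 911-935 (apply locally, forward a compact message through the server, apply on the peer) is sketched below; forward and applyOn are assumed stand-ins for the server relay and the terminal-side rendering.

```typescript
// Sketch of the symmetric flow in steps 911-935; forward and applyOn are assumed stand-ins.
type BehaviorMessage = { from: "A" | "B"; category: "expression" | "independent" | "interactive"; typeId: number };

declare function forward(msg: BehaviorMessage, to: "A" | "B"): void;        // server-side relay
declare function applyOn(terminal: "A" | "B", msg: BehaviorMessage): void;  // drive local avatar models

function sendBehavior(sender: "A" | "B", msg: BehaviorMessage): void {
  applyOn(sender, msg);                        // steps 911 / 921 / 931: apply locally first
  forward(msg, sender === "A" ? "B" : "A");    // steps 912-914 / 922-924 / 932-934: relay to the peer
}

function onBehaviorMessage(receiver: "A" | "B", msg: BehaviorMessage): void {
  applyOn(receiver, msg);                      // steps 915 / 925 / 935: same effect on both screens
}
```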
  • It should be noted that, during the interaction, the two terminals can each acquire an interaction scene and render the avatars that need to interact into their respectively acquired scenes; the interaction scenes acquired by the two terminals may be the same or different.
  • each terminal can switch to display different interaction scenarios according to the selection of the respective users.
  • FIGS. 10a to 10c illustrate interaction interfaces provided by an embodiment of the present application. In the interaction interface of FIG. 10a, the interaction scene is a real-scene image; in the interaction interfaces of FIG. 10b and FIG. 10c, the interaction scene is a street view selected by the corresponding user. FIGS. 10a to 10c are merely effect displays of the interaction interface and do not constitute a limitation on the final display effect.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • For example, the division of the units is only a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a device, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Security & Cryptography (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose a method, terminal, and system for interaction between avatars. The method for interaction between avatars includes: a first terminal acquires an interaction scene; the first terminal renders the avatars that need to interact into the interaction scene for display; the first terminal acquires real-time chat data and behavior characteristic data of a first user, the first user being a user of the first terminal; the first terminal applies the behavior characteristic data of the first user to the avatar displayed by the first terminal; and the first terminal sends the real-time chat data and behavior characteristic data of the first user to a second terminal through a server, so that the second terminal applies the behavior characteristic data of the first user to the avatar displayed by the second terminal, thereby realizing interaction between the avatars.

Description

Method, terminal and system for interaction between avatars
This application claims priority to Chinese Patent Application No. 201611161850.5, filed with the Chinese Patent Office on December 15, 2016 and entitled "Method, terminal and system for interaction between avatars", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of communications technologies, and in particular to a method, terminal, and system for interaction between avatars.
Background of the Invention
At present, the vast majority of interaction solutions are based on real people, for example voice and text chat interactions between real people; there is a lack of solutions for interaction between avatars.
Summary of the Invention
In view of this, the embodiments of the present application provide a method, terminal, and system for interaction between avatars, which can realize interaction between avatars.
The method for interaction between avatars provided by the embodiments of the present application includes:
when a first user corresponding to a first terminal conducts a real-time voice call with a second user corresponding to a second terminal, the first terminal acquires an interaction scene;
the first terminal renders a first avatar corresponding to the first user into the interaction scene for display;
the first terminal acquires real-time chat data and behavior characteristic data of the first user, the first user being a user of the first terminal;
the first terminal applies the behavior characteristic data of the first user to the first avatar displayed by the first terminal;
the first terminal sends the real-time chat data and behavior characteristic data of the first user to the second terminal through a server, so that the second terminal plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal, wherein the second terminal acquires real-time chat data and behavior characteristic data of the second user, sends the real-time chat data of the second user to the first terminal for playing, and applies the behavior characteristic data of the second user to a second avatar corresponding to the second user displayed by the second terminal, so as to realize the interaction between the first avatar and the second avatar.
The terminal provided by the embodiments of the present application includes:
a processor;
a memory connected to the processor, the memory storing a machine readable instruction unit, the machine readable instruction unit including:
a first acquiring unit, configured to acquire an interaction scene when the first user corresponding to the first terminal conducts a real-time voice call with the second user corresponding to the second terminal;
a rendering unit, configured to render the first avatar corresponding to the first user into the interaction scene for display;
a second acquiring unit, configured to acquire real-time chat data and behavior characteristic data of the first user, the first user being a user of the terminal;
a processing unit, configured to apply the behavior characteristic data of the first user to the first avatar displayed by the terminal;
a sending unit, configured to send the real-time chat data and behavior characteristic data of the first user to the second terminal through a server, so that the second terminal plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal, wherein the second terminal acquires the real-time chat data and behavior characteristic data of the second user, sends the real-time chat data of the second user to the first terminal for playing, and applies the behavior characteristic data of the second user to the second avatar corresponding to the second user displayed by the second terminal, so as to realize the interaction between the first avatar and the second avatar.
The embodiments of the present application further provide a non-volatile computer readable storage medium, the storage medium storing machine readable instructions that can be executed by a processor to perform the method described above.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Obviously, the accompanying drawings described below are only some embodiments of the present application, and those skilled in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a schematic diagram of a scenario of the method for interaction between avatars provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of the method for interaction between avatars provided by an embodiment of the present application;
FIG. 3 is another schematic flowchart of the method for interaction between avatars provided by an embodiment of the present application;
FIG. 4 is another schematic flowchart of the method for interaction between avatars provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a terminal provided by an embodiment of the present application;
FIG. 6 is another schematic structural diagram of a terminal provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a system for interaction between avatars provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of voice interaction signaling provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of behavior interaction signaling provided by an embodiment of the present application;
FIGS. 10a to 10c are schematic diagrams of interaction interfaces between avatars in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The embodiments of the present application provide a method, terminal, and system for interaction between avatars, which can realize interaction between avatars. A specific implementation scenario of the method for interaction between avatars in the embodiments of the present application may be as shown in FIG. 1, and includes a terminal 101 and a server 102. There may be multiple terminals 101, and the terminal 101 may include a first terminal 101a and a second terminal 101b. Initially, the user of each terminal 101 can create an avatar on the corresponding terminal 101, and the first terminal 101a and the second terminal 101b each establish a connection with the server. When the avatar (first avatar) created by the user (first user) of the first terminal 101a wants to interact with the avatar (second avatar) created by the user (second user) of the second terminal 101b, the first terminal 101a may initiate an interaction request to the second terminal 101b through the server 102. After the server 102 establishes a communication channel for the first terminal 101a and the second terminal 101b, the first terminal 101a may acquire an interaction scene, render the avatars that need to interact (the first avatar and the second avatar) into the acquired interaction scene for display, and then acquire the real-time chat data and behavior characteristic data of the first user. The first terminal 101a applies the behavior characteristic data of the first user to the avatars displayed by the first terminal 101a, and then sends the real-time chat data and behavior characteristic data of the first user to the second terminal 101b through the server 102, so that the second terminal 101b plays the real-time chat data of the first user and applies the behavior characteristic data of the first user to the first avatar displayed by the second terminal 101b, thereby realizing real-time chat and real-time behavior interaction between the avatars.
The method for interaction between avatars provided by the present application is described below from the perspective of a terminal. As shown in FIG. 2, the method of this embodiment includes the following steps:
Step 201: A first terminal acquires an interaction scene.
In specific implementations, the user of each terminal can create an avatar on that terminal in advance. In some embodiments, the user can create an avatar as follows:
First, the terminal's face scanning system is used to scan the face to obtain facial feature data and a facial map; the facial feature data may include feature data of the mouth, nose, eyes, eyebrows, face, chin, and other parts. The acquired facial feature data and facial map are then merged into the face of a preset avatar model. Finally, a dress-up can be selected from the dress-up interface provided by the terminal, and the selected dress-up is merged into the corresponding part of the preset avatar model, thereby completing the creation of the avatar. The dress-ups provided in the dress-up interface include, but are not limited to, hairstyles, clothes, pants, shoes, and the like.
For ease of description, in this embodiment the user of the first terminal may be referred to as the first user, the avatar created by the first user as the first avatar, the user of the second terminal as the second user, and the avatar created by the second user as the second avatar. When the first avatar wants to interact with the second avatar, the first terminal may initiate an interaction request to the second terminal through the server, and after the server establishes a communication channel for the first terminal and the second terminal, the first terminal may acquire the interaction scene.
In some embodiments, the first terminal can acquire the interaction scene in the following ways:
First, the first terminal may send preset location information to the server to obtain a street view image of the preset location from the server, and use the street view image as the interaction scene. The preset location may be the location of the first avatar, which is also the location of the first terminal, and may be represented by latitude and longitude values, geographic coordinate values, and the like.
Second, the first terminal constructs a virtual scene image from preset elements in advance and stores it; when interaction is required, the virtual scene image constructed from the preset elements is read from storage and used as the interaction scene. The preset elements include, but are not limited to, three-dimensionally constructed streets, buildings, trees, rivers, and the like.
Third, the first terminal collects a real-scene image through its camera and uses the real-scene image as the interaction scene.
Further, the first terminal may also provide a scene selection interface, so that the first user can select any one of the above three interaction scenes, and the first terminal can switch between different scenes according to the first user's selection.
Step 202: The first terminal renders the avatars that need to interact into the interaction scene for display.
In some embodiments, the avatars that need to interact include the first avatar and the second avatar; that is, the first terminal may merge the first avatar and the second avatar into the interaction scene selected by the first user for display, thereby presenting a combined virtual-and-real effect.
Step 203: The first terminal acquires real-time chat data and behavior characteristic data of the first user, the first user being a user of the first terminal.
The real-time chat data of the first user may include voice data, video data, text data, and the like input by the first user, which is not specifically limited here. The real-time chat data can be collected in real time through the terminal's microphone, data collection interface, and so on.
The behavior characteristic data of the first user may include facial expression data, independent limb motion data, and interactive limb motion data. Facial expression data includes, for example, expression data for frowning, opening the mouth, smiling, and wrinkling the nose; independent limb motion data includes, for example, motion data for walking, running, waving, shaking the head, and nodding; and interactive limb motion data includes, for example, motion data for hugging, shaking hands, and kissing.
In some embodiments, facial expression data can be obtained in two ways. The first is real-time data acquisition: for example, the terminal can scan in real time to recognize the user's real face, extract the expression features of the real face, calculate the currently likely expression (such as frowning, opening the mouth, smiling, or wrinkling the nose) through an expression feature matching algorithm, and then obtain the expression data corresponding to that expression. The second is acquisition according to the user's selection: for example, the user can select an expression from a preset expression list, and the terminal obtains the expression data corresponding to the selected expression.
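As a minimal sketch of the "expression feature matching" idea mentioned above, the extracted feature vector can be compared against stored expression templates and the closest one selected; the template values and the Euclidean distance metric here are assumptions for illustration, written in TypeScript.

```typescript
// Assumed expression templates; a real system would derive these from training data.
const EXPRESSION_TEMPLATES: Record<string, number[]> = {
  smile: [0.9, 0.1, 0.2],
  frown: [0.1, 0.8, 0.3],
  "open-mouth": [0.2, 0.2, 0.9],
};

// Pick the template closest (Euclidean distance) to the extracted feature vector.
function matchExpression(features: number[]): string {
  let best = "neutral";
  let bestDist = Infinity;
  for (const [name, template] of Object.entries(EXPRESSION_TEMPLATES)) {
    const dist = Math.sqrt(template.reduce((s, v, i) => s + (v - (features[i] ?? 0)) ** 2, 0));
    if (dist < bestDist) { bestDist = dist; best = name; }
  }
  return best;
}

// Example: matchExpression([0.85, 0.15, 0.25]) returns "smile".
```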
在一些实施例中,独立肢体动作数据的获取方式也可以有两种,例如行走、奔跑等独立肢体动作数据可以通过实时数据采集获取,例如可以利用***提供的运动检测功能检测用户是否在行走或奔跑,从而获取对应的动作数据;再例如挥手、摇头、点头等独立肢体动作数据可以根据用户的选择获取,例如用户可以从预置的独立肢体动作列表中选择动作,终端获取用户选择的动作对应的动作数据。
在一些实施例中,交互肢体动作数据可以根据用户的选择获取,例如用户可以从预置的交互肢体动作列表中选择动作,终端获取用户选择的动作对应的动作数据。
步骤204、所述第一终端将所述第一用户的行为特征数据作用于所述第一终端显示的虚拟形象上;
所述第一终端显示的虚拟形象包括第一虚拟形象和第二虚拟形象。
针对实时聊天数据,所述第一终端可以直接将第一用户的实时聊天数据作用于所述第一终端显示的第一虚拟形象上,以呈现出所述第一虚拟形象正在与所述第二虚拟形象进行实时聊天的效果。
针对行为特征数据,需要视具体数据类型分别处理,如下:
当所述第一用户的行为特征数据为面部表情数据时,所述第一终端可以将所述面部表情数据作用于所述第一终端显示的所述第一虚拟形象上。即在第一终端侧,将所述面部表情数据作用于第一用户对应的虚 拟形象模型的面部对应位置,以呈现出所述第一虚拟形象正在与所述第二虚拟形象进行表情互动的效果。
当所述第一用户的行为特征数据为独立肢体动作数据时,所述第一终端可以将所述独立肢体动作数据作用于所述第一终端显示的所述第一虚拟形象上。即在第一终端侧,将所述独立肢体动作数据作用于第一用户对应的虚拟形象模型的肢体对应位置,以呈现出所述第一虚拟形象正在与所述第二虚拟形象进行独立肢体动作互动的效果。
当所述第一用户的行为特征数据为交互肢体动作数据时,所述第一终端可以将所述交互肢体动作数据作用于所述第一终端显示的所述第一虚拟形象和所述第二虚拟形象上。即在第一终端侧,将所述交互肢体动作数据作用于第一用户对应的虚拟形象模型的肢体对应位置,同时,将该交互肢体动作数据作用于第二用户对应的虚拟形象模型的肢体对应位置,以呈现出所述第一虚拟形象正在与所述第二虚拟形象进行交互肢体动作互动的效果。
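上述按数据类型分别处理的逻辑可以用如下 Python 草图示意,其中 set_expression、play_action 等虚拟形象模型接口为示例性假设,feature 的结构沿用前文获取行为特征数据草图中假设的字典形式:

```python
def apply_behavior(feature, first_avatar, second_avatar):
    """按行为特征数据的类别,将其作用于终端当前显示的虚拟形象模型上。
    first_avatar / second_avatar 为假设的虚拟形象模型对象。"""
    category, action_type = feature["category"], feature["type"]
    if category == "expression":
        # 面部表情数据只作用于第一用户对应的第一虚拟形象的面部
        first_avatar.set_expression(action_type)
    elif category == "independent":
        # 独立肢体动作数据只作用于第一虚拟形象的肢体
        first_avatar.play_action(action_type)
    else:
        # 交互肢体动作数据同时作用于第一虚拟形象和第二虚拟形象
        first_avatar.play_action(action_type)
        second_avatar.play_action(action_type)
```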
步骤205、所述第一终端通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端将所述第一用户的实时聊天数据及行为特征数据作用于所述第二终端显示的虚拟形象上,以实现虚拟形象之间的互动。
在第二终端接收到第一终端发起的互动请求之后,第二终端也会获取互动场景,获取方法同第一终端,此处不再赘述,第二终端也会将需要互动的虚拟形象渲染至所述互动场景中显示,所述第二终端显示的虚拟形象包括第一虚拟形象和第二虚拟形象。
针对实时聊天数据,所述第二终端可以直接将第一用户的实时聊天数据作用于所述第二终端显示的第一虚拟形象上,以呈现出所述第一虚拟形象正在与所述第二虚拟形象进行实时聊天的交互场景。
针对行为特征数据,需要视具体数据类型分别处理,如下:
当所述第一用户的行为特征数据为面部表情数据时,所述第二终端可以将所述面部表情数据作用于所述第二终端显示的所述第一虚拟形象上。即在第二终端侧,将所述面部表情数据作用于第一用户对应的虚拟形象模型的面部对应位置。
当所述第一用户的行为特征数据为独立肢体动作数据时,所述第二终端可以将所述独立肢体动作数据作用于所述第二终端显示的所述第一虚拟形象上。即在第二终端侧,将所述独立肢体动作数据作用于第一用户对应的虚拟形象模型的肢体对应位置。
当所述第一用户的行为特征数据为交互肢体动作数据时,所述第二终端可以将所述交互肢体动作数据作用于所述第二终端显示的所述第一虚拟形象和所述第二虚拟形象上。即在第二终端侧,将所述交互肢体动作数据作用于第一用户对应的虚拟形象模型的肢体对应位置,同时,将该交互肢体动作数据作用于第二用户对应的虚拟形象模型的肢体对应位置。
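接收端的处理与发送端的分类逻辑一致,下面的草图示意第二终端收到数据后的处理流程,其中 audio_player.play 为假设的音频播放接口,apply_fn 可传入前文草图中的 apply_behavior 函数:

```python
def on_remote_data(audio_player, chat_data, feature, first_avatar, second_avatar, apply_fn):
    """接收端(第二终端)处理示意:播放对方的实时聊天数据,
    并复用与发送端相同的分类逻辑,把行为特征数据作用到本端显示的虚拟形象上。"""
    if chat_data is not None:
        audio_player.play(chat_data)  # 播放第一用户的实时聊天(语音)数据
    if feature is not None:
        apply_fn(feature, first_avatar, second_avatar)  # 按面部表情/独立动作/交互动作分别处理
```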
本实施例中,第一终端可以获取互动场景,将需要互动的虚拟形象渲染至所述互动场景中显示,然后获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述第一终端的用户,然后将所述第一用户的实时聊天数据及行为特征数据作用于所述第一终端显示的虚拟形象上,最后通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端将所述第一用户的实时聊天数据及行为特征数据作用于所述第二终端显示的虚拟形象上,从而实现了虚拟形象之间实时聊天(例如实时语音、文字聊天)及实时行为(例如实时表情、动作)的互动。
针对图2所示实施例所描述的方法,本实施例将举例作进一步详细说明。如图3所示,本实施例的方法包括:
步骤301、第一终端获取互动场景;
具体实现中,每个终端的用户都可以预先在其终端上建立虚拟形象。为便于描述,本实施例中,可以将第一终端的用户称为第一用户,将第一用户建立的虚拟形象称为第一虚拟形象,将第二终端的用户称为第二用户,将第二用户建立的虚拟形象称为第二虚拟形象。当第一虚拟形象想要与第二虚拟形象进行互动时,第一终端可以通过服务器向第二终端发起互动请求,在服务器为第一终端及第二终端建立通信信道之后,第一终端可以获取互动场景。
步骤301中获取互动场景的过程与步骤201类似,在此不再赘述。
步骤302、所述第一终端将需要互动的虚拟形象渲染至所述互动场景中显示;
在一些实施例中,需要互动的虚拟形象包括第一虚拟形象和第二虚拟形象,即第一终端可以将第一虚拟形象和第二虚拟形象融合至第一用户所选的互动场景中显示,从而呈现出虚实结合的效果。
步骤303、所述第一终端获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述第一终端的用户;
第一用户的实时聊天数据可以包括第一用户输入的语音数据、视频数据、文字数据等,此处不做具体限定。实时聊天数据可以通过终端的麦克风、数据采集接口等实时采集。
第一用户的行为特征数据可以包括面部表情数据、独立肢体动作数据以及交互肢体动作数据。其中,面部表情数据例如皱眉、张嘴、微笑、皱鼻等表情数据,独立肢体动作数据例如行走、奔跑、挥手、摇头、点头等动作数据,交互肢体动作数据例如拥抱、握手、亲吻等动作数据。
步骤304、所述第一终端将所述第一用户的实时聊天数据及行为特征数据作用于所述第一终端显示的虚拟形象上;
步骤305、所述第一终端通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端将所述第一用户的实时聊天数据及行为特征数据作用于所述第二终端显示的虚拟形象上;
步骤304、305的具体处理过程,可对应参阅步骤204、205的具体处理过程,此处不再赘述。
步骤306、所述第一终端通过所述服务器接收所述第二终端发送的所述第二用户的实时聊天数据及行为特征数据;
在互动时,第二终端也可以获取第二用户的实时聊天数据及行为特征数据,在获取之后,第二终端可以先将第二用户的实时聊天数据及行为特征数据作用于第二终端显示的虚拟形象上,具体如下:
针对实时聊天数据,所述第二终端可以直接将第二用户的实时聊天数据作用于所述第二终端显示的第二虚拟形象上,以呈现出所述第二虚拟形象正在与所述第一虚拟形象进行实时聊天的交互场景。
针对行为特征数据,需要视具体数据类型分别处理,如下:
当所述第二用户的行为特征数据为面部表情数据时,所述第二终端可以将所述面部表情数据作用于所述第二终端显示的所述第二虚拟形象上。即在第二终端侧,将所述面部表情数据作用于第二用户对应的虚拟形象模型的面部对应位置。
当所述第二用户的行为特征数据为独立肢体动作数据时,所述第二终端可以将所述独立肢体动作数据作用于所述第二终端显示的所述第二虚拟形象上。即在第二终端侧,将所述独立肢体动作数据作用于第二用户对应的虚拟形象模型的肢体对应位置。
当所述第二用户的行为特征数据为交互肢体动作数据时,所述第二终端可以将所述交互肢体动作数据作用于所述第二终端显示的所述第一虚拟形象和所述第二虚拟形象上。即在第二终端侧,将所述交互肢体动作数据作用于第一用户对应的虚拟形象模型的肢体对应位置,同时,将该交互肢体动作数据作用于第二用户对应的虚拟形象模型的肢体对应位置。
之后,第二终端通过服务器将第二用户的实时聊天数据及行为特征数据发送给第一终端。
步骤307、所述第一终端将所述第二用户的实时聊天数据及行为特征数据作用于所述第一终端显示的虚拟形象上。
在一些实施例中,针对实时聊天数据,所述第一终端可以直接将第二用户的实时聊天数据作用于所述第一终端显示的第二虚拟形象上,以呈现出所述第二虚拟形象正在与所述第一虚拟形象进行实时聊天的交互场景。
针对行为特征数据,需要视具体数据类型分别处理,如下:
当所述第二用户的行为特征数据为面部表情数据时,所述第一终端可以将所述面部表情数据作用于所述第一终端显示的所述第二虚拟形象上。即在第一终端侧,将所述面部表情数据作用于第二用户对应的虚拟形象模型的面部对应位置。
当所述第二用户的行为特征数据为独立肢体动作数据时,所述第一终端可以将所述独立肢体动作数据作用于所述第一终端显示的所述第二虚拟形象上。即在第一终端侧,将所述独立肢体动作数据作用于第二用户对应的虚拟形象模型的肢体对应位置。
当所述第二用户的行为特征数据为交互肢体动作数据时,所述第一终端可以将所述交互肢体动作数据作用于所述第一终端显示的所述第一虚拟形象和所述第二虚拟形象上。即在第一终端侧,将所述交互肢体动作数据作用于第一用户对应的虚拟形象模型的肢体对应位置,同时,将该交互肢体动作数据作用于第二用户对应的虚拟形象模型的肢体对应位置。
本实施例中,第一终端可以获取互动场景,将需要互动的虚拟形象渲染至所述互动场景中显示,然后获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述第一终端的用户,然后将所述第一用户的实时聊天数据及行为特征数据作用于所述第一终端显示的虚拟形象上,最后通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端将所述第一用户的实时聊天数据及行为特征数据作用于所述第二终端显示的虚拟形象上,从而实现了虚拟形象之间实时聊天(例如实时语音、文字聊天)及实时行为(例如实时表情、动作)的互动。
本申请实施例还提供的一种虚拟形象互动方法,如图4所示。本实施例的方法包括以下步骤:
步骤401,当第一终端对应的第一用户与第二终端对应的第二用户进行实时语音通话时,第一终端获取互动场景。
步骤402,所述第一终端将所述第一用户对应的第一虚拟形象渲染至所述互动场景中显示。
步骤403,所述第一终端获取第一用户的实时聊天数据及行为特征数据。
在一些实施例中,所述第一用户的行为特征数据包括:面部表情数据、独立肢体动作数据、和交互动作数据中的一个或者多个;其中,所述第一用户的独立肢体动作数据为作用于所述第一虚拟形象上的肢体动作数据;所述第一用户的交互肢体动作数据为作用于所述第一虚拟形象和第二虚拟形象上的肢体动作数据。
在一些实施例中,所述面部表情数据包括面部表情类型;所述独立肢体动作数据包括独立肢体动作类型;所述交互肢体动作数据包括交互肢体动作类型。
在一些实施例中,所述第一终端获取所述第一用户的行为特征数据包括以下至少一个:
所述第一终端通过表情检测或表情选择,获取所述第一用户的面部表情类型;
所述第一终端通过独立动作检测或独立动作选择,获取所述第一用户的独立肢体动作类型;
所述第一终端通过交互动作选择,获取所述第一用户的交互肢体动作类型。
步骤404,所述第一终端将第一用户的行为特征数据作用于所述第一终端显示的第一虚拟形象上。
步骤405,所述第一终端通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,使得第二终端播放所述第一用户的实时聊天数据,并将第一用户的行为特征数据作用于第二终端显示的第一虚拟形象上。
在一些实施例中,第二终端获取所述第二用户的实时聊天数据及行为特征数据,将所述第二用户的实时聊天数据发送给第一终端进行播放,将所述第二用户的行为特征数据发送给第一终端,用于第一终端控制其显示的第二虚拟形象,并将所述第二用户的行为特征数据作用于所述第二终端显示的与第二用户对应的第二虚拟形象上,以实现第一虚拟形象和第二虚拟形象之间的互动。
在一些实施例中,所述第一终端将所述第一用户的行为特征数据发送给第二终端包括以下至少一个:
所述第一终端向所述服务器发送表情类型请求消息,所述表情类型请求消息中携带所述第一用户的面部表情类型的标识;其中,所述服务器将所述表情类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的面部表情类型的标识,确定对应的面部表情类型,并根据所述面部表情类型,控制所述第二终端显示的第一虚拟形象的面部表情;
所述第一终端向所述服务器发送独立动作类型请求消息,所述独立动作类型请求消息中携带所述第一用户的独立肢体动作类型的标识;其中,所述服务器将所述独立动作类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的独立肢体动作类型的标识,确定对应的独立肢体动作类型,并根据所述独立肢体动作类型,控制所述第二终端显示的第一虚拟形象的独立肢体动作;
所述第一终端向所述服务器发送交互动作类型请求消息,所述交互动作类型请求消息中携带所述第一用户的交互肢体动作类型的标识;其中,所述服务器将所述交互动作类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的交互肢体动作类型的标识,确定对应的交互肢体动作类型,并根据所述交互肢体动作类型,控制所述第二终端显示的第一虚拟形象和第二虚拟形象的交互肢体动作。
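下面用一段 Python 草图示意上述三类类型请求消息的构造与接收端处理,其中消息字段名、JSON 编码方式以及"标识 -> 类型"的映射表 type_table 均为本示例的假设,并非对实际消息格式的限定:

```python
import json

# 假设的消息种类常量,用于区分三类请求消息
MSG_EXPRESSION = "expression_type_request"
MSG_INDEPENDENT = "independent_action_type_request"
MSG_INTERACTIVE = "interactive_action_type_request"

def build_type_request(msg_kind, type_id, from_user, to_user):
    """第一终端构造携带类型标识的请求消息;服务器只需原样转发,无需理解消息内容。"""
    return json.dumps({
        "msg": msg_kind,     # 消息种类:表情 / 独立动作 / 交互动作
        "type_id": type_id,  # 面部表情类型、独立肢体动作类型或交互肢体动作类型的标识
        "from": from_user,
        "to": to_user,
    })

def handle_type_request(raw, avatar_first, avatar_second, type_table):
    """第二终端根据消息中携带的标识查表得到具体类型,并控制其显示的虚拟形象。"""
    msg = json.loads(raw)
    action_type = type_table[msg["type_id"]]
    if msg["msg"] == MSG_EXPRESSION:
        avatar_first.set_expression(action_type)   # 控制第一虚拟形象的面部表情
    elif msg["msg"] == MSG_INDEPENDENT:
        avatar_first.play_action(action_type)      # 控制第一虚拟形象的独立肢体动作
    else:
        # 交互肢体动作同时控制第一虚拟形象和第二虚拟形象
        avatar_first.play_action(action_type)
        avatar_second.play_action(action_type)
```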
在本申请实施例中,服务器在第一终端和第二终端之间进行转发的仅包含实时聊天数据和行为特征数据,其中,由于第一用户和第二用户进行的是实时语音通话,所以所述的实时聊天数据包含的是语音数据,数据量较小。而且,在本申请实施例中,行为特征数据可以包括:面部表情类型,独立肢体动作类型,交互肢体动作类型,服务器只需要在第一终端和第二终端之间转发上述类型信息,即可通知对应的终端根据相应的类型对显示的虚拟形象进行控制,进一步减少了需要转发的数据量,提高了处理效率,降低了服务器的处理压力。
本申请实施例还提供一种终端,如图5所示,本实施例的终端包括:第一获取单元501、渲染单元502、第二获取单元503、处理单元504及发送单元505,如下:
第一获取单元501,用于获取互动场景。
在一些实施例中,每个终端的用户都可以预先在其终端上建立虚拟形象。
为便于描述,本实施例中,可以将第一终端的用户称为第一用户,将第一用户建立的虚拟形象称为第一虚拟形象,将第二终端的用户称为第二用户,将第二用户建立的虚拟形象称为第二虚拟形象。当第一虚拟形象想要与第二虚拟形象进行互动时,第一终端可以通过服务器向第二终端发起互动请求,在服务器为第一终端及第二终端建立通信信道之后,第一终端可以获取互动场景。
渲染单元502,用于将需要互动的虚拟形象渲染至所述互动场景中显示。
在一些实施例中,需要互动的虚拟形象包括第一虚拟形象和第二虚拟形象,即渲染单元502可以将第一虚拟形象和第二虚拟形象融合至第一用户所选的互动场景中显示,从而呈现出虚实结合的效果。
第二获取单元503,用于获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述终端的用户。
第一用户的实时聊天数据可以包括第一用户输入的语音数据、视频数据、文字数据等,此处不做具体限定。实时聊天数据可以通过终端的麦克风、数据采集接口等实时采集。
第一用户的行为特征数据可以包括面部表情数据、独立肢体动作数据以及交互肢体动作数据。
面部表情数据、独立肢体动作数据和交互肢体动作数据的获取过程可以参见前面方法实施例的描述,在此不再赘述。
处理单元504,用于将所述第一用户的行为特征数据作用于所述终端显示的虚拟形象上。
所述第一终端显示的虚拟形象包括第一虚拟形象和第二虚拟形象。
针对行为特征数据,处理单元504的具体操作可以参见前面方法实施例的说明,在此不再赘述。
发送单元505,用于通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端将所述第一用户的行为特征数据作用于所述第二终端显示的虚拟形象上,以实现虚拟形象之间的互动。
在第二终端接收到第一终端发起的互动请求之后,第二终端也会获取互动场景,获取方法同第一终端,此处不再赘述,第二终端也会将需要互动的虚拟形象渲染至所述互动场景中显示,所述第二终端显示的虚拟形象包括第一虚拟形象和第二虚拟形象。
进一步地,终端还可以包括接收单元506,所述接收单元506用于,通过所述服务器接收所述第二终端发送的所述第二用户的实时聊天数据及行为特征数据,所述处理单元504还用于,播放所述第二用户的实时聊天数据,并将所述第二用户的行为特征数据作用于所述终端显示的与第二用户对应的第二虚拟形象上。
需要说明的是,上述实施例提供的终端在实现虚拟形象之间互动时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的终端与虚拟形象之间互动的方法属于同一构思,其具体实现过程详见方法实施例,此处不再赘述。
本实施例中,终端可以获取互动场景,将需要互动的虚拟形象渲染至所述互动场景中显示,然后获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述终端的用户,然后将所述第一用户的行为特征数据作用于所述终端显示的虚拟形象上,最后通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给其他终端,以使得所述其他终端将所述第一用户的行为特征数据作用于所述其他终端显示的虚拟形象上,从而实现了虚拟形象之间实时聊天(例如实时语音、文字聊天)及实时行为(例如实时表情、动作)的互动。
本申请实施例还提供了一种终端,如图6所示,其示出了本申请实施例所涉及的终端的结构示意图,具体来讲:
该终端可以包括射频(RF,Radio Frequency)电路601、包括有一个或一个以上计算机可读存储介质的存储器602、输入单元603、显示单元604、传感器605、音频电路606、无线保真(WiFi,Wireless Fidelity)模块607、包括有一个或者一个以上处理核心的处理器608、以及电源609等部件。本领域技术人员可以理解,图6中示出的终端结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
RF电路601可用于收发信息或通话过程中,信号的接收和发送。
存储器602可用于存储软件程序以及模块,处理器608通过运行存储在存储器602的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器602可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据终端的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器602可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储 器件、闪存器件、或其他易失性固态存储器件。相应地,存储器602还可以包括存储器控制器,以提供处理器608和输入单元603对存储器602的访问。
输入单元603可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
显示单元604可用于显示由用户输入的信息或提供给用户的信息以及终端的各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。
终端还可包括至少一种传感器605,比如光传感器、运动传感器以及其他传感器。
音频电路606可提供用户与终端之间的音频接口。
WiFi属于短距离无线传输技术,为用户提供了无线的宽带互联网访问。虽然图6示出了WiFi模块607,但是可以理解的是,其并不属于终端的必须构成,完全可以根据需要在不改变本申请的本质的范围内而省略。
处理器608是终端的控制中心,利用各种接口和线路连接整个终端的各个部分,通过运行或执行存储在存储器602内的软件程序和/或模块,以及调用存储在存储器602内的数据,执行终端的各种功能和处理数据,从而对终端进行整体监控。在一些实施例中,处理器608可包括一个或多个处理核心;在一些实施例中,处理器608可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作***、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器608中。
终端还包括给各个部件供电的电源609(比如电池)。
尽管未示出,终端还可以包括摄像头、蓝牙模块等,在此不再赘述。 具体在本实施例中,终端中的处理器608会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器602中,并由处理器608来运行存储在存储器602中的应用程序,从而实现各种功能:
当第一终端对应的第一用户与第二终端对应的第二用户进行实时语音通话时,获取互动场景;
将所述第一用户对应的第一虚拟形象渲染至所述互动场景中显示;
获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述终端的用户;
将所述第一用户的行为特征数据作用于所述终端显示的虚拟形象上;
通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端播放所述第一用户的实时聊天数据,并将所述第一用户的行为特征数据作用于所述第二终端显示的第一虚拟形象上,其中,所述第二终端获取所述第二用户的实时聊天数据及行为特征数据,将所述第二用户的实时聊天数据发送给第一终端进行播放,并将所述第二用户的行为特征数据作用于所述第二终端显示的与第二用户对应的第二虚拟形象上,以实现虚拟形象之间的互动。
相应的,本申请实施例还提供了一种虚拟形象之间互动的***,如图7所示,***中包括终端710与服务器720。终端710可以包括通话模块711、场景管理模块712和互动模块713,如下:
通话模块711,主要用于实现语音通话的通道建立、状态管理、设备管理、音频数据收发等;
场景管理模块712,主要用于实现不同互动场景的显示和渲染;
互动模块713,主要用于基于互动场景,实现虚拟形象之间的表情、独立动作、交互动作等互动。
服务器720可以包括互动管理模块721、通知中心模块722、语音信令模块723、语音数据模块724、消息中心模块725和状态中心模块726。
在一个实施例中,终端710可以包括终端A及终端B,终端A的用户可以称为第一用户,终端A的用户建立的虚拟形象可以称为第一虚拟形象,终端B的用户可以称为第二用户,终端B的用户建立的虚拟形象可以称为第二虚拟形象。当第一、二虚拟形象之间要进行互动时,终端与服务器的各个模块之间的信令交互可如图8、图9所示,图8主要示出了虚拟形象之间进行语音互动时的信令交互,图9主要示出了虚拟形象之间进行行为交互时的信令交互,实际中,语音交互和行为交互可以同时进行,先参阅图8,具体如下:
步骤801,长连接建立;
终端A与终端B都会与服务器维持一个传输控制协议(Transmission Control Protocol,TCP)的长连接,从而保证自身处于在线状态,状态中心模块会维护每个终端的在线状态。
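下面给出一个维持 TCP 长连接并周期性上报心跳的 Python 草图,以便状态中心模块据此维护终端在线状态,其中的心跳报文格式与发送周期均为示例性假设:

```python
import socket
import threading
import time

def keep_alive(host, port, user_id, interval=30):
    """与服务器建立 TCP 长连接,并由后台线程周期性发送心跳报文。
    "PING <user_id>" 的报文格式仅为本示例假设,实际协议以具体实现为准。"""
    sock = socket.create_connection((host, port))

    def heartbeat():
        while True:
            sock.sendall(f"PING {user_id}\n".encode("utf-8"))  # 周期性心跳,表明终端在线
            time.sleep(interval)

    threading.Thread(target=heartbeat, daemon=True).start()
    return sock
```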
步骤802,互动请求的发起;
终端A向语音信令模块发起与B的互动请求后,语音信令模块首先会检查B的在线状态,只有确认B在线时,才会认为这是一次有效的呼叫;反之,会返回呼叫失败。
步骤803,互动请求的通知;
语音信令模块通过状态中心模块检查确认满足发起互动请求的要求后,会向A返回请求成功,并由通知中心模块通知被呼叫方B。
步骤804-805,数据通道的建立;
终端A和B开始进行基于用户数据报协议(User Datagram Protocol,UDP)的语音数据通道的建立,一旦建立成功就会启动各自的音频设备,开始采集音频数据,将音频数据作用于自身用户建立的虚拟形象之后, 发送给语音数据模块。
步骤806,音频数据的收发;
语音数据模块收到A和B的语音数据后,会转发给对方,终端A接收到终端B发送的语音数据之后,会将该语音数据作用于终端A显示的第二虚拟形象上,终端B接收到终端A发送的语音数据之后,会将该语音数据作用于终端B显示的第一虚拟形象上,以呈现出虚拟形象之间进行语音互动的效果。
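语音数据模块的转发逻辑可以用如下基于 UDP 的 Python 草图示意,其中以"首次收到数据报即登记对端地址"的方式简化了语音数据通道的建立过程,仅用于说明收发与转发思路:

```python
import socket

def run_voice_relay(bind_addr=("0.0.0.0", 9000)):
    """语音数据模块的极简转发草图:收到一方终端的语音数据报后,转发给已登记的另一方终端。"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    peers = []  # 已登记的终端地址,最多两个(终端A与终端B)

    while True:
        data, addr = sock.recvfrom(4096)
        if addr not in peers and len(peers) < 2:
            peers.append(addr)      # 首次收到数据报时登记该终端地址
        for peer in peers:
            if peer != addr:        # 将语音数据转发给对方终端
                sock.sendto(data, peer)
```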
接下来可参阅图9,具体如下:
针对面部表情的互动;
步骤911,终端A可以通过表情检测或表情选择,获取第一用户的面部表情数据,将第一用户的面部表情数据作用于终端A显示的第一虚拟形象上,步骤912,终端A将第一用户的面部表情数据发送给服务器的互动管理模块。然后在步骤913和914,服务器的互动管理模块、消息和通知中心模块将第一用户的面部表情数据发送给终端B,步骤915,终端B将第一用户的面部表情数据作用于终端B显示的第一虚拟形象上,以呈现出虚拟形象之间进行表情互动的效果。
针对独立肢体动作互动;
步骤921,终端B可以通过独立动作检测或独立动作选择,获取第二用户的独立肢体动作数据,将第二用户的独立肢体动作数据作用于终端B显示的第二虚拟形象上,然后步骤922,终端B将第二用户的独立肢体动作数据发送给服务器的互动管理模块。步骤923和924,服务器的互动管理模块、消息和通知中心模块将第二用户的独立肢体动作数据发送给终端A,步骤925,终端A将第二用户的独立肢体动作数据作用于终端A显示的第二虚拟形象上,以呈现出虚拟形象之间进行独立动作互动的效果。
针对交互肢体动作互动;
步骤931,终端A可以通过交互动作选择,获取第一用户的交互肢体动作数据,将第一用户的交互肢体动作数据作用于终端A显示的第一、第二虚拟形象上,然后步骤932,终端A将第一用户的交互肢体动作数据发送给服务器的互动管理模块。步骤933和934,服务器的互动管理模块、消息和通知中心模块将第一用户的交互肢体动作数据发送给终端B,步骤935,终端B将第一用户的交互肢体动作数据作用于终端B显示的第一、第二虚拟形象上,以呈现出虚拟形象之间进行交互动作互动的效果。
在终端A向终端B发起互动请求之后,两个终端就可以各自获取互动场景,将需要互动的虚拟形象渲染至各自获取的互动场景中显示,各个终端获取的互动场景可以相同,也可以不同,在互动的过程中,各终端可以根据各自用户的选择切换显示不同的交互场景。
图10a至10c就示出了本申请实施例提供的互动界面,其中,图10a的互动界面1001中,互动场景为实景,图10b与图10c的互动界面中,互动场景均为对应用户选择的街景。需要说明的是,图10a至10c仅为互动界面的一个效果展示图,实际中,并不构成对最终展示效果的限定。
在本申请所提供的几个实施例中,应该理解到,所揭露的***,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。所述作为分离部件说明的单元可以是或者也可以不是物理上分开 的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,装置,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (23)

  1. 一种虚拟形象之间互动的方法,包括:
    当第一终端对应的第一用户与第二终端对应的第二用户进行实时语音通话时,第一终端获取互动场景;
    所述第一终端将所述第一用户对应的第一虚拟形象渲染至所述互动场景中显示;
    所述第一终端获取第一用户的实时聊天数据及行为特征数据;
    所述第一终端将所述第一用户的行为特征数据作用于所述第一终端显示的第一虚拟形象上;
    所述第一终端通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端播放所述第一用户的实时聊天数据,并将所述第一用户的行为特征数据作用于所述第二终端显示的第一虚拟形象上,其中,所述第二终端获取所述第二用户的实时聊天数据及行为特征数据,将所述第二用户的实时聊天数据发送给第一终端进行播放,并将所述第二用户的行为特征数据作用于所述第二终端显示的与第二用户对应的第二虚拟形象上,以实现所述第一虚拟形象和所述第二虚拟形象之间的互动。
  2. 根据权利要求1所述的方法,所述第一终端获取互动场景包括:
    所述第一终端从所述服务器获取预设位置的街景图像,将所述街景图像作为所述互动场景。
  3. 根据权利要求1所述的方法,所述第一终端获取互动场景包括:
    所述第一终端从存储中获取采用预设元素构建的虚拟场景图像,将所述虚拟场景图像作为所述互动场景。
  4. 根据权利要求1所述的方法,所述第一终端获取互动场景包括:
    所述第一终端通过摄像头采集实景图像,将所述实景图像作为所述互动场景。
  5. 根据权利要求1所述的方法,所述第一用户的行为特征数据包括:面部表情数据、独立肢体动作数据、和交互动作数据中的一个或者多个;其中,所述第一用户的独立肢体动作数据为作用于所述第一虚拟形象上的肢体动作数据;所述第一用户的交互肢体动作数据为作用于所述第一虚拟形象和第二虚拟形象上的肢体动作数据;
    所述第二用户的行为特征数据包括:面部表情数据、独立肢体动作数据、和交互动作数据中的一个或者多个;其中,所述第二用户的独立肢体动作数据为作用于所述第二虚拟形象上的肢体动作数据;所述第二用户的交互肢体动作数据为作用于所述第一虚拟形象和第二虚拟形象上的肢体动作数据。
  6. 根据权利要求5所述的方法,其中所述面部表情数据包括面部表情类型;所述独立肢体动作数据包括独立肢体动作类型;所述交互肢体动作数据包括交互肢体动作类型。
  7. 根据权利要求6所述的方法,其中,所述第一终端将所述第一用户的行为特征数据发送给第二终端包括以下至少一个:
    所述第一终端向所述服务器发送表情类型请求消息,所述表情类型请求消息中携带所述第一用户的面部表情类型的标识;其中,所述服务器将所述表情类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的面部表情类型的标识,确定对应的面部表情类型,并根据所述面部表情类型,控制所述第二终端显示的第一虚拟形象的面部表情;
    所述第一终端向所述服务器发送独立动作类型请求消息,所述独立动作类型请求消息中携带所述第一用户的独立肢体动作类型的标识;其中,所述服务器将所述独立动作类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的独立肢体动作类型的标识,确定对应的独立肢体动作类型,并根据所述独立肢体动作类型,控制所述第二终端显示的第一虚拟形象的独立肢体动作;
    所述第一终端向所述服务器发送交互动作类型请求消息,所述交互动作类型请求消息中携带所述第一用户的交互肢体动作类型的标识;其中,所述服务器将所述交互动作类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的交互肢体动作类型的标识,确定对应的交互肢体动作类型,并根据所述交互肢体动作类型,控制所述第二终端显示的第一虚拟形象和第二虚拟形象的交互肢体动作。
  8. 根据权利要求6所述的方法,所述第一终端获取所述第一用户的行为特征数据包括以下至少一个:
    所述第一终端通过表情检测或表情选择,获取所述第一用户的面部表情类型;
    所述第一终端通过独立动作检测或独立动作选择,获取所述第一用户的独立肢体动作类型;
    所述第一终端通过交互动作选择,获取所述第一用户的交互肢体动作类型。
  9. 根据权利要求8所述的方法,所述第一终端通过表情检测获取所述第一用户的面部表情类型包括:
    所述第一终端实时扫描并识别所述第一用户的真实人脸,通过表情匹配算法,计算所述第一用户的面部表情;并根据所述计算出的面部表情,确定与所述计算出的面部表情对应的面部表情类型;
    所述第一终端通过表情选择获取所述第一用户的面部表情类型包括:
    所述第一终端根据所述第一用户从预先设置的表情选择列表中选择的表情,确定与所述选择的表情对应的面部表情类型。
  10. 根据权利要求8所述的方法,所述第一终端通过独立动作检测获取所述第一用户的独立肢体动作类型包括:
    所述第一终端通过运动检测装置检测所述第一终端的运动状态,并根据所述检测得到的运动状态确定所述第一用户的独立肢体动作类型;
    所述第一终端通过独立动作选择获取所述第一用户的独立肢体动作类型包括:
    所述第一终端根据所述第一用户从动作选择列表中选择的动作,确定所述第一用户的独立肢体动作类型。
  11. 根据权利要求8所述的方法,所述第一终端通过交互动作选择获取所述第一用户的交互肢体动作类型包括:
    所述第一终端根据所述第一用户从交互动作选择列表中选择的交互动作,确定所述第一用户的交互肢体动作类型。
  12. 一种终端,包括:
    处理器;
    与所述处理器相连接的存储器;所述存储器中存储有机器可读指令模块;所述机器可读指令模块包括:
    第一获取单元,用于当第一终端对应的第一用户与第二终端对应的第二用户进行实时语音通话时,获取互动场景;
    渲染单元,用于将所述第一用户对应的第一虚拟形象渲染至所述互动场景中显示;
    第二获取单元,用于获取第一用户的实时聊天数据及行为特征数据,所述第一用户为所述终端的用户;
    处理单元,用于将所述第一用户的行为特征数据作用于所述终端显示的第一虚拟形象上;
    发送单元,用于通过服务器将所述第一用户的实时聊天数据及行为特征数据发送给第二终端,以使得所述第二终端播放所述第一用户的实时聊天数据,并将所述第一用户的行为特征数据作用于所述第二终端显示的第一虚拟形象上,其中,所述第二终端获取所述第二用户的实时聊天数据及行为特征数据,将所述第二用户的实时聊天数据发送给第一终端进行播放,并将所述第二用户的行为特征数据作用于所述第二终端显示的与第二用户对应的第二虚拟形象上,以实现所述第一虚拟形象和所述第二虚拟形象之间的互动。
  13. 根据权利要求12所述的终端,所述第一获取单元具体用于,从所述服务器获取预设位置的街景图像,将所述街景图像作为所述互动场景。
  14. 根据权利要求12所述的终端,所述第一获取单元具体用于,从所述终端的存储中获取采用预设元素构建的虚拟场景图像,将所述虚拟场景图像作为所述互动场景。
  15. 根据权利要求12所述的终端,所述第一获取单元具体用于,通过摄像头采集实景图像,将所述实景图像作为所述交互场景。
  16. 根据权利要求12所述的终端,所述第一用户的行为特征数据包括:面部表情数据、独立肢体动作数据、和交互动作数据中的一个或者多个;其中,所述第一用户的独立肢体动作数据为作用于所述第一虚拟形象上的肢体动作数据;所述第一用户的交互肢体动作数据为作用于所述第一虚拟形象和第二虚拟形象上的肢体动作数据;
    所述第二用户的行为特征数据包括:面部表情数据、独立肢体动作数据、和交互动作数据中的一个或者多个;其中,所述第二用户的独立肢体动作数据为作用于所述第二虚拟形象上的肢体动作数据;所述第二用户的交互肢体动作数据为作用于所述第一虚拟形象和第二虚拟形象上的肢体动作数据。
  17. 根据权利要求16所述的终端,其中所述面部表情数据包括面部表情类型;所述独立肢体动作数据包括独立肢体动作类型;所述交互肢体动作数据包括交互肢体动作类型。
  18. 根据权利要求17所述的终端,所述发送单元进一步用于:
    向所述服务器发送表情类型请求消息,所述表情类型请求消息中携带所述第一用户的面部表情类型的标识;其中,所述服务器将所述表情类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的面部表情类型的标识,确定对应的面部表情类型,并根据所述面部表情类型,控制所述第二终端显示的第一虚拟形象的面部表情;
    向所述服务器发送独立动作类型请求消息,所述独立动作类型请求消息中携带所述第一用户的独立肢体动作类型的标识;其中,所述服务器将所述独立动作类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的独立肢体动作类型的标识,确定对应的独立肢体动作类型,并根据所述独立肢体动作类型,控制所述第二终端显示的第一虚拟形象的独立肢体动作;
    向所述服务器发送交互动作类型请求消息,所述交互动作类型请求消息中携带所述第一用户的交互肢体动作类型的标识;其中,所述服务器将所述交互动作类型请求消息转发给所述第二终端,使得所述第二终端根据其中携带的交互肢体动作类型的标识,确定对应的交互肢体动作类型,并根据所述交互肢体动作类型,控制所述第二终端显示的第一虚拟形象和第二虚拟形象的交互肢体动作。
  19. 根据权利要求17所述的终端,所述第二获取单元进一步用于:
    通过表情检测或表情选择,获取所述第一用户的面部表情类型;
    通过独立动作检测或独立动作选择,获取所述第一用户的独立肢体动作类型;
    通过交互动作选择,获取所述第一用户的交互肢体动作类型。
  20. 根据权利要求19所述的终端,所述第二获取单元进一步用于,
    实时扫描并识别所述第一用户的真实人脸,通过表情匹配算法,计算所述第一用户的面部表情;并根据所述计算出的面部表情,确定与所述计算出的面部表情对应的面部表情类型;或者
    根据所述第一用户从预先设置的表情选择列表中选择的表情,确定与所述选择的表情对应的面部表情类型。
  21. 根据权利要求19所述的终端,所述第二获取单元进一步用于,
    通过运动检测装置检测所述第一终端的运动状态,并根据所述检测得到的运动状态确定所述第一用户的独立肢体动作类型;或
    根据所述第一用户从动作选择列表中选择的动作,确定所述第一用户的独立肢体动作类型。
  22. 根据权利要求19所述的终端,所述第二获取单元进一步用于,
    根据所述第一用户从交互动作选择列表中选择的交互动作,确定所述第一用户的交互肢体动作类型。
  23. 一种非易失性计算机可读存储介质,其中所述存储介质中存储有机器可读指令,所述机器可读指令可以由处理器执行以完成如权利要求1至11中任一项的方法。