CN114244816B - Synchronous communication method, terminal and readable storage medium - Google Patents


Info

Publication number
CN114244816B
CN114244816B
Authority
CN
China
Prior art keywords
communication
voice
terminal
virtual character
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210044304.2A
Other languages
Chinese (zh)
Other versions
CN114244816A (en)
Inventor
李斌
陈晓波
冉蓉
易薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210044304.2A
Publication of CN114244816A
Application granted
Publication of CN114244816B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066 Session management
    • H04L 65/1069 Session establishment or de-establishment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/10 Multimedia information

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the application discloses a synchronous communication method, a terminal, a computer program product and a readable storage medium. The method comprises the following steps: starting an application interface according to a start touch instruction, and displaying, on the application interface, a virtual character group corresponding to a first communication object and consisting of the corresponding second communication objects in the history record; receiving a communication instruction, and triggering a voice communication request according to the communication instruction so as to add the first communication object to any one communication session in the history record; when the voice communication request is allowed, completing the establishment of a first communication session between the first communication object and the second communication object on the application interface; and triggering the start of a voice function in the first communication session on the application interface, sending first voice data triggered by the first communication object to a second terminal when the voice function is on, and synchronously displaying the current voice state and the voice broadcast identifier of the established communication session in the application interfaces where the first communication object and the second communication object are located.

Description

Synchronous communication method, terminal and readable storage medium
Description of the preferred embodiment
The present application is a divisional application of the application with application number 201710744130.X, filed on August 25, 2017, and entitled "A synchronous communication method, terminal and server".
Technical Field
The present application relates to social application communication technology in the field of electronic applications, and in particular, to a synchronous communication method, a terminal, a computer program product, and a readable storage medium.
Background
With the continuous development of science and technology, electronic technology has also developed rapidly, the variety of electronic products keeps growing, and people enjoy the many conveniences that this development brings. Through various types of electronic devices or terminals, and the applications of various functions installed on them, people can now lead more comfortable lives. For example, a social application on a terminal may be used to communicate with distant friends and relatives over a network through instant messaging (IM).
In instant messaging applications, real-time chat conversation is a very important communication mode, more direct and immediate than modes such as text and pictures. At present, most mainstream IM applications support real-time conversation in audio and video form, but the establishment process of such a conversation is complex: a conversation can be established only when the initiator places a call and the receiver answers it. Both conversation establishment and the real-time conversation itself are therefore passive, and human-computer interaction performance is limited.
Disclosure of Invention
Embodiments of the present application are expected to provide a synchronous communication method, a terminal, a computer program product, and a readable storage medium, which can flexibly establish and implement synchronous communication, and improve human-computer interaction performance.
The technical scheme of the invention is realized as follows:
the embodiment of the invention provides a synchronous communication method, which is applied to a first terminal and comprises the following steps:
starting an application interface according to a starting touch instruction, and displaying a virtual character group corresponding to a first communication object and a second communication object in a historical record on the application interface;
receiving a communication instruction, and triggering a voice communication request according to the communication instruction so as to add the first communication object into any one communication session in the history record;
when the voice communication request is allowed, completing the establishment of a first communication session of the first communication object and the second communication object in the application interface;
and triggering to start a voice function in the first communication session of the application interface, sending first voice data triggered by the first communication object to a second terminal when the voice function is started, and synchronously displaying the current voice state and voice broadcast identification of the established communication session in the application interface where the first communication object and the second communication object are located.
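The four terminal-side steps above can be sketched as a small client object. The following Python sketch is illustrative only: the class, method, and message names (FirstTerminal, request_voice, and so on) are assumptions for illustration, not part of the patent.

```python
class FirstTerminal:
    """Illustrative sketch of the terminal-side steps (all names assumed)."""

    def __init__(self, server):
        self.server = server
        self.history = {}   # session_id -> list of second communication objects
        self.active = None  # the first communication session, once joined
        self.voice_on = False

    def open_interface(self, history):
        # Step 1: launch the application interface and show the virtual
        # character groups from the history record.
        self.history = dict(history)
        return sorted(self.history)

    def join(self, session_id, first_object):
        # Steps 2-3: trigger a voice communication request for a history
        # session; on permission, the first communication session is set up.
        if not self.server.request_voice(session_id):
            return False
        self.active = {"id": session_id,
                       "members": self.history[session_id] + [first_object]}
        return True

    def start_voice(self):
        # Step 4: turn the voice function on and report the state that is
        # synchronized to the interfaces of both communication objects.
        self.voice_on = True
        return {"session": self.active["id"], "voice_state": "speaking"}


class StubServer:
    """Stand-in for the signaling server: always permits the request."""
    def request_voice(self, session_id):
        return True
```

A caller would construct the terminal with a server handle, open the interface with the stored history, and only send voice data after `join` succeeds.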
The embodiment of the invention provides a synchronous communication method, which is applied to a server and comprises the following steps:
receiving a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record in which the first communication object joins;
when the application interface corresponding to the identifier of the first communication session is not found, responding to the voice communication request message, establishing the application interface corresponding to the identifier of the first communication session, and generating a voice interface establishment completion message;
sending the voice interface establishment completion message to the first terminal;
receiving a message for establishing a real-time data channel sent by the first terminal, and establishing a real-time data channel with the first terminal according to the message for establishing the real-time data channel;
and when the real-time data channel is established, sending a communication permission message to the first terminal.
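A minimal server-side counterpart to these steps might look as follows. The handler names and message dictionaries are illustrative assumptions, not part of the patent.

```python
class SignalingServer:
    """Illustrative sketch of the server-side steps (all names assumed)."""

    def __init__(self):
        self.interfaces = {}      # session_id -> voice interface state
        self.data_channels = set()  # terminals with an open real-time data channel

    def on_voice_request(self, message):
        # The request carries the identifier of the first communication
        # session; the voice interface is created only if it is not found.
        session_id = message["session_id"]
        if session_id not in self.interfaces:
            self.interfaces[session_id] = {"members": []}
        return {"type": "voice_interface_ready", "session_id": session_id}

    def on_open_data_channel(self, terminal_id):
        # Establish the real-time data channel, then permit communication.
        self.data_channels.add(terminal_id)
        return {"type": "communication_permitted", "terminal": terminal_id}
```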
An embodiment of the present invention provides a first terminal, including:
the display unit is used for starting an application interface according to a starting touch instruction, and displaying a virtual character group which corresponds to a first communication object and is formed by corresponding second communication objects in a historical record on the application interface;
a first receiving unit, configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history record;
the communication unit is used for completing the establishment of a first communication session of the first communication object and the second communication object on the application interface when the voice communication request is allowed;
a starting unit, configured to trigger starting of a voice function in the first communication session of the application interface,
a first sending unit, configured to send first voice data triggered by the first communication object to a second terminal when the voice function is turned on,
the display unit is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located.
An embodiment of the present invention provides a server, including:
a second receiving unit, configured to receive a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record that the first communication object joins;
an establishing unit, configured to establish, according to the voice communication request message, an application interface corresponding to the identifier of the first communication session when the application interface corresponding to the identifier of the first communication session is not found,
the generating unit is used for generating a voice interface establishment completion message;
a second sending unit, configured to send the voice interface establishment completion message to the first terminal;
the second receiving unit is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing the real-time data channel;
the second sending unit is further configured to send a communication permission message to the first terminal when the real-time data channel is established.
An embodiment of the present invention provides a first computer-readable storage medium in which one or more programs are stored, and the one or more programs may be executed by one or more first processors to perform a synchronous communication method on the terminal side.
An embodiment of the present invention provides a second computer-readable storage medium, where one or more programs are stored in the second computer-readable storage medium, and the one or more programs may be executed by one or more second processors to perform the server-side synchronous communication method.
The embodiment of the application provides a synchronous communication method, a terminal, a computer program product and a readable storage medium, wherein an application interface is started according to a starting touch instruction, and a virtual character group which corresponds to a first communication object and is formed by corresponding second communication objects in a historical record is displayed on the application interface; receiving a communication instruction, and triggering a voice communication request according to the communication instruction so as to add a first communication object into any one communication session in a historical record; when the voice communication request is allowed, completing the establishment of a first communication session of the first communication object and the second communication object on an application interface; and triggering to start a voice function in a first communication session of an application interface, sending first voice data triggered by a first communication object to a second terminal when the voice function is started, and synchronously displaying the current voice state and voice broadcast identification of the established communication session in the application interface where the first communication object and the second communication object are positioned. 
By adopting this technical implementation, any communication session in the history record can be selected for communication in the application interface, and the communication connection between the first terminal and the server is established by sending a voice communication request message to the server. When the communication connection is completed, that is, when the first terminal receives the communication permission message, the first terminal can enter the application interface and see which communication objects are present in the application communication interface. Thus, when the voice function corresponding to the first terminal or the first communication object is turned on, voice communication is carried out with the other communication objects. By synchronizing the voice broadcast identifier and the voice state of the first communication session to the application interface of the second terminal where the second communication object is located, the second terminal can display which communication session has voice communication in progress. In this way, the first terminal provides autonomous selection for establishing a communication connection with the second terminal as well as an autonomous voice communication mechanism, and can flexibly establish and implement synchronous communication, thereby improving human-computer interaction performance.
Drawings
FIG. 1 is a diagram illustrating various hardware entities in a system for performing synchronous communications in an embodiment of the present application;
fig. 2 is a diagram of an interaction architecture between a first terminal (terminal) and a server in an embodiment of the present application;
fig. 3 is a first flowchart of a synchronous communication method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an exemplary current communication interface provided by embodiments of the present application;
FIG. 5 is a first schematic diagram of an exemplary voice communication interface provided by an embodiment of the present application;
fig. 6 is a second flowchart of a synchronous communication method according to an embodiment of the present application;
FIG. 7 is a second schematic diagram of an exemplary voice communication interface provided in an embodiment of the present application;
FIG. 8 is a third schematic diagram of an exemplary voice communication interface provided by an embodiment of the present application;
FIG. 9 is a fourth schematic diagram of an exemplary voice communication interface provided by an embodiment of the present application;
fig. 10 is a flowchart three of a synchronous communication method according to an embodiment of the present application;
FIG. 11 is a fifth schematic diagram of an exemplary voice communication interface provided by an embodiment of the present application;
fig. 12 is a flowchart of a synchronous communication according to an embodiment of the present application;
FIG. 13 is an interaction diagram for synchronous communications according to an embodiment of the present application;
fig. 14 is a first schematic structural diagram of a first terminal according to an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of a second terminal according to an embodiment of the present application;
fig. 16 is a first schematic structural diagram of a server according to an embodiment of the present application;
fig. 17 is a third schematic structural diagram of a first terminal according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a schematic diagram of the hardware entities in a system for performing synchronous communication in an embodiment of the present application. Fig. 1 includes: one or more servers 2, terminals 1-1 to 1-5, and a network 3, where the network 3 comprises routers, gateways, and other network entities (not shown). The terminals 1-1 to 1-5 perform information interaction with the server through a wired or wireless network, so that data from the terminals 1-1 to 1-5 can be transmitted to the server. The types of terminals are shown in fig. 1 and include a mobile phone (terminal 1-3), a tablet computer or PDA (terminal 1-5), a desktop computer (terminal 1-2), a PC (terminal 1-4), and an all-in-one machine (terminal 1-1). In some embodiments, the terminal can also be implemented as various other types of user terminals, such as, but not limited to, a notebook computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device, a smart phone, or a smart watch), a smart voice interaction device (e.g., a smart speaker), a smart appliance, or a smart in-car device. The terminal is installed with the various applications a user requires, such as applications with entertainment functions (e.g., video applications, audio playing applications, game applications, and reading software) and applications with service functions (e.g., map navigation applications, group-buying applications, shooting applications, financial applications, and communication applications).
In the embodiments of the present application, a terminal (for example, a first terminal, a second terminal, or the like) in which a communication application is installed and a server corresponding to the communication application are described as examples.
The following describes a connection structure of modules in the synchronous communication system in the embodiment of the present application, taking the first terminal and the server as an example.
Specifically, as shown in fig. 2, the server includes a real-time communication data module, a real-time communication signaling module, and a state center, and the client of the communication application in the first terminal may include a real-time communication interface module. Specifically:
and the real-time communication signaling module is used for realizing room management (establishing a voice communication interface) related to real-time communication, equipment management (opening or starting of a voice function and the like) of audio and the like, state management (a list of on-line conditions and the like of communication objects) of the communication objects, signaling management with a terminal and the like.
And the real-time communication data module is used for realizing real-time transmission of data types such as voice data, face feature information, functional objects and the like related to real-time communication.
And the real-time communication interface is used for realizing the arrangement, interface display and the like of virtual characters in real-time communication.
And the state center is used for storing real-time signaling and real-time data.
It should be noted that the interaction between the first terminal and the server involves not only real-time data but also real-time signaling. The interaction of real-time signaling is realized by the first terminal interacting with the real-time communication signaling module of the server through a real-time signaling channel, and the interaction of real-time data is realized by the first terminal interacting with the real-time communication data module of the server through a real-time data channel.
In this embodiment of the present application, the real-time signaling channel may be a Transmission Control Protocol (TCP) channel, and the real-time data channel may be a User Datagram Protocol (UDP) channel.
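The split between a reliable signaling channel and a low-latency media channel can be illustrated with standard sockets. This is only a sketch of the channel types named above; the patent does not prescribe this code, and no addresses are assumed.

```python
import socket

def open_channels():
    """Open one TCP socket for the real-time signaling channel and one UDP
    socket for the real-time data (voice) channel, mirroring the split
    described above."""
    # TCP: reliable, ordered delivery suits signaling messages.
    signaling = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # UDP: low latency suits real-time voice frames, where a late packet
    # is worth less than a fresh one.
    data = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return signaling, data
```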
An embodiment of the present application provides a synchronous communication method, which is applied to a first terminal side, and as shown in fig. 3, the synchronous communication method may include:
s101, starting an application interface according to the touch starting instruction, and displaying a virtual character group corresponding to the first communication object and consisting of corresponding second communication objects in the history record on the application interface.
The synchronous communication method provided by the embodiment of the application can be suitable for use scenes of instant messaging, and the communication application can be a social application for real-time communication, such as chat software for real-time chat and conversation. The embodiments of the present application are not limited.
The first terminal in the embodiment of the present application may be an electronic device installed with a communication application, for example, a smart phone, a tablet, and the like. The embodiments of the present application are not limited.
In this embodiment of the application, when a first user wants to use a communication application, the first user touches an icon of the communication application, and then the first terminal receives a start touch instruction for the communication application, where the start touch instruction is used to start the communication application. Wherein the first user may be a user using a communication application on the first terminal.
Illustratively, the first user clicks on the icon of "communication application 1" to launch the communication application 1.
It should be noted that, in the embodiment of the present application, the first user touches the communication application, and the touch operation for starting the communication application may be a click, a double click, a special gesture, or the like, which is not limited in the embodiment of the present application.
After the first terminal receives the starting touch instruction, the first terminal responds to the starting touch instruction, loads a current communication interface of the communication application (the current communication interface is the application interface at the moment), and displays a first communication object (the expression form of the first communication object is a first virtual character) of the logged-in communication application and a virtual character group of a second communication object in the history record on the current communication interface.
That is, after the first terminal receives a start touch instruction for the communication application, it starts the communication application in response to the instruction and begins loading the current communication interface of the communication application. After loading is completed, the current communication interface is displayed, and on it are shown a virtual character group consisting of the first virtual character of the first communication object logged in to the communication application and the second communication objects in the history record. The history record is the history information of communication sessions involving the first communication object. The first communication object may characterize a first user who has logged in to the communication application.
It should be noted that, in the embodiment of the present application, the presentation form of a communication object in the communication application may be mainly a virtual character, and each virtual character has its own identifier (i.e., the identifier of the user, that is, of the communication object). The specific representation form of the virtual character is not limited in the embodiments of the present application.
Preferably, the virtual character in the embodiment of the present application may be a three-dimensional virtual character, and a specific implementation process in the embodiment of the present application is not limited.
Further, in this embodiment of the application, when the communication application is triggered for the first time, the first terminal may display the login interface of the communication application, on which the first user may register and set the related information of the first communication object. After logging in, the first user may communicate with other communication objects through functions such as adding friends. If the first user closes the communication application without logging out and later touches the communication application again, then, after the start touch instruction is received, the first terminal can directly display the logged-in communication interface of the first communication object, because the first communication object is already logged in on the first terminal. That is, the first terminal loads the current communication interface of the communication application in response to the start touch instruction, and displays on it the first communication object of the communication application and the history record of communication previously performed by the first communication object. Since the communication objects in this embodiment may be displayed in the form of virtual characters, the first terminal displays on the current communication interface a virtual character group consisting of the first virtual character of the logged-in first communication object and the second communication objects in the history record, where the second communication objects are those that have communicated with the first communication object.
It should be noted that, in the embodiment of the present application, the communication object represents different users that can communicate with each other, and the representation is in the form of an identifier of a virtual character that can communicate with each other. The virtual character of the communication object, the identification and the like are set when the user represented by the communication object logs in.
In the embodiment of the present application, a setting template of a virtual character is set in a communication application, a user may set a virtual character of the user through the setting template of the virtual character in a login process, and a specific user may independently select various information such as a character role of a communication object corresponding to the user, wearing of the virtual character, a nickname (i.e., an identifier) of the virtual character, and the embodiment of the present application is not limited. The setting template of the virtual character is the image and the related information of the virtual character designed by a designer in advance, and is used when the user sets the virtual character during login.
In the embodiment of the application, the displayed image of the first communication object is the first virtual character, and the displayed image of the second communication object in the history record is represented as the virtual character group.
It should be noted that, in the embodiment of the present application, there may be at least one second communication object that has communicated with the first communication object; the number of communication objects in the history record may therefore be one or more, and the virtual character group is thus a general term. Since the display space of the current communication interface is limited, the virtual character groups of the communication objects in the history record may be scroll-displayed by sliding; the specific triggering manner, such as the sliding operation, is not limited in the embodiment of the present application.
As for the "virtual character group" mentioned in the embodiment of the present application, there are two application scenarios. First, in a scenario where the first communication object is the calling party of the two communication parties, the virtual character group formed by the second communication objects may be referred to as a second virtual character group, a general term for the group formed by the second communication objects communicating with the first communication object. Second, in a scenario where one of the second communication objects acts as the calling party and communicates with the first communication object, the communication objects in that object's history record (including the first communication object) form a first virtual character group. That is, a virtual character group is the general term for a group of virtual characters obtained by 3D-virtualizing the communication objects that take part in a call communication.
Illustratively, as shown in fig. 4, after the first communication object A logged in to the communication application ("second world") starts the communication application, the current communication interface displays the first virtual character of A together with the second communication objects in the history record, such as the communication objects in a group chat and the virtual characters of contacts such as "lie" and "the secretary". Since the history record may include a group chat, the virtual character of the first communication object A is also displayed in the group chat on the current interface. In some embodiments, as shown in fig. 4, the time information of the historical communication sessions may be displayed at positions corresponding to the avatars of the second communication objects, in chronological order. For example, if "3 minutes ago" is displayed above the virtual character of the communication object "lie", and "1 minute ago" is displayed above the virtual character of the communication object "small buddy", the historical communication sessions corresponding to "lie" and "small buddy" are displayed in chronological order.
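The chronological display in the Fig. 4 example can be sketched as a simple sort. The field names and label format below are assumptions made for illustration; the patent does not specify a data model.

```python
def order_history(sessions, now):
    """Order history sessions most-recent-first and attach a
    '<n> minutes ago' label, as in the Fig. 4 example (field names assumed)."""
    ordered = sorted(sessions, key=lambda s: s["last_active"], reverse=True)
    return [{**s, "label": f"{int((now - s['last_active']) // 60)} minutes ago"}
            for s in ordered]
```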
S102, receiving a communication instruction, and triggering a voice communication request according to the communication instruction, so as to add the first communication object into any one communication session in the history record.
After the first terminal opens the application interface according to the touch instruction and displays, on the application interface, the virtual character group (i.e., the second virtual character group) that corresponds to the first communication object and consists of the corresponding second communication objects in the history record, the communication sessions with previously contacted second communication objects are already displayed. The first user of the first terminal can therefore directly select, from the currently displayed history record, the first communication session that the first user wants to join; that is, the first user touches any one communication session (the first communication session) in the history record, so that the first terminal receives a communication instruction for the first communication session. The first terminal then triggers a voice communication request according to the communication instruction so as to add the first communication object into the first communication session. Specifically, the first terminal, in response to the communication instruction, sends a voice communication request message to the server, receives a permit communication message fed back by the server in response to the voice communication request message, and adds the first communication object into the first communication session according to the permit communication message.
It should be noted that, after the first terminal receives the communication instruction for the first communication session, the first terminal may respond to the communication instruction; since the communication instruction is used to instruct that the first communication object join the first communication session, the first terminal performs the process of joining the first communication session. In some embodiments, the first terminal requests a communication connection with the first communication session from the server; therefore, the first terminal sends a voice communication request message to the server, and the server, in response to the voice communication request, sends a permit communication message to the first terminal and allows the first terminal to join the first communication session for voice communication. Here, the permit communication message is used to represent that the first terminal has established a communication connection with the server and that interaction of communication data can be carried out.
In the embodiment of the present application, the server is the application server corresponding to the communication application. When the first terminal uses the communication application, information interaction for establishing a communication connection with the server corresponding to the communication application is required; the specific interaction process will be described in detail in the subsequent embodiments.
In this embodiment, the voice communication request message carries the identifier of the first communication session, so that the server can establish the communication connection with the first terminal through the identifier of the first communication session.
It should be noted that, in the embodiment of the present application, the main communication mode between the communication objects may be voice communication, and each communication object is presented as a virtual character; however, other communication modes such as text, pictures, or expressions may also be used for auxiliary communication, which is not limited in the embodiment of the present application.
In some embodiments, the implementation process of S102 may include: the first terminal, in response to the communication instruction, sends a voice communication request message to the server; receives a voice interface establishment completion message fed back by the server in response to the voice communication request message; sends a real-time data channel establishment message to the server according to the voice interface establishment completion message; and receives a permit communication message fed back by the server in response to the real-time data channel establishment message, where the permit communication message is used to represent that the voice communication request is permitted. The specific process will be explained in the subsequent embodiments.
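The four-message exchange of S102 can be sketched as a toy in-memory handshake. This is a minimal illustrative sketch; the message type strings and the `Server`/`join_session` names are assumptions invented here, not identifiers from the patent:

```python
# Hypothetical message names mirroring the four-step exchange of S102.
VOICE_REQUEST = "voice_communication_request"
INTERFACE_READY = "voice_interface_establishment_complete"
DATA_CHANNEL = "establish_real_time_data_channel"
PERMIT = "permit_communication"

class Server:
    """Toy application server: answers each client message of the handshake."""
    def handle(self, msg: dict) -> dict:
        if msg["type"] == VOICE_REQUEST:
            # the request carries the identifier of the first communication session
            return {"type": INTERFACE_READY, "session": msg["session"]}
        if msg["type"] == DATA_CHANNEL:
            return {"type": PERMIT, "session": msg["session"]}
        raise ValueError(f"unexpected message: {msg['type']}")

def join_session(server: Server, session_id: str) -> bool:
    """First-terminal side of S102: returns True once communication is permitted."""
    reply = server.handle({"type": VOICE_REQUEST, "session": session_id})
    assert reply["type"] == INTERFACE_READY
    reply = server.handle({"type": DATA_CHANNEL, "session": session_id})
    return reply["type"] == PERMIT
```

In a real deployment these messages would travel over a network connection; the sketch only shows the ordering of the exchange.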
It should be noted that, in this embodiment of the present application, the first user may also select a new communication object for a communication session through the address book in the communication application, or start a communication session by adding a new communication object; the form through which the first user triggers the selection of the communication session is not limited in this embodiment of the present application. However, regardless of the manner in which the first user triggers a communication session, the first terminal receives a communication instruction for that session, and the session triggered by the first user plays the role of the first communication session above.
S103, when the voice communication request is permitted, completing establishment of the first communication session between the first communication object and the second communication object on the application interface.
After the first terminal triggers the voice communication request to the server according to the communication instruction, and after the first terminal receives the permit communication message fed back by the server in response to the voice communication request message, the first terminal can perform interaction of communication data, because the permit communication message represents that the server permits the first terminal to join the first communication session, i.e., that the voice communication request is permitted; in other words, the permit communication message represents that the first terminal has established a communication connection with the server. Therefore, the first terminal completes establishment of the first communication session between the first communication object and the second communication object on the current application interface.
S104, triggering and starting the voice function in the first communication session of the application interface; when the voice function is started, sending first voice data triggered by the first communication object to the second terminal, and synchronously displaying the current voice state and the voice broadcast identifier of the established communication session in the application interfaces where the first communication object and the second communication object are located.
After the first terminal completes establishment of the first communication session between the first communication object and the second communication object through the application interface (i.e., the voice communication interface), the first terminal has established a communication connection with the server and can exchange communication data with the second terminal through the server. The first terminal triggers and starts the voice function in the first communication session of the application interface; when the voice function is started, the first terminal sends the first voice data triggered by the first communication object to the second terminal, and the current voice state and the voice broadcast identifier of the established communication session are synchronously displayed in the application interfaces where the first communication object and the second communication object are located. The voice state here is the voice state of the first virtual character, and the voice broadcast identifier here is the voice broadcast identifier of the first communication session.
Since the first virtual character and the second virtual character can be displayed on the voice communication interface, the first communication object logged in on the first terminal can carry out voice communication with the second communication object in the voice communication interface. In this embodiment of the present application, a voice function may be set in the voice communication interface. When the voice function is turned on (i.e., the voice function is triggered in the first communication session of the voice communication interface), the first communication object can perform real-time voice communication using the voice communication interface on the first terminal; the first user corresponding to the first communication object can then speak, so that the first terminal receives the first voice data of the first communication object and can send the first voice data triggered by the first communication object to the second terminal. Since both the first communication object and the second communication object are displayed in the voice communication interface, i.e., more than one communication object is displayed in the current voice communication interface, when one of the communication objects speaks, i.e., when the first terminal receives the voice data of a communication object, the first terminal can synchronously display which communication object is speaking. That is, when the first voice data of the first communication object is received at the first terminal, the first terminal can synchronously display the voice state of the first virtual character.
Because the first virtual character is the presentation form of the first communication object, the voice state of the first virtual character on the voice communication interface being "in a call" represents that the first communication object is speaking. Meanwhile, the first terminal can synchronously display the voice broadcast identifier of the first communication session, and the voice broadcast identifier and the voice state are synchronized to the second terminal through the server, so that the second terminal can synchronously display the voice broadcast identifier and the voice state of the first communication session. In this way, the user corresponding to the second terminal can know which communication session has a voice call in progress and which communication object in the call is speaking, which facilitates the subsequent selection of voice communication by that user. The second terminal is the terminal device corresponding to an online second communication object in the first communication session.
In the embodiment of the application, when the voice function is turned on and the first communication object is not speaking, an online second communication object may be carrying out voice communication; the second terminal logged in by the online second communication object then sends second voice data to the server in real time, and the server forwards the second voice data to the first terminal in real time, so that the first terminal can receive and play the voice data of the second communication object on the current voice communication interface, thereby realizing real-time voice communication.
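The server-side forwarding of voice data, together with the synchronized "who is speaking" state, can be sketched as a toy in-memory relay. This is an illustrative sketch only; the `VoiceRoom` class and its fields are assumptions, not part of the patent, and real voice frames would of course be streamed, not stored in lists:

```python
class VoiceRoom:
    """Toy relay: the server forwards each voice frame to every other member,
    records which virtual character is currently speaking (the voice state),
    and raises the session's voice broadcast identifier."""
    def __init__(self, members):
        self.members = list(members)
        self.inbox = {m: [] for m in members}   # frames forwarded to each terminal
        self.speaking = None                     # avatar shown as "in a call"
        self.broadcast_on = False                # voice broadcast identifier

    def send_voice(self, sender: str, frame: bytes):
        self.speaking = sender
        self.broadcast_on = True
        for m in self.members:
            if m != sender:                      # never echo back to the speaker
                self.inbox[m].append((sender, frame))
```

Both the `speaking` and `broadcast_on` fields stand in for the state the server synchronizes to every terminal in the session.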
It should be noted that, in the embodiment of the present application, the voice state of the first virtual character and the voice broadcast identifier of the first communication session are also synchronized to the second terminal through the server.
Optionally, in this application embodiment, the voice state of each virtual character may be expressed as a sound-wave-shaped call identifier, a text prompt, or the like. The expression of the voice state may be displayed near the identifier of the virtual character, which facilitates identifying the identity the virtual character represents; the embodiment of the present application is not limited in this respect.
Illustratively, in the voice communication interface corresponding to the first communication session shown in fig. 5, where the first virtual character of the first communication object is Wave, the first user clicks the voice function button displayed on the voice communication interface of the first terminal. When the button lights up, the first user can carry out a voice call (i.e., real-time chat is opened), so that the first terminal receives the first voice data of the first communication object representing the first user, and synchronously displays, beside the identifier of the virtual character of the first communication object, the voice state as "calling", i.e., displays the sound-wave-shaped call identifier.
Further, the voice communication interface displayed by the first terminal may also provide other functions such as adding communication friends, so that other communication objects can be invited to join the real-time voice communication; as shown in fig. 5, a key on the voice communication interface can serve as the function key for adding communication friends and other such functions.
Further, when the voice function is opened, or when the voice function of any communication object in the voice communication interface is opened, the display area of the first communication session displays the voice broadcast identifier to represent that someone in the first communication session is speaking. An offline communication object (through its corresponding second terminal) can thus see directly, through the history record on its current communication interface, that someone in the first communication session is communicating, so that the users corresponding to the other communication objects can join the voice communication by opening the first communication session. Moreover, the display area of the first communication session on the second terminal also displays the voice broadcast identifier synchronously, and the display of the voice broadcast identifier on the second terminal is consistent with the voice broadcast identifier in the display area shown in fig. 4. That is to say, the embodiment of the present application provides a synchronous communication method applied to a second terminal: when a first communication object in the first communication session starts the voice function, the second terminal may display the voice broadcast identifier in the display area of the first communication session on its current communication interface; the second communication object can join the voice call corresponding to the first communication session by opening the first communication session, and the voice state of the first virtual character is displayed in the application interface where the second communication object is located.
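On the second-terminal side, rendering the history row with or without the voice broadcast identifier reduces to a small display rule. The sketch below is illustrative only; the `[voice]` badge and the function name are assumptions standing in for the graphical identifier of figs. 4 and 5:

```python
def render_session_row(name: str, broadcast_on: bool) -> str:
    """Second-terminal history row: append a broadcast badge while anyone in
    the session has the voice function open ("[voice]" stands in for the
    graphical voice broadcast identifier)."""
    return f"{name} [voice]" if broadcast_on else name
```

When the synchronized `broadcast_on` flag for a session flips, the second terminal would re-render that row, so an offline user sees at a glance which session has a call in progress.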
Illustratively, as shown in fig. 4 and fig. 5, the voice broadcast identifier is displayed next to the identifier ("Buddy") of the first communication session.
Furthermore, after the first terminal synchronously displays the voice state of the first virtual character, the first terminal needs to send the first voice data of the first communication object to the server, so that the server can transmit the voice data in real time to the second terminal corresponding to the second communication object; in this way, an online second communication object can listen to the voice of the first communication object in real time, realizing real-time communication between the communication objects.
It can be understood that the first communication session in the history record can be selected on the current communication interface, and the communication connection between the first terminal and the server is established by sending the voice communication request message to the server. When the connection is completed, i.e., when the first terminal receives the permit communication message, the first terminal can enter the voice communication interface and see which communication objects are in it. When the voice function corresponding to the first terminal or the first communication object is opened, a voice call is made to the other communication objects, and the first terminal synchronizes the voice broadcast identifier of the first communication session to the second terminal through the server, so that the second terminal can display which communication session has voice communication in progress. In this way, the first terminal provides an autonomous choice of establishing a communication connection with the server as well as a mechanism for autonomously carrying out a voice call; the establishment and implementation of synchronous communication can be performed flexibly, and a new presentation form, in which a virtual character represents the identity of a communication object, is provided in the voice communication interface, thereby improving the human-computer interaction performance.
Further, after S103, as shown in fig. 6, the synchronous communication method according to the embodiment of the present application may further include S105-S108, as follows:
S105, displaying, on the application interface, the first virtual character and the virtual character group corresponding to the first communication object.
In this embodiment, after the first terminal triggers the voice communication request according to the communication instruction, the first terminal loads the interface for voice communication (i.e., the application interface at this time) according to the permit communication message (i.e., when the voice communication request is permitted). That is, after establishment of the first communication session between the first communication object and the second communication object is completed on the application interface, the first terminal may display, on the interface for voice communication, the first virtual character and the second virtual character group corresponding to the first communication object.
It should be noted that, in the embodiment of the present application, since each communication object is presented as a virtual character and the communication mode is mainly voice communication, the virtual characters of the communication objects in the current first communication session are displayed on the voice communication interface, and the first communication object has also joined the first communication session. Therefore, the voice communication interface of the first terminal displays the first virtual character and a second virtual character in the second virtual character group, where the second virtual character is the virtual character corresponding to the second communication object in the second virtual character group, and the second communication object is a communication object in the first communication session different from the first communication object.
It should be noted that, in this embodiment, when the first terminal enters the voice communication interface in response to the permit communication message, the first communication session selected by the first user may be a group chat session; in that case, the group chat already includes the first communication object, and thus the virtual characters of all communication objects in the first communication session, i.e., the first virtual character and the second virtual characters, are displayed on the voice communication interface. When the first communication session selected by the first user involves a single communication object, the second communication object in the first communication session is by definition a communication object different from the first communication object, so the first terminal enters the voice communication interface of the first communication object and the second communication object, and the first virtual character and the second virtual character are displayed. That is, after establishment of the first communication session between the first communication object and the second communication object is completed in S103, the first terminal may further display a session scene of the first virtual character and the second virtual character through the interface for voice communication (i.e., the application interface at this time).
In the embodiment of the present application, the area corresponding to each virtual character carries an identifier representing that virtual character (i.e., the identifier of the communication object), so as to distinguish the identities of the communication objects when multiple virtual characters are in the same voice communication interface.
It should be noted that, in the embodiment of the present application, an online-status prompt identifier, for example, an online prompt lamp, may also be disposed in the area where the identifier of a virtual character is located. In this way, which communication objects are online in the voice communication interface corresponding to the current first communication session can be judged through the online-status prompt identifier; a communication object being online means that the communication object has logged in to the communication application at that moment and has entered the voice communication interface of the first communication session. The specific implementation form of the online-status prompt identifier may be identifiers of different colors, a text prompt, or the like, which is not limited in the embodiment of the application.
Specifically, the server maintains a list of all chat members (the list of online communication objects) in the current real-time room (voice communication interface) through real-time signaling management. When a communication object joins (i.e., the communication object opens the first communication session on its terminal and goes online) or exits (i.e., closes the first communication session on its terminal and goes offline), the server updates the chat member list in real time and then synchronizes it to the other members currently in the real-time room, so that the online status of each communication object can be synchronously displayed on the voice communication interface, i.e., the online-status prompt identifier of each communication object is updated in real time.
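The member-list maintenance just described can be sketched as a toy in-memory room. This is an illustrative sketch under the assumption that "synchronizing" means pushing the full list to every remaining member; the `RealTimeRoom` name and its fields are invented here:

```python
class RealTimeRoom:
    """Toy server-side member-list maintenance: on every join or exit, the
    updated chat member list is pushed to everyone still in the room."""
    def __init__(self):
        self.online = []   # current chat member list
        self.pushed = {}   # member -> last member list synchronized to them

    def _sync(self):
        for m in self.online:
            self.pushed[m] = list(self.online)

    def join(self, member: str):
        if member not in self.online:
            self.online.append(member)
        self._sync()

    def leave(self, member: str):
        if member in self.online:
            self.online.remove(member)
        self._sync()
```

Each terminal would drive its online prompt lamps from the most recently pushed list.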
For example, as shown in fig. 7, assume that the second communication objects in the first communication session ("Buddy") of the first communication object are Cat and Li, and that the first communication object is Wave. When the first user selects the "Buddy" communication session, the voice communication interface of "Buddy" is entered, on which the virtual characters corresponding to Cat, Li and Wave and their identifiers are displayed, with an online prompt lamp in front of each virtual character; the lamp being lit indicates online, and being off indicates offline. In fig. 7, the online prompt lamps in front of the virtual characters Cat and Wave are lit, meaning that Cat and Wave have entered the voice communication interface at this time, i.e., Cat and Wave can carry out real-time voice communication; Li is not online, i.e., has not entered the voice communication interface at this time.
In the embodiment of the present application, the voice communication interface may also be characterized as a voice room for real-time voice communication entered by the communication objects; the embodiment of the present application is not specifically limited in this respect.
S106, receiving a communication function touch instruction, and, in response to the communication function touch instruction, calling out a function selection interface in a second display area of the application interface.
After the first terminal displays the first virtual character and the virtual character group on the voice communication interface, a communication function touch key may be disposed on the displayed voice communication interface (application interface) in the embodiment of the present application, where the communication function touch key is a function key that enables other forms of communication with the other communication objects. That is to say, in the embodiment of the present application, in addition to the voice communication mode, a further communication function touch key may be disposed in the voice communication interface so that the first communication object and the second communication object can carry out other types of communication. In this way, when the first user triggers the communication function touch key, the first terminal receives the communication function touch instruction, and the first terminal then responds to the communication function touch instruction and communicates with the second communication object in the current voice communication interface. In some embodiments, the first terminal may, in response to the communication function touch instruction, call out a corresponding function selection interface in the second display area of the voice communication interface, and a specific function object is selected on the function selection interface, thereby implementing the communication function represented by that function object. The second display area is a partial area of the current voice communication interface. Optionally, in this embodiment of the application, the second display area may be the lowermost area of the voice communication interface.
Here, in the embodiment of the present application, touch keys such as the voice function key, the add-communication-friend function key, and the communication function touch key may all be displayed in the second display area; that is, the touch keys are arranged and managed in a unified area, which facilitates the user's use and operation. Of course, the setting area of each touch key is not limited in the embodiments of the present application.
For example, in the voice communication interface shown in fig. 5, touch keys such as the voice function key, the add-communication-friend function key, and the communication function touch key are all displayed in the display area 1 (the second display area). In this case, the communication function touch key on the first terminal in the embodiment of the present application may be an expression function key, a character input key, or the like; the embodiment of the application does not limit the functional types of the communication function touch keys.
It should be noted that, in this embodiment of the present application, a plurality of communication function touch keys corresponding to different functions may be provided, and function objects with the same function but different expression forms may also be provided under one communication function touch key; the specific setting or implementation is not limited in this embodiment of the present application.
S107, receiving a selection instruction on the function selection interface, and, in response to the selection instruction, correspondingly displaying the function object selected by the selection instruction and the first virtual character on the voice communication interface.
After the first terminal receives the communication function touch instruction, responds to it, and calls out the function selection interface in the second display area of the voice communication interface, a plurality of implementations of the communication function corresponding to the touched communication function touch key may be available; therefore, a plurality of function objects corresponding to the communication function can be displayed on the function selection interface, and the function object to be implemented needs to be selected from among them. That is, the first terminal receives a selection instruction on the function selection interface, responds to the selection instruction, and correspondingly displays the function object selected by the selection instruction together with the first virtual character on the voice communication interface.
In this embodiment of the application, the communication function touch key may be a personalized function key such as an expression function key, and the expression may take the form of a preset emoticon or a preset limb-movement icon; the specific form is not limited in this embodiment of the application. The function object is the selected one of the preset emoticons or the selected one of the preset limb-movement icons.
In the embodiment of the application, the first user selects one function object from the plurality of function objects on the function selection interface, i.e., the first terminal receives a selection instruction. When the first user selects one of the preset emoticons, the selection instruction is an expression selection instruction, and the function object at this time is an expression object; when the first user selects one of the preset limb-movement icons, the selection instruction is a limb selection instruction, and the function object at this time is a limb object.
In some embodiments, when the selection instruction is a limb selection instruction, the first terminal responds to the selection instruction, synchronously maps the limb object selected by the selection instruction onto the first virtual character, and then displays the first virtual character. When the selection instruction is an expression selection instruction, the first terminal responds to the selection instruction, calls out the function display area corresponding to the first virtual character on the voice communication interface, and displays the selected expression object.
It should be noted that, in the embodiment of the present application, after the first user selects a limb object, the first terminal may synchronously map the selected limb object onto the first virtual character, i.e., the first virtual character performs the motion of the selected limb object; after the first user selects an expression object, the first terminal may display the selected expression object in the area corresponding to the first virtual character so as to change the expression and mood of the first virtual character at that moment. Implementing these communication functions shows more vividly the richness of the communication between the communication objects and the fun of the interaction.
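The two branches of S107, mapping a limb object onto the avatar versus showing an expression object in the function display area, can be sketched as a small dispatch. This is an illustrative sketch; the dictionary-based avatar model and the `apply_function_object` name are assumptions made here for brevity:

```python
def apply_function_object(avatar: dict, kind: str, obj: str) -> dict:
    """Dispatch a selected function object onto the first virtual character:
    a limb object is mapped to the avatar's motion, while an expression
    object is shown in the avatar's function display area."""
    if kind == "limb":
        avatar["motion"] = obj          # e.g. the avatar performs a "clap"
    elif kind == "expression":
        avatar["display_area"] = obj    # e.g. an emoticon shown near the avatar
    else:
        raise ValueError(f"unknown function object kind: {kind}")
    return avatar
```

A real client would drive a 3D animation or sprite from these fields; the sketch only shows the branching on the selection instruction type.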
Illustratively, when the selection instruction is an expression selection instruction, as shown in fig. 8, the function selection interface in the display area 1 of the voice communication interface displays a plurality of emoticons; the first terminal selects one of the plurality of emoticons according to the selection instruction, and the selected emoticon is then shown in the function display area corresponding to the first virtual character (Wave) on the first terminal.
The function display area in the embodiment of the present application may be a display area near the corresponding virtual character; the embodiment of the present application is not limited in this respect, and the area may be arranged reasonably.
For example, when the selection instruction is a limb selection instruction, as shown in fig. 9, a plurality of limb objects are displayed on the function selection interface in display area 1 on the voice communication interface, and the first terminal selects one of the plurality of limb objects, for example, "clap", according to the selection instruction; then the first virtual character on the first terminal can synchronously map the clapping motion.
Further, in the embodiment of the present application, the preset emoticon may also be presented as an emoticon function key, and the preset limb character icon may be presented as a limb function key, where the emoticon function key and the limb function key are two communication function touch keys; that is, these functions are implemented by using the two communication function touch keys.
And S108, sending the functional object to a server.
After the first terminal receives the selection instruction on the function selection interface, responds to the selection instruction, and correspondingly displays the function object selected by the selection instruction and the first virtual character on the voice communication interface, the first terminal further needs to send the function object to the server. This is because, in the synchronous communication method provided by the embodiment of the present application, the first communication object logged in on the first terminal needs to communicate with the second communication object in real time; when the first user sends an expression, a limb action, or another communication form, the server must forward the function object to the second terminal corresponding to the second communication object communicating with the first communication object, so as to realize real-time communication between the first communication object and the second communication object. Specific implementations will be described in detail in subsequent embodiments.
In the embodiment of the present application, the preset emoticons mainly comprise smiling, anger, and the like; the preset limb character icons mainly comprise hugging, dancing, and the like. When a communication object sends an emoticon or a limb character icon, the icon is converted into expression data or limb data and sent to the server; when the server forwards the expression data or limb data to the terminals of the other communication objects, the data are restored to the corresponding expression or limb animation and applied to the corresponding virtual character.
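The round trip described above (icon selected, converted to data, forwarded, and restored to an animation on the receiving terminal) can be sketched as follows. This is a minimal illustration under assumed names: the id tables and the functions `encode_functional_object`/`decode_functional_object` are hypothetical, not the patent's actual encoding.

```python
# Hypothetical sketch of the emoticon/limb-icon round trip: the sender converts
# a selected icon into compact expression or limb data, the server forwards it,
# and the receiver restores it to an animation identifier for the avatar.
# All names and id values here are illustrative assumptions.

EMOTICON_IDS = {"smile": 1, "anger": 2}    # preset emoticons
LIMB_IDS = {"hug": 101, "dance": 102}      # preset limb character icons

def encode_functional_object(name: str) -> dict:
    """Convert a selected icon into the data sent to the server."""
    if name in EMOTICON_IDS:
        return {"type": "expression", "id": EMOTICON_IDS[name]}
    if name in LIMB_IDS:
        return {"type": "limb", "id": LIMB_IDS[name]}
    raise ValueError(f"unknown functional object: {name}")

def decode_functional_object(data: dict) -> str:
    """Restore forwarded data to the animation applied to the virtual character."""
    table = EMOTICON_IDS if data["type"] == "expression" else LIMB_IDS
    inverse = {v: k for k, v in table.items()}
    return inverse[data["id"]]

# The receiving terminal recovers exactly what the sender selected.
assert decode_functional_object(encode_functional_object("hug")) == "hug"
```

The key design point the patent relies on is that only compact data travels through the server; the animation assets themselves already exist on every terminal.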
Further, as shown in fig. 10, after S103 and before S105, the method for synchronous communication provided in the embodiment of the present application may further include: S109-S112. The following were used:
And S109, acquiring a face image of the first communication object in real time, and displaying the face image in a first display area of the application interface.
In the embodiment of the present application, the first terminal has a function of synchronizing real-time facial expressions when using a communication application. Therefore, after the first terminal triggers a voice communication request according to the communication instruction, responds to the communication permission message, and loads the voice communication interface (application interface), that is, after the application interface completes establishment of the first communication session between the first communication object and the second communication object, the first terminal starts its front camera or front image acquisition device, begins to acquire in real time the facial image of the first user represented by the first communication object, and displays the facial image in the first display area of the voice communication interface (application interface).
It should be noted that, in the embodiment of the present application, the function of real-time facial expression synchronization is a function of synchronously mapping changes of actual human expressions or facial features and the like on the faces of the corresponding virtual characters.
In this embodiment of the application, the first display area may be an upper right area of the voice communication interface, and when the first communication object performs real-time communication with the second communication object, the face image of the first user corresponding to the first communication object is displayed in real time in the first display area.
And S110, recognizing the face feature information of the face image.
And S111, mapping the face feature information to the face feature of the first virtual character.
After acquiring the face image of the first communication object, the first terminal can perform face recognition on the face image to obtain face feature information. Since the first terminal acquires the face image corresponding to the first communication object in real time, it maps the recognized face feature information onto the facial features of the first virtual character; that is, the actual facial expression or features of the first user can be displayed in real time on the facial features of the first virtual character corresponding to the first user.
The face feature information adopted in the embodiment of the present application is a set of parameters describing face features, also called a feature descriptor. The embodiment of the present application may extract the face feature information by locating face key points. Depending on different requirements and emphases, the embodiment of the present application may select among the extraction methods accordingly, and may also use several of them in combination to improve stability. Specifically, the face feature information of the face image at the first terminal may use at least one of Scale-Invariant Feature Transform (SIFT) features, Histogram of Oriented Gradients (HOG) features, or Speeded Up Robust Features (SURF) extracted from the initial key point positions.
In the embodiment of the application, the positioning of the face key points refers to accurately finding out the positions of the face key points through an algorithm. The face key points are key points with strong representation capability of the face, such as eyes, a nose, a mouth, a face contour and the like.
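The key-point-to-avatar mapping described above can be sketched as follows. This is a minimal illustration under assumed names (`FACE_KEYPOINTS`, `map_to_avatar`); a real system uses many more points and a learned retargeting model rather than a direct copy.

```python
# Hypothetical sketch: recognized face key points (eyes, nose, mouth, contour)
# are synchronously mapped onto the corresponding facial features of the
# virtual character. Names and the direct position copy are assumptions.

FACE_KEYPOINTS = ("left_eye", "right_eye", "nose", "mouth", "face_contour")

def map_to_avatar(face_features: dict) -> dict:
    """Copy each recognized key-point position onto the avatar's face rig."""
    avatar_face = {}
    for name in FACE_KEYPOINTS:
        if name in face_features:
            avatar_face[name] = face_features[name]  # synchronous mapping
    return avatar_face

# Only the points actually detected in this frame are updated on the avatar.
detected = {"left_eye": (120, 80), "mouth": (130, 150)}
rig = map_to_avatar(detected)
assert rig["mouth"] == (130, 150)
```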
It should be noted that, in the embodiment of the present application, the first terminal supports a face recognition and positioning technology, when positioning a face key point, a target object to be recognized (i.e., a face image corresponding to a first communication object) is first acquired, and when the terminal detects that the target object is the face image, the terminal may generate a target detection area for face recognition and positioning on the face image according to a preset configuration and perform labeling, so that the labeled target detection area is displayed on the face image, and the face key point is positioned.
Optionally, the target detection area is an area set for performing target object detection, for example, a face detection frame, and the face detection frame may be in a shape of a rectangle, a circle, an ellipse, or the like.
In the following, the face feature information is taken to be an HOG feature value (also referred to as an HOG data feature), and the HOG principle used in the embodiment of the present application is as follows. The core idea of HOG is that the appearance of a detected local object can be described by the distribution of intensity gradients or edge directions. The whole image is divided into small connected regions (called cells); each cell generates a histogram of the gradient directions or edge directions of its pixels, and the combination of these histograms represents the descriptor (of the detected target object). To improve accuracy, the local histograms can be normalized by computing the intensity over a larger region of the image (called a block) as a measure, and then normalizing all cells in the block with this value. This normalization achieves better invariance to illumination and shadow.
Compared with other descriptors, HOG-derived descriptors retain invariance to geometric and photometric transformations (unless the object orientation changes). Therefore, HOG descriptors are particularly suitable for the detection of human faces.
Specifically, the HOG feature extraction method is to perform the following processes on an image:
1. graying (converting the image to grayscale, i.e., treating the image as a three-dimensional function of x, y, and the gray value);
2. dividing the image into small cells (e.g., 2 x 2 pixels);
3. calculating the gradient (i.e., orientation) of each pixel in each cell;
4. counting the gradient histogram (the number of pixels with different gradient orientations) of each cell to form the descriptor of each cell.
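The four steps above can be sketched in a few lines of NumPy. This is a simplified illustration (no block normalization, fixed unsigned-orientation bins); the function name and parameters are assumptions for this sketch, not the patent's implementation.

```python
import numpy as np

def hog_descriptor(gray: np.ndarray, cell: int = 2, bins: int = 9) -> np.ndarray:
    """Simplified HOG following the four steps above (block normalization omitted)."""
    # Step 3: per-pixel gradients via central differences
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation in [0, 180)
    h, w = gray.shape
    hists = []
    # Steps 2 and 4: small cells, one magnitude-weighted orientation histogram each
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            hists.append(hist)
    # The concatenated cell histograms form the descriptor
    return np.concatenate(hists)

img = np.arange(16, dtype=float).reshape(4, 4)  # toy "grayscale" image
desc = hog_descriptor(img)
assert desc.shape == (4 * 9,)                   # 2x2 cells of size 2, 9 bins each
```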
Note that, in the embodiment of the present application, the weight deviation amount may be calculated by a gradient descent method. In short, for a given face key point position, some information at that position is computed to form a vector, i.e., the face feature information is extracted; the face feature information is then regressed, i.e., the values of the vector are combined, and finally a first offset between the face key point and the true solution is obtained. There are many methods for extracting face feature information, including random forests, SIFT, and the like; the extracted face feature information can express the characteristics of the face at the current key point position.
Illustratively, as shown in fig. 11b, the first terminal acquires the face image corresponding to the first communication object and displays it in display area 2 (the first display area), and the first terminal extracts face feature information from the face image by using a face recognition technique; the face feature points are shown within the dashed box (i.e., the target detection area) in fig. 11b. The first terminal then synchronously maps these features onto the facial features of the first virtual character on the voice communication interface. Thus, the first virtual character changes from fig. 11a to fig. 11b.
It can be understood that, because the first terminal can map the facial features of the actual person on the facial features of the corresponding virtual person, the actual appearance of the actual communication user, i.e., the first user, can be represented in real time, and the interestingness, the individuation and the communication effect of real-time communication are embodied.
And S112, sending the face feature information to a server.
After the first terminal identifies the face feature information of the face image, the first terminal can send the face feature information to the server, so that the server can forward the face feature information to a second terminal corresponding to a second communication object, the face feature corresponding to the first virtual character can be synchronously displayed on the second terminal, and the effect of real-time communication is achieved.
It should be noted that the face feature information may be understood as facial expression data, which is obtained by opening the front camera on the terminal, collecting images of the face, and identifying the current expression of the face, such as a closed eye or an open mouth; the current expression is represented by a string of feature data. In the embodiment of the present application, after the first communication object successfully joins the voice communication interface, if the number of communication objects in the voice communication interface is greater than 2, recognition of facial expressions (i.e., the function of real-time facial expression synchronization) is started automatically; meanwhile, the facial expression data are transmitted to the other communication objects in real time, and the other communication objects restore the expression of the first virtual character of the first communication object according to the facial expression data.
It is understood that, in the embodiments of the present application, related data such as human face feature information is referred to, when the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of the related data need to comply with relevant laws and regulations and standards of relevant countries and regions.
An embodiment of the present application provides a synchronous communication method, which is applied to a server side, and as shown in fig. 12, a process of establishing a communication connection between a server and a first terminal in the synchronous communication method may include:
S201, receiving a voice communication request message for a first communication session sent by a first terminal, wherein the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record joined by the first communication object.
S202, when the application interface corresponding to the identifier of the first communication session is not found, establishing the application interface corresponding to the identifier of the first communication session according to the voice communication request message, and generating a voice interface establishment completion message.
S203, sending the voice interface establishment completion message to the first terminal.
And S204, receiving the message for establishing the real-time data channel sent by the first terminal, and establishing the real-time data channel with the first terminal according to the message for establishing the real-time data channel.
And S205, when the real-time data channel is established, sending a communication permission message to the first terminal.
In the embodiment of the present application, when a first communication object logged in on a first terminal wants to perform real-time communication with a second communication object in a first communication session, the first terminal first needs to establish a communication connection with the server corresponding to the communication application after receiving the communication instruction generated by the first user touching the first communication session. The first terminal then sends a voice communication request message to the server (i.e., the real-time communication signaling module of the server receives the voice communication request message for the first communication session sent by the first terminal), where the first communication session is any one of the communication sessions in the history record joined by the first communication object, and the voice communication request message carries the identifier of the first communication session. The server checks whether a voice communication interface corresponding to the first communication session already exists (i.e., an application interface, which may also be regarded as a voice communication room). When the server does not find an existing voice communication interface for the first communication session, it uses the identifier of the first communication session to establish the corresponding voice communication interface, generates a voice interface establishment completion message, and returns it to the first terminal; that is, the first terminal receives the voice interface establishment completion message fed back by the server in response to the communication instruction. After receiving the voice interface establishment completion message, the first terminal starts establishing a real-time data channel with the server according to that message, i.e., the first terminal sends a message for establishing a real-time data channel to the server. The server receives this message and establishes the real-time data channel with the first terminal accordingly. When the real-time data channel is established, the server returns a communication permission message to the first terminal, i.e., the first terminal receives the communication permission message fed back by the server in response to the real-time data channel message, so that the first terminal can perform real-time communication with the second communication object in the first communication session on the voice communication interface.
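The S201–S205 signaling exchange can be sketched as a small in-memory server. The class, method, and message names below are assumptions chosen to mirror the steps; the actual server modules and wire format are not specified at this level.

```python
# Hedged sketch of the S201-S205 exchange: the server creates the voice
# communication interface (room) only if it does not already exist, then
# accepts the real-time data channel and grants communication permission.
# All identifiers here are illustrative assumptions.

class SignalingServer:
    def __init__(self):
        self.rooms = {}        # session identifier -> voice communication interface
        self.channels = set()  # terminals with an established real-time data channel

    def on_voice_request(self, session_id: str) -> str:
        # S202: establish the room for this session only when it is not found
        if session_id not in self.rooms:
            self.rooms[session_id] = {"members": []}
        return "voice_interface_establishment_complete"   # S203

    def on_establish_channel(self, terminal_id: str) -> str:
        self.channels.add(terminal_id)                    # S204
        return "communication_permitted"                  # S205

server = SignalingServer()
assert server.on_voice_request("session-1") == "voice_interface_establishment_complete"
assert server.on_establish_channel("terminal-A") == "communication_permitted"
```

Note that a second terminal requesting the same session identifier finds the room already present (the S201→S203 short path described later) and goes straight to channel establishment.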
It should be noted that, in this embodiment of the present application, the second terminal that has logged in the second communication object may also establish a connection of a real-time data channel with the server according to the manner of the first terminal, so that the first terminal and the second terminal are located in the same voice communication interface of real-time data, and the first terminal and the second terminal may exchange data with the server through respective real-time data channels and forward the data to the other side through the server, thereby implementing real-time communication.
Further, in this embodiment of the application, the first terminal sends a voice communication request message to the server, where the voice communication request message carries the identifier of the first communication session, so that the server checks whether a voice communication interface corresponding to the first communication session already exists (i.e., a voice communication room). When the server finds that the voice communication interface corresponding to the first communication session already exists, this indicates that a member of the first communication session is already communicating; thus, the server directly returns a voice interface establishment completion message to the first terminal, that is, the first terminal receives the voice interface establishment completion message fed back by the server in response to the communication instruction. After receiving the voice interface establishment completion message, the first terminal needs to start establishing the real-time data channel, that is, the first terminal sends a message for establishing a real-time data channel to the server to establish the real-time data channel.
Further, in the synchronous communication method provided by the embodiment of the present application, after S205, the server may perform data interaction with the first terminal and the second terminal. The method specifically comprises the following steps: S206-S208.
The following were used:
s206, receiving first voice data sent by the first terminal, and forwarding the first voice data to a second terminal, wherein the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in the history record.
And S207, receiving the face feature information sent by the first terminal, and forwarding the face feature information to the second terminal.
S208, receiving the function object sent by the first terminal, and forwarding the function object to the second terminal.
After the server establishes a real-time data channel with the first terminal, the server may perform data interaction with the first terminal. The first terminal may transmit various real-time data to the server in real time, such as the first voice data of the first communication object, the facial features of the first virtual character, or a functional object, and the server forwards the real-time data to the second terminals corresponding to the other communication objects in the same voice communication interface, where the second communication object is a communication object in the history record. In this way, after receiving the first voice data forwarded by the server, the second terminal plays it in real time; or the second terminal synchronously maps the forwarded facial features onto the first virtual character; or the second terminal receives the functional object of the first terminal forwarded by the server, where the functional object corresponds to the first virtual character, so that the second terminal can display or respond to the functional object correspondingly to the first virtual character, the specific implementation process being identical to that on the first terminal. Therefore, when the first communication object on the first terminal speaks, synchronizes expressions in real time, or issues an emotion icon, then, through forwarding by the server, the first virtual character representing the first communication object on the second terminal correspondingly broadcasts the voice, synchronizes the expression in real time, or displays the emotion icon, thereby realizing real-time communication between the corresponding communication objects on the first terminal and the second terminal.
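The forwarding behaviour of S206–S208 is a fan-out: every real-time payload from one terminal is relayed to all other terminals in the same voice communication interface. A minimal sketch, with illustrative names only:

```python
# Hypothetical sketch of the S206-S208 relay: the server forwards each
# real-time payload (voice data, face feature information, or a functional
# object) from the sender to every other member of the room.

def forward(room_members: list, sender: str, payload: dict, outboxes: dict) -> None:
    """Append payload to the outbox of every room member except the sender."""
    for member in room_members:
        if member != sender:
            outboxes.setdefault(member, []).append(payload)

outboxes = {}
forward(["A", "B", "C"], "A", {"kind": "voice", "seq": 1}, outboxes)
# The sender gets nothing back; both other members receive the payload.
assert "A" not in outboxes and outboxes["B"] == [{"kind": "voice", "seq": 1}]
```

The same function covers the reverse direction described next: when a second terminal sends data, it becomes the `sender` and the first terminal is among the receivers.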
It should be noted that, in this embodiment of the present application, the second terminal corresponding to one second communication object may also send out a communication form that matches or responds to that of the first communication object, such as second voice data. In this case, the server also forwards the real-time data generated by that second terminal to the first terminal and to the other second terminals corresponding to the other second communication objects, so that the real-time communication form of that second communication object is synchronized for the other second communication objects and the first communication object to see, thereby completing the real-time communication between the communication objects in the first communication session.
Further, the server needs to synchronously forward the voice state of the first avatar and the voice broadcast identifier of the first communication session, etc. to the second terminal so as to synchronously display on the second terminal.
It can be understood that, in the embodiment of the present application, the server may provide the functions of establishing real-time communication and forwarding communication data for the first terminal and the second terminal, so that the first terminal and the second terminal can autonomously join a communication session and receive and send real-time data such as voice data, the establishment of synchronous communication and the implementation of synchronous communication can be flexibly performed, and the human-computer interaction performance is improved.
An embodiment of the present application provides a synchronous communication method, as shown in fig. 13, taking voice communication between a first terminal and a second terminal corresponding to communication objects belonging to a first communication session as an example, and assuming that the communication application is Second World, the method may include:
S301, when Second World on the first terminal is opened (a start touch instruction of the communication application is received), the first terminal loads the current communication interface of Second World, and displays, on the current communication interface, the first virtual character of the first communication object logged into the communication application and the virtual character group formed by the communication objects in the history record.
S302, the first terminal receives a communication instruction for adding the first communication object into the first communication session on the current communication interface.
And S303, the first terminal responds to the communication instruction and sends a voice communication request message to the server, wherein the voice communication request message carries the identifier of the first communication session.
S304, when the server does not find the voice communication interface corresponding to the identifier of the first communication session, the server establishes the voice communication interface corresponding to the identifier of the first communication session according to the voice communication request message and generates a voice interface establishment completion message.
S305, the server sends the voice interface establishment completion message to the first terminal.
S306, the first terminal establishes a completion message according to the voice interface and sends a message for establishing a real-time data channel to the server.
And S307, the server establishes a real-time data channel with the first terminal according to the message for establishing the real-time data channel.
And S308, when the real-time data channel is established, the server sends a communication permission message to the first terminal.
S309, the first terminal responds to the communication permission message, loads the voice communication interface, and displays the first virtual character and the virtual character group on the voice communication interface.
S310, triggering and starting a voice function in a first communication session of a voice communication interface, and when the voice function is started, receiving first voice data of a first communication object by a first terminal, and synchronously displaying a voice state of a first virtual character and a voice broadcast identifier of the first communication session.
S311, the first terminal sends the first voice data, the voice broadcast identification and the voice state to a server.
S312, the server forwards the first voice data, the voice broadcast identification and the voice state to the second terminal.
And S313, the second terminal synchronously displays the voice broadcast identification and the voice state.
It should be noted that the communication form between the first terminal and the second terminal may also be a real-time communication manner such as expression or limb, which is consistent with the process and principle of real-time receiving and sending of voice data, and is not described herein again.
As shown in fig. 14, an embodiment of the present application provides a first terminal 1, and the first terminal 1 includes:
the display unit 10 is used for starting an application interface according to a starting touch instruction, and displaying a corresponding first communication object and a virtual character group formed by corresponding second communication objects in a historical record on the application interface;
a first receiving unit 11, configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history record;
the communication unit 12 is configured to complete establishment of a first communication session between the first communication object and the second communication object in the application interface when the voice communication request is allowed;
an initiating unit 13, configured to trigger to start a voice function in the first communication session of the application interface,
a first sending unit 14, configured to send first voice data triggered by the first communication object to a second terminal when the voice function is turned on,
the display unit 10 is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located.
Optionally, the display unit 10 is further configured to display, on the application interface, a first virtual character and the virtual character group corresponding to the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object.
Optionally, based on fig. 14, as shown in fig. 15, the first terminal 1 further includes: the acquisition unit 15, the identification unit 16 and the mapping unit 17;
the acquiring unit 15 is configured to acquire, in real time, a facial image of the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object and before the application interface displays the first virtual character and the virtual character group corresponding to the first communication object,
the display unit 10 is further configured to display the face image in a first display area of the application interface;
the recognition unit 16 is configured to recognize face feature information of the face image;
the mapping unit 17 is configured to map the facial feature information onto facial features of the first virtual character;
the first sending unit 14 is further configured to send the facial feature information to the server.
Optionally, the first receiving unit 11 is configured to receive a communication function touch instruction after the first avatar and the avatar group corresponding to the first communication object are displayed on the application interface,
the display unit 10 is further configured to respond to the communication function touch instruction and call a function selection interface in a second display area of the voice communication interface;
the first receiving unit 11 is further configured to receive a selection instruction on the function selection interface,
the display unit 10 is further configured to respond to the selection instruction, and display the functional object selected by the selection instruction and the first virtual character on the voice communication interface in a corresponding manner;
the first sending unit 14 is further configured to send the function object to the server.
Optionally, the display unit 10 is specifically configured to, when the selection instruction is a limb selection instruction, respond to the selection instruction, map the limb object selected by the selection instruction to the first virtual character synchronously, and then display the first virtual character. Optionally, the display unit 10 is specifically configured to, when the selection instruction is an expression selection instruction, respond to the selection instruction, call up a function display area corresponding to the first virtual character on the voice communication interface, and display the selected expression object.
Optionally, the first sending unit 14 is specifically configured to send the voice communication request message to the server in response to the communication instruction;
the first receiving unit 11 is specifically configured to receive a voice interface establishment completion message that is fed back by the server in response to the communication instruction;
the first sending unit 14 is further specifically configured to send a message for establishing a real-time data channel to the server according to the voice interface establishment completion message;
the first receiving unit 11 is further specifically configured to receive the permission communication message fed back by the server in response to the real-time data channel message, where the permission communication message is used to characterize that the voice communication request is permitted.
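The four paragraphs above describe a four-message handshake: request, establishment feedback, channel request, permission. The sketch below walks through that sequence against a scripted stand-in server; the message field names and the `FakeServer` object are illustrative assumptions, since the disclosure names the messages but not their wire format.

```python
from collections import deque

class FakeServer:
    """Scripted stand-in for the server side of the handshake."""
    def __init__(self):
        self.inbox = []
        self.replies = deque([
            {"type": "voice_interface_established"},  # step 2 feedback
            {"type": "communication_permitted"},      # step 4 feedback
        ])

    def send(self, msg):
        self.inbox.append(msg)

    def receive(self):
        return self.replies.popleft()

def request_voice_communication(server, session_id):
    # Step 1: send the voice communication request message for the session.
    server.send({"type": "voice_comm_request", "session": session_id})
    # Step 2: receive the voice interface establishment completion message.
    if server.receive()["type"] != "voice_interface_established":
        return False
    # Step 3: send the message for establishing a real-time data channel.
    server.send({"type": "establish_rt_channel", "session": session_id})
    # Step 4: the permission message characterizes that the request is permitted.
    return server.receive()["type"] == "communication_permitted"
```

Only after step 4 succeeds does the first terminal enter the voice communication interface described in the following paragraph.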
It can be understood that the first communication session in the history record can be selected on the current communication interface, and the communication connection between the first terminal and the server is established by sending the voice communication request message to the server. When the connection is completed, that is, when the first terminal receives the communication permission message, the first terminal can enter the voice communication interface and see which communication objects are present there. When the voice function corresponding to the first terminal or the first communication object is opened, a voice call is made to the other communication objects, and the first terminal synchronizes the voice broadcast identifier of the first communication session to the second terminal through the server, so that the second terminal can display which communication session has voice communication in progress. The first terminal thus provides both an autonomous choice for establishing the communication connection with the server and a mechanism for autonomously making the voice call; the establishment and implementation of synchronous communication can be performed flexibly, a new form of virtual character representing the identity of a communication object is provided on the voice communication interface, and the man-machine interaction performance is thereby improved.
As shown in fig. 16, an embodiment of the present application provides a server 2, where the server 2 may include:
a second receiving unit 20, configured to receive a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record joined by the first communication object;
an establishing unit 21, configured to, when the application interface corresponding to the identifier of the first communication session is not found, establish an application interface corresponding to the identifier of the first communication session according to the voice communication request message,
a generating unit 22, configured to generate a voice interface setup completion message;
a second sending unit 23, configured to send the voice interface establishment completion message to the first terminal;
the second receiving unit 20 is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing a real-time data channel;
the second sending unit 23 is further configured to send a communication permission message to the first terminal when the real-time data channel is established.
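The server-side units above amount to two handlers: establish the application interface only when lookup by session identifier fails, then acknowledge the real-time data channel with a permission message. A compact sketch under those assumptions (all class, method, and message names are illustrative):

```python
class SyncCommServer:
    """Illustrative server state for session interfaces and real-time channels."""
    def __init__(self):
        self.interfaces = {}    # session identifier -> application interface state
        self.channels = set()   # (terminal id, session id) real-time data channels

    def on_voice_comm_request(self, session_id):
        # Establish the application interface only when none is found for this id;
        # a repeated request for the same session reuses the existing interface.
        if session_id not in self.interfaces:
            self.interfaces[session_id] = {"members": []}
        # Generate the voice interface establishment completion message.
        return {"type": "voice_interface_established", "session": session_id}

    def on_establish_rt_channel(self, terminal_id, session_id):
        # Establish the real-time data channel, then permit communication.
        self.channels.add((terminal_id, session_id))
        return {"type": "communication_permitted", "session": session_id}
```

The lookup-before-create step is what lets a second terminal join an already-running session rather than spawn a duplicate interface.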
Optionally, the second receiving unit 20 is further configured to receive the first voice data sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit 23 is further configured to forward the first voice data to a second terminal, where the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in a history record;
the second receiving unit 20 is further configured to receive the facial feature information sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit 23 is further configured to forward the face feature information to the second terminal;
the second receiving unit 20 is further configured to receive the function object sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit 23 is further configured to forward the function object to the second terminal.
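The three forwarding duties above (voice data, face feature information, function objects) share one pattern: the server relays the payload unchanged to every online second terminal in the session. A hedged sketch of that fan-out, with an assumed `online_terminals` map that is not part of the original disclosure:

```python
def forward_to_session(online_terminals, sender, payload):
    """Return the delivery map for one forwarded payload.

    `online_terminals` maps terminal ids to True/False online flags; only
    online second terminals (every terminal except the sender) receive
    the payload, matching the "online second communication object" condition.
    """
    return {
        terminal: payload
        for terminal, online in online_terminals.items()
        if online and terminal != sender
    }
```

Because the server only relays, the same routine serves voice frames, facial features, and function objects alike.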
It can be understood that, in this embodiment of the present application, the server may provide the first terminal and the second terminal with functions for establishing real-time communication and forwarding communication data, so that the first terminal and the second terminal can autonomously join a communication session and send and receive real-time data such as voice data. The establishment and implementation of synchronous communication can therefore be performed flexibly, improving the human-computer interaction performance.
As shown in fig. 17, an embodiment of the present application provides a first terminal, corresponding to a synchronous communication method on a first terminal side, including: a first receiver 17, a first transmitter 18, a first memory 19, a first processor 110, a display 111, a camera 112, a player 114 and a first communication bus 113, wherein the first receiver 17, the first transmitter 18, the first memory 19, the display 111, the camera 112, the player 114 and the first processor 110 are connected through the first communication bus 113;
the display 111 is configured to start an application interface according to the start touch instruction, and display, on the application interface, a first virtual character corresponding to the first communication object and a virtual character group formed by the corresponding second communication objects in the history record;
the first receiver 17 is configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history record;
the first processor 110 calls the synchronous communication related program stored in the first memory 19, and executes: when the voice communication request is allowed, completing the establishment of a first communication session of the first communication object and the second communication object at the application interface; triggering an open voice function in the first communication session of the application interface,
the first transmitter 18 is configured to transmit the first voice data triggered by the first communication object to the second terminal when the voice function is started,
the display 111 is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located;
the player 114 is configured to play the first voice data synchronously.
Optionally, the display 111 is further configured to display, on the application interface, a first virtual character and the virtual character group corresponding to the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object.
Optionally, the camera 112 is configured to acquire, in real time, a face image of the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object and before the application interface displays the first virtual character and the virtual character group corresponding to the first communication object,
the display 111 is further configured to display the face image in a first display area of the voice communication interface;
the first processor 110 is further configured to identify face feature information of the face image; mapping the face feature information to the face features of the first virtual character;
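The camera, display, and processor paragraphs above form a capture → identify → map → send pipeline per frame. The sketch below keeps that flow visible; `extract_features` is injected because the disclosure does not specify a recognition method (a real system would use a face-landmark model), and all names are illustrative.

```python
def process_face_frame(face_image, extract_features, avatar_face, outbox):
    """One frame of the face-feature pipeline: identify, map to avatar, queue for server."""
    # Identify face feature information from the captured frame.
    features = extract_features(face_image)
    # Map the feature information onto the first virtual character's face.
    avatar_face.update(features)
    # Queue the same feature information for transmission to the server,
    # which forwards it so the second terminal can mirror the avatar.
    outbox.append(features)
    return features
```

Note that the identical feature dictionary drives both the local avatar and the server-bound message, which is what keeps the two ends synchronized.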
the first transmitter 18 is further configured to send the facial feature information to the server.
Optionally, the first receiver 17 is configured to receive a communication function touch instruction after the first avatar and the avatar group corresponding to the first communication object are displayed on the application interface,
the display 111 is further configured to call a function selection interface in a second display area of the application interface in response to the communication function touch instruction;
the first receiver 17 is further configured to receive a selection instruction on the function selection interface,
the display 111 is further configured to respond to the selection instruction, and display the functional object selected by the selection instruction and the first virtual character on the voice communication interface in a corresponding manner;
the first transmitter 18 is further configured to transmit the function object to the server.
Optionally, the display 111 is specifically configured to, when the selection instruction is a limb selection instruction, respond to the selection instruction, map the limb object selected by the selection instruction to the first virtual character synchronously, and then display the first virtual character.
Optionally, the display 111 is specifically configured to, when the selection instruction is an expression selection instruction, respond to the selection instruction, call up a function display area corresponding to the first virtual character on the voice communication interface, and display the selected expression object.
Optionally, the first transmitter 18 is specifically configured to send the voice communication request message to the server in response to the communication instruction;
the first receiver 17 is specifically configured to receive a voice interface establishment completion message that is fed back by the server in response to the communication instruction;
the first transmitter 18 is further specifically configured to send a message for establishing a real-time data channel to the server according to the voice interface establishment completion message;
the first receiver 17 is further specifically configured to receive the permission communication message that is fed back by the server in response to the real-time data channel message, where the permission communication message is used to characterize that the voice communication request is permitted.
It can be understood that the first communication session in the history record can be selected on the current communication interface, and the communication connection between the first terminal and the server is established by sending the voice communication request message to the server. When the connection is completed, that is, when the first terminal receives the communication permission message, the first terminal can enter the voice communication interface and see which communication objects are present there. When the voice function corresponding to the first terminal or the first communication object is opened, a voice call is made to the other communication objects, and the first terminal synchronizes the voice broadcast identifier of the first communication session to the second terminal through the server, so that the second terminal can display which communication session has voice communication in progress. The first terminal thus provides both an autonomous choice for establishing the communication connection with the server and a mechanism for autonomously making the voice call; the establishment and implementation of synchronous communication can be performed flexibly, a new form of virtual character representing the identity of a communication object is provided on the voice communication interface, and the man-machine interaction performance is thereby improved.
As shown in fig. 18, an embodiment of the present application provides a server, which may include: a second receiver 24, a second transmitter 25, a second processor 26, a second memory 27, and a second communication bus 28, wherein the second receiver 24, the second transmitter 25, the second memory 27, and the second processor 26 are connected through the second communication bus 28; the second processor 26 is configured to invoke the synchronous communication related program stored in the second memory 27.
The second receiver 24 is configured to receive a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record added by the first communication object;
the second processor 26 is configured to, when the application interface corresponding to the identifier of the first communication session is not found, establish, according to the voice communication request message, an application interface corresponding to the identifier of the first communication session, and generate a voice interface establishment completion message;
the second transmitter 25 is configured to send the voice interface establishment completion message to the first terminal;
the second receiver 24 is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing a real-time data channel;
the second transmitter 25 is further configured to transmit a communication permission message to the first terminal when the real-time data channel is established.
Optionally, the second receiver 24 is further configured to, after the sending of the communication permission message to the first terminal, receive first voice data sent by the first terminal,
the second sender 25 is further configured to forward the first voice data to a second terminal, where the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in a history record;
the second receiver 24 is further configured to receive the facial feature information sent by the first terminal after the communication permission message is sent to the first terminal,
the second transmitter 25 is further configured to forward the facial feature information to the second terminal;
the second receiver 24 is further configured to receive the function object sent by the first terminal after the communication permission message is sent to the first terminal,
the second sender 25 is further configured to forward the function object to the second terminal.
It can be understood that, in this embodiment of the present application, the server may provide the first terminal and the second terminal with functions for establishing real-time communication and forwarding communication data, so that the first terminal and the second terminal can autonomously join a communication session and send and receive real-time data such as voice data. The establishment and implementation of synchronous communication can therefore be performed flexibly, improving the human-computer interaction performance.
In practical applications, the memory may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the processor.
The Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic devices for implementing the above processor functions may be other devices, and the embodiments of the present application are not limited in particular.
Each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional module.
Based on this understanding, the part of the technical solutions of this embodiment that in essence contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a computer-readable storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiment of the present application provides a first computer-readable storage medium, which is applied to a first terminal, and the first computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more first processors to implement a synchronous communication method on a first terminal side in the embodiment of the present application.
A second computer-readable storage medium is provided in an embodiment of the present application, and is applied to a server, where the second computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more second processors to implement a synchronous communication method on a server side in an embodiment of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (18)

1. A method for synchronous communication, applied to a first terminal, includes:
starting an application interface according to a starting touch instruction, and displaying a first virtual character and a second virtual character on the application interface; the first virtual character is an avatar corresponding to the first communication object, and the second virtual character is an avatar corresponding to the second communication object;
receiving a communication instruction, and triggering a voice communication request according to the communication instruction;
when a communication permission message sent by a server responding to a voice communication request message is received, the establishment of a first communication session of the first communication object and the second communication object is completed on the application interface;
triggering the opening of a voice function, so as to send first voice data triggered by the first communication object to a second terminal where the second communication object is located, and synchronously displaying the voice state of the first virtual character and the voice broadcast identifier of the first communication session in the application interface where the first communication object and the second communication object are located; the voice broadcast identifier and the voice state are used for prompting the second terminal of the communication session in which the voice call is taking place and of the communication objects in the voice call, so that the second terminal can select the voice communication.
2. The method of claim 1, wherein displaying the first avatar and the second avatar in the application interface comprises:
displaying a virtual character group consisting of a first virtual character and at least one second communication object which is communicated with the first communication object on the application interface; the second virtual character is a virtual character in the virtual character group except the first virtual character.
3. The method of claim 1, wherein after the application interface completes establishment of the first communication session between the first communication object and the second communication object, the method further comprises:
acquiring a face image of the first communication object in real time, and displaying the face image in a first display area of the application interface;
identifying face feature information of the face image;
mapping the face feature information to the face features of the first virtual character;
and sending the face feature information to the second terminal so that the second terminal can synchronously display the face feature of the first virtual character according to the face feature information.
4. The method according to any of claims 1-3, wherein after the application interface completes establishment of the first communication session between the first communication object and the second communication object, the method further comprises:
receiving a communication function touch instruction, responding to the communication function touch instruction, and calling a function selection interface in a second display area of the application interface;
receiving a selection instruction on the function selection interface, responding to the selection instruction, and correspondingly displaying the function object selected by the selection instruction and the first virtual character on the application interface;
and sending the functional object to the second terminal so that the second terminal correspondingly displays the functional object and the first virtual character.
5. The method according to claim 4, wherein the displaying, in response to the selection instruction, the functional object selected by the selection instruction on the application interface in correspondence with the first virtual character comprises:
and when the selection instruction is a limb selection instruction, responding to the selection instruction, synchronously mapping the limb object selected by the selection instruction to the first virtual character, and then displaying the first virtual character.
6. The method according to claim 4, wherein the displaying, in response to the selection instruction, the functional object selected by the selection instruction on the application interface in correspondence with the first virtual character comprises:
and when the selection instruction is an expression selection instruction, responding to the selection instruction, calling out a function display area corresponding to the first virtual character on the application interface, and displaying the selected expression object.
7. The method according to any of claims 1-3, wherein after the application interface completes establishment of the first communication session between the first communication object and the second communication object, the method further comprises:
respectively displaying online state prompt marks in areas corresponding to the first virtual character and the second virtual character; the online state prompt identifier represents the online or offline state of the virtual character.
8. The method according to any of claims 1-3, wherein after the application interface completes establishment of the first communication session between the first communication object and the second communication object, the method further comprises:
displaying a communication friend adding control in the application interface;
and responding to the adding operation aiming at the added communication friend control, and initiating an invitation of real-time communication to a communication object specified by the adding operation.
9. The method according to any one of claims 1-3, wherein after the application interface is opened according to the launch touch instruction, the method further comprises:
selecting a communication object through an address book in a communication application to carry out a communication session; alternatively, triggering a communication session by adding a communication object.
10. The method according to any one of claims 1-3, further comprising:
and setting the image of the virtual character through the setting template of the virtual character.
11. The method of claim 1, wherein triggering a voice communication request according to the communication instruction comprises:
responding to the communication instruction, and sending the voice communication request message to a server;
receiving a voice interface establishment completion message fed back by the server in response to the communication instruction;
according to the voice interface establishment completion message, sending a message for establishing a real-time data channel to the server;
receiving the permission communication message fed back by the server in response to the real-time data channel message, wherein the permission communication message is used for representing that the voice communication request is permitted.
12. A synchronous communication method is applied to a second terminal and comprises the following steps:
under the condition that a first communication object in a first communication session starts a voice function, displaying a voice broadcast identifier in a display area of the first communication session on a communication interface, and selecting voice communication through the voice broadcast identifier, wherein the voice broadcast identifier is used for displaying the communication object in the voice call; the first communication session is a communication connection established between the first terminal and a second communication object on the second terminal under the condition that the first terminal is allowed by the server;
adding the second communication object into the voice call corresponding to the first communication session by opening the first communication session, and displaying the voice state of the first virtual character in the application interface where the second communication object is located; the first virtual character is an avatar corresponding to the first communication object.
13. The method of claim 12, further comprising:
receiving face feature information sent by the first terminal; the face feature information is obtained by the first terminal by acquiring a face image of the first communication object in real time and identifying the face image;
and synchronously displaying the facial features corresponding to the first virtual character according to the facial feature information.
14. The method according to claim 12 or 13, further comprising:
receiving a functional object sent by the first terminal;
under the condition that the functional object is an expression object, displaying the expression object in a functional display area corresponding to the first virtual character;
and under the condition that the functional object is a limb object, synchronously mapping the limb object on the first virtual character, and displaying the first virtual character.
15. A synchronous communication device, comprising:
the display unit is used for starting an application interface according to a starting touch instruction and displaying a first virtual character and a second virtual character on the application interface; the first virtual character is an avatar corresponding to the first communication object, and the second virtual character is an avatar corresponding to the second communication object;
the first receiving unit is used for receiving a communication instruction and triggering a voice communication request according to the communication instruction;
the communication unit is used for completing the establishment of a first communication session of the first communication object and the second communication object on the application interface when receiving a communication permission message sent by a server in response to a voice communication request message;
the first sending unit is used for sending the first voice data triggered by the first communication object to the second terminal by triggering and starting the voice function;
the display unit is further configured to, when the voice function is triggered on, synchronously display the voice state of the first virtual character and the voice broadcast identifier of the first communication session in the application interface where the first communication object and the second communication object are located; the voice broadcast identifier and the voice state are used for prompting the second terminal of the communication session in which the voice call is taking place and of the communication objects in the voice call, so that the second terminal can select the voice communication.
16. A synchronous communication device, comprising:
a display unit, configured to display a voice broadcast identifier in a display area of a first communication session on a communication interface when a voice function of a first communication object in the first communication session is started, voice communication being selectable through the voice broadcast identifier, wherein the voice broadcast identifier indicates the communication object that is in a voice call; the first communication session is a communication connection established, with permission of a server, between the first terminal and a second communication object on a second terminal;
a communication unit, configured to add the second communication object to the voice call corresponding to the first communication session when the first communication session is opened;
wherein the display unit is further configured to display a voice state of the first virtual character in an application interface where the second communication object is located; the first virtual character is an avatar corresponding to the first communication object.
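Claim 16 describes the mirror-image, second-terminal side: the broadcast identifier shown in the session's display area is what lets the second communication object join the ongoing call. A minimal sketch, again with purely illustrative names (`SecondTerminal`, `show_broadcast`, `open_session`):

```python
class SecondTerminal:
    """Illustrative model of the device of claim 16 (second terminal)."""

    def __init__(self):
        self.sessions = {}   # session id -> display state for that session
        self.joined = set()  # communication objects that joined the call

    def show_broadcast(self, session_id, speaker_avatar):
        # Display unit: show the voice broadcast identifier in the session's
        # display area, together with which avatar is currently speaking.
        self.sessions[session_id] = {"broadcast": True,
                                     "speaker": speaker_avatar}

    def open_session(self, session_id, comm_object):
        # Communication unit: opening the session adds the second
        # communication object to the corresponding voice call, and the
        # display unit then shows the first virtual character's voice state.
        state = self.sessions.get(session_id)
        if state and state["broadcast"]:
            self.joined.add(comm_object)
            return {"avatar": state["speaker"], "speaking": True}
        return None


terminal = SecondTerminal()
terminal.show_broadcast("s1", "alice-avatar")
state = terminal.open_session("s1", "bob")
print(state["speaking"])    # True
print("bob" in terminal.joined)  # True
```

Joining is gated on the broadcast flag: if no voice function is active in the session, `open_session` returns `None` and nothing is joined, matching the claim's condition that the identifier is displayed only "in the case that the voice function ... is started".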
17. A terminal, comprising:
a memory, configured to store a synchronous-communication-related program;
a processor, configured to implement the method of any one of claims 1 to 11, or of any one of claims 12 to 14, when executing the synchronous-communication-related program stored in the memory.
18. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the synchronous communication method of any one of claims 1 to 11 or of any one of claims 12 to 14.
CN202210044304.2A 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium Active CN114244816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210044304.2A CN114244816B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710744130.XA CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server
CN202210044304.2A CN114244816B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201710744130.XA Division CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server

Publications (2)

Publication Number Publication Date
CN114244816A CN114244816A (en) 2022-03-25
CN114244816B true CN114244816B (en) 2023-02-21

Family

ID=65500295

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710744130.XA Active CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server
CN202210044304.2A Active CN114244816B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710744130.XA Active CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server

Country Status (1)

Country Link
CN (2) CN109428859B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10616151B1 (en) 2018-10-17 2020-04-07 Asana, Inc. Systems and methods for generating and presenting graphical user interfaces
CN110384933B * 2019-08-26 2023-08-11 NetEase (Hangzhou) Network Co., Ltd. Deployment control method and device for virtual objects in game
CN111179317A * 2020-01-04 2020-05-19 Kuodi Education Technology Co., Ltd. Interactive teaching system and method
CN113765756A * 2020-06-02 2021-12-07 Yunmi Internet Technology (Guangdong) Co., Ltd. Communication method of home terminal, terminal and storage medium
CN111986297A * 2020-08-10 2020-11-24 Shandong Jindong Digital Creative Co., Ltd. Virtual character facial expression real-time driving system and method based on voice control
US11769115B1 * 2020-11-23 2023-09-26 Asana, Inc. Systems and methods to provide measures of user workload when generating units of work based on chat sessions between users of a collaboration environment
CN115914162A * 2021-09-30 2023-04-04 Shanghai Zhangmen Technology Co., Ltd. Method, apparatus, medium and program product for providing group status
CN114598738A * 2022-02-22 2022-06-07 NetEase (Hangzhou) Network Co., Ltd. Data processing method, data processing device, storage medium and computer equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103368816A * 2012-03-29 2013-10-23 Shenzhen Tencent Computer Systems Co., Ltd. Instant communication method based on virtual character and system
CN104937545A * 2012-10-26 2015-09-23 Kakao Corp. Method for operating application providing group call service using mobile voice over internet protocol
CN105407408A * 2014-09-11 2016-03-16 Tencent Technology (Shenzhen) Co., Ltd. Method for realizing multiplayer voice and video communication on mobile terminal and mobile terminal
CN105577653A * 2015-12-17 2016-05-11 Xiaomi Inc. Method and apparatus for establishing video conversation

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7277855B1 (en) * 2000-06-30 2007-10-02 At&T Corp. Personalized text-to-speech services
TWI439960B (en) * 2010-04-07 2014-06-01 Apple Inc Avatar editing environment
CN103391205B * 2012-05-08 2017-06-06 Alibaba Group Holding Ltd. The sending method of group communication information, client
CN103856386B * 2012-11-28 2016-10-26 Tencent Technology (Shenzhen) Co., Ltd. Information interacting method, system, server and instant messaging client
CN112152909B * 2015-02-16 2022-11-01 DingTalk Holding (Cayman) Ltd. User message reminding method

Also Published As

Publication number Publication date
CN109428859B (en) 2022-01-11
CN114244816A (en) 2022-03-25
CN109428859A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN114244816B (en) Synchronous communication method, terminal and readable storage medium
US11504636B2 (en) Games in chat
EP3713159A1 (en) Gallery of messages with a shared interest
CN105095873A (en) Picture sharing method and apparatus
CN104317932A (en) Photo sharing method and device
CN111835531B (en) Session processing method, device, computer equipment and storage medium
US11790614B2 (en) Inferring intent from pose and speech input
CN108876878B (en) Head portrait generation method and device
WO2018094911A1 (en) Multimedia file sharing method and terminal device
WO2021213057A1 (en) Help-seeking information transmitting method and apparatus, help-seeking information responding method and apparatus, terminal, and storage medium
CN113014471A (en) Session processing method, device, terminal and storage medium
CN113350802A (en) Voice communication method, device, terminal and storage medium in game
TW202008753A (en) Method and apparatus for sending message, and electronic device
CN111569436A (en) Processing method, device and equipment based on interaction in live broadcast fighting
CN114880062B (en) Chat expression display method, device, electronic device and storage medium
US20230362333A1 (en) Data processing method and apparatus, device, and readable storage medium
US20220115018A1 (en) Synchronous audio and text generation
CN112449098B (en) Shooting method, device, terminal and storage medium
US20240176470A1 (en) Automated tagging of content items
WO2023138184A1 (en) Prompt information display method and apparatus, storage medium and electronic device
EP4141704A1 (en) Method and apparatus for music generation, electronic device, storage medium
CN112820265B (en) Speech synthesis model training method and related device
CN109842546B (en) Conversation expression processing method and device
CN113518198A (en) Session interface display method, conference interface display method and device and electronic equipment
CN114327197A (en) Message sending method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40065461
Country of ref document: HK
GR01 Patent grant