CN111918090A - Live broadcast picture display method and device, terminal and storage medium - Google Patents

Live broadcast picture display method and device, terminal and storage medium

Info

Publication number
CN111918090A
CN111918090A
Authority
CN
China
Prior art keywords
picture
client
virtual prop
sub
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010797133.1A
Other languages
Chinese (zh)
Other versions
CN111918090B (en)
Inventor
陈文琼 (Chen Wenqiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Fanxing Huyu IT Co Ltd
Original Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Fanxing Huyu IT Co Ltd filed Critical Guangzhou Fanxing Huyu IT Co Ltd
Priority to CN202010797133.1A priority Critical patent/CN111918090B/en
Publication of CN111918090A publication Critical patent/CN111918090A/en
Application granted granted Critical
Publication of CN111918090B publication Critical patent/CN111918090B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808 Management of client data
    • H04N21/25841 Management of client data involving the geographical location of the client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 Games
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of this application provide a live broadcast picture display method and device, a terminal, and a storage medium. The method is applied to an initiator client and includes: receiving a first game-play indication; displaying a first live broadcast picture according to the first game-play indication; controlling a first virtual prop to move between a first sub-picture and a second sub-picture; and, while controlling the first virtual prop to move, sending position information of the first virtual prop to a server. The server forwards the position information to at least one recipient client, and the at least one recipient client, after rendering and displaying a second live broadcast picture, displays the first virtual prop overlaid in the second live broadcast picture based on that position information. The technical solution provided by the embodiments of this application enriches the forms of interaction during co-hosted (mic-linked) live streaming and keeps the pictures seen by the participating anchors synchronized during interaction.

Description

Live broadcast picture display method and device, terminal and storage medium
Technical Field
The embodiments of this application relate to the field of internet technology, and in particular to a live broadcast picture display method, device, terminal, and storage medium.
Background
With the development of network technology and the wide adoption of live streaming, a co-hosted (mic-linked) live streaming mode is now available in which at least two anchor users link up for their audiences to watch. This mode is novel and enriches live broadcast content.
The related art provides interactive animated games in live broadcasts. Because synchronizing positions across multiple anchors' pictures is difficult, interactive animated games in co-hosted live streaming display independent animations in the live picture corresponding to each anchor; for example, a first animation is displayed in the live picture corresponding to a first anchor, and a second animation is displayed in the live picture corresponding to a second anchor.
In the related art, interactive animated games in co-hosted live streaming therefore lack interaction among the anchors.
Disclosure of Invention
The embodiments of this application provide a live broadcast picture display method and device, a terminal, and a storage medium, which enrich the forms of interaction during co-hosted live streaming. The technical solution is as follows:
In one aspect, an embodiment of this application provides a live broadcast picture display method, applied to an initiator client, the method including:
receiving a first game-play indication for triggering game play between the initiator client and at least one recipient client;
displaying a first live broadcast picture according to the first game-play indication, where the first live broadcast picture includes a first sub-picture corresponding to the initiator client, a second sub-picture corresponding to the at least one recipient client, and a first virtual prop;
controlling the first virtual prop to move between the first sub-picture and the second sub-picture;
while controlling the first virtual prop to move, sending position information of the first virtual prop to a server, where the server is configured to forward the position information of the first virtual prop to the at least one recipient client, and the at least one recipient client is configured to, after rendering and displaying a second live broadcast picture, display the first virtual prop overlaid in the second live broadcast picture based on the position information of the first virtual prop.
In another aspect, an embodiment of this application provides a live broadcast picture display method, applied to a recipient client, the method including:
receiving a second game-play instruction, where the second game-play instruction is used to trigger game play between an initiator client and the recipient client;
receiving live broadcast data and position information of a first virtual prop sent by a server;
displaying a second live broadcast picture according to the live broadcast data, wherein the second live broadcast picture comprises a first sub-picture corresponding to the initiator client and a second sub-picture corresponding to the receiver client;
and displaying the first virtual prop in the second live broadcast picture according to the position information of the first virtual prop.
In another aspect, an embodiment of this application provides a live broadcast picture display apparatus, the apparatus including:
a first receiving module, configured to receive a first game-play indication, where the first game-play indication is used to trigger game play between the initiator client and at least one recipient client;
a first display module, configured to display a first live broadcast picture according to the first game-play indication, where the first live broadcast picture includes a first sub-picture corresponding to the initiator client, a second sub-picture corresponding to the at least one recipient client, and a first virtual prop;
a virtual prop control module, configured to control the first virtual prop to move between the first sub-picture and the second sub-picture;
an information sending module, configured to send position information of the first virtual prop to a server while the first virtual prop is being controlled to move, where the server is configured to forward the position information to the at least one recipient client, and the at least one recipient client is configured to, after rendering and displaying a second live broadcast picture, display the first virtual prop overlaid in the second live broadcast picture based on the position information.
In another aspect, an embodiment of this application provides a live broadcast picture display apparatus, including:
a second receiving module, configured to receive a second game-play instruction, where the second game-play instruction is used to trigger game play between an initiator client and a recipient client;
an information receiving module, configured to receive live broadcast data and position information of the first virtual prop sent by the server;
a second display module, configured to display a second live broadcast picture according to the live broadcast data, where the second live broadcast picture includes a first sub-picture corresponding to the initiator client and a second sub-picture corresponding to the recipient client;
a third display module, configured to display the first virtual prop in the second live broadcast picture according to the position information of the first virtual prop.
In another aspect, an embodiment of this application provides a terminal, where the terminal includes a processor, a memory, and a flexible display screen; the memory stores a computer program that is loaded and executed by the processor to implement the live broadcast picture display method described above.
In still another aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program that is loaded and executed by a processor to implement the live broadcast picture display method described above.
In yet another aspect, an embodiment of this application provides a computer program product including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the storage medium and executes them, causing the computer device to perform the live broadcast picture display method provided in the foregoing aspects or their optional implementations.
The technical solution provided by the embodiments of this application brings at least the following beneficial effects:
During co-hosted live streaming, the participating anchors control the first virtual prop to move between the live pictures corresponding to each anchor, enriching the forms of interaction among anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined from the position information sent by the initiator client, so the live pictures corresponding to the participating anchors remain synchronized.
Drawings
FIG. 1 is a schematic illustration of an implementation environment shown in an exemplary embodiment of the present application;
fig. 2 is a flowchart of a live broadcast picture display method according to an exemplary embodiment of the present application;
fig. 3 is a flowchart of a live broadcast picture display method according to another exemplary embodiment of the present application;
FIG. 4 is a schematic view of an interface involved in the embodiment of FIG. 3;
fig. 5 is a flowchart of a live broadcast picture display method according to an exemplary embodiment of the present application;
FIG. 6 is a schematic view of an interface involved in the embodiment of FIG. 5;
fig. 7 is a flowchart of a live broadcast picture display method according to an exemplary embodiment of the present application;
fig. 8 is a block diagram of a live broadcast picture display apparatus according to an exemplary embodiment of the present application;
fig. 9 is a block diagram of a live broadcast picture display apparatus according to another exemplary embodiment of the present application;
fig. 10 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown, where the implementation environment includes an initiator terminal 11, a recipient terminal 12, and a server 13.
The initiator terminal 11 is installed with an initiator client and is configured to execute the initiator-side live broadcast picture display method, such as rendering and displaying a live broadcast picture, controlling the first virtual prop to move among multiple live pictures, and sending the position information of the first virtual prop. The initiator terminal 11 is a smartphone, tablet, personal computer (PC), smart wearable device, or the like.
The initiator terminal 11 has an image capture function. Optionally, the initiator terminal 11 implements this function through a camera assembly or a screen-recording application. For example, the initiator terminal 11 captures a portrait of the anchor user, as well as the environment the anchor user is in, through a camera assembly. As another example, the initiator terminal 11 captures the content of its own user interface through a screen-recording application.
The initiator terminal 11 also has a function of data interaction with the server 13. Illustratively, the initiator terminal 11 sends the captured live broadcast data and the position information of the first virtual prop to the server.
Optionally, the initiator terminal 11 implements the image capturing function and the data interaction function with the server 13 through the initiator client. Optionally, the initiator client is a live client.
The recipient terminal 12 is installed with a recipient client and is configured to execute the recipient-side live broadcast picture display method, such as rendering and displaying a live broadcast picture, receiving the position information of the first virtual prop, and rendering and displaying the first virtual prop. The recipient terminal 12 is a smartphone, tablet, PC, smart wearable device, or the like. Optionally, the recipient terminal also has an image capture function and a function of data interaction with the server 13. Optionally, the recipient terminal 12 implements these functions through the recipient client. Optionally, the recipient client is a live streaming client.
The server 13 may be one server, a server cluster formed by a plurality of servers, or a cloud computing service center. Optionally, the server 13 is a background server corresponding to the live client.
Optionally, the implementation environment further includes a viewer terminal (not shown in fig. 1), which is a terminal used by a viewer user. Viewer terminals are smartphones, tablets, personal computers, smart wearable devices, and the like. The viewer terminal has an image processing function, a function of data interaction with the server 13, and a rendering and display function.
The initiator terminal 11 and the server 13 establish a communication connection therebetween through a wired or wireless network. A communication connection is established between the recipient terminal 12 and the server 13 via a wired or wireless network. The server 13 and the viewer's terminal establish a communication connection via a wired or wireless network.
The wireless or wired networks described above use standard communication techniques and/or protocols. The network is typically the internet, but may be any other network, including but not limited to a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired, or wireless network, a private network, or any combination thereof (including virtual private networks). In some embodiments, data exchanged over the network is represented using techniques and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), or Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, those described above.
Referring to fig. 2, a live broadcast picture display method provided by an embodiment of this application is shown. The method is applied to the initiator client in the embodiment of fig. 1 and includes:
Step 201, receiving a first game-play indication.
The first game-play indication is used to trigger game play between the initiator client and at least one recipient client. In one possible implementation, a game-play control is displayed on the current display screen of the initiator client. After the initiator client receives a trigger signal corresponding to the game-play control, it sends a game-play request to the server; the request carries the client identifier of the recipient client. The server forwards the game-play request to the corresponding recipient client according to that client identifier. The recipient client that receives the request displays inquiry information; if a confirmation corresponding to the inquiry information is received, it returns an acceptance notification to the server. The server sends the acceptance notification to the initiator client, and the initiator client thereby receives the first game-play indication.
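The request/confirm handshake above can be sketched as follows; the message names and fields are illustrative assumptions, not part of this application:

```python
# Hypothetical handshake messages for the game-play setup described above.

def build_match_request(initiator_id: str, recipient_id: str) -> dict:
    """Request the initiator sends to the server, carrying the recipient's id."""
    return {"type": "match_request", "from": initiator_id, "to": recipient_id}

def build_match_accept(request: dict) -> dict:
    """Acceptance the recipient returns to the server after the user confirms."""
    return {"type": "match_accept", "from": request["to"], "to": request["from"]}

def is_first_game_play_indication(message: dict, initiator_id: str) -> bool:
    """The initiator treats a routed acceptance as the first game-play indication."""
    return message["type"] == "match_accept" and message["to"] == initiator_id
```

In this sketch the server only routes messages by the `to` field; no game state lives on the server at this stage.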
Optionally, the current display picture of the initiator client is a co-hosted live picture corresponding to the initiator client and another client, or a live picture of the initiator client alone.
The recipient client is selected by the user corresponding to the initiator client, or is set by default by the initiator client. When the current display picture is a co-hosted live picture corresponding to the initiator client and another client, that other client serves as the recipient client.
When the current display picture is a live picture of the initiator client alone, after the initiator client receives the trigger signal corresponding to the game-play control, it displays a selection interface containing a list of user accounts; the user corresponding to the initiator client selects a target user account from the list, and the client corresponding to the target user account serves as the recipient client.
Step 202, displaying a first live broadcast picture according to the first game-play indication.
The first live broadcast picture is a co-hosted live picture and includes a first sub-picture corresponding to the initiator client, a second sub-picture corresponding to the at least one recipient client, and a first virtual prop.
The first sub-picture is the picture captured by the initiator client, and the second sub-picture is the picture captured by the recipient client. The first sub-picture and the second sub-picture may be arranged side by side, connected end to end, and so on, which is not limited in the embodiments of this application.
The first virtual prop is determined according to the game-play type, such as a virtual badminton shuttlecock, a virtual table tennis ball, and the like. The game-play type is a default type or is custom-set by the initiator user. For example, the initiator client displays different game-play controls for different game-play types; when the initiator client receives a trigger signal corresponding to a target game-play control, the game-play type corresponding to that control is determined as the one selected by the initiator user.
Optionally, the initiator client sends the captured live broadcast data to the server; the server fuses the live broadcast data sent by the initiator client with the live broadcast data sent by the recipient client to obtain fused live broadcast data and sends it to the initiator client and the recipient client respectively; the initiator client then displays the co-hosted live picture according to the fused live broadcast data.
Step 203, controlling the first virtual prop to move between the first sub-picture and the second sub-picture.
In the embodiments of this application, the initiator client controls the first virtual prop to move between the first sub-picture and the second sub-picture, realizing interaction among multiple anchor users and enriching the forms of interaction during co-hosted live streaming. The following embodiments explain how the first virtual prop is controlled to move between the first sub-picture and the second sub-picture.
Step 204, while controlling the first virtual prop to move, sending the position information of the first virtual prop to a server.
The server forwards the position information of the first virtual prop to the at least one recipient client; the at least one recipient client, after rendering and displaying the second live broadcast picture, displays the first virtual prop overlaid in the second live broadcast picture based on that position information. In this way, the live pictures corresponding to the participating anchors can be kept synchronized.
The position information of the first virtual prop is represented by the coordinates of the first virtual prop on the display screen of the initiator client. The initial position of the first virtual prop is preset, and the position information is then dynamically updated according to the prop's moving direction and moving speed.
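A minimal sketch of this position bookkeeping, assuming screen coordinates and a unit direction vector (the function and parameter names are illustrative):

```python
# Advance the prop from its current coordinates: each frame, the new position
# is the old one plus direction * speed * elapsed time, as described above.

def advance_prop(x: float, y: float, direction: tuple, speed: float, dt: float):
    """Return the prop's next (x, y) given a (dx, dy) unit direction and speed."""
    dx, dy = direction
    return (x + dx * speed * dt, y + dy * speed * dt)
```

The initiator would call this every frame and report the resulting coordinates to the server.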
Optionally, the initiator client adds SEI (Supplemental Enhancement Information) to a video frame containing the first virtual prop, where the SEI carries the position information of the first virtual prop, and then sends that video frame to the server; the position information is thereby delivered to the server alongside the video.
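As a sketch of how the coordinates might ride along with the video, the position can be packed into an H.264 "user data unregistered" SEI payload (payload type 5: a 16-byte UUID followed by arbitrary bytes). The placeholder UUID and the JSON body below are assumptions, not a format specified by this application:

```python
import json
import uuid

# Placeholder identifying UUID for this application's SEI payloads (assumption).
PROP_SEI_UUID = uuid.UUID("00000000-0000-0000-0000-000000000000").bytes

def pack_prop_position_sei(x: int, y: int) -> bytes:
    """Build a user-data-unregistered SEI body: 16-byte UUID + position bytes."""
    return PROP_SEI_UUID + json.dumps({"x": x, "y": y}).encode("utf-8")

def unpack_prop_position_sei(payload: bytes) -> dict:
    """Recover the prop coordinates from an SEI body with our UUID prefix."""
    assert payload[:16] == PROP_SEI_UUID, "not a prop-position SEI payload"
    return json.loads(payload[16:].decode("utf-8"))
```

Because SEI travels inside the video bitstream, the recipient client gets the coordinates frame-accurately with the picture they belong to.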
In summary, in the technical solution provided by the embodiments of this application, the participating anchors control the first virtual prop to move between the live pictures corresponding to each anchor during co-hosted live streaming, which enriches the forms of interaction among anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined from the position information sent by the initiator client, so the live pictures corresponding to the participating anchors remain synchronized.
Different numbers of recipient clients correspond to different game-play types. The following explains the live broadcast picture display method when there is a single recipient client, with reference to fig. 3, which shows a flowchart of a live broadcast picture display method provided by an embodiment of this application. The method is applied to the initiator client in the embodiment of fig. 1 and includes the following steps:
step 301, receiving a first bureau instruction.
The first game play indication is used to trigger a game play between the initiator client and the at least one recipient client.
Step 302, performing face recognition on the first live broadcast picture to obtain a first face region corresponding to the initiator client and a second face region corresponding to the at least one recipient client.
The first sub-picture is captured by the initiator client and generally contains a portrait of the initiator user; the second sub-picture is captured by the recipient client and generally contains a portrait of the recipient user. By performing face recognition on these portraits, the first face region and the second face region can be identified respectively.
Face recognition algorithms for the first live broadcast picture include, but are not limited to: algorithms based on facial feature points, algorithms based on the whole face image, neural-network-based algorithms, template-based algorithms, and the like.
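The detector itself is treated as a black box here; the sketch below shows only the follow-on mapping step, assigning detected face boxes to the initiator and recipient sub-pictures by horizontal position, under the assumption that the two sub-pictures are arranged side by side (function and field names are illustrative):

```python
# Split detected (x, y, w, h) face boxes into the first (initiator, left) and
# second (recipient, right) sub-picture regions by each box's center x.

def assign_face_regions(face_boxes, frame_width):
    """Return ([boxes in first sub-picture], [boxes in second sub-picture])."""
    mid = frame_width / 2
    first = [b for b in face_boxes if b[0] + b[2] / 2 < mid]    # initiator side
    second = [b for b in face_boxes if b[0] + b[2] / 2 >= mid]  # recipient side
    return first, second
```

With an end-to-end layout instead of side by side, the split would use the y coordinate in the same way.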
Step 303, displaying a second virtual prop corresponding to the initiator client in the first face region, and a second virtual prop corresponding to the recipient client in the second face region.
The initiator client replaces the display of the first face region and the second face region with second virtual props; the first virtual prop is then controlled to move between the first sub-picture and the second sub-picture via these second virtual props.
Referring to fig. 4, a schematic interface diagram of a live interface provided by an embodiment of this application is shown. The live interface includes a first sub-picture 41 and a second sub-picture 42; the face region in the first sub-picture is replaced and displayed as a second virtual prop 44, and the face region in the second sub-picture is likewise replaced and displayed as a second virtual prop 44. The initiator client controls the second virtual prop 44 to make contact with the first virtual prop 43, thereby controlling the first virtual prop 43 to move between the first sub-picture 41 and the second sub-picture 42.
Step 304, determining the moving direction of the first virtual prop according to first contact information between the second virtual prop and the first virtual prop.
The initial moving direction of the first virtual prop is preset. The initiator user subsequently changes the position of the face region in the first sub-picture by moving, and the position of the second virtual prop changes accordingly; when the second virtual prop intersects the first virtual prop, that is, when the two props make contact, the moving direction of the first virtual prop changes.
Optionally, the terminal determines the moving direction of the first virtual prop according to the first contact information and second contact information between the first virtual prop and a boundary of the first sub-picture and/or the second sub-picture. Note that contact between the first virtual prop and a boundary of the first sub-picture and/or the second sub-picture also changes its moving direction.
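A pong-style sketch of these direction updates, under assumed axis conventions (contact with the paddle-like second virtual prop reverses horizontal travel; contact with a top or bottom sub-picture boundary reverses vertical travel):

```python
# Update the prop's (dx, dy) direction on contact, as described above.
# The mapping of contact type to reversed axis is an illustrative assumption.

def bounce_direction(direction, hit_paddle=False, hit_top_bottom=False):
    """Return the new direction after paddle and/or boundary contact."""
    dx, dy = direction
    if hit_paddle:
        dx = -dx  # second-virtual-prop contact sends the prop back the other way
    if hit_top_bottom:
        dy = -dy  # boundary contact reflects vertical travel
    return (dx, dy)
```

Combined with the per-frame position update, this is enough to keep the prop moving between the two sub-pictures.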
Step 305, obtaining the moving speed of the first virtual item.
In a possible implementation manner, the terminal obtains virtual article receiving information corresponding to the initiator client and/or the receiver client, and then determines the moving speed according to the virtual article receiving information corresponding to the initiator client and/or the receiver client.
The virtual article receiving information corresponding to the sender client is used for indicating the number of virtual articles sent to the initiator client by the audience client. The virtual article receiving information corresponding to the recipient client is used to indicate the number of virtual articles sent by the viewer client to the recipient client. Exemplarily, when the first virtual prop is controlled to move from the first sub-screen to the second sub-screen, the terminal determines the moving speed according to the virtual article receiving information corresponding to the initiator client; and when the first virtual prop is controlled to move from the second sub-picture to the first sub-picture, determining the moving speed according to the virtual article receiving information corresponding to the receiver client.
Optionally, the number of virtual articles is in a positive correlation with the moving speed: the greater the number of virtual articles, the faster the moving speed, and the smaller the number, the slower the moving speed. During live broadcast, audience users present virtual articles to an anchor user to support that anchor, and the number of virtual articles received can effectively reflect the anchor's popularity. In this embodiment of the application, determining the moving speed of the first virtual prop according to the number of virtual articles received by the anchor user improves the audience users' sense of participation.
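A minimal sketch of this positive correlation, assuming a linear mapping (the base speed and gain constants are illustrative, not specified by the patent):

```python
def moving_speed(gift_count, base_speed=100.0, gain=2.0):
    """Map the number of virtual articles received by the anchor to the
    prop's moving speed (units per second). Any monotonically increasing
    mapping satisfies the positive correlation; a linear form is assumed
    here for illustration."""
    return base_speed + gain * gift_count

assert moving_speed(50) > moving_speed(10)  # more gifts, faster prop
```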
Optionally, the initiator client obtains time delay information corresponding to the sender client, obtains a maximum speed threshold corresponding to the time delay information, and, when the moving speed of the first virtual prop is greater than the maximum speed threshold, determines the maximum speed threshold as the moving speed of the first virtual prop. The time delay information and the maximum speed threshold are in a negative correlation: the larger the time delay, the smaller the maximum speed threshold; the smaller the time delay, the larger the maximum speed threshold.
Because receiving and sending live broadcast data incurs a time delay, the pictures on different clients can fall out of sync when the moving speed of the first virtual prop is high. In this embodiment of the application, a maximum speed threshold is therefore set for the moving speed of the first virtual prop, which avoids unsynchronized pictures and improves the display effect.
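The latency-based speed cap could be sketched as follows, assuming an inverse-proportional threshold (the constant `k` and the millisecond unit are illustrative assumptions):

```python
def max_speed_threshold(delay_ms, k=200_000.0):
    """Negative correlation between delay and the speed cap: a larger
    time delay yields a smaller maximum speed threshold."""
    return k / max(delay_ms, 1.0)

def clamp_speed(speed, delay_ms):
    """If the prop's moving speed exceeds the threshold for the current
    delay, use the threshold as the moving speed instead."""
    return min(speed, max_speed_threshold(delay_ms))

# 400 ms of delay caps an 800 units/s prop at 500 units/s.
print(clamp_speed(800.0, 400.0))  # -> 500.0
```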
In other possible implementations, the movement speed of the first virtual prop is set by default.
Step 306, control the first virtual prop to move between the first sub-picture and the second sub-picture according to the moving direction and the moving speed.
The initiator client determines the position information of the first virtual prop in the first live broadcast picture according to the moving direction and the moving speed of the first virtual prop; since the position information changes dynamically, the effect of controlling the first virtual prop to move between the first sub-picture and the second sub-picture is achieved. For example, the initial position of the first virtual prop is preset, and the initiator client then determines the next position of the first virtual prop from the initial position, the moving direction, and the moving speed.
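The per-frame position update the initiator client performs can be sketched as a minimal kinematic step (the frame interval `dt` and the unit direction vector are assumptions for illustration):

```python
def next_position(position, direction, speed, dt):
    """Advance the prop by speed * dt along a unit direction vector.
    The initiator client repeats this each frame, and the resulting
    dynamically changing position is what moves the prop between the
    first sub-picture and the second sub-picture."""
    x, y = position
    dx, dy = direction
    return (x + dx * speed * dt, y + dy * speed * dt)

# Starting at a preset initial position (0, 0), moving right at
# 100 units/s for half a second:
print(next_position((0.0, 0.0), (1.0, 0.0), 100.0, 0.5))  # -> (50.0, 0.0)
```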
Step 307, in the process of controlling the movement of the first virtual item, sending the position information of the first virtual item to the server.
The server is configured to forward the position information of the first virtual prop to the at least one recipient client, and the at least one recipient client is configured to, after rendering and displaying the second live broadcast picture, display the first virtual prop superimposed on the second live broadcast picture based on the position information of the first virtual prop.
Optionally, the initiator client further sends the position information of the second virtual prop to the server, and the server forwards the position information of the second virtual prop to the at least one recipient client, where the at least one recipient client is configured to, after rendering the second live broadcast picture, display the second virtual prop superimposed on the second live broadcast picture based on the position information of the second virtual prop.
To sum up, in the technical solution provided by this embodiment of the application, when multiple anchors are connected in live broadcast, the first virtual prop is controlled to move between the live broadcast pictures corresponding to the anchors, which enriches the forms of interaction between the connected anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined based on the position information of the first virtual prop sent by the initiator client, so that the live broadcast pictures corresponding to the multiple anchors remain synchronized during connected live broadcast.
The foregoing describes the live broadcast picture display method when there is one recipient client; the following describes, with reference to the embodiment of fig. 5, the method when there are a plurality of recipient clients. When there are a plurality of recipient clients, in the first live broadcast picture, the first sub-picture corresponding to the initiator client and the second sub-pictures corresponding to the at least one recipient client are sequentially connected end to end. Optionally, the initiator client transforms the first sub-picture and the second sub-pictures and then connects them end to end in sequence. For example, a rectangular first sub-picture is transformed into a circular first sub-picture, and a rectangular second sub-picture is transformed into a circular second sub-picture.
With reference to fig. 6, a schematic interface diagram for displaying a live interface provided by an embodiment of the present application is shown. The live interface comprises a first sub-picture 61 and two second sub-pictures 62, wherein the first sub-picture 61 and the second sub-picture 62 are oval in shape and are sequentially connected end to end.
Referring to fig. 5, a flowchart of a live broadcast picture display method provided by an embodiment of the present application is shown. The method is applied to the initiator client in the embodiment of fig. 1 and includes the following steps:
Step 501, receive a first game play instruction.
The first game play instruction is used to trigger game play between the initiator client and the at least one recipient client.
Step 502, perform face recognition processing on the first sub-picture in the first live broadcast picture to obtain a first face area corresponding to the initiator client.
Step 503, perform face recognition processing on the second sub-pictures in the first live broadcast picture to obtain second face areas respectively corresponding to the at least one recipient client.
The face recognition algorithm used for face recognition may refer to the above embodiments, and is not described herein again.
Step 504, display third virtual props at designated positions in the first face area and the second face areas, respectively.
The designated position may be set by default by the initiator client or customized by the user. Illustratively, the designated position is the nose area. The third virtual prop is set according to the game play type; for example, the third virtual prop is a game hook.
Referring to fig. 6, the nose in the face area in the first sub-picture 61 is displayed as a third virtual prop 64, and the nose in the face area in the second sub-picture 62 is likewise displayed as a third virtual prop 64.
Step 505, control the third virtual prop to come into contact with the first virtual prop.
The initiator user changes the position of the face area in the live broadcast picture by moving, and the position of the third virtual prop in the live broadcast picture changes accordingly, until the third virtual prop contacts the first virtual prop.
Step 506, controlling the third virtual prop to move so as to control the first virtual prop to move between the first sub-picture and the second sub-picture.
Referring collectively to fig. 6, the initiator client controls movement of the third virtual prop 64 to pull the first virtual prop 65 to move between the first sub-screen 61 and the second sub-screen 62.
After the third virtual prop contacts with the first virtual prop, when the third virtual prop moves, the first virtual prop also moves correspondingly, that is, the initiator user pulls the first virtual prop to move by controlling the movement of the third virtual prop. It should be noted that the initiator client controls the first virtual item to move to the adjacent picture according to a preset sequence, where the preset sequence is a clockwise sequence or a counterclockwise sequence.
Optionally, the first live broadcast picture further includes at least one buffer area, and each buffer area joins adjacent pictures among the first sub-picture and the at least one second sub-picture. The number of buffer areas is the same as the number of pictures. Referring to fig. 6, there is a buffer area 63 between the first sub-picture 61 and each of the two adjacent second sub-pictures 62.
Optionally, when the first virtual prop moves into the buffer area, the third virtual prop is controlled to release contact with the first virtual prop, and the first virtual prop then automatically moves into the adjacent second sub-picture.
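The clockwise or counter-clockwise handoff between adjacent sub-pictures can be sketched over the end-to-end ordering described above (the picture identifiers below are hypothetical):

```python
def next_picture(pictures, current, clockwise=True):
    """Given the end-to-end ordering of the first sub-picture and the
    second sub-pictures, return the adjacent picture the first virtual
    prop automatically moves into after entering a buffer area.
    Counter-clockwise order simply walks the ring backwards."""
    i = pictures.index(current)
    step = 1 if clockwise else -1
    return pictures[(i + step) % len(pictures)]

ring = ["initiator", "recipient_1", "recipient_2"]       # hypothetical ids
print(next_picture(ring, "recipient_2"))                 # -> initiator
print(next_picture(ring, "initiator", clockwise=False))  # -> recipient_2
```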
Step 507, in the process of controlling the movement of the first virtual item, sending the position information of the first virtual item to the server.
The server is configured to forward the position information of the first virtual prop to the at least one recipient client, and the at least one recipient client is configured to, after rendering and displaying the second live broadcast picture, display the first virtual prop superimposed on the second live broadcast picture based on the position information of the first virtual prop.
Optionally, the initiator client further sends the position information of the third virtual prop to the server, and the server forwards the position information of the third virtual prop to the at least one recipient client, where the at least one recipient client is configured to, after rendering the second live broadcast picture, display the third virtual prop superimposed on the second live broadcast picture based on the position information of the third virtual prop.
Optionally, a countdown is further displayed on the first virtual prop. When the countdown ends, if the picture where the first virtual prop is located has not changed, the first virtual prop is controlled to disappear and a preset animation is played at its disappearing position; the preset animation is, for example, an explosion special effect. Referring to fig. 6, the first virtual prop 65 also displays a countdown.
In addition, the countdown duration may be determined according to the virtual article receiving information. Illustratively, the initiator client acquires the virtual article receiving information corresponding to the initiator client and/or the recipient client, and determines the countdown duration according to that information. Optionally, the number of virtual articles is in a positive correlation with the countdown duration: the greater the number of virtual articles, the longer the countdown duration, and the smaller the number, the shorter the countdown duration.
Optionally, the initiator client obtains the target picture where the first virtual prop is located, and determines the countdown duration according to the virtual article receiving information of the client to which the target picture corresponds.
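One possible concrete form of the positive correlation between received virtual articles and the countdown duration (the base duration, per-gift increment, and cap are all illustrative assumptions, not values from the patent):

```python
def countdown_seconds(gift_count, base=10.0, per_gift=0.5, cap=60.0):
    """Countdown duration grows with the number of virtual articles
    received by the client whose picture currently holds the prop;
    a linear form with an upper cap is assumed for illustration."""
    return min(base + per_gift * gift_count, cap)

assert countdown_seconds(50) > countdown_seconds(0)  # more gifts, longer
```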
To sum up, in the technical solution provided by this embodiment of the application, when multiple anchors are connected in live broadcast, the first virtual prop is controlled to move between the live broadcast pictures corresponding to the anchors, which enriches the forms of interaction between the connected anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined based on the position information of the first virtual prop sent by the initiator client, so that the live broadcast pictures corresponding to the multiple anchors remain synchronized during connected live broadcast.
Referring collectively to fig. 7, a flowchart of a live view display method provided by an exemplary embodiment of the present application is shown. The method is applied to the receiver client in the embodiment shown in fig. 1, and the method comprises the following steps:
Step 701, receive a second game play instruction.
The second game play instruction is used to trigger game play between the initiator client and the recipient client.
Optionally, after receiving the game play request forwarded by the server, the recipient client displays query information asking whether to join the game play; if the recipient client receives a confirmation instruction corresponding to the query information, the recipient client receives the second game play instruction.
Step 702, receiving live broadcast data and position information of the first virtual item sent by the server.
After the sender client and the receiver client send the collected data to the server, the server processes the received data and respectively sends the processed data to the sender client and the receiver client.
Step 703, display a second live broadcast picture according to the live broadcast data.
And the receiver client renders and displays a second live broadcast picture according to the received live broadcast data, wherein the second live broadcast picture comprises a first sub-picture corresponding to the initiator client and a second sub-picture corresponding to the receiver client.
Optionally, when there is one recipient client, a second virtual prop is displayed in the first face area in the first sub-picture, and a second virtual prop is also displayed in the second face area in the second sub-picture.
Optionally, when there are a plurality of recipient clients, a third virtual prop is displayed at a designated position in the first face area in the first sub-picture, and a third virtual prop is also displayed at a designated position in the second face area in the second sub-picture.
Step 704, displaying the first virtual item in the second live broadcast picture according to the position information of the first virtual item.
In this embodiment of the application, the position of the first virtual prop is sent by the initiator client to the server and then forwarded by the server to the recipient client; that is, the position of the first virtual prop displayed by both the initiator client and the recipient client is determined by the initiator client, which effectively ensures that the pictures of the initiator client and the recipient client stay synchronized.
It should be noted that the screen displayed by the recipient client is the same as the screen displayed by the initiator client.
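This authoritative-initiator synchronisation model can be sketched as a simple position message that the server forwards unchanged (the JSON field names below are hypothetical, not taken from the patent):

```python
import json

def position_message(prop_id, x, y, timestamp_ms):
    """Initiator side: serialise the prop position that is sent to the
    server. The server forwards it as-is, so every client draws the
    prop from the same authoritative coordinates."""
    return json.dumps({"prop": prop_id, "x": x, "y": y, "t": timestamp_ms})

def apply_position(overlay, message):
    """Recipient side: decode the forwarded message and overlay the
    prop on the already rendered second live broadcast picture."""
    m = json.loads(message)
    overlay[m["prop"]] = (m["x"], m["y"])
    return overlay

overlay = apply_position({}, position_message("first_prop", 120, 80, 0))
print(overlay)  # -> {'first_prop': (120, 80)}
```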
To sum up, in the technical solution provided by this embodiment of the application, when multiple anchors are connected in live broadcast, the first virtual prop is controlled to move between the live broadcast pictures corresponding to the anchors, which enriches the forms of interaction between the connected anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined based on the position information of the first virtual prop sent by the initiator client, so that the live broadcast pictures corresponding to the multiple anchors remain synchronized during connected live broadcast.
In the following, embodiments of the apparatus of the present application are described, and for portions of the embodiments of the apparatus not described in detail, reference may be made to technical details disclosed in the above-mentioned method embodiments.
Referring to fig. 8, a block diagram of a live view display apparatus according to an exemplary embodiment of the present application is shown. The live view display device may be implemented as all or a part of the terminal by software, hardware, or a combination of both. The live view display device includes:
a first receiving module 801, configured to receive a first game play instruction, where the first game play instruction is used to trigger game play between the initiator client and at least one recipient client.
A first display module 802, configured to display a first live broadcast picture according to the first game play instruction, where the first live broadcast picture includes a first sub-picture corresponding to the initiator client, a second sub-picture corresponding to the at least one recipient client, and a first virtual prop.
A virtual prop control module 803, configured to control the first virtual prop to move between the first sub-screen and the second sub-screen.
An information sending module 804, configured to send, to a server, location information of the first virtual item in a process of controlling movement of the first virtual item, where the server is configured to forward the location information of the first virtual item to the at least one recipient client, and the at least one recipient client is configured to display, after rendering a second live broadcast picture, the first virtual item in the second live broadcast picture in a superimposed manner based on the location information of the first virtual item.
To sum up, in the technical solution provided by this embodiment of the application, when multiple anchors are connected in live broadcast, the first virtual prop is controlled to move between the live broadcast pictures corresponding to the anchors, which enriches the forms of interaction between the connected anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined based on the position information of the first virtual prop sent by the initiator client, so that the live broadcast pictures corresponding to the multiple anchors remain synchronized during connected live broadcast.
In an optional embodiment provided based on the embodiment shown in fig. 8, when there is one recipient client, the first sub-picture includes a second virtual prop corresponding to the initiator client, and the second sub-picture includes a second virtual prop corresponding to the recipient client;
the first display module 802 is configured to:
performing face recognition on the first live broadcast picture to obtain a first face area corresponding to the initiator client and second face areas respectively corresponding to the at least one recipient client;
and displaying a second virtual prop corresponding to the initiator client side in the first face area, and displaying a second virtual prop corresponding to the receiver client side in the second face area.
The virtual prop control module 803 is configured to:
determining the moving direction of the first virtual prop through first contact information between the second virtual prop and the first virtual prop;
acquiring the moving speed of the first virtual prop;
and controlling the first virtual prop to move between the first sub-picture and the second sub-picture according to the moving direction and the moving speed.
Optionally, the virtual prop control module 803 is configured to:
and determining the moving direction of the first virtual prop according to the first contact information and second contact information of the boundary of the first virtual prop and the first sub-picture and/or the second sub-picture.
Optionally, the virtual prop control module 803 is configured to:
acquiring virtual article receiving information corresponding to the initiator client and/or the recipient client, wherein the virtual article receiving information corresponding to the initiator client indicates the number of virtual articles sent by audience clients to the initiator client, and the virtual article receiving information corresponding to the recipient client indicates the number of virtual articles sent by audience clients to the recipient client;
and determining the moving speed according to the virtual article receiving information corresponding to the initiator client and/or the receiver client.
Optionally, the virtual prop control module 803 is configured to:
acquiring time delay information corresponding to the sender client;
acquiring a maximum speed threshold corresponding to the time delay information;
when the moving speed of the first virtual prop is larger than the maximum speed threshold value, determining the maximum speed threshold value as the moving speed of the first virtual prop.
In an optional embodiment provided based on the embodiment shown in fig. 8, when there are a plurality of recipient clients, in the first live broadcast picture, the first sub-picture and the at least one second sub-picture are sequentially connected end to end, the first sub-picture includes a third virtual prop corresponding to the initiator client, and the second sub-picture includes a third virtual prop corresponding to the recipient client;
the first display module 802 is configured to:
performing face recognition processing on the first sub-picture in the first live broadcast picture to obtain a first face area corresponding to the initiator client;
performing face recognition processing on the second sub-pictures in the first live broadcast picture to obtain second face areas respectively corresponding to the at least one recipient client;
and respectively displaying the third virtual props at the designated positions in the first face area and the second face area.
The virtual prop control module 803 is configured to:
controlling the third virtual prop to contact the first virtual prop;
controlling the third virtual prop to move so as to control the first virtual prop to move between the first sub-picture and the second sub-picture.
Optionally, the first live broadcast picture further includes at least one buffer area, and the buffer area is used to join adjacent pictures among the first sub-picture and the at least one second sub-picture;
the virtual prop control module 803 is configured to control the third virtual prop to be separated from contact with the first virtual prop when the first virtual prop moves to the buffer area.
Optionally, the first virtual prop includes a countdown timer, the apparatus further comprising: a duration determination module (not shown in fig. 8).
The duration determining module is configured to:
acquiring virtual article receiving information corresponding to the initiator client and/or the receiver client;
and determining the countdown duration corresponding to the countdown according to the virtual article receiving information corresponding to the initiator client and/or the receiver client.
Optionally, the virtual prop control module 803 is configured to: and when the countdown is finished, if the picture of the first virtual prop is not changed, controlling the first virtual prop to disappear, and playing a preset animation at the disappearing position of the first virtual prop.
Referring to fig. 9, a block diagram of a live view display apparatus according to an exemplary embodiment of the present application is shown. The live view display device may be implemented as all or a part of the terminal by software, hardware, or a combination of both. The live view display device includes:
a second receiving module 901, configured to receive a second game-play instruction, where the second game-play instruction is used to trigger game play between the initiator client and the recipient client.
An information receiving module 902, configured to receive live broadcast data and location information of the first virtual item sent by the server.
A second display module 903, configured to display a second live broadcast picture according to the live broadcast data, where the second live broadcast picture includes a first sub-picture corresponding to the initiator client and a second sub-picture corresponding to the recipient client.
A third display module 904, configured to display the first virtual item in the second live view according to the position information of the first virtual item.
To sum up, in the technical solution provided by this embodiment of the application, when multiple anchors are connected in live broadcast, the first virtual prop is controlled to move between the live broadcast pictures corresponding to the anchors, which enriches the forms of interaction between the connected anchors. In addition, during the movement, the first virtual prop displayed by the recipient client is determined based on the position information of the first virtual prop sent by the initiator client, so that the live broadcast pictures corresponding to the multiple anchors remain synchronized during connected live broadcast.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 10 shows a block diagram of a terminal 1000 according to an exemplary embodiment of the present application. The terminal 1000 can be: a smartphone, a tablet, an MP3 player, an MP4 player, a laptop, or a desktop computer. Terminal 1000 can also be referred to as user equipment, portable terminal, laptop terminal, desktop terminal, or the like by other names.
In general, terminal 1000 can include: a processor 1001 and a memory 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a Graphics Processing Unit (GPU) which is responsible for rendering and drawing the content required to be displayed on the display screen.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1002 is used to store a computer program executed by the processor 1001 to implement the live broadcast picture display method provided by the method embodiments of the present application.
In some embodiments, terminal 1000 can also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, touch screen display 1005, camera assembly 1006, audio circuitry 1007, positioning assembly 1008, and power supply 1009.
Peripheral interface 1003 may be used to connect at least one Input/Output (I/O) related peripheral to processor 1001 and memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 1004 is used to receive and transmit Radio Frequency (RF) signals, also known as electromagnetic signals. The radio frequency circuitry 1004 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1004 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or Wireless Fidelity (WiFi) networks. In some embodiments, the rf circuit 1004 may further include Near Field Communication (NFC) related circuits, which are not limited in this application.
The display screen 1005 is used to display a User Interface (UI). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, the display screen 1005 also has the ability to capture touch signals on or over the surface of the display screen 1005. The touch signal may be input to the processor 1001 as a control signal for processing. At this point, the display screen 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display screen 1005 can be one, providing a front panel of terminal 1000; in other embodiments, display 1005 can be at least two, respectively disposed on different surfaces of terminal 1000 or in a folded design; in still other embodiments, display 1005 can be a flexible display disposed on a curved surface or on a folded surface of terminal 1000. Even more, the display screen 1005 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 1005 may be made of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and a Virtual Reality (VR) shooting function or other fusion shooting functions. In some embodiments, camera assembly 1006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input the electrical signals to the processor 1001 for processing or to the radio frequency circuit 1004 for voice communication. For stereo collection or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic location of terminal 1000 to enable navigation or Location Based Services (LBS). The positioning component 1008 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to supply power to various components in terminal 1000. The power source 1009 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 1009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1000 can also include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyro sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
The acceleration sensor 1011 can detect acceleration magnitudes on the three coordinate axes of a coordinate system established with terminal 1000. For example, the acceleration sensor 1011 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 1001 may control the touch display screen 1005 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used to collect motion data for games or user motion data.
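As a hedged illustration of the orientation logic described above, the landscape/portrait decision can be sketched from the gravity components reported by an acceleration sensor. The function name, axis convention, and threshold below are illustrative assumptions, not taken from the patent:

```python
def choose_orientation(ax, ay, az, g=9.81):
    """Illustrative sketch: pick a UI orientation from the gravity
    components (m/s^2) along the device's x (short edge), y (long
    edge), and z (out of the screen) axes."""
    # If gravity lies mostly along z, the device is lying flat:
    # neither edge clearly points down, so keep the current view.
    if abs(az) > 0.9 * g:
        return "unchanged"
    # Otherwise the in-plane axis carrying more of gravity points down.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(choose_orientation(0.3, 9.6, 0.5))  # held upright -> portrait
print(choose_orientation(9.5, 0.2, 0.6))  # on its side -> landscape
print(choose_orientation(0.1, 0.2, 9.8))  # lying flat -> unchanged
```

A real implementation would also debounce the decision over several samples so that the view does not flip on momentary shakes.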
The gyro sensor 1012 may detect a body direction and a rotation angle of the terminal 1000, and the gyro sensor 1012 and the acceleration sensor 1011 may cooperate to acquire a 3D motion of the user on the terminal 1000. From the data collected by the gyro sensor 1012, the processor 1001 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1013 may be disposed on a side frame of terminal 1000 and/or in a lower layer of the touch display screen 1005. When the pressure sensor 1013 is disposed on a side frame of terminal 1000, a user's grip signal on terminal 1000 can be detected, and the processor 1001 performs left-right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is disposed in a lower layer of the touch display screen 1005, the processor 1001 controls operable controls on the UI according to the user's pressure operation on the touch display screen 1005. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
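The left-right hand recognition mentioned above can be sketched as a comparison of grip pressure on the two side frames. The function, the sensor layout, the dead-zone margin, and the mapping from pressed side to hand are all illustrative assumptions; the patent does not specify how the grip signal is interpreted:

```python
def holding_hand(left_frame_pressure, right_frame_pressure, margin=0.2):
    """Illustrative sketch: guess which side frame carries the grip
    from the average pressure (arbitrary units) on each side.

    The function simply reports the more heavily pressed side, with a
    relative dead zone so near-equal readings stay undecided.
    """
    if left_frame_pressure > right_frame_pressure * (1 + margin):
        return "left"
    if right_frame_pressure > left_frame_pressure * (1 + margin):
        return "right"
    return "unknown"

print(holding_hand(4.8, 1.2))  # -> left
print(holding_hand(1.0, 1.1))  # -> unknown
```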
The fingerprint sensor 1014 is used to collect a user's fingerprint, and the processor 1001 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 itself identifies the user's identity based on the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1014 can be disposed on the front, back, or side of terminal 1000. When a physical button or vendor logo is provided on terminal 1000, the fingerprint sensor 1014 can be integrated with the physical button or vendor logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the touch display screen 1005 according to the ambient light intensity collected by the optical sensor 1015. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1005 is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1015.
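The brightness adjustment described above can be sketched as a monotonic mapping from measured ambient light to display brightness. The function, the lux cap, and the brightness range are illustrative assumptions rather than values from the patent:

```python
def display_brightness(ambient_lux, min_brightness=0.05,
                       max_brightness=1.0, lux_cap=1000.0):
    """Illustrative sketch: map ambient light intensity (lux) to a
    normalized display brightness in [min_brightness, max_brightness].
    Brighter surroundings yield a brighter screen, saturating at lux_cap."""
    clamped = min(max(ambient_lux, 0.0), lux_cap)
    return min_brightness + (max_brightness - min_brightness) * clamped / lux_cap

print(display_brightness(0.0))     # darkest allowed -> 0.05
print(display_brightness(2500.0))  # beyond the cap: saturates at max brightness
```

In practice such a curve is usually nonlinear and smoothed over time, but the monotonic high-light/bright-screen, low-light/dim-screen behavior matches the embodiment above.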
The proximity sensor 1016, also known as a distance sensor, is typically disposed on the front panel of terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front face of terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front face of terminal 1000 gradually decreases, the processor 1001 controls the touch display screen 1005 to switch from a screen-on state to a screen-off state; when the proximity sensor 1016 detects that the distance between the user and the front face of terminal 1000 gradually increases, the processor 1001 controls the touch display screen 1005 to switch from the screen-off state to the screen-on state.
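The screen-state switching described above amounts to a small state machine driven by consecutive distance readings. The function, the 5 cm threshold, and the state names are illustrative assumptions:

```python
def next_screen_state(current_state, previous_cm, current_cm, threshold_cm=5.0):
    """Illustrative sketch: switch the touch display between 'on' and
    'off' as the user approaches or moves away from the front panel."""
    if current_cm < previous_cm and current_cm <= threshold_cm:
        return "off"  # approaching and close, e.g. phone raised to the ear
    if current_cm > previous_cm and current_cm > threshold_cm:
        return "on"   # moving away and far enough: show the screen again
    return current_state  # otherwise keep the current state

state = "on"
state = next_screen_state(state, 20.0, 3.0)   # -> "off"
state = next_screen_state(state, 3.0, 15.0)   # -> "on"
print(state)
```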
Those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or some components can be combined, or a different arrangement of components can be employed.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program, which is loaded and executed by a processor of a terminal to implement the live broadcast picture display method in the above-described method embodiments.
Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided. The computer program product includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the live broadcast picture display method provided in the foregoing aspect or in the various alternative implementations of the aspect.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it. As used herein, the terms "first", "second", and the like do not denote any order, quantity, or importance, but rather are used to distinguish one element from another.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.
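As a hedged sketch of the overall data flow described in this application: the initiator client reports the first virtual prop's position to the server while moving it, the server forwards the position to each recipient client, and each recipient client superimposes the prop onto its already-rendered second live broadcast picture. All class names, field names, and the JSON wire format below are illustrative; the patent does not specify a message format:

```python
import json

class Server:
    """Illustrative relay: forwards prop position messages to recipients."""
    def __init__(self):
        self.recipients = []

    def forward(self, message: str):
        for client in self.recipients:
            client.on_position(message)

class RecipientClient:
    """Illustrative recipient: renders the live picture first, then
    superimposes the first virtual prop at the forwarded position."""
    def __init__(self):
        self.overlay = None

    def on_position(self, message: str):
        position = json.loads(message)
        # After the second live broadcast picture is rendered, the prop
        # would be drawn on top of it at (x, y); here we just record it.
        self.overlay = (position["x"], position["y"])

# Initiator side: while controlling the prop's movement, send its
# position to the server, which forwards it to every recipient.
server = Server()
recipient = RecipientClient()
server.recipients.append(recipient)
server.forward(json.dumps({"x": 120, "y": 340}))
print(recipient.overlay)  # -> (120, 340)
```

This separation (recipient renders video, then overlays the prop from forwarded coordinates) is what lets the prop stay interactive without re-encoding it into the video stream.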

Claims (14)

1. A live broadcast picture display method, applied to an initiator client, the method comprising:
receiving a first game-play instruction for triggering game play between the initiator client and at least one recipient client;
displaying a first live broadcast picture according to the first game-play instruction, wherein the first live broadcast picture comprises a first sub-picture corresponding to the initiator client, a second sub-picture corresponding to the at least one recipient client, and a first virtual prop;
controlling the first virtual prop to move between the first sub-picture and the second sub-picture; and
in the process of controlling the first virtual prop to move, sending position information of the first virtual prop to a server, wherein the server is configured to forward the position information of the first virtual prop to the at least one recipient client, and the at least one recipient client is configured to, after rendering and displaying a second live broadcast picture, superimpose and display the first virtual prop in the second live broadcast picture based on the position information of the first virtual prop.
2. The method according to claim 1, wherein when there is one recipient client, the first sub-picture comprises a second virtual prop corresponding to the initiator client, and the second sub-picture comprises a second virtual prop corresponding to the recipient client;
the displaying a first live broadcast picture according to the first game-play instruction comprises:
performing face recognition on the first live broadcast picture to obtain a first face area corresponding to the initiator client and a second face area corresponding to the recipient client; and
displaying the second virtual prop corresponding to the initiator client in the first face area, and displaying the second virtual prop corresponding to the recipient client in the second face area;
the controlling the first virtual prop to move between the first sub-picture and the second sub-picture comprises:
determining a moving direction of the first virtual prop according to first contact information between the second virtual prop and the first virtual prop;
acquiring a moving speed of the first virtual prop; and
controlling the first virtual prop to move between the first sub-picture and the second sub-picture according to the moving direction and the moving speed.
3. The method according to claim 2, wherein the determining a moving direction of the first virtual prop according to first contact information between the second virtual prop and the first virtual prop comprises:
determining the moving direction of the first virtual prop according to the first contact information and second contact information between the first virtual prop and a boundary of the first sub-picture and/or the second sub-picture.
4. The method according to claim 2, wherein the acquiring a moving speed of the first virtual prop comprises:
acquiring virtual article receiving information corresponding to the initiator client and/or the recipient client, wherein the virtual article receiving information corresponding to the initiator client indicates the number of virtual articles sent to the initiator client by audience clients, and the virtual article receiving information corresponding to the recipient client indicates the number of virtual articles sent to the recipient client by audience clients; and
determining the moving speed according to the virtual article receiving information corresponding to the initiator client and/or the recipient client.
5. The method according to claim 4, wherein the determining the moving speed according to the virtual article receiving information corresponding to the initiator client and/or the recipient client comprises:
acquiring time delay information corresponding to the initiator client;
acquiring a maximum speed threshold corresponding to the time delay information; and
when the moving speed of the first virtual prop is greater than the maximum speed threshold, determining the maximum speed threshold as the moving speed of the first virtual prop.
6. The method according to claim 1, wherein when there are multiple recipient clients, in the first live broadcast picture, the first sub-picture and the at least one second sub-picture are sequentially connected end to end, the first sub-picture comprises a third virtual prop corresponding to the initiator client, and each second sub-picture comprises a third virtual prop corresponding to the corresponding recipient client;
the displaying a first live broadcast picture according to the first game-play instruction comprises:
performing face recognition processing on the first sub-picture in the first live broadcast picture to obtain a first face area corresponding to the initiator client;
performing face recognition processing on the second sub-picture in the first live broadcast picture to obtain a second face area corresponding to each of the at least one recipient client; and
displaying the third virtual props at designated positions in the first face area and the second face areas respectively;
the controlling the first virtual prop to move between the first sub-picture and the second sub-picture comprises:
controlling the third virtual prop to contact the first virtual prop; and
controlling the third virtual prop to move, so as to control the first virtual prop to move between the first sub-picture and the second sub-picture.
7. The method according to claim 6, wherein the first live broadcast picture further comprises at least one buffer area for connecting adjacent sub-pictures among the first sub-picture and the at least one second sub-picture;
the controlling the first virtual prop to move between the first sub-picture and the second sub-picture comprises:
when the first virtual prop moves to the buffer area, controlling the third virtual prop to break contact with the first virtual prop.
8. The method according to claim 6, wherein the first virtual prop comprises a countdown, and before the displaying a first live broadcast picture according to the first game-play instruction, the method further comprises:
acquiring virtual article receiving information corresponding to the initiator client and/or the recipient client; and
determining a countdown duration corresponding to the countdown according to the virtual article receiving information corresponding to the initiator client and/or the recipient client.
9. The method according to claim 8, further comprising:
when the countdown ends, if the picture in which the first virtual prop is located has not changed, controlling the first virtual prop to disappear, and playing a preset animation at the position where the first virtual prop disappears.
10. A live broadcast picture display method, applied to a recipient client, the method comprising:
receiving a second game-play instruction for triggering game play between an initiator client and the recipient client;
receiving live broadcast data and position information of a first virtual prop sent by a server;
displaying a second live broadcast picture according to the live broadcast data, wherein the second live broadcast picture comprises a first sub-picture corresponding to the initiator client and a second sub-picture corresponding to the recipient client; and
displaying the first virtual prop in the second live broadcast picture according to the position information of the first virtual prop.
11. A live broadcast picture display apparatus, comprising:
a first receiving module, configured to receive a first game-play instruction, wherein the first game-play instruction is used to trigger game play between an initiator client and at least one recipient client;
a first display module, configured to display a first live broadcast picture according to the first game-play instruction, wherein the first live broadcast picture comprises a first sub-picture corresponding to the initiator client, a second sub-picture corresponding to the at least one recipient client, and a first virtual prop;
a virtual prop control module, configured to control the first virtual prop to move between the first sub-picture and the second sub-picture; and
an information sending module, configured to send position information of the first virtual prop to a server in the process of controlling the first virtual prop to move, wherein the server is configured to forward the position information of the first virtual prop to the at least one recipient client, and the at least one recipient client is configured to, after rendering and displaying a second live broadcast picture, superimpose and display the first virtual prop in the second live broadcast picture based on the position information of the first virtual prop.
12. A live broadcast picture display apparatus, comprising:
a second receiving module, configured to receive a second game-play instruction, wherein the second game-play instruction is used to trigger game play between an initiator client and a recipient client;
an information receiving module, configured to receive live broadcast data and position information of a first virtual prop sent by a server;
a second display module, configured to display a second live broadcast picture according to the live broadcast data, wherein the second live broadcast picture comprises a first sub-picture corresponding to the initiator client and a second sub-picture corresponding to the recipient client; and
a third display module, configured to display the first virtual prop in the second live broadcast picture according to the position information of the first virtual prop.
13. A terminal, comprising a processor and a memory, wherein the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the live broadcast picture display method according to any one of claims 1 to 9 or the live broadcast picture display method according to claim 10.
14. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by a processor to implement the live broadcast picture display method according to any one of claims 1 to 9 or the live broadcast picture display method according to claim 10.
CN202010797133.1A 2020-08-10 2020-08-10 Live broadcast picture display method and device, terminal and storage medium Active CN111918090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010797133.1A CN111918090B (en) 2020-08-10 2020-08-10 Live broadcast picture display method and device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111918090A true CN111918090A (en) 2020-11-10
CN111918090B CN111918090B (en) 2023-03-28

Family

ID=73284853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010797133.1A Active CN111918090B (en) 2020-08-10 2020-08-10 Live broadcast picture display method and device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111918090B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080270541A1 (en) * 2006-04-24 2008-10-30 Ellis Barlow Keener Interactive audio/video method on the internet
CN102008823A (en) * 2009-04-26 2011-04-13 艾利维公司 Method and system for controlling movements of objects in a videogame
CN106231368A (en) * 2015-12-30 2016-12-14 深圳超多维科技有限公司 Main broadcaster's class interaction platform stage property rendering method and device, client
US20170072307A1 (en) * 2007-12-15 2017-03-16 Sony Interactive Entertainment America Llc Web-Based Game Controller
CN107566911A (en) * 2017-09-08 2018-01-09 广州华多网络科技有限公司 A kind of live broadcasting method, device, system and electronic equipment
CN107680157A (en) * 2017-09-08 2018-02-09 广州华多网络科技有限公司 It is a kind of based on live interactive approach and live broadcast system, electronic equipment
CN109068181A (en) * 2018-07-27 2018-12-21 广州华多网络科技有限公司 Football game exchange method, system, terminal and device based on net cast
WO2019015405A1 (en) * 2017-07-18 2019-01-24 腾讯科技(深圳)有限公司 Virtual prop allocation method, server, client and storage medium
CN109963187A (en) * 2017-12-14 2019-07-02 腾讯科技(深圳)有限公司 A kind of cartoon implementing method and device
CN110545442A (en) * 2019-09-26 2019-12-06 网易(杭州)网络有限公司 live broadcast interaction method and device, electronic equipment and readable storage medium
CN111432266A (en) * 2020-03-31 2020-07-17 北京达佳互联信息技术有限公司 Interactive information display method, device, terminal and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346632A (en) * 2020-11-11 2021-02-09 腾讯科技(深圳)有限公司 Virtual item processing method and device, electronic equipment and storage medium
CN112561576A (en) * 2020-12-08 2021-03-26 广州繁星互娱信息科技有限公司 Interface display method, device and equipment for live application and storage medium
CN112954377A (en) * 2021-02-04 2021-06-11 广州繁星互娱信息科技有限公司 Live broadcast fighting picture display method, live broadcast fighting method and device
CN113645472A (en) * 2021-07-05 2021-11-12 北京达佳互联信息技术有限公司 Interaction method and device based on playing object, electronic equipment and storage medium
CN113596558A (en) * 2021-07-14 2021-11-02 网易(杭州)网络有限公司 Interaction method, device, processor and storage medium in game live broadcast
CN113923467A (en) * 2021-10-09 2022-01-11 广州繁星互娱信息科技有限公司 Cross-manufacturer live broadcast wheat connecting method and device, electronic equipment and medium
CN113923467B (en) * 2021-10-09 2024-05-28 广州繁星互娱信息科技有限公司 Cross-manufacturer live-broadcast wheat connecting method and device, electronic equipment and medium
CN114630138A (en) * 2022-03-14 2022-06-14 上海哔哩哔哩科技有限公司 Configuration information issuing method and system
CN114630138B (en) * 2022-03-14 2023-12-08 上海哔哩哔哩科技有限公司 Configuration information issuing method and system

Also Published As

Publication number Publication date
CN111918090B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN111918090B (en) Live broadcast picture display method and device, terminal and storage medium
CN108810576B (en) Live wheat-connecting method and device and storage medium
CN110267067B (en) Live broadcast room recommendation method, device, equipment and storage medium
CN109729411B (en) Live broadcast interaction method and device
CN108900859B (en) Live broadcasting method and system
CN109348247B (en) Method and device for determining audio and video playing time stamp and storage medium
CN110213608B (en) Method, device, equipment and readable storage medium for displaying virtual gift
CN108174275B (en) Image display method and device and computer readable storage medium
CN109803154B (en) Live broadcast method, equipment and storage medium for chess game
CN112118477B (en) Virtual gift display method, device, equipment and storage medium
CN110213612B (en) Live broadcast interaction method and device and storage medium
CN110418152B (en) Method and device for carrying out live broadcast prompt
US20220191557A1 (en) Method for displaying interaction data and electronic device
CN111050189A (en) Live broadcast method, apparatus, device, storage medium, and program product
CN107896337B (en) Information popularization method and device and storage medium
CN113271470B (en) Live broadcast wheat connecting method, device, terminal, server and storage medium
CN113318442A (en) Live interface display method, data uploading method and data downloading method
CN111669640B (en) Virtual article transfer special effect display method, device, terminal and storage medium
CN114116053A (en) Resource display method and device, computer equipment and medium
CN111045945B (en) Method, device, terminal, storage medium and program product for simulating live broadcast
WO2022227581A1 (en) Resource display method and computer device
CN112367533B (en) Interactive service processing method, device, equipment and computer readable storage medium
CN111954018B (en) Live broadcast room management method, system, device, equipment and storage medium
CN112040267A (en) Chorus video generation method, chorus method, apparatus, device and storage medium
CN113141538B (en) Media resource playing method, device, terminal, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant