CN110572690B - Method, device and computer readable storage medium for live broadcast - Google Patents


Info

Publication number
CN110572690B
CN110572690B (application CN201910947821.9A)
Authority
CN
China
Prior art keywords
information
client
thank
user
virtual item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910947821.9A
Other languages
Chinese (zh)
Other versions
CN110572690A
Inventor
Chen Chunyong (陈春勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910947821.9A priority Critical patent/CN110572690B/en
Publication of CN110572690A publication Critical patent/CN110572690A/en
Application granted granted Critical
Publication of CN110572690B publication Critical patent/CN110572690B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 — Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 — Server components or server architectures
    • H04N 21/218 — Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 — Live feed
    • H04N 21/25 — Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/254 — Management at additional data server, e.g. shopping server, rights management server
    • H04N 21/27 — Server based end-user applications
    • H04N 21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 — Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/47 — End-user applications
    • H04N 21/475 — End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N 21/478 — Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a method, apparatus, and computer-readable storage medium for use in live broadcasting. The method performed by the first client comprises: detecting, at the first client, a gifting signal of a virtual item triggered during a live broadcast, the gifting signal being used to transfer the virtual item to a second client performing the live broadcast; after detecting the gifting signal of the virtual item, sending request information to a server, the request information comprising a request for feedback information corresponding to the virtual item and identification information of the first user; receiving information returned by the server, the information comprising the feedback information and the identification information of the first user; and displaying the feedback information.

Description

Method, device and computer readable storage medium for use in live broadcast
Technical Field
The present disclosure relates to the field of online live broadcasting, and more particularly, to a method, apparatus, and computer-readable storage medium for use in live broadcasting.
Background
An online live system is a system that broadcasts a real-time video stream (also referred to as a live video stream) generated by an anchor client to a plurality of viewer clients so that the plurality of viewers watch the real-time video stream simultaneously. Online live systems are typically divided by channel, room, or live room. The same live room typically includes one anchor client and multiple viewer clients.
In a live broadcast, a viewer may gift a virtual item to the anchor through the viewer client. When a viewer (e.g., viewer A) gifts a virtual item (e.g., gift X) to the anchor, both the anchor client and each viewer client in the same live room display the message "viewer A has gifted gift X". After the anchor sees this message through the anchor client, the anchor can verbally thank the viewer, for example by saying "thanks to viewer A". However, when many viewers in the same live room are gifting items, the anchor may not be able to thank every gifting viewer. In addition, the anchor may not thank some viewers when the value of the gift they send is low. Thus, when a viewer sends a gift but receives no thanks from the anchor, the viewer may feel overlooked by the anchor, lose the motivation to keep watching the live broadcast, and have a diminished gifting interaction experience.
Disclosure of Invention
To overcome the defects in the prior art, the present disclosure provides a method, an apparatus, and a computer-readable storage medium for use in live broadcasting.
According to an aspect of the present disclosure, there is provided a method for use in live broadcasting, including: detecting a presentation signal of a virtual article triggered in a live broadcast process at a first client, wherein the presentation signal is used for transferring the virtual article to a second client for live broadcast; after detecting a presentation signal of the virtual item, sending request information to a server, wherein the request information comprises a request for feedback information corresponding to the virtual item and identification information of the first user; receiving information returned by the server, wherein the information comprises the feedback information and the identification information of the first user; and displaying the feedback information.
According to an example of the present disclosure, the feedback information includes thank you information.
According to one example of the present disclosure, the thank you information has at least one type.
According to an example of the present disclosure, the thank you information includes at least one of a dynamic image, an expression image, or a thank you text, wherein the dynamic image or the expression image is generated by the server according to at least one of a specific expression or a specific action acquired from the second client in the live broadcast.
According to an example of the present disclosure, the type of thank you information is determined according to a value of the virtual item.
According to an example of the present disclosure, wherein the thank you information comprises at least a dynamic image when the value of the virtual item is above a threshold.
According to an example of the present disclosure, wherein the thank you information comprises thank you text when the value of the virtual item is below a threshold.
According to another aspect of the present disclosure, there is provided a method for use in live broadcasting, including: receiving request information from a first client, wherein the request information is sent after the first client detects a presentation signal of a virtual item triggered in a live broadcast process, the presentation signal is used for transferring the virtual item to a second client which carries out live broadcast, and the request information comprises a request for feedback information corresponding to the virtual item and identification information of a first user; and sending information to the first client, wherein the information comprises the feedback information and the identification information of the first user.
According to an example of the present disclosure, the feedback information includes thank you information.
According to one example of the present disclosure, the thank you information has at least one type.
According to an example of the present disclosure, the thank you information includes at least one of a dynamic image, an expression image, or thank you text, wherein the dynamic image or the expression image is generated according to at least one of a specific expression or a specific action acquired from the second client in the live broadcast.
According to an example of the present disclosure, the method further comprises: and determining the type of the thank you information according to the value of the virtual article.
According to an example of the present disclosure, wherein determining the type of the thank you information according to the value of the virtual item comprises: determining that the thank you information includes at least a dynamic image when the value of the virtual item is above a threshold.
According to an example of the present disclosure, wherein determining the type of the thank you information according to the value of the virtual item comprises: determining that the thank you information comprises thank you text when the value of the virtual item is below a threshold.
According to an example of the present disclosure, the method further comprises: and acquiring at least one of a specific expression or a specific action of a second user in the live broadcast from the second client.
According to another aspect of the present disclosure, there is provided a method for use in live broadcasting, comprising: detecting at least one of a specific expression or a specific action of the user at the second client; when at least one of the specific expression or the specific action of the user is detected, acquiring a video, wherein the video comprises at least one of the specific expression or the specific action of the user; and sending the video to a server.
According to another aspect of the present disclosure, there is provided an apparatus for use in live broadcasting, including: the detection unit is configured to detect a presentation signal of a virtual article triggered in a live broadcast process at a first client, wherein the presentation signal is used for transferring the virtual article to a second client which carries out live broadcast; a transmitting unit configured to transmit request information including a request for feedback information corresponding to the virtual item and identification information of the first user to a server, upon detection of a gifting signal of the virtual item; a receiving unit configured to receive information returned by the server, where the information includes the feedback information and identification information of the first user; and a display unit configured to display the feedback information.
According to an example of the present disclosure, the feedback information includes thank you information.
According to one example of the present disclosure, the thank you information has at least one type.
According to an example of the present disclosure, the thank you information includes at least one of a dynamic image, an expression image, or a thank you text, wherein the dynamic image or the expression image is generated by the server according to at least one of a specific expression or a specific action acquired from the second client in the live broadcast.
According to an example of the present disclosure, the type of thank you information is determined according to a value of the virtual item.
According to an example of the present disclosure, wherein the thank you information comprises at least a dynamic image when the value of the virtual item is above a threshold.
According to an example of the present disclosure, wherein the thank you information comprises thank you text when the value of the virtual item is below a threshold.
According to another aspect of the present disclosure, there is provided an apparatus for use in live broadcasting, including: a receiving unit configured to receive request information from a first client, the request information being sent after the first client detects a gifting signal of a virtual item triggered in a live broadcast process, the gifting signal being used for transferring the virtual item to a second client performing the live broadcast, and the request information including a request for feedback information corresponding to the virtual item and identification information of the first user; and a transmitting unit configured to transmit information to the first client, the information including the feedback information and identification information of the first user.
According to an example of the present disclosure, the feedback information includes thank you information.
According to one example of the present disclosure, the thank you information has at least one type.
According to an example of the present disclosure, the thank you information includes at least one of a dynamic image, an expression image, or thank you text, wherein the dynamic image or the expression image is generated according to at least one of a specific expression or a specific action acquired from the second client in the live broadcast.
According to an example of the present disclosure, the apparatus further includes: a determination unit configured to determine the thank you information according to a value of the virtual item.
According to an example of the present disclosure, the determining unit is configured to determine that the thank you information comprises at least a dynamic image when the value of the virtual item is higher than a threshold value.
According to an example of the present disclosure, the determining unit is configured to determine that the thank you information includes thank you text when the value of the virtual item is below a threshold value.
According to an example of the present disclosure, the apparatus further includes: and acquiring at least one of a specific expression or a specific action of a second user in the live broadcast from the second client.
According to another aspect of the present disclosure, there is provided an apparatus for use in live broadcasting, including: a detection unit configured to detect at least one of a specific expression or a specific action of the user at the second client; a capturing unit configured to capture a video including at least one of a specific expression or a specific motion of the user when the at least one of the specific expression or the specific motion of the user is detected; and a transmitting unit configured to transmit the video to a server.
According to another aspect of the present disclosure, there is provided an apparatus for use in live broadcasting, including: a processor; and a memory, wherein the memory has stored therein a computer-executable program that, when executed by the processor, performs the above-described method.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon instructions, which, when executed by a processor, cause the processor to perform the above-described method.
According to the method, apparatus, and computer-readable storage medium for use in live broadcasting of the above aspects of the present disclosure, upon detecting a gifting signal of a virtual item, the viewer client may send the server a request for feedback information corresponding to the virtual item together with the identification information of the corresponding user, receive the feedback information and the identification information of that user from the server, and display the feedback information. In this way, regardless of whether the anchor verbally thanks the viewer who gifted the item, the viewer receives feedback information pushed by the server, creating a sense of ceremony exclusive to that viewer and improving the viewer's gifting interaction experience. Furthermore, the feedback information is visible only to that viewer, avoiding disturbing other viewers in the same live room.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic diagram of a live system in which embodiments of the present disclosure may be applied;
fig. 2 is a flow chart of a method performed by a first client for use in live broadcasting according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a still image in a moving image according to an embodiment of the present disclosure;
FIG. 4A illustrates a schematic diagram of a first client displaying thank you information, in accordance with embodiments of the present disclosure;
fig. 4B illustrates another schematic diagram of the first client displaying thank you information, in accordance with an embodiment of the disclosure;
fig. 4C illustrates another schematic diagram of the first client displaying thank you information, in accordance with an embodiment of the disclosure;
fig. 4D illustrates another schematic diagram of the first client displaying thank you information, in accordance with an embodiment of the disclosure;
fig. 5 is a flow chart of a method performed by a server for use in live broadcasting according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart of a server sending thank you information to a first client according to an embodiment of the disclosure;
fig. 7 is a flow chart of a method performed by a second client for use in live broadcasting in accordance with an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of detecting a particular expression of an anchor according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a specific flow of a live system implementing a method according to an embodiment of the present disclosure;
fig. 10 is another schematic diagram of a specific flow of a live system implementing a method according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an apparatus for use in live broadcasting according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of another apparatus for use in live broadcasting according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of yet another apparatus for use in live broadcasting according to an embodiment of the present disclosure;
fig. 14 illustrates an architecture of a computer device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numerals refer to like elements throughout. It should be understood that: the embodiments described herein are merely illustrative and should not be construed as limiting the scope of the disclosure.
First, a live broadcast system to which an embodiment of the present disclosure can be applied is described with reference to fig. 1. Fig. 1 is a schematic diagram of a live system in which embodiments of the present disclosure may be applied. As shown in fig. 1, a live system 100 includes a viewer terminal 110, a server 120, and an anchor terminal 130.
The viewer terminal 110 may run a first client, which may be a client for acquiring and viewing a live video stream in a live room. The first client may also be referred to as a viewer client. For convenience, the first client and the viewer client may be used interchangeably hereinafter. Further, the viewer terminal 110 may be a smart phone, a tablet computer, a laptop portable computer, a desktop computer, or the like.
Anchor terminal 130 may run a second client, which may be a client for recording a live video stream in a live room and sending the live video stream to server 120. The second client may also be referred to as the anchor client. For convenience, the second client and the anchor client may be used interchangeably hereinafter. Further, the anchor terminal 130 may be a smart phone, a tablet computer, a laptop portable computer, a desktop computer, or the like.
The server 120 may be a server for managing a live broadcast. For example, the server 120 may implement one or more of a live function, manage user accounts, manage live rooms, implement virtual good gifting functions, or implement a charging system, among others.
After the viewer client and the anchor client access the server 120, the anchor client sends a live video stream to the server 120, and the server 120 receives the live video stream and forwards it to the viewer clients belonging to the same live room. Accordingly, the viewer client plays the live video stream so that the viewer can watch it. During a live broadcast, the viewer may gift a virtual item to the anchor through the viewer client. When the viewer client detects the gifting signal of the virtual item, it may send the server a request for thank you information corresponding to the virtual item together with the viewer's identification information. The server may then send the thank you information corresponding to the virtual item and the viewer's identification information back to the viewer client, which displays the thank you information to the viewer. In this way, regardless of whether the anchor verbally thanks the gifting viewer, the viewer receives thank you information pushed by the server, creating a sense of ceremony exclusive to that viewer and improving the viewer's gifting interaction experience. Furthermore, the thank you information is visible only to that viewer, avoiding disturbing other viewers in the same live room.
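The exchange just described can be sketched in a few lines; the `Server` and `ViewerClient` classes and all message fields below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of the viewer-client/server exchange described above.
# All class names and message fields are hypothetical, not from the patent.

class Server:
    """Holds thank-you info per virtual item and echoes the viewer's UID back."""
    def __init__(self, thank_you_by_item):
        self.thank_you_by_item = thank_you_by_item

    def handle_request(self, request):
        # The reply carries both the feedback information and the
        # identification information of the requesting (first) user.
        return {
            "feedback": self.thank_you_by_item.get(request["item_id"], "Thank you!"),
            "uid": request["uid"],
        }

class ViewerClient:
    """First client: detects a gifting signal, then requests feedback info."""
    def __init__(self, uid, server):
        self.uid = uid
        self.server = server
        self.displayed = []

    def on_gift(self, item_id):
        # After detecting the gifting signal, send the request information.
        request = {"item_id": item_id, "uid": self.uid}
        # Receive the server's reply and display feedback addressed to this user.
        reply = self.server.handle_request(request)
        if reply["uid"] == self.uid:
            self.displayed.append(reply["feedback"])

server = Server({"rose": "Thanks for the rose!"})
client = ViewerClient(uid="viewer-a", server=server)
client.on_gift("rose")
```

Because the server addresses the reply with the gifting viewer's own identification information, only that viewer's client displays it, matching the "visible only to that viewer" behavior described above.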
It should be appreciated that although one viewer terminal, one server, and one anchor terminal are shown in fig. 1, this is merely illustrative and the live system may include more viewer terminals, and/or more servers, and/or more anchor terminals. Further, in the present disclosure, the viewer may be a first user watching a live broadcast through a first client. For convenience, the viewer and the first user may be used interchangeably hereinafter. Further, in the present disclosure, the anchor may be a second user who is live through a second client. For convenience, the anchor and the second user may be used interchangeably hereinafter.
Next, a method performed by the first client in a live broadcast according to an embodiment of the present disclosure will be described with reference to fig. 2. Fig. 2 is a flow chart of a method 200 performed by a first client for use in live broadcasting according to an embodiment of the present disclosure. As shown in fig. 2, in step S201, the first client detects a gifting signal of a virtual item. Specifically, in step S201, a gifting signal of a virtual item triggered during a live broadcast is detected at the first client.
According to an example of the present disclosure, the gifting signal in step S201 may be generated according to an operation of the first user watching the live broadcast through the first client. For example, the gifting signal may be generated when the first user performs a selection operation on the virtual item. Specifically, the first client may display virtual item icons corresponding to respective virtual items. In this case, the gifting signal may be generated by the first user clicking the virtual item icon corresponding to the virtual item. Alternatively, the first client may display virtual item icons and "gift" icons corresponding to respective virtual items. In this case, the gifting signal may be generated by the first user clicking the "gift" icon corresponding to the virtual item.
Further, according to an example of the present disclosure, the gifting signal in step S201 may be used to transfer the virtual item to the second client that is live. In particular, the gifting signal in step S201 may be used to trigger gifting of the virtual item to a second user who is live through the second client. For example, when the first client detects a gifting signal for the virtual item, the first client may send a virtual item gifting request to the server, and the virtual item gifting request may trigger the server to perform the operation of gifting the virtual item to the second user.
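The two trigger styles described above (clicking the item icon directly, or clicking a separate "gift" icon) can be sketched as a single event handler; the event shape and function name are hypothetical:

```python
# Hypothetical UI-event handling for the two trigger styles described above.

def gifting_signal_from_click(event):
    """Map a click event to a gifting signal, or None if the click is unrelated.

    Style 1: the click target is the virtual-item icon itself.
    Style 2: the click target is a separate "gift" button carrying the item id.
    """
    target = event.get("target")
    if target in ("item_icon", "gift_button"):
        return {"signal": "gift", "item_id": event["item_id"]}
    return None

# Either trigger style yields the same gifting signal; unrelated clicks yield none.
sig = gifting_signal_from_click({"target": "item_icon", "item_id": "rose"})
```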
The "virtual item" described above may be an item modeled with visual User Interface (UI) elements, which do not have actual physical shapes and structures. The virtual item has an item type and an item quantity. The article types include, but are not limited to, virtual stars, virtual roses, virtual love hearts, virtual cakes, virtual little yellow ducks, virtual lollipops, virtual ice cream, virtual high-heeled shoes, virtual rings, or virtual airplanes. The quantity of the item may be an actual quantity of the item, or a quantity of virtual units (e.g., virtual coins, etc.) corresponding to the item. In the present disclosure, since the first user may give away the virtual item to the second user, the virtual item may also be referred to as a virtual gift.
Then, in step S202, the first client, upon detecting the gifting signal of the virtual item, transmits request information including a request for feedback information corresponding to the virtual item and the identification information of the first user to the server. In step S203, the first client receives information returned by the server from the server, where the information includes the feedback information and the identification information of the first user.
According to an example of the present disclosure, the feedback information may include thank you information, which may be used to express thanks to the first user for gifting the virtual item. The feedback information may also include other information, such as information about the mood of the second user after receiving the virtual item (for example, an expression image conveying excitement), to convey the second user's mood after receiving the virtual item.
Further, according to an example of the present disclosure, the thank you information may have at least one type. For example, the thank you information may include at least one of a dynamic image, an expressive image, or thank you text.
In this example, the server may determine the type of the thank you information based on the value of the virtual item. For example, when the value of the virtual item is above a threshold, the thank you information may include at least a dynamic image. Specifically, in this case the thank you information may include a dynamic image and/or an expression image; a dynamic image together with thank you text; an expression image together with thank you text; or a dynamic image, an expression image, and thank you text. For another example, when the value of the virtual item is lower than or equal to the threshold, the thank you information may include thank you text but not a dynamic image or an expression image. The "value of the virtual item" here may be the price or amount corresponding to the virtual item. The "threshold" here may be a predetermined amount, for example 1 dollar.
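The value-threshold rule above can be sketched as a small selection function; the threshold of 1 (dollar) follows the example in the text, while the function and type names are hypothetical:

```python
# Sketch of the value-based selection of thank-you information types described
# above. The threshold follows the text's example; the names are hypothetical.

THRESHOLD = 1.0  # e.g. 1 dollar, as in the example above

def thank_you_types(item_value):
    """Return the thank-you information types for a gift of the given value."""
    if item_value > THRESHOLD:
        # High-value gift: at least a dynamic image; text may accompany it.
        return ["dynamic_image", "thank_you_text"]
    # Low-value gift (value <= threshold): thank-you text only, no images.
    return ["thank_you_text"]
```

This is one of the combinations the text permits for high-value gifts; a server could equally return an expression image alongside or instead of the dynamic image.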
Further, the dynamic image described herein may include a plurality of static images. For example, the dynamic image may be a conventional GIF image. Further, the dynamic image may be a dynamic image of the second user. For example, the dynamic image may be generated by the server from the second user's performance in the live broadcast (i.e., the current live broadcast), which may include at least one of a specific expression or a specific action of the second user. Here, the "specific expression" may be a smile or the like, and the "specific action" may be making a heart gesture, blowing a kiss, or the like. Fig. 3 shows a schematic diagram of one static image of a dynamic image according to an embodiment of the present disclosure. As shown in fig. 3, in this static image the second user is smiling and making a heart gesture. The generation of the dynamic image will be described in detail in the method 500 described below in conjunction with fig. 5.
Further, the expression image described herein may be a static image. For example, the expression image may be a conventional JPG image. Further, the expression image may be an image of the second user. For example, the expression image may be generated by the server from at least one of a specific expression or a specific action of the second user in the live broadcast (i.e., the current live broadcast). Here, the "specific expression" may be smiling or the like, and the "specific action" may be a finger-heart gesture, blowing a kiss, or the like.
Further, the thank you text described herein may be a thank-you note randomly generated by the server, such as at least one of "Thank you for looking so good and still sending me a gift", "So happy that you sent a gift, thank you!", "I'm really happy to receive the gift, finger heart!", and the like.
Furthermore, in the example described herein in which the "information about the mood of the second user after receiving the virtual item" is an expression image, the expression image may be generated by the server according to at least one of a specific expression or a specific action of the second user in the live broadcast (i.e., the current live broadcast). For example, the expression image may be generated by the server based on at least one of a specific expression or a specific action presented by the second user after receiving the virtual item in the current live broadcast.
Further, according to an example of the present disclosure, the identification information of the first user may be a User Identification (UID) of the first user. For example, the first user may need to register when using the first client for the first time, and after successful registration, the server may assign a UID to the first user. Further, the server may notify the first client of the assigned UID.
In addition, in the present disclosure, the request information sent by the first client to the server may further include identification information of the client, which may uniquely identify the client. Similarly, the information returned by the server to the first client may also include identification information of the client. In this way, the server may identify each of the plurality of clients with which it communicates.
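As one hedged sketch of such payloads, the request could carry both identifiers and the response could echo them back. The field names ("uid", "client_id", "item_id") and the JSON encoding are invented for illustration; the patent specifies only that the request and response include the user and client identification information.

```python
import json

def build_request(uid: str, client_id: str, item_id: str) -> str:
    """Request sent by the first client after the gifting signal is detected."""
    return json.dumps({
        "uid": uid,              # identification information of the first user
        "client_id": client_id,  # uniquely identifies the first client
        "item_id": item_id,      # the gifted virtual item
    })

def build_response(request_json: str, feedback: dict) -> str:
    """Response from the server, echoing both identifiers so the server
    can route the feedback to the correct client and user."""
    req = json.loads(request_json)
    return json.dumps({
        "uid": req["uid"],
        "client_id": req["client_id"],
        "feedback": feedback,
    })
```

Echoing the client identifier back is what lets the server distinguish each of the many clients it communicates with.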
Then, in step S204, the first client displays the thank you information. For example, the first client may display the thank you information on a live interface.
Fig. 4A to 4D are schematic diagrams illustrating the first client displaying thank you information. Specifically, fig. 4A shows a schematic diagram of the first client displaying thank you information according to an embodiment of the present disclosure. In the example shown in fig. 4A, the thank you information is the thank you text "I'm really happy to receive the gift, finger heart!". As shown in fig. 4A, the first client displays this thank you text on the live broadcast interface. Fig. 4B illustrates another schematic diagram of the first client displaying thank you information according to an embodiment of the present disclosure. In the example shown in fig. 4B, the thank you information is the thank you text "Thank you for looking so good and still sending me a gift". As shown in fig. 4B, the first client displays this thank you text on the live broadcast interface. Fig. 4C illustrates another schematic diagram of the first client displaying thank you information according to an embodiment of the present disclosure. In the example shown in fig. 4C, the thank you information is a dynamic image together with the thank you text "Thank you for looking so good and still sending me a gift". As shown in fig. 4C, the first client displays the dynamic image and this thank you text on the live broadcast interface. Fig. 4D illustrates another schematic diagram of the first client displaying thank you information according to an embodiment of the present disclosure. In the example shown in fig. 4D, the thank you information is a dynamic image together with the thank you text "I'm really happy to receive the gift, finger heart!". As shown in fig. 4D, the first client displays the dynamic image and this thank you text on the live broadcast interface.
By the method for use in live broadcasting of the embodiment of the present disclosure, after detecting the gifting signal of the virtual item, the first client may send a request for thank you information corresponding to the virtual item together with identification information of the corresponding user to the server, receive the thank you information and the identification information of the corresponding user from the server, and display the thank you information. In this way, regardless of whether the anchor verbally thanks the viewer who gave the gift, the viewer receives the thank you information pushed by the server, which creates a sense of ceremony exclusive to that viewer and improves the viewer's gift-giving interactive experience. Furthermore, in this way, the thank you information is visible only to that viewer, avoiding interference with other viewers in the same live room.
Next, a method performed by a server in a live broadcast according to an embodiment of the present disclosure will be described with reference to fig. 5. Fig. 5 is a flow chart of a method 500 performed by a server for use in live broadcasting according to an embodiment of the present disclosure. Since the specific details of the following operations performed according to the method 500 are the same as those described above with reference to fig. 2, a repeated description of the same details is omitted herein to avoid repetition.
As shown in fig. 5, in step S501, a server receives request information from a first client, wherein the request information is transmitted after the first client detects a gifting signal for a virtual item triggered during a live broadcast, and the request information includes a request for feedback information corresponding to the virtual item and identification information of the first user.
According to an example of the present disclosure, the gifting signal in step S501 may be generated according to an operation of the first user watching the live broadcast through the first client. For example, the gifting signal may be generated based on the first user performing a selection operation on the virtual item. Specifically, the first client may display virtual item icons corresponding to respective virtual items. In this case, the gifting signal may be generated by the first user clicking on the virtual item icon corresponding to the virtual item. Alternatively, the first client may display a virtual item icon and a "gift" icon corresponding to each virtual item. In this case, the gifting signal may be generated by the first user clicking on the "gift" icon corresponding to the virtual item.
Further, according to an example of the present disclosure, the gifting signal in step S501 may be used to transfer the virtual item to the second client that is performing the live broadcast. Specifically, the gifting signal in step S501 may be used to trigger gifting of the virtual item to a second user who is live broadcasting through the second client. For example, when the first client detects the gifting signal of the virtual item, the first client may send a virtual item gifting request to the server, where the virtual item gifting request may trigger the server to perform an operation of gifting the virtual item to the second user. Accordingly, the server may give the virtual item to the second user according to the virtual item gifting request.
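A minimal sketch of the server-side operation this gifting request could trigger is shown below. The account-balance model and all names are assumptions for illustration; the patent only states that the server gives the virtual item to the second user upon receiving the request.

```python
def handle_gifting_request(balances: dict, first_user: str,
                           second_user: str, item_value: float) -> bool:
    """Transfer the virtual item's value from the gifting viewer to the anchor.

    Returns True if the gifting succeeds, False if the viewer's balance
    is insufficient (a plausible failure mode, assumed here).
    """
    if balances.get(first_user, 0.0) < item_value:
        return False
    balances[first_user] -= item_value
    balances[second_user] = balances.get(second_user, 0.0) + item_value
    return True
```

On success, the server would then proceed to assemble and push the feedback information described in step S502.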
Then, in step S502, the server sends information to the first client, where the information includes the feedback information and the identification information of the first user.
According to an example of the present disclosure, the feedback information may include thank you information, which may be used to express thanks to the first user for gifting the virtual item. The feedback information may also include other information, such as information about the mood of the second user after receiving the virtual item (for example, an emoticon sticker representing excitement).
According to one example of the present disclosure, thank you information may have at least one type. For example, the thank you information may include at least one of a dynamic image, an expressive image, or thank you text.
In this example, the server may determine the type of the thank you information based on the value of the virtual item. For example, when the value of the virtual item is above a threshold, the thank you information may include at least a dynamic image. Specifically, when the value of the virtual item is above the threshold, the thank you information may include a dynamic image and/or an expression image; alternatively, it may include both a dynamic image and thank you text, or both an expression image and thank you text. For another example, when the value of the virtual item is lower than or equal to the threshold, the thank you information may include thank you text but not a dynamic image or an expression image.
In addition, after the server determines the type of the thank you information, the server may determine its specific content. For example, after the server determines that the thank you information includes a dynamic image, the server may select one of the stored dynamic images as the thank you information. For another example, after the server determines that the thank you information includes thank you text, the server may select one of the stored thank you texts as the thank you information. For another example, after the server determines that the thank you information includes both a dynamic image and thank you text, the server may select one dynamic image from the stored dynamic images and one thank you text from the stored thank you texts, and use the selected dynamic image and thank you text together as the thank you information.
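The selection step could be sketched as follows. The use of uniform random choice and the store shapes are assumptions; the patent says only that the server selects one stored item per required type.

```python
import random

def pick_thank_you(info_types: set, images: list, texts: list,
                   rng: random.Random) -> dict:
    """Select one stored item per required content type."""
    info = {}
    if "dynamic_image" in info_types and images:
        info["dynamic_image"] = rng.choice(images)  # one stored dynamic image
    if "thank_you_text" in info_types and texts:
        info["thank_you_text"] = rng.choice(texts)  # one stored thank you text
    return info
```

Passing an explicit `random.Random` instance keeps the selection reproducible when testing.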
The dynamic image described herein may be an image of the second user. For example, the dynamic image may be generated by the server from a performance of the second user in the live broadcast (i.e., the current live broadcast), where the performance may include at least one of a specific expression or a specific action of the second user in the current live broadcast. Here, the "specific expression" may be smiling or the like, and the "specific action" may be a finger-heart gesture, blowing a kiss, or the like.
A specific process by which the server generates a dynamic image will be described below. Specifically, the server may obtain, from the second client, at least one of a specific expression or a specific action of the second user in the current live broadcast. For example, in the live broadcast, when the second user exhibits at least one of a specific expression or a specific action, the second client may capture a video that includes that expression or action. The second client may then send the captured video to the server. The server may then process the received video to obtain the at least one of the specific expression or the specific action of the second user. The "processing" herein may include at least one of matting, light compensation, filtering, sharpening, and the like. For example, the server may perform matting on the received video to cut out the background behind the second user in the live broadcast while retaining only the at least one of the specific expression or the specific action of the second user.
After the server acquires at least one of a specific expression or a specific action of the second user in the current live broadcast from the second client, the server may generate a dynamic image from the acquired expression or action. For example, the server may add one or more decorative elements to the acquired expression or action to generate the dynamic image. The "decorative element" here may be an atmosphere element, such as at least one of a love heart, a balloon, or the like. In fig. 3 described above, the dynamic image includes a specific expression and a specific action of the second user, together with a plurality of love hearts.
In addition, the server may set the time length of the dynamic image. For example, the time length of the dynamic image may be 3 to 5 seconds.
Further, in one live broadcast, the server may generate a plurality of dynamic images. Of the generated dynamic images, the server may store at least some, so that it can later select one stored dynamic image as the thank you information. For example, the server may store a predetermined number of dynamic images, where the predetermined number may be 10.
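Keeping only a predetermined number of dynamic images can be sketched with a bounded queue. The keep-the-most-recent eviction policy is an assumption; the patent states only that a predetermined number (e.g. 10) is stored.

```python
from collections import deque

class DynamicImageStore:
    """Keeps at most `limit` dynamic images; older ones drop automatically."""

    def __init__(self, limit: int = 10):
        self._images = deque(maxlen=limit)

    def add(self, image_id: str) -> None:
        self._images.append(image_id)

    def stored(self) -> list:
        return list(self._images)

store = DynamicImageStore(limit=3)
for i in range(5):
    store.add("gif_%d" % i)
# only the 3 most recently generated images remain in the store
```

A recency-bounded store also matches the real-time emphasis of the next paragraph: only images from the current live broadcast are kept.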
In the present disclosure, the server generates the dynamic image only from at least one of a specific expression or a specific action of the second user in the current live broadcast, and not from the second user's expressions or actions in historical live broadcasts. In this way, the generated dynamic image is up to date, which makes the thanks feel more genuine when the dynamic image is displayed to the viewer and improves the viewer's gift-giving interactive experience.
Further, the expression image described herein may be a static image of the second user. The specific process by which the server generates the expression image is similar to the process by which it generates the dynamic image, and is not described again here.
An exemplary process of sending thank you information from the server to the first client is described below with reference to fig. 6. Fig. 6 is a schematic flow chart of the server sending thank you information to the first client according to an embodiment of the present disclosure. As shown in fig. 6, the user gives the virtual item to the anchor in the live broadcast. The server may then determine the value of the virtual item given by the user this time. When that value is lower than 1 yuan, the server may push thank you text to the corresponding client according to the user's UID (for example, retrieve the user's UID and push the thank you text to the corresponding client). When that value is greater than 1 yuan, the server may push a dynamic image to the corresponding client according to the user's UID (for example, retrieve the user's UID and push the dynamic image to the corresponding client).
By the method for use in live broadcasting of the embodiment of the present disclosure, after detecting the gifting signal of the virtual item, the first client may send a request for thank you information corresponding to the virtual item together with identification information of the corresponding user to the server, receive the thank you information and the identification information of the corresponding user from the server, and display the thank you information. In this way, regardless of whether the anchor verbally thanks the viewer who gave the gift, the viewer receives the thank you information pushed by the server, which creates a sense of ceremony exclusive to that viewer and improves the viewer's gift-giving interactive experience. Furthermore, in this way, the thank you information is visible only to that viewer, avoiding interference with other viewers in the same live room.
Next, a method performed by the second client in the live broadcast according to an embodiment of the present disclosure will be described with reference to fig. 7. Fig. 7 is a flow chart of a method 700 performed by a second client for use in live broadcasting according to an embodiment of the present disclosure. Since specific details of the following operations performed according to the method 700 are the same as those described above with reference to fig. 2 and 5, a repeated description of the same details is omitted herein to avoid repetition.
As shown in fig. 7, in step S701, the second client detects at least one of a specific expression or a specific action of the user.
Specifically, first, in the live broadcast, the second client may detect a face image of the anchor from the live video stream. For example, the second client may detect the anchor's face image from the live video stream according to pattern features contained in face images and the AdaBoost algorithm. The pattern features contained in a face image may include at least one of histogram features, color features, template features, structural features, or Haar features.
After the anchor's face image is detected, the second client may extract expression features and/or action features of the anchor. The expression features may refer to the geometric relationships (such as distance, area, and angle) between facial features such as the eyes, nose, and mouth. The action features may refer to the geometric relationships (such as distance, area, and angle) between body features such as the arms, palms, and fingers.
Then, the second client may determine whether the extracted expression features and/or action features satisfy a predetermined condition, so as to detect a specific expression and/or a specific action of the anchor. For the expression features, the predetermined condition may be that the geometric relationships between facial features such as the eyes, nose, and mouth satisfy predetermined geometric relationships, for example, that the droop angle of the eye corners and the rise angle of the mouth corners satisfy predetermined conditions. In an example where the expression is smiling, the anchor's eye corners may droop by a predetermined angle and the anchor's mouth corners may rise by a predetermined angle; therefore, by determining whether the expression features satisfy the predetermined conditions, the anchor's smile can be detected. For the action features, the predetermined condition may be that the geometric relationships between body features such as the arms, palms, and fingers satisfy predetermined geometric relationships, for example, that the distance between the two hands and the bending angles of the fingers satisfy predetermined conditions. In an example where the action is a finger-heart gesture, the distance between the anchor's two hands is substantially zero and the anchor's fingers are bent by a predetermined angle; therefore, by determining whether the action features satisfy the predetermined conditions, the anchor's finger-heart gesture can be detected.
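The threshold checks above can be illustrated with toy predicates. A real detector would compute these geometric features from detected facial and body landmarks; the feature names and threshold values below are invented for the sketch.

```python
def is_smiling(mouth_corner_rise_deg: float, eye_corner_droop_deg: float) -> bool:
    """Smile detected when both angles reach (assumed) predetermined thresholds."""
    return mouth_corner_rise_deg >= 10.0 and eye_corner_droop_deg >= 5.0

def is_finger_heart(hand_distance_cm: float, finger_bend_deg: float) -> bool:
    """Finger-heart detected when the hands nearly touch and the fingers
    are bent by at least an (assumed) predetermined angle."""
    return hand_distance_cm <= 1.0 and finger_bend_deg >= 30.0
```

Either predicate returning true would trigger the screen recording described in step S702.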
Fig. 8 shows a schematic diagram of detecting a specific expression of the anchor according to an embodiment of the present disclosure. In the example shown in fig. 8, the anchor is smiling, and the second client evaluates the anchor's expression features to detect the smile.
Then, in step S702, the second client captures a video upon detecting at least one of a specific expression or a specific action of the user, where the video includes that expression or action. For example, when the second client detects at least one of a specific expression or a specific action of the user, the second client may record the screen of its live broadcast interface for a predetermined time to capture the video. The predetermined time may be 3 to 5 seconds.
Then, in step S703, the second client sends the video to the server. Accordingly, upon receiving the video, the server may generate a dynamic image according to at least one of a specific expression or a specific action of the user in the video. For example, the server may process the received video to obtain the at least one of the specific expression or the specific action of the second user, and then generate the dynamic image from it. The "processing" herein may include at least one of matting, light compensation, filtering, sharpening, and the like.
By this method for use in live broadcasting, the server can generate the dynamic image according to at least one of a specific expression or a specific action of the anchor in the current live broadcast, without the dynamic image having to be made in advance with other software. This avoids asynchronous interaction and enables synchronous interaction between the viewers and the anchor.
The specific flow of implementing the above methods in a live broadcast system will be described below with reference to figs. 9-10. Fig. 9 is a schematic diagram of a specific flow of a live broadcast system implementing a method according to an embodiment of the present disclosure. As shown in fig. 9, the anchor may broadcast live through the second client. During the live broadcast, user A may give a gift to the anchor through the first client, and the anchor may thank user A verbally, e.g., the anchor may say "thanks for the gift from user A". The second client may perform face detection on the live video stream to recognize that the anchor is smiling or making a finger-heart gesture. When the second client recognizes this, it starts an automatic screen recording function to generate a video. The server can then intelligently matte the video to generate a dynamic GIF expression. Later in the same live broadcast, when user B gives a gift to the anchor, the server may push the generated dynamic GIF expression to user B.
Fig. 10 is another schematic diagram of a specific flow of a live broadcast system implementing a method according to an embodiment of the present disclosure. As shown in fig. 10, the anchor may broadcast live through the second client. During the live broadcast, user 1 may give a gift to the anchor through the first client. The second client may perform face detection on the live video stream to recognize that the anchor is smiling or making a finger-heart gesture, and when it does, it starts the automatic screen recording function to generate a video. The server can then intelligently matte the video to generate a dynamic GIF expression. During the live broadcast, the second client may perform this face detection in real time to generate a plurality of dynamic GIF expressions. When any one of users 1 to n gives a gift to the anchor, the server may push a generated dynamic GIF expression to that user.
Further, it has been described above that, upon detecting the gifting signal of the virtual item, the first client may send a virtual item gifting request to the server, where the virtual item gifting request may trigger the server to perform an operation of gifting the virtual item to the second user. According to another embodiment of the present disclosure, in this case, the server may generate the above-described feedback information directly from the virtual item gifting request and send it to the first client, rather than sending the feedback information only after receiving the above-described request information. This is because, when the server receives the virtual item gifting request, it can already determine that the first user is giving the item to the second user; therefore, the server can automatically return the feedback information including the thank you information to the first client without waiting for the first client's request for the thank you information. This embodiment simplifies the interaction between the client and the server and improves communication efficiency.
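This alternative flow can be sketched as a single server handler that derives the feedback from the gifting request itself. All names, the threshold, and the placeholder image identifier are illustrative assumptions.

```python
def on_gifting_request(uid: str, item_value: float,
                       threshold: float = 1.0) -> dict:
    """Automatically build and return the feedback for the gifting viewer,
    without waiting for a separate thank-you request."""
    feedback = {"thank_you_text": "Thank you for the gift!"}
    if item_value > threshold:
        # higher-value gift: also attach a dynamic image of the anchor
        # ("latest_anchor_gif" is a hypothetical stored-image identifier)
        feedback["dynamic_image"] = "latest_anchor_gif"
    return {"uid": uid, "feedback": feedback}
```

Compared with the two-request flow, one round trip per gifting event is saved, which is the efficiency gain the paragraph above describes.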
Hereinafter, an apparatus corresponding to the method illustrated in fig. 2 according to an embodiment of the present disclosure is described with reference to fig. 11. Fig. 11 is a schematic structural diagram of an apparatus 1100 for use in live broadcasting according to an embodiment of the present disclosure. Since the function of the apparatus 1100 is the same as the details of the method described above with reference to fig. 2, a detailed description of the same is omitted here for the sake of simplicity. As shown in fig. 11, the apparatus 1100 includes: a detecting unit 1110, configured to detect, at a first client, a gifting signal of a virtual item triggered in a live broadcast process, where the gifting signal is used to transfer the virtual item to a second client performing live broadcast; a transmitting unit 1120 configured to transmit request information including a request for feedback information corresponding to the virtual item and identification information of the first user to a server, upon detection of the gifting signal of the virtual item; a receiving unit 1130 configured to receive information returned by the server, where the information includes the feedback information and the identification information of the first user; and a display unit 1140 configured to display the feedback information. The apparatus 1100 may include other components in addition to the four units, however, since these components are not related to the contents of the embodiments of the present disclosure, illustration and description thereof are omitted herein. The apparatus 1100 may be the viewer terminal 110 described above.
According to one example of the present disclosure, the gifting signal may be generated according to an operation of the first user watching the live broadcast through the first client. For example, the gifting signal may be generated based on the first user performing a selection operation on the virtual item. Specifically, the first client may display virtual item icons corresponding to respective virtual items. In this case, the gifting signal may be generated by the first user clicking on the virtual item icon corresponding to the virtual item. Alternatively, the first client may display a virtual item icon and a "gift" icon corresponding to each virtual item. In this case, the gifting signal may be generated by the first user clicking on the "gift" icon corresponding to the virtual item.
Further, according to an example of the present disclosure, the gifting signal may be used to transfer the virtual item to the second client that is performing the live broadcast. For example, the gifting signal may be used to trigger gifting of the virtual item to a second user who is live broadcasting through the second client. For example, when the detection unit 1110 detects the gifting signal of the virtual item, the transmission unit 1120 may transmit a virtual item gifting request to the server, where the virtual item gifting request may trigger the server to perform an operation of gifting the virtual item to the second user.
According to an example of the present disclosure, the feedback information may include thank you information, which may be used to express thanks to the first user for gifting the virtual item. The feedback information may also include other information, such as information about the mood of the second user after receiving the virtual item (for example, an emoticon sticker representing excitement).
Further, according to an example of the present disclosure, the thank you information may have at least one type. For example, the thank you information may include at least one of a dynamic image, an expressive image, or thank you text.
In this example, the server may determine the type of the thank you information based on the value of the virtual item. For example, when the value of the virtual item is above a threshold, the thank you information may include at least a dynamic image. Specifically, when the value of the virtual item is above the threshold, the thank you information may include a dynamic image and/or an expression image; alternatively, it may include both a dynamic image and thank you text, or both an expression image and thank you text. For another example, when the value of the virtual item is lower than or equal to the threshold, the thank you information may include thank you text but not a dynamic image or an expression image. The "value of the virtual item" here may be a price or an amount of money corresponding to the virtual item. The "threshold" here may be a predetermined amount of money, for example 1 yuan.
With the apparatus for use in live broadcasting of the embodiment of the present disclosure, after detecting the gifting signal of the virtual item, the first client may send a request for thank you information corresponding to the virtual item together with identification information of the corresponding user to the server, receive the thank you information and the identification information of the corresponding user from the server, and display the thank you information. In this way, regardless of whether the anchor verbally thanks the viewer who gave the gift, the viewer receives the thank you information pushed by the server, which creates a sense of ceremony exclusive to that viewer and improves the viewer's gift-giving interactive experience. Furthermore, in this way, the thank you information is visible only to that viewer, avoiding interference with other viewers in the same live room.
Hereinafter, an apparatus corresponding to the method illustrated in fig. 5 according to an embodiment of the present disclosure is described with reference to fig. 12. Fig. 12 is a schematic structural diagram of an apparatus 1200 for use in live broadcasting according to an embodiment of the present disclosure. Since the function of the apparatus 1200 is the same as the details of the method described above with reference to fig. 5, a detailed description of the same content is omitted here for the sake of simplicity. As shown in fig. 12, the apparatus 1200 includes: a receiving unit 1210 configured to receive request information from a first client, the request information being sent after the first client detects a gifting signal of a virtual item triggered during a live broadcast, the gifting signal being used to transfer the virtual item to a second client performing the live broadcast, and the request information including a request for feedback information corresponding to the virtual item and identification information of the first user; and a sending unit 1220 configured to send information to the first client, where the information includes the feedback information and the identification information of the first user. The apparatus 1200 may include other components in addition to these two units; however, since those components are not related to the content of the embodiments of the present disclosure, their illustration and description are omitted herein. The apparatus 1200 may be the server 120 described above.
According to one example of the present disclosure, the gifting signal may be generated by an operation of a first user watching the live broadcast through the first client. For example, the gifting signal may be generated when the first user performs a selection operation on the virtual item. Specifically, the first client may display virtual item icons corresponding to respective virtual items. In this case, the gifting signal may be generated by the first user clicking the virtual item icon corresponding to the virtual item. Alternatively, the first client may display, for each virtual item, a virtual item icon and a "gift" icon. In this case, the gifting signal may be generated by the first user clicking the "gift" icon corresponding to the virtual item.
Further, according to an example of the present disclosure, the gifting signal may be used to transfer the virtual item to the second client that is performing the live broadcast. For example, the gifting signal may trigger gifting of the virtual item to a second user who is broadcasting live through the second client. For example, when the first client detects the gifting signal for the virtual item, the first client may send a virtual item gifting request to the server, and the virtual item gifting request may trigger the server to perform the operation of gifting the virtual item to the second user. Accordingly, the receiving unit 1210 may receive the virtual item gifting request and gift the virtual item to the second user according to the request.
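As an illustrative sketch only (the patent does not specify a wire format), the request information sent by the first client could be serialized as a small JSON message carrying the virtual item's identifier and the first user's identification information. The field names and function name here are assumptions:

```python
import json

def build_feedback_request(virtual_item_id: str, user_id: str) -> str:
    """Build the request information the first client sends to the server.

    Hypothetical field names: the description only requires that the request
    carry a request for feedback information corresponding to the virtual
    item and the first user's identification information.
    """
    return json.dumps({
        "type": "feedback_request",
        "virtual_item_id": virtual_item_id,
        "first_user_id": user_id,
    })

request = build_feedback_request("rose_01", "viewer_42")
```

The server's receiving unit would parse such a message to recover the item and user identifiers before assembling the feedback information.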
According to an example of the present disclosure, the feedback information may include thank you information, which may be used to express thanks to the first user for gifting the virtual item. The feedback information may also include other information, such as information about the second user's mood after receiving the virtual item (for example, an emoticon expressing excitement), to convey how the second user feels about receiving it.
Further, according to one example of the present disclosure, the thank you information may be of at least one type. For example, the thank you information may include at least one of a dynamic image, an expression image, or thank you text.
In this example, the sending unit 1220 may determine the type of the thank you information according to the value of the virtual item. For example, when the value of the virtual item is above a threshold, the thank you information may include at least a dynamic image. Specifically, when the value of the virtual item is above the threshold, the thank you information may include a dynamic image and/or an expression image; alternatively, it may include both a dynamic image and thank you text, both an expression image and thank you text, or a dynamic image, an expression image, and thank you text. For another example, when the value of the virtual item is lower than or equal to the threshold, the thank you information may include thank you text but not a dynamic image or an expression image.
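The threshold rule above can be sketched as follows. The threshold value, function name, and choice of pairing the dynamic image with text are illustrative assumptions; the description specifies only that higher-value gifts receive at least a dynamic image and gifts at or below the threshold receive text only:

```python
VALUE_THRESHOLD = 100  # assumed value; the description does not fix a number

def thank_you_components(item_value: int) -> list[str]:
    """Decide which components the thank you information contains."""
    if item_value > VALUE_THRESHOLD:
        # High-value gift: at least a dynamic image, here paired with text.
        return ["dynamic_image", "thank_you_text"]
    # Gift at or below the threshold: thank you text only.
    return ["thank_you_text"]
```

A gift worth 500 units would thus get a dynamic image plus text, while one worth exactly the threshold would get text only.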
In addition, after determining the type of the thank you information, the sending unit 1220 may determine its specific content. For example, after determining that the thank you information includes a dynamic image, the sending unit 1220 may select one of the stored dynamic images as the thank you information. For another example, after determining that the thank you information includes thank you text, the sending unit 1220 may select one of the stored thank you texts as the thank you information. For another example, after determining that the thank you information includes both a dynamic image and thank you text, the sending unit 1220 may select one dynamic image from the stored dynamic images and one thank you text from the stored thank you texts, and use the two together as the thank you information.
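Once the type is fixed, selecting the specific content could look like the following sketch, where the stored pools and `random.choice` are stand-ins for whatever selection policy the server actually uses (the description does not say how the selection is made):

```python
import random

def pick_thank_you_content(components, stored_images, stored_texts):
    """Pick concrete content for each required component (hypothetical names)."""
    content = {}
    if "dynamic_image" in components and stored_images:
        # Any selection policy works here; random choice is one possibility.
        content["dynamic_image"] = random.choice(stored_images)
    if "thank_you_text" in components and stored_texts:
        content["thank_you_text"] = random.choice(stored_texts)
    return content

picked = pick_thank_you_content(
    ["dynamic_image", "thank_you_text"],
    stored_images=["clip_a.webp"],
    stored_texts=["Thank you for the gift!"],
)
```

With single-element pools the selection is deterministic, which is convenient for testing the assembly logic in isolation.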
The dynamic image described herein may be a dynamic image of the second user. For example, the dynamic image may be generated by the server from the second user's performance in the current live broadcast, and may include at least one of a specific expression or a specific action of the second user in the current live broadcast. Here, the "specific expression" may be a smile or the like, and the "specific action" may be a heart gesture, blowing a kiss, or the like.
A specific process by which the server generates a dynamic image is described below. The server may further comprise a generating unit (not shown in the figure). Specifically, the generating unit may acquire, from the second client, at least one of a specific expression or a specific action of the second user in the current live broadcast. For example, during the live broadcast, when the second user exhibits at least one of a specific expression or a specific action, the second client may capture a video that includes it and send the captured video to the server. The generating unit may then process the received video to extract the at least one of the specific expression or the specific action of the second user. The "processing" here may include at least one of matting, light compensation, filtering, sharpening, or the like. For example, the generating unit may perform matting on the received video to remove the background of the live broadcast while retaining only the at least one of the specific expression or the specific action of the second user.
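The "processing" step can be viewed as a small pipeline applied frame by frame. In the sketch below, the step functions are string-based placeholders for real matting, light compensation, filtering, and sharpening operations, which would operate on pixel data in practice:

```python
def process_frames(frames, steps):
    """Apply each processing step to every frame, in order."""
    for step in steps:
        frames = [step(frame) for frame in frames]
    return frames

# Placeholder steps standing in for real image operations.
def matting(frame):
    return frame.replace("+background", "")  # drop the background

def light_compensation(frame):
    return frame + "+lit"  # brighten the retained subject

processed = process_frames(
    ["subject+background", "subject+background"],
    [matting, light_compensation],
)
```

The same pipeline shape accepts any subset of the four operations the description names, since each step is just a function from frame to frame.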
After acquiring, from the second client, at least one of a specific expression or a specific action of the second user in the current live broadcast, the generating unit may generate a dynamic image from it. For example, the generating unit may add one or more decorative elements to the acquired expression or action to generate the dynamic image. The "decorative element" here may be an atmosphere element, such as at least one of a love heart, a balloon, or the like. In fig. 3 described above, the dynamic image includes a specific expression and a specific action of the second user, together with a plurality of love hearts.
Further, the generating unit may set the duration of the dynamic image. For example, the duration may be 3 to 5 seconds.
Further, in one live broadcast, the generating unit may generate a plurality of dynamic images. The generating unit may store at least some of them, so that the server can select one of the stored dynamic images as the thank you information. For example, the generating unit may store a predetermined number of dynamic images, such as 10.
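Keeping only a predetermined number of dynamic images maps naturally onto a bounded buffer: `collections.deque` with `maxlen` discards the oldest clip when a new one arrives, so only the most recent clips of the current broadcast are retained. The capacity of 10 follows the example above; the clip identifiers are made up:

```python
from collections import deque

MAX_STORED = 10  # the "predetermined number" from the example above

store = deque(maxlen=MAX_STORED)
for i in range(15):
    store.append(f"clip_{i}")  # once full, the oldest clip is dropped
```

After 15 insertions the buffer holds only `clip_5` through `clip_14`, which suits the real-time requirement discussed below: stale clips from earlier in the session age out automatically.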
In the present disclosure, the generating unit generates the dynamic image only from at least one of a specific expression or a specific action of the second user in the current live broadcast, not from the second user's expressions or actions in historical live broadcasts. In this way, the dynamic image generated by the server reflects the current broadcast in real time, which makes the thanks feel more genuine when the dynamic image is displayed to the viewer and improves the viewer's gift-giving interactive experience.
With the apparatus for use in live broadcasting of the embodiment of the present disclosure, after detecting the gifting signal of the virtual item, the first client may send the server a request for thank you information corresponding to the virtual item together with identification information of the corresponding user, receive the thank you information and the identification information of the corresponding user from the server, and display the thank you information. In this way, regardless of whether the anchor verbally thanks the viewer who gave the gift, the viewer receives the thank you information pushed by the server, which creates a sense of ceremony exclusive to that viewer and improves the viewer's gift-giving interactive experience. Furthermore, in this way, the thank you information is visible only to that viewer, avoiding interference with other viewers in the same live room.
Hereinafter, an apparatus corresponding to the method illustrated in fig. 7 according to an embodiment of the present disclosure is described with reference to fig. 13. Fig. 13 is a schematic structural diagram of an apparatus 1300 for use in live broadcasting according to an embodiment of the present disclosure. Since the functions of the apparatus 1300 correspond to the details of the method described above with reference to fig. 7, a detailed description of the same content is omitted here for simplicity. As shown in fig. 13, the apparatus 1300 includes: a detection unit 1310 configured to detect at least one of a specific expression or a specific action of a user; a capturing unit 1320 configured to capture a video when at least one of a specific expression or a specific action of the user is detected, the video including the at least one of the specific expression or the specific action of the user; and a sending unit 1330 configured to send the video to a server. The apparatus 1300 may include other components in addition to these three units; however, since those components are unrelated to the content of the embodiments of the present disclosure, their illustration and description are omitted here. The apparatus 1300 may be the anchor terminal 130 described above.
According to an example of the present disclosure, during the live broadcast, the detection unit 1310 may first detect the anchor's face image from the live video stream. For example, the detection unit 1310 may detect the anchor's face image according to pattern features contained in face images and an AdaBoost algorithm. The pattern features contained in a face image may include at least one of histogram features, color features, template features, structural features, Haar features, or the like.
Upon detecting the anchor's face image, the detection unit 1310 may extract the anchor's expression features and/or action features from it. The expression features may refer to geometric relationships (such as distance, area, and angle) between facial features such as the eyes, nose, and mouth. The action features may refer to geometric relationships (such as distance, area, and angle) between body features such as the arms, palms, and fingers.
Then, the detection unit 1310 may determine whether the extracted expression features and/or action features satisfy a predetermined condition, so as to detect a specific expression and/or a specific action of the anchor. For the expression features, the predetermined condition may be that the geometric relationships between facial features such as the eyes, nose, and mouth satisfy predetermined geometric relationships, for example, that the droop angle of the eye corners and the rise angle of the mouth corners satisfy predetermined values. In an example where the expression is a smile, the anchor's eye corners droop by a predetermined angle and the anchor's mouth corners rise by a predetermined angle; thus, by determining whether the expression features satisfy the predetermined condition, the anchor's smile can be detected. For the action features, the predetermined condition may be that the geometric relationships between body features such as the arms, palms, and fingers satisfy predetermined geometric relationships, for example, that the distance between the two hands and the bending angle of the fingers satisfy predetermined values. In an example where the action is a heart gesture, the distance between the anchor's hands is substantially zero and the anchor's fingers are bent by a predetermined angle; thus, by determining whether the action features satisfy the predetermined condition, the anchor's heart gesture can be detected.
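A minimal sketch of the predetermined-condition checks follows. The threshold values are made up for illustration; the description states only that the angles and distances must satisfy predetermined conditions:

```python
def is_smile(eye_corner_droop_deg: float, mouth_corner_rise_deg: float) -> bool:
    """Smile check: eye corners droop and mouth corners rise past thresholds."""
    return eye_corner_droop_deg >= 5.0 and mouth_corner_rise_deg >= 10.0

def is_heart_gesture(hand_distance_px: float, finger_bend_deg: float) -> bool:
    """Heart-gesture check: hands nearly touching, fingers bent past a threshold."""
    return hand_distance_px <= 1.0 and finger_bend_deg >= 30.0
```

In a real system these predicates would run on features extracted by the face and pose models per frame, and a positive result would trigger the capture described next.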
Then, when the detection unit 1310 detects at least one of a specific expression or a specific action of the user, the capturing unit 1320 captures a video that includes it. For example, when the detection unit 1310 makes such a detection, the capturing unit 1320 may record the live interface of the second client for a predetermined time, such as 3 to 5 seconds.
Then, the sending unit 1330 sends the video to the server. Upon receiving the video, the server may generate a dynamic image from at least one of the specific expression or the specific action of the user in the video. For example, the server may process the received video to extract at least one of a specific expression or a specific action of the second user, and then generate the dynamic image from it. The "processing" here may include at least one of matting, light compensation, filtering, sharpening, or the like.
With the apparatus for use in live broadcasting of the embodiment of the present disclosure, the server can generate the dynamic image from at least one of the anchor's specific expressions or specific actions in the current live broadcast, without the dynamic image having to be produced in other software. This avoids asynchronous interaction between the viewer and the anchor and enables synchronous interaction between them.
Furthermore, apparatuses (e.g., servers, terminals) and/or clients (e.g., the first client, the second client) according to embodiments of the present disclosure may also be implemented by means of the architecture of the computing device shown in fig. 14. As shown in fig. 14, the computing device 1400 may include a bus 1410, one or more CPUs 1420, a read-only memory (ROM) 1430, a random access memory (RAM) 1440, a communication port 1450 for connecting to a network, input/output components 1460, a hard disk 1470, and the like. Storage devices in the computing device 1400, such as the ROM 1430 or the hard disk 1470, may store various data or files used in computer processing and/or communications, as well as program instructions executed by the CPU. The computing device 1400 may also include a user interface 1480. Of course, the architecture shown in fig. 14 is merely exemplary, and one or more components of the computing device shown in fig. 14 may be omitted as needed when implementing different devices.
Embodiments of the present disclosure may also be implemented as a computer-readable storage medium. A computer readable storage medium according to an embodiment of the present disclosure has computer readable instructions stored thereon. The computer readable instructions, when executed by a processor, may perform a method according to embodiments of the present disclosure described with reference to the above figures. The computer-readable storage medium includes, but is not limited to, volatile memory and/or non-volatile memory, for example. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
Those skilled in the art will appreciate that the present disclosure is susceptible to numerous variations and modifications. For example, the various devices or components described above may be implemented in hardware, or may be implemented in software, firmware, or a combination of some or all of the three.
Furthermore, as used in this disclosure and in the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. The terms "first," "second," and similar terms in this disclosure do not indicate any order, quantity, or importance, but are used only to distinguish one element from another. Likewise, the words "comprise," "include," and the like mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected" and "coupled" are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Furthermore, flowcharts are used in this disclosure to illustrate the operations performed by the system according to embodiments of the present disclosure. It should be understood that these operations are not necessarily performed in the exact order shown; rather, various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to or removed from these processes.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
While the present disclosure has been described in detail above, it will be apparent to those skilled in the art that the present disclosure is not limited to the embodiments described in the present specification. The present disclosure can be implemented as modifications and variations without departing from the spirit and scope of the present disclosure defined by the claims. Accordingly, the description of the present specification is for the purpose of illustration and is not intended to be in any way limiting of the present disclosure.

Claims (15)

1. A method for use in a live broadcast, for a first client, comprising:
detecting a presentation signal of a virtual article triggered in a live broadcast process at a first client, wherein the presentation signal is used for transferring the virtual article to a second client for live broadcast;
after detecting a presentation signal of the virtual item, sending request information to a server, wherein the request information comprises a request for feedback information corresponding to the virtual item and identification information of a first user;
receiving information returned by the server, wherein the information comprises the feedback information and the identification information of the first user; and
the feedback information is displayed on the display unit,
the feedback information comprises thank you information, and the thank you information comprises at least one of a dynamic image or an expression image, wherein the dynamic image or the expression image is generated by the server according to a video acquired from the second client in the live broadcasting process, and the video is acquired by the second client when at least one of a specific expression or a specific action of the user is detected in the live broadcasting process.
2. The method of claim 1, wherein the type of thank you information is determined based on a value of the virtual item.
3. The method of claim 2, wherein
when the value of the virtual article is higher than a threshold, the thank you information comprises at least a dynamic image; or
the thank you information comprises thank you text when the value of the virtual item is below the threshold.
4. A method for use in a live broadcast, for a server, comprising:
receiving request information from a first client, wherein the request information is sent after the first client detects a presentation signal of a virtual item triggered in a live broadcast process, the presentation signal is used for transferring the virtual item to a second client which carries out live broadcast, and the request information comprises a request of feedback information corresponding to the virtual item and identification information of a first user; and
sending information to the first client, the information including the feedback information and identification information of the first user,
the feedback information comprises thank you information, and the thank you information comprises at least one of a dynamic image or an expression image, wherein the dynamic image or the expression image is generated by the server according to a video acquired from the second client in the live broadcasting process, and the video is acquired by the second client when at least one of a specific expression or a specific action of the user is detected in the live broadcasting process.
5. The method of claim 4, further comprising:
and determining the type of the thank you information according to the value of the virtual article.
6. The method of claim 5, wherein determining the type of thank you information based on the value of the virtual item comprises:
determining that the thank you information comprises at least a dynamic image when the value of the virtual item is above a threshold; or
Determining that the thank you information comprises thank you text when the value of the virtual item is below a threshold.
7. The method of claim 4, further comprising:
and acquiring a video including at least one of a specific expression or a specific action of a second user in the live broadcasting process from the second client.
8. A method for use in a live broadcast, for a second client, comprising:
detecting at least one of a specific expression or a specific action of the user at the second client;
when at least one of the specific expression or the specific action of the user is detected, acquiring a video, wherein the video comprises at least one of the specific expression or the specific action of the user; and
sending the video to a server, so that the server generates a dynamic image or an expression image based on the video and returns feedback information to a first client in response to request information sent by the first client, wherein the request information is sent after the first client detects a gifting signal of a virtual item triggered in a live broadcast process, the feedback information comprises thank you information, and the thank you information comprises at least one of the dynamic image or the expression image.
9. The method of claim 8, wherein the detecting at least one of a particular expression or a particular action of the user at the second client comprises:
detecting a face image of the user from a live video stream;
after the face image of the user is detected, extracting at least one of expression features or action features of the user in the face image;
judging whether at least one of the extracted expression features or action features meets a preset condition; and
determining that at least one of a specific expression or a specific motion of the user is detected when at least one of the extracted expression or motion features satisfies a predetermined condition.
10. The method of claim 8, wherein capturing video upon detection of at least one of a particular expression or a particular action of the user comprises:
and when at least one of the specific expression or the specific action of the user is detected, recording a screen of a live interface of the second client to acquire a video.
11. An apparatus for use in a live broadcast, for a first client, comprising:
the detection unit is configured to detect a presentation signal of a virtual item triggered in a live broadcast process at a first client, wherein the presentation signal is used for transferring the virtual item to a second client which carries out live broadcast;
a transmitting unit configured to transmit request information including a request for feedback information corresponding to the virtual item and identification information of a first user to a server, upon detection of a gifting signal of the virtual item;
a receiving unit configured to receive information returned by the server, where the information includes the feedback information and identification information of the first user; and
a display unit configured to display the feedback information,
the feedback information comprises thank you information, and the thank you information comprises at least one of a dynamic image or an expression image, wherein the dynamic image or the expression image is generated by the server according to a video acquired from the second client in the live broadcasting process, and the video is acquired by the second client when at least one of a specific expression or a specific action of the user is detected in the live broadcasting process.
12. An apparatus for use in a live broadcast, for a server, comprising:
a receiving unit configured to receive request information from a first client, the request information being sent after the first client detects a gifting signal of a virtual item triggered in a live broadcast process, the gifting signal being used for transferring the virtual item to a second client performing the live broadcast, and the request information including a request for feedback information corresponding to the virtual item and identification information of a first user; and
a transmitting unit configured to transmit information to the first client, the information including the feedback information and identification information of the first user,
the feedback information comprises thank you information, and the thank you information comprises at least one of a dynamic image or an expression image, wherein the dynamic image or the expression image is generated by the server according to a video acquired from the second client in the live broadcasting process, and the video is acquired by the second client when at least one of a specific expression or a specific action of the user is detected in the live broadcasting process.
13. An apparatus for use in a live broadcast, for a second client, comprising:
a detection unit configured to detect at least one of a specific expression or a specific action of the user at the second client;
a capturing unit configured to capture a video including at least one of a specific expression or a specific motion of the user when the at least one of the specific expression or the specific motion of the user is detected; and
a sending unit configured to send the video to a server, so that the server generates a dynamic image or an expression image based on the video and returns feedback information to a first client in response to request information sent by the first client, wherein the request information is sent after the first client detects a gifting signal of a virtual item triggered in a live broadcast process, the feedback information comprises thank you information, and the thank you information comprises at least one of the dynamic image or the expression image.
14. An apparatus for use in a live broadcast, comprising:
a processor; and
a memory, wherein the memory has stored therein a computer-executable program that, when executed by the processor, performs the method of any of claims 1-3.
15. A computer readable storage medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the method of any one of claims 1-3.
CN201910947821.9A 2019-09-29 2019-09-29 Method, device and computer readable storage medium for live broadcast Active CN110572690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910947821.9A CN110572690B (en) 2019-09-29 2019-09-29 Method, device and computer readable storage medium for live broadcast


Publications (2)

Publication Number Publication Date
CN110572690A CN110572690A (en) 2019-12-13
CN110572690B true CN110572690B (en) 2022-09-23




Also Published As

Publication number Publication date
CN110572690A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110572690B (en) Method, device and computer readable storage medium for live broadcast
CN111405299B (en) Live broadcast interaction method based on video stream and corresponding device
CN110134484B (en) Message icon display method and device, terminal and storage medium
KR101540544B1 (en) Message service method using character, user device for performing the method, message application comprising the method
CN106846040A (en) Virtual gift display method and system in a live broadcast room
CN111050222B (en) Virtual article issuing method, device and storage medium
CN108256921B (en) Method and device for pushing information for user
CN107210830B (en) Object presenting and recommending method and device based on biological characteristics
Koh et al. Developing a hand gesture recognition system for mapping symbolic hand gestures to analogous emojis in computer-mediated communication
CN110716641B (en) Interaction method, device, equipment and storage medium
CN110716634A (en) Interaction method, device, equipment and display equipment
CN111314204A (en) Interaction method, device, terminal and storage medium
CN107948743A (en) Video pushing method, device and storage medium
CN111683265A (en) Live broadcast interaction method and device
CN108289230A (en) Recommendation method, apparatus, device and storage medium for TV shopping content
CN111738777A (en) Coupon pushing method and device, storage medium and intelligent terminal
JP2014041502A (en) Video distribution device, video distribution method, and video distribution program
CN110610249A (en) Information processing method, information display method, device and service terminal
CN109683711B (en) Product display method and device
CN111627115A (en) Interactive group photo method and device, interactive device and computer storage medium
JP2019212039A (en) Information processing device, information processing method, program, and information processing system
CN111274489A (en) Information processing method, device, equipment and storage medium
WO2020108324A1 (en) Method for extracting order items, order processing method and device, equipment and medium
CN108958690B (en) Multi-screen interaction method and device, terminal equipment, server and storage medium
CN109587035B (en) Head portrait display method and device of session interface, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40019353

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant