CN109618181B - Live broadcast interaction method and device, electronic equipment and storage medium - Google Patents

Live broadcast interaction method and device, electronic equipment and storage medium

Info

Publication number: CN109618181B (granted); published earlier as CN109618181A
Application number: CN201811435115.8A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 梁培佳
Original and current assignee: Netease Hangzhou Network Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Priority: CN201811435115.8A, filed by Netease Hangzhou Network Co Ltd

Classifications

    • H04N21/233 Processing of audio elementary streams (server-side content processing)
    • G10L15/26 Speech to text systems
    • G10L25/54 Speech or voice analysis specially adapted for comparison or discrimination, for retrieval
    • H04N21/2187 Live feed
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/25891 Management of end-user data being end-user preferences
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. animations
    • H04N21/439 Processing of audio elementary streams (client-side)
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the invention provide a live broadcast interaction method and apparatus, an electronic device, and a storage medium, relating to the field of computer technology. The live broadcast interaction method comprises the following steps: acquiring interaction behavior information of a network anchor, the interaction behavior information comprising voice information and action information; determining, according to the voice information, the audience user name corresponding to the voice information; determining a target display special effect corresponding to the interaction behavior information based on the action information and the audience user name; and sending a display instruction to an audience terminal associated with the network anchor to control the audience terminal to display the target display special effect. The technical scheme of the embodiments not only enables the network anchor to reply quickly and clearly to the audience members who sent gifts, but also improves the interactivity between the network anchor and the audience and the audience's viewing experience.

Description

Live broadcast interaction method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a live broadcast interaction method, a live broadcast interaction apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of internet technology, webcast live streaming has become increasingly popular. Because of the constraints of the live streaming format, improving the interactivity between the network anchor and the audience is very important.
At present, the ways in which a network anchor thanks the audience mainly include text replies, voice replies, simple special-effect replies, and the like. These interaction modes between the network anchor and the audience not only lack interest, but are also prone to confusion when many audience members send gifts.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present invention and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
Embodiments of the present invention provide a live broadcast interaction method, a live broadcast interaction apparatus, an electronic device, and a computer-readable storage medium, so as to overcome, at least to some extent, the problems caused by the limitations and defects of the related art, namely that the interactivity between a network anchor and the audience is weak and that the network anchor cannot quickly and accurately thank the audience members who sent gifts.
Additional features and advantages of the invention will be set forth in the detailed description which follows, or may be learned by practice of the invention.
According to a first aspect of the embodiments of the present invention, a live broadcast interaction method is provided, including: acquiring interaction behavior information of a network anchor, the interaction behavior information comprising voice information and action information; determining, according to the voice information, the audience user name corresponding to the voice information; determining a target display special effect corresponding to the interaction behavior information based on the action information and the audience user name; and sending a display instruction to an audience terminal associated with the network anchor to control the audience terminal to display the target display special effect.
In some example embodiments of the present invention, based on the foregoing solution, the acquiring of the interaction behavior information of the network anchor includes: determining whether the network anchor has received a virtual gift from an audience member; and when the network anchor receives the virtual gift, starting to acquire the interaction behavior information of the network anchor.
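The gift-triggered acquisition described above can be sketched as a small event handler: capture of the anchor's voice and action information starts only once a virtual-gift event arrives. The event and field names below are illustrative assumptions, not part of the patent.

```python
class AnchorSession:
    """Minimal sketch of the acquisition trigger on the anchor's terminal."""

    def __init__(self):
        self.capturing = False  # not yet acquiring voice/action information

    def on_event(self, event):
        # Only a virtual-gift event starts acquisition of interaction behavior.
        if event.get("type") == "virtual_gift":
            self.capturing = True


session = AnchorSession()
session.on_event({"type": "chat_message"})  # ignored: not a gift
session.on_event({"type": "virtual_gift", "gift": "flower", "from": "Zhang San"})
```

In a real system the same handler would also start the microphone and camera pipelines; here only the state flip is shown.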
In some example embodiments of the present invention, based on the foregoing scheme, the determining the name of the viewer user corresponding to the voice information includes: recognizing the voice information of the network anchor to acquire corresponding text information; and acquiring the corresponding audience user name according to the text information.
In some example embodiments of the present invention, based on the foregoing solution, the determining of the target display special effect corresponding to the interaction behavior information includes: identifying the action information of the network anchor to acquire corresponding image features; determining a corresponding action display special effect according to the image features; and determining the target display special effect corresponding to the interaction behavior information based on the audience user name and the action display special effect.
In some example embodiments of the present invention, based on the foregoing solution, obtaining the corresponding audience user name according to the text information includes: detecting whether any audience user name from a preset audience user name list exists in the text information; and when it is detected that any one of the audience user names exists in the text information, determining that a correspondence exists between that audience user name and the text information.
In some example embodiments of the present invention, based on the foregoing solution, obtaining the corresponding audience user name according to the text information further includes: when it is detected that none of the audience user names appears in the text information, performing fuzzy matching of all the audience user names against the text information; and when it is detected that any one of the audience user names is successfully fuzzy-matched with text in the text information, determining that a correspondence exists between that audience user name and the text information.
In some example embodiments of the present invention, based on the foregoing solution, the determining of the target display special effect corresponding to the interaction behavior information includes: acquiring a plurality of candidate display special effects associated with the interaction behavior information of the network anchor; obtaining level classification information corresponding to the virtual gift; and determining the target display special effect from the plurality of candidate display special effects according to the level classification information.
According to a second aspect of the embodiments of the present invention, there is provided a live broadcast interaction apparatus, including: an information acquisition unit, configured to acquire interaction behavior information of a network anchor, the interaction behavior information comprising voice information and action information; a name determination unit, configured to determine, according to the voice information, the audience user name corresponding to the voice information; a special effect matching unit, configured to determine a target display special effect corresponding to the interaction behavior information based on the action information and the audience user name; and an instruction sending unit, configured to send a display instruction to an audience terminal associated with the network anchor to control the audience terminal to display the target display special effect.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including: a processor; and a memory having computer readable instructions stored thereon that, when executed by the processor, implement any of the live interaction methods described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a live interaction method according to any one of the above.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
according to the live broadcast interaction method in the exemplary embodiments of the invention, when the audience member corresponding to an audience user name presents a virtual gift to the network anchor, the voice information and action information of the network anchor are obtained, the audience user name is determined according to the voice information, the target display special effect is determined based on the determined audience user name and the action information, and the target display special effect is sent to the audience terminal for display. On the one hand, because the target display special effect is determined from the action information together with the audience user name derived from the anchor's voice information, the anchor's thank-you action special effect can be displayed while the user name of the audience member being thanked is highlighted; this increases the pertinence and interest of the live interaction, improves the interactivity between the network anchor and the client side, and improves the audience's viewing experience. On the other hand, the network anchor can quickly complete a thank-you through voice and action alone, making the interaction simpler and faster, and can interact with gift-giving audience members continuously, accurately, and clearly, improving the experience of both the audience and the network anchor.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 schematically illustrates a schematic diagram of a live interaction method according to some embodiments of the invention;
FIG. 2 schematically illustrates an example of a display of a special effects application based on interaction behavior information matching, according to some embodiments of the invention;
FIG. 3 schematically illustrates a diagram of a live interaction process in accordance with some embodiments of the invention;
FIG. 4 schematically illustrates a schematic diagram of a live interaction device, in accordance with some embodiments of the present invention;
FIG. 5 schematically illustrates a structural schematic of a computer system of an electronic device according to some embodiments of the invention;
FIG. 6 schematically illustrates a schematic diagram of a computer-readable storage medium according to some embodiments of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations and are not necessarily drawn to scale. The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In this exemplary embodiment, a live broadcast interaction method is first provided. The live broadcast interaction method may be applied to a terminal device, such as a mobile phone or a computer. The terminal device includes a sound collection unit, such as a microphone, and an image collection unit, such as a camera. Fig. 1 schematically illustrates the flow of a live interaction method according to some embodiments of the invention. Referring to fig. 1, the live interaction method may include the following steps:
step S110, acquiring interactive behavior information of the network anchor, wherein the interactive behavior information comprises voice information and action information;
step S120, determining the name of the audience user corresponding to the voice information according to the voice information;
step S130, determining a target display special effect corresponding to the interactive behavior information based on the action information and the audience user name;
step S140, sending a display instruction to the audience terminal associated with the network anchor to control the audience terminal to display the target display special effect.
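As a minimal sketch of how steps S120 to S140 could fit together on the server side, assume speech and action recognition have already produced a text string and an action label; all function and field names below are hypothetical, not from the patent.

```python
def build_display_instruction(voice_text, action_label, viewer_names):
    """Steps S120-S140: find the thanked viewer, build the target effect,
    and wrap it into the instruction sent to the audience terminals."""
    # Step S120: audience user name mentioned in the anchor's speech.
    viewer = next((name for name in viewer_names if name in voice_text), None)
    if viewer is None:
        return None  # no thank-you target recognized in the speech
    # Step S130: target display special effect = action effect + name highlight.
    effect = {"action_effect": action_label, "highlight_name": viewer}
    # Step S140: the display instruction for the associated audience terminals.
    return {"type": "display", "effect": effect}


instruction = build_display_instruction(
    "Thanks Zhang San for the flowers", "heart", ["Li Si", "Zhang San"])
```

A real implementation would dispatch `instruction` over the server's push channel to every connected audience terminal; that transport layer is omitted here.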
According to the live broadcast interaction method in this embodiment, on the one hand, because the target display special effect is determined from the action information together with the audience user name derived from the anchor's voice information, the anchor's thank-you action special effect can be displayed while the user name of the audience member being thanked is highlighted, which increases the pertinence and interest of the live interaction, improves the interactivity between the network anchor and the client side, and improves the audience's viewing experience; on the other hand, the network anchor can quickly complete a thank-you through voice and action alone, making the interaction simpler and faster, and can interact with gift-giving audience members continuously, accurately, and clearly, improving the experience of both the audience and the network anchor.
Next, the live interaction method in the present exemplary embodiment will be further explained.
In step S110, the interaction behavior information of the network anchor is acquired, where the interaction behavior information includes voice information and action information.
In an exemplary embodiment of the present invention, the network anchor may obtain and view, through the terminal device, the virtual gifts given by the audience, where the audience refers to all audience terminal devices that interact with the network anchor through a server connection. A virtual gift is a gift used for communication and interaction in the virtual world; it may be a virtual item sent to the network anchor, such as a virtual flower, a virtual heart, or a virtual rocket, or it may be a paid action taken by an audience member toward the network anchor, such as opening an anchor guard, opening a noble membership, or becoming a fleet leader, which is not particularly limited in this exemplary embodiment. The interaction behavior information comprises the voice information and action information of the network anchor; for example, when the network anchor thanks an audience member for a gift, the voice information may be "Thanks Zhang San for the flowers" and the action information may be a heart gesture.
In an exemplary embodiment of the present invention, when an audience member gives a virtual gift to the network anchor, the server determines whether the network anchor has received the virtual gift; once it determines that the network anchor has received the virtual gift sent by the audience member, the terminal device of the network anchor starts to identify and acquire the interaction behavior information of the network anchor, which may be the anchor's voice information and action information. Of course, in other exemplary embodiments of the present disclosure, the interaction behavior information may also include other information such as expression information, which is not particularly limited in this exemplary embodiment.
It should be noted that, besides the case where an audience member gives a virtual gift, the trigger for starting to identify and acquire the interaction behavior information of the network anchor may also be the network anchor saying a sentence containing words such as "thanks" or "thank you", or making a thank-you-like gesture such as a heart gesture or blowing a kiss. Of course, other trigger manners and trigger occasions are also possible, which is not particularly limited in this exemplary embodiment.
In step S120, a viewer user name corresponding to the voice information is determined according to the voice information.
In an exemplary embodiment of the invention, the interaction behavior information of the network anchor is recognized through the sound collection unit and the image collection unit of the terminal device. For example, when the network anchor receives a virtual gift given by an audience member and says the thank-you sentence "Thanks Zhang San for the flowers" while making a heart gesture, the sound collection unit recognizes the voice information "Thanks Zhang San for the flowers", and the image collection unit recognizes the action information "heart gesture".
In an exemplary embodiment of the present invention, the voice information of the network anchor is collected through a sound collection unit of the terminal device, such as a microphone; the collected voice information is recognized and the corresponding text information is extracted. The text information is sent to a server, and the server obtains the audience user name corresponding to the text information. For example, the sound collection unit recognizes the text information "Thanks Zhang San for the flowers" in the voice information, and the server obtains the corresponding audience user name "Zhang San".
Specifically, the server obtains the audience user names from a preset audience user name list, which may be a list of all audience members watching the live broadcast or a list of audience members who have given gifts to the network anchor. The server then detects whether any audience user name obtained from the preset list exists in the recognized text information. The detection may proceed word by word through the text: for example, to detect whether the audience user name "Zhang San" exists in the text "Thanks Zhang San for the flowers", "Zhang San" is first compared with "Thanks"; when a comparison fails, the comparison moves on through the text, and the scan finishes either after the last word or immediately once "Zhang San" is matched successfully. In this exemplary embodiment, other exact matching methods may also be used, and this is not particularly limited here. When the server detects that any one of the audience user names exists in the text information, it determines that a correspondence exists between that audience user name and the text information. For example, if the audience user name "Zhang San" is taken from the list and is detected in the text information "Thanks Zhang San for the flowers", a correspondence between "Zhang San" and the text information is established.
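The word-by-word comparison described above is essentially a substring scan of each candidate audience user name over the recognized text, stopping as soon as one name matches. The sketch below illustrates this; the names and text are placeholder examples, not values from the patent.

```python
def find_viewer_name(text, name_list):
    """Exact scan: slide each candidate name across the recognized text."""
    for name in name_list:
        # compare the name against the text at every offset
        for i in range(len(text) - len(name) + 1):
            if text[i:i + len(name)] == name:
                return name  # success: the whole matching process can stop
    return None  # no name from the list appears in the text


find_viewer_name("Thanks Zhang San for the flowers", ["Li Si", "Zhang San"])
```

In practice a server would use an indexed or tokenized lookup rather than this quadratic scan, but the early-exit behavior the patent describes is the same.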
Further, when the server detects that none of the audience user names in the audience user name list appears in the text information, it performs fuzzy matching of all the audience user names against the text information. Fuzzy matching means finding, in the text information, the field with the highest similarity to an audience user name when the name itself is not present; the field may, for example, have the same pronunciation as the name. When it is detected that any one of the audience user names is successfully fuzzy-matched with text in the text information, a correspondence between that audience user name and the text information is determined. For example, if the name in the audience user name list is written with different characters from the name in the recognized text "Thanks Zhang San for the flowers" but is pronounced the same, the fuzzy match succeeds and a correspondence between that audience user name and the text information is determined.
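As a rough stand-in for the fuzzy matching step, the sketch below scores each audience user name against every same-length window of the text with a generic string-similarity measure from the standard library. The patent's pronunciation-based matching would instead need a phonetic (e.g. pinyin) comparison, which is not shown; names, text, and the cutoff value are illustrative.

```python
import difflib


def fuzzy_find_viewer_name(text, name_list, cutoff=0.6):
    """Return the listed name whose best same-length window in the text
    scores highest above the cutoff, or None if nothing is close enough."""
    best_name, best_score = None, cutoff
    for name in name_list:
        # compare the name against every window of the same length
        for i in range(max(1, len(text) - len(name) + 1)):
            window = text[i:i + len(name)]
            score = difflib.SequenceMatcher(None, name, window).ratio()
            if score > best_score:
                best_name, best_score = name, score
    return best_name


# A slightly garbled transcription of the name still matches.
fuzzy_find_viewer_name("Thanks Zang San for the flowers", ["Li Si", "Zhang San"])
```

The strict `>` comparison against the running best score makes the function return the single closest name, mirroring the "highest similarity" criterion in the text.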
Step S130, determining a target display special effect corresponding to the interactive behavior information based on the action information and the viewer user name.
In an exemplary embodiment of the present invention, the action information of the network anchor is collected by an image collection unit of the terminal device, such as a camera; the collected action information is recognized and the corresponding image features are obtained. The image features are transmitted to a server, and the server inputs the image features into a pre-trained image recognition model to obtain the category corresponding to the action information. For example, the image collection unit recognizes the image features of a heart gesture in the action information, and the server recognizes the action information as the category "heart action".
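The pre-trained recognition model itself is out of scope here; the stub below only illustrates the final mapping from a recognized image-feature label to the action category the server works with afterwards. The labels and categories are invented for illustration.

```python
# Hypothetical mapping from recognized feature labels to action categories.
ACTION_CATEGORIES = {
    "hands_forming_heart": "heart action",
    "blown_kiss": "kiss action",
}


def classify_action(feature_label):
    # A real system would run the trained image model; this lookup stands
    # in for the model's label-to-category step.
    return ACTION_CATEGORIES.get(feature_label, "unknown action")


classify_action("hands_forming_heart")
```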
In an exemplary embodiment of the present invention, the special effects database refers to a database created in advance by relevant personnel according to the needs of the network anchor and the audience, containing a large number of display special effects; the special effects database is stored on the server. The target display special effect refers to a dynamic special effect rendered on the terminal device by program code. The target display special effect corresponding to the interaction behavior information is matched in the special effects database according to the interaction behavior information of the network anchor identified by the terminal device.
Specifically, the server matches a corresponding text display special effect in the special effect database according to the determined audience user name; the text display special effect displays the text information on the display interface of the audience terminal and highlights the audience user name near the text information. For example, as shown in fig. 2, a sound collection unit of the terminal device recognizes the text information "thank you for the flowers" contained in the voice information of the network anchor and sends the recognized text information to the server, which processes it to obtain the audience user name "zhangsan" and matches a text display special effect corresponding to the text information and the audience user name. Likewise, the server matches, in the special effect database, the display special effect corresponding to the action information (i.e., the action display special effect) according to the determined category of the action information; in an alternative embodiment, different action information may match different display special effects. In an alternative embodiment, the action display special effect is displayed at the location of the corresponding image feature, giving the display special effect a sense of immersion. For example, with continued reference to fig. 2, the network anchor makes a "love heart" action with a gesture, and an action display special effect is displayed on the network anchor's gesture. In an alternative embodiment, the target display special effect may include an action display special effect (a display special effect corresponding to the action information) and indication information indicating the audience user name.
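In code terms, the matching described above reduces to keyed lookups into the special effect database; the table contents, keys, and `match_target_effect` helper below are hypothetical stand-ins for the server-side database:

```python
# Hypothetical special-effects database; keys and effect names are
# illustrative, not taken from the patent.
EFFECTS_DB = {
    ("text", "default"): "show_text_with_highlighted_username",
    ("action", "love_heart_action"): "hearts_rendered_on_gesture",
}

def match_target_effect(viewer_name, text, action_category):
    """Bundle the text display effect with the action display effect
    and an indication of which audience user is being thanked."""
    return {
        "text_effect": EFFECTS_DB[("text", "default")],
        "action_effect": EFFECTS_DB.get(("action", action_category)),
        "indicates": viewer_name,
        "text": text,
    }
```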
Further, a target display special effect corresponding to the interaction behavior information of the network anchor is determined from the determined text display special effect and action display special effect. Specifically, the server obtains a plurality of candidate display special effects associated with the interaction behavior information from the preset special effect database according to the recognized interaction behavior information of the network anchor. For example, for a "love heart" action display special effect, there are multiple candidate display special effects associated with "love heart" that differ in size, shape, color, display duration, and so on, and these candidates are divided into classification grades according to their degree of visual richness. Grade classification information corresponding to the virtual gift is obtained according to the virtual gift sent by the audience; for example, virtual gifts are divided into grades 1 to 10, with 10 the highest and 1 the lowest, and the candidate display special effects are likewise divided into corresponding grades 1 to 10. The target display special effect is then determined from the plurality of candidate display special effects according to the obtained grade classification information; for example, a virtual gift of grade 5 is matched with the target display special effect of grade 5.
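The grade-matching step can be sketched as choosing, among the candidate display special effects, the one whose grade matches the gift's grade; the data shape (a `level` key on each candidate) is an assumption for illustration, not from the patent:

```python
def pick_target_effect(candidates, gift_level):
    """candidates: list of dicts with a 'level' key (1 = plainest,
    10 = most gorgeous). Return the candidate whose level is closest
    to the gift's level; with grades 1-10 on both sides this reduces
    to an exact grade match whenever one exists."""
    return min(candidates, key=lambda c: abs(c["level"] - gift_level))
```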
In step S140, a display instruction is sent to the audience terminal associated with the network anchor to control the audience terminal to display the target display special effect.
In an exemplary embodiment of the present invention, the server sends a display instruction to the audience terminal having an association relationship with the network anchor, where the display instruction causes the audience's terminal device to display the display special effect corresponding to the interaction behavior information of the network anchor. For example, with continued reference to the display effect shown in fig. 2, a display special effect corresponding to the interaction behavior information of the network anchor is displayed at the audience side. Referring to fig. 3, fig. 3 schematically illustrates a live interaction process according to some embodiments of the present invention, which is described in detail below.
In step S301, it is detected whether the network anchor receives a virtual gift from the audience; when a virtual gift is received, identification and acquisition of the network anchor's interaction behavior information begins.
In step S302, the network anchor says "thank you, so-and-so, for the gift you sent" to the audience member who sent the gift.

In step S303, the text information "thank you, so-and-so, for the gift you sent" contained in the voice information of the network anchor is recognized.

In step S304, the audience user name that the network anchor wants to thank is determined based on the recognized text information.
In step S305, matching is performed with the identified text information according to a preset list of viewer usernames, such as a list of viewer usernames for gifting in the most recent time period.
In step S306, when the viewer user name associated with the text information is precisely matched in the preset viewer user name list, the display special effect corresponding to the viewer user name is obtained.
In step S307, when the exact matching in the preset list of the viewer usernames fails, fuzzy matching is performed according to the preset list of the viewer usernames and the text information, and when the fuzzy matching is successful, a display special effect of the corresponding viewer usernames is obtained.
In step S308, the special effect of the viewer user name and the text information are displayed on the viewer side.
In step S309, the image characteristics of the motion information are acquired by image recognition of the motion information of the network anchor, and the corresponding display special effect is acquired based on the image characteristics.
In step S310, the display effect corresponding to the viewer user name and the display effect corresponding to the action information of the network anchor are simultaneously displayed on the viewer side.
In step S311, the round of identification is ended, and whether the network anchor receives the virtual gift of the audience is continuously monitored.
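Steps S301 to S311 can be condensed into a single control-flow sketch; all recognition and display dependencies are injected as callables, so nothing about the server implementation is assumed (`run_round` and its parameter names are illustrative):

```python
def run_round(gift_received, recognize_text, match_name, recognize_action, display):
    """One identification round (S301-S311). Returns True if a round
    ran, False if no virtual gift was detected."""
    if not gift_received():                     # S301: gate on a virtual gift
        return False
    text = recognize_text()                     # S302-S303: speech -> text
    name = match_name(text)                     # S304-S306: exact, then fuzzy
    if name is not None:
        display("text_effect", name, text)      # S307-S308: name + text shown
    action = recognize_action()                 # S309: image features -> effect
    display("action_effect", action, None)      # S310: shown alongside the text
    return True                                 # S311: round ends; keep monitoring
```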
It is noted that although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In addition, in the present exemplary embodiment, a live interaction apparatus is also provided. Referring to fig. 4, the live interaction apparatus 400 includes: an information obtaining unit 410, a name determining unit 420, a special effect matching unit 430, and an instruction sending unit 440. Wherein: the information obtaining unit 410 is configured to obtain interaction behavior information of the network anchor, where the interaction behavior information includes voice information and action information; the name determining unit 420 is configured to determine, according to the voice information, the audience user name corresponding to the voice information; the special effect matching unit 430 is configured to determine a target display special effect corresponding to the interaction behavior information based on the action information and the audience user name; the instruction sending unit 440 is configured to send a display instruction to an audience terminal associated with the network anchor to control the audience terminal to display the display special effect.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the information obtaining unit 410 is configured to: judging whether the network anchor receives a virtual gift; and when the network anchor receives the virtual gift, starting to acquire the interaction behavior information of the network anchor.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the name determining unit 420 includes: the identification acquisition unit is used for identifying the voice information of the network anchor and acquiring corresponding text information; and the name acquisition unit is used for acquiring the corresponding name of the audience user according to the text information.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the name obtaining unit includes: a detecting unit, configured to detect whether a viewer user name in a preset viewer user name list exists in the text information; the determining unit is used for determining that the target audience user name in the preset audience user name list has an association relation with the text information when detecting that the target audience user name exists in the text information; and the acquisition unit is used for acquiring the target audience user name related to the text information.
In an exemplary embodiment of the present invention, based on the foregoing, the determining unit is configured to: when detecting that none of the audience user names in the preset audience user name list exists in the text information, performing fuzzy matching to find the target audience user name in the audience user name list with the highest degree of similarity to the text information; and taking the target audience user name as the audience user name associated with the text information.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the special effect matching unit 430 is configured to: identifying the action information of the network anchor to acquire corresponding image characteristics; determining a corresponding action display special effect according to the image characteristics; and determining a target display special effect corresponding to the interactive behavior information based on the audience user name and the action display special effect.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the special effect matching unit 430 is configured to: acquiring a plurality of candidate display special effects related to the interaction behavior information of the network anchor; obtaining grade classification information corresponding to the virtual gift; and determining the target display special effect from the plurality of candidate display special effects according to the grade classification information.
The specific details of each module of the above-mentioned live broadcast interaction device have been described in detail in the corresponding live broadcast interaction method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the live interaction device are mentioned, this division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the live broadcast interaction method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 500 according to such an embodiment of the invention is described below with reference to fig. 5. The electronic device 500 shown in fig. 5 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 5, the electronic device 500 is embodied in the form of a general purpose computing device. The components of the electronic device 500 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, a bus 530 connecting various system components (including the memory unit 520 and the processing unit 510), and a display unit 540.
Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 510 may execute step S110 shown in fig. 1, and acquire interaction behavior information of the web cast, where the interaction behavior information includes voice information and action information; step S120, determining the name of the audience user corresponding to the voice information according to the voice information; step S130, determining a target display special effect corresponding to the interactive behavior information based on the action information and the audience user name; step S140, sending a display instruction to the audience associated with the network anchor to control the audience to display the display special effect.
The storage unit 520 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM)521 and/or a cache memory unit 522, and may further include a read only memory unit (ROM) 523.
The storage unit 520 may also include a program/utility 524 having a set (at least one) of program modules 525, such program modules 525 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 530 may be one or more of any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices 570 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 500 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. Also, the electronic device 500 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 over the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the live broadcast interaction method according to the embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned "exemplary methods" section of the present description, when said program product is run on the terminal device.
Referring to fig. 6, a program product 600 for implementing the live interaction method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A live interaction method, comprising:
acquiring interactive behavior information of a network anchor, wherein the interactive behavior information comprises voice information and action information;
determining audience user names and text information corresponding to the voice information according to the voice information;
based on the action information, the name of the audience user and the text information, obtaining a plurality of candidate display special effects related to the interaction behavior information of the network anchor, obtaining grade classification information corresponding to the virtual gift, and determining a target display special effect from the candidate display special effects according to the grade classification information;
sending a display instruction to a spectator terminal associated with the network anchor to control the spectator terminal to display the target display special effect; the target display special effect comprises a character display special effect and an action display special effect; the character display special effect comprises a user name display special effect and a text information display special effect;
wherein the determining of the target display special effect corresponding to the interactive behavior information includes:
identifying the action information of the network anchor to acquire corresponding image characteristics; determining a corresponding action display special effect according to the image characteristics; and determining a target display special effect corresponding to the interactive behavior information based on the audience user name and the action display special effect.
2. The live interaction method of claim 1, wherein obtaining interaction behavior information of the network anchor comprises:
judging whether the network anchor receives virtual gifts of audiences or not;
and when the network anchor receives the virtual gift, starting to acquire the interactive behavior information of the network anchor.
3. The live interaction method of claim 1, wherein the determining the name of the viewer user corresponding to the voice message comprises:
recognizing the voice information of the network anchor to acquire corresponding text information;
and acquiring the corresponding audience user name according to the text information.
4. The live interaction method of claim 3, wherein obtaining a corresponding viewer user name from the text information comprises:
detecting whether the audience user name in a preset audience user name list exists in the text information;
and when detecting that any one of the audience user names exists in the text information, determining that the corresponding relation exists between the audience user names and the text information.
5. The live interaction method of claim 3, wherein obtaining a corresponding viewer user name from the text information further comprises:
when detecting that all the audience user names are not in the text information, carrying out fuzzy matching on all the audience user names in the text information;
and when detecting that any one of the audience user names is successfully matched with the text in the text information in a fuzzy mode, determining that the corresponding relation exists between the audience user names and the text information.
6. A live interaction device, comprising:
the information acquisition unit is used for acquiring interactive behavior information of the network anchor, wherein the interactive behavior information comprises voice information and action information;
the name determining unit is used for determining the name and the text information of the audience user corresponding to the voice information according to the voice information;
the special effect matching unit is used for acquiring a plurality of candidate display special effects related to the interaction behavior information of the network anchor based on the action information, the name of the audience and the text information, acquiring grade classification information corresponding to the virtual gift, and determining a target display special effect from the candidate display special effects according to the grade classification information;
the instruction sending unit is used for sending a display instruction to a spectator terminal associated with the network anchor so as to control the spectator terminal to display the target display special effect; the target display special effect comprises a character display special effect and an action display special effect; the character display special effect comprises a user name display special effect and a text information display special effect;
wherein the determining of the target display special effect corresponding to the interactive behavior information includes: identifying the action information of the network anchor to acquire corresponding image characteristics; determining a corresponding action display special effect according to the image characteristics; and determining a target display special effect corresponding to the interactive behavior information based on the audience user name and the action display special effect.
7. An electronic device, comprising:
a processor; and
a memory having computer-readable instructions stored thereon that, when executed by the processor, implement a live interaction method as recited in any of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the live interaction method of any one of claims 1 to 5.
CN201811435115.8A 2018-11-28 2018-11-28 Live broadcast interaction method and device, electronic equipment and storage medium Active CN109618181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811435115.8A CN109618181B (en) 2018-11-28 2018-11-28 Live broadcast interaction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109618181A CN109618181A (en) 2019-04-12
CN109618181B true CN109618181B (en) 2021-11-12

Family

ID=66005757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811435115.8A Active CN109618181B (en) 2018-11-28 2018-11-28 Live broadcast interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109618181B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798696B (en) * 2019-11-18 2022-09-30 广州虎牙科技有限公司 Live broadcast interaction method and device, electronic equipment and readable storage medium
CN110809172A (en) * 2019-11-19 2020-02-18 广州虎牙科技有限公司 Interactive special effect display method and device and electronic equipment
CN111064987B (en) * 2019-12-14 2021-06-25 北京字节跳动网络技术有限公司 Information display method and device and electronic equipment
CN111010612B (en) * 2019-12-19 2021-05-14 广州方硅信息技术有限公司 Method, device and equipment for receiving voice gift and storage medium
CN111327919A (en) * 2020-03-23 2020-06-23 广州华多网络科技有限公司 Method, device, system, equipment and storage medium for virtual gift feedback processing
CN111683265B (en) * 2020-06-23 2021-10-29 腾讯科技(深圳)有限公司 Live broadcast interaction method and device
CN113839913B (en) * 2020-06-24 2024-02-27 腾讯科技(深圳)有限公司 Interactive information processing method, related device and storage medium
CN113315979A (en) * 2020-08-10 2021-08-27 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and storage medium
CN111954063B (en) * 2020-08-24 2022-11-04 北京达佳互联信息技术有限公司 Content display control method and device for video live broadcast room
CN112822501B (en) * 2020-08-28 2023-11-14 腾讯科技(深圳)有限公司 Information display method and device in live video broadcast, storage medium and electronic equipment
CN112770171A (en) * 2020-12-31 2021-05-07 北京达佳互联信息技术有限公司 Content display method, device, system, equipment and storage medium
CN113194350B (en) * 2021-04-30 2022-08-19 百度在线网络技术(北京)有限公司 Method and device for pushing data to be broadcasted and method and device for broadcasting data
CN113329234B (en) * 2021-05-28 2022-06-10 腾讯科技(深圳)有限公司 Live broadcast interaction method and related equipment
CN114095745A (en) * 2021-11-16 2022-02-25 广州博冠信息科技有限公司 Live broadcast interaction method and device, computer storage medium and electronic equipment
CN114327182B (en) * 2021-12-21 2024-04-09 广州博冠信息科技有限公司 Special effect display method and device, computer storage medium and electronic equipment
CN115623239A (en) * 2022-10-21 2023-01-17 宁波理查德文化创意有限公司 Personalized live broadcast control method based on use habit

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106231378A (en) * 2016-07-28 2016-12-14 北京小米移动软件有限公司 Display method, apparatus and system for a live broadcast room
CN106303658A (en) * 2016-08-19 2017-01-04 百度在线网络技术(北京)有限公司 Interaction method and device applied to live video broadcast
CN106804007A (en) * 2017-03-20 2017-06-06 合网络技术(北京)有限公司 Method, system and device for automatically matching special effects in network live broadcast
CN107864357A (en) * 2017-09-28 2018-03-30 努比亚技术有限公司 Video call special effect control method, terminal and computer-readable storage medium
WO2018121477A1 (en) * 2016-12-28 2018-07-05 腾讯科技(深圳)有限公司 Information processing method, terminal, and system, and computer storage medium
CN108337568A (en) * 2018-02-08 2018-07-27 北京潘达互娱科技有限公司 Information reply method, apparatus and device

Also Published As

Publication number Publication date
CN109618181A (en) 2019-04-12

Similar Documents

Publication Publication Date Title
CN109618181B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN110446115B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN107278374B (en) Interactive advertisement display method, terminal and smart city interactive system
CN106971009B (en) Voice database generation method and device, storage medium and electronic equipment
CN108847214B (en) Voice processing method, client, device, terminal, server and storage medium
EP3579140A1 (en) Method and apparatus for processing video
CN111683263B (en) Live broadcast guiding method, device, equipment and computer readable storage medium
EP3862869A1 (en) Method and device for controlling data
CN107430858A (en) The metadata of transmission mark current speaker
CN112040263A (en) Video processing method, video playing method, video processing device, video playing device, storage medium and equipment
CN112653902B (en) Speaker recognition method and device and electronic equipment
EP3118850A1 (en) System and method for providing related content at low power, and computer readable recording medium having program recorded therein
CN111601145A (en) Content display method, device and equipment based on live broadcast and storage medium
CN104866275B (en) Method and device for acquiring image information
CN106358059B (en) Multimedia information processing method, equipment and system
WO2017166651A1 (en) Voice recognition model training method, speaker type recognition method and device
CN108573393B (en) Comment information processing method and device, server and storage medium
CN112399258A (en) Live playback video generation playing method and device, storage medium and electronic equipment
US20170171594A1 (en) Method and electronic apparatus of implementing voice interaction in live video broadcast
CN109032345A (en) Apparatus control method, device, equipment, server-side and storage medium
US20190213998A1 (en) Method and device for processing data visualization information
CN114327182A (en) Special effect display method and device, computer storage medium and electronic equipment
CN112616064A (en) Live broadcast room information processing method and device, computer storage medium and electronic equipment
CN111741321A (en) Live broadcast control method, device, equipment and computer storage medium
CN114064943A (en) Conference management method, conference management device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant