CN113727135B - Live broadcast interaction method and device, electronic equipment, storage medium and program product

Live broadcast interaction method and device, electronic equipment, storage medium and program product

Info

Publication number
CN113727135B
Authority
CN
China
Prior art keywords
virtual
video content
prop
live
interface
Prior art date
Legal status
Active
Application number
CN202111114147.XA
Other languages
Chinese (zh)
Other versions
CN113727135A (en)
Inventor
汤晓
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111114147.XA (CN113727135B)
Publication of CN113727135A
Priority to PCT/CN2022/077169 (WO2023045235A1)
Application granted
Publication of CN113727135B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/2187: Live feed
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/443: OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4437: Implementing a Virtual Machine [VM]
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788: Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure relates to a live broadcast interaction method and apparatus, an electronic device, a storage medium, and a program product. First video content generated in a target virtual scene is displayed in a live interface; in response to a viewing operation of a first account on a prop trading interface, the prop trading interface is displayed in the live interface; and a virtual prop associated with a subject object in the first video content is displayed in the prop trading interface of the live broadcast room. By associating the video content in the live interface with the virtual props, the interactivity between spectators and game props can be improved, thereby guiding spectators watching the live broadcast to trade the virtual props.

Description

Live broadcast interaction method and device, electronic equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular, to a live broadcast interaction method and apparatus, an electronic device, a storage medium, and a program product.
Background
With the progress and development of network technology, webcasting has advanced and spread considerably. Because of the interactivity and entertainment value of live broadcast platforms, more and more users have begun to use them for leisure and entertainment, and game live broadcasting has emerged accordingly.
In the related art, when watching a live game broadcast, a spectator can open a selling interface in which various game props are displayed, and the spectator can purchase any game prop for his or her own game account. However, the interactivity between spectators and game props during live game broadcasting still needs to be improved.
Disclosure of Invention
The present disclosure provides a live broadcast interaction method and apparatus, an electronic device, a storage medium, and a program product, so as to at least solve the problem in the related art that the interactivity between spectators and game props needs to be improved. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a live broadcast interaction method is provided, including:
displaying first video content generated in a target virtual scene in a live interface;
responding to the viewing operation of the first account on the prop trading interface, and displaying the prop trading interface of the live broadcast room in the live broadcast interface;
displaying, in the item transaction interface, a virtual item associated with a subject object in the first video content, where the virtual item is an item used in the target virtual scene.
In one embodiment, the method further comprises:
in response to a trial operation of the virtual prop by the first account, determining a first virtual prop in the virtual props;
acquiring second video content obtained by rendering by using the first virtual prop;
and displaying the second video content in the live interface.
In one embodiment, after the presenting the first video content generated in the target virtual scene, the method further comprises:
and when detecting that the control result of the second account on the main object in the target virtual scene meets a preset condition, displaying a reminding mark corresponding to the control result in the live broadcast interface.
In one embodiment, the method further comprises:
and responding to the viewing operation of the reminding mark, and displaying third video content in the live broadcast interface, wherein the third video content comprises a video clip generated when the control result meets the preset condition.
In one embodiment, the manipulation result is generated when the second account adopts a second virtual prop; while presenting the third video content, the method further comprises:
and displaying the detail information of the second virtual prop in the live broadcast interface.
According to a second aspect of the embodiments of the present disclosure, there is provided a live broadcast interaction method, including:
sending first video content generated in a target virtual scene to a viewer end, wherein the first video content is used for displaying in a live interface of the viewer end;
determining a subject object in the first video content;
acquiring a virtual prop associated with the main object, wherein the virtual prop is a prop used in the target virtual scene;
and sending the virtual prop to the audience, wherein the virtual prop is used for being displayed in a prop trading interface of a live broadcast room.
In one embodiment, the obtaining the virtual prop associated with the subject object includes:
and searching the corresponding relation between the main body object and the virtual prop according to the main body object to obtain the virtual prop corresponding to the main body object.
In one embodiment, the method further comprises:
acquiring a trial request of a first account for a first virtual prop, wherein the trial request carries an identifier of the first virtual prop;
acquiring second video content obtained by rendering by using the first virtual prop according to the identifier of the first virtual prop;
and sending the second video content to the audience.
In one embodiment, after the transmitting the first video content generated in the target virtual scene, the method further comprises:
and when detecting that the control result of the second account on the subject object in the target virtual scene meets a preset condition, sending the control result to the audience, wherein the control result is used for indicating the audience to display a reminding mark in a live broadcast interface.
In one embodiment, after the sending of the manipulation result to the viewer, the method further includes:
and sending third video content to the audience, wherein the third video content comprises a video clip generated when the control result meets the preset condition.
In one embodiment, the manipulation result is generated when the second account adopts a second virtual prop; while transmitting the third video content to the viewer side, the method further comprises:
and sending the detail information of the second virtual prop to the audience.
According to a third aspect of the embodiments of the present disclosure, there is provided a live broadcast interaction apparatus, including:
the first content display module is configured to display first video content generated in a target virtual scene in a live interface;
the transaction interface display module is configured to display, in response to a viewing operation of the first account on the prop trading interface, the prop trading interface of the live broadcast room in the live broadcast interface;
a virtual item presentation module configured to perform presentation of a virtual item associated with a subject object in the first video content in the item trading interface, where the virtual item is an item used in the target virtual scene.
In one embodiment, the apparatus further comprises:
a virtual item determination module configured to determine, in response to a trial operation of the first account on the virtual items, a first virtual item among the virtual items;
the video content acquisition module is configured to execute acquisition of second video content obtained by rendering by adopting the first virtual prop;
a second content presentation module configured to perform presentation of the second video content in the live interface.
In one embodiment, the apparatus further comprises:
and the prompt mark display module is configured to display a prompt mark corresponding to the control result in the live broadcast interface when the control result of the second account on the main body object in the target virtual scene is detected to meet a preset condition.
In one embodiment, the apparatus further comprises:
and the third content display module is configured to execute a viewing operation responding to the reminding mark, and display third video content in the live broadcast interface, wherein the third video content comprises a video clip generated when the control result meets the preset condition.
In one embodiment, the manipulation result is generated when the second account adopts a second virtual prop; the device further comprises:
and the prop information display module is configured to display the detail information of the second virtual prop in the live broadcast interface.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a live broadcast interaction apparatus, including:
the first content sending module is configured to execute sending of first video content generated in a target virtual scene to a viewer, wherein the first video content is used for displaying in a live interface of the viewer;
a subject object determination module configured to perform determining a subject object in the first video content;
a virtual prop obtaining module configured to perform obtaining of a virtual prop associated with the subject object, where the virtual prop is a prop used in the target virtual scene;
and the virtual item sending module is configured to send the virtual item to the audience, and the virtual item is used for being displayed in an item trading interface of a live broadcast room.
In one embodiment, the virtual item obtaining module is configured to perform searching in a corresponding relationship between a main object and a virtual item according to the main object, so as to obtain a virtual item corresponding to the main object.
In one embodiment, the apparatus further comprises:
the system comprises a request acquisition module, a first virtual item selection module and a second virtual item selection module, wherein the request acquisition module is configured to execute a selection request for acquiring a first virtual item from a first account, and the selection request carries an identifier of the first virtual item;
the second content acquisition module is configured to execute the second video content obtained by rendering by adopting the first virtual item according to the identifier of the first virtual item;
a first content transmission module configured to perform transmission of the second video content to the viewer side.
In one embodiment, the apparatus further comprises:
and the control result sending module is configured to send a control result to the audience terminal when detecting that a control result of the second account on the main object in the target virtual scene meets a preset condition, wherein the control result is used for indicating the audience terminal to display a reminding mark in a live broadcast interface.
In one embodiment, the apparatus further comprises:
and the second content sending module is configured to execute sending of third video content to the audience, wherein the third video content comprises a video clip generated when the manipulation result meets the preset condition.
In one embodiment, the manipulation result is generated when the second account adopts a second virtual prop; the device further comprises:
and the item information sending module is configured to execute sending of the detail information of the second virtual item to the audience.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the live interaction method described in any of the above embodiments.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the live interaction method described in any of the above embodiments.
According to a seventh aspect of the embodiments of the present disclosure, a computer program product is provided, where the computer program product includes instructions, and when the instructions are executed by a processor of an electronic device, the electronic device is enabled to execute the live broadcast interaction method described in any of the above embodiments.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the first video content generated in the target virtual scene is displayed in the live broadcasting interface, the item trading interface of the live broadcasting room is displayed in the live broadcasting interface in response to the viewing operation of the first account on the item trading interface, so that the virtual item associated with the main object in the first video content is displayed in the item trading interface of the live broadcasting room, the association between the video content in the live broadcasting interface and the virtual item is realized, and the interactivity between audiences and game items can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a diagram of an application environment illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 2 is a flow diagram illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 3 is a flow diagram illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 4 is a flow diagram illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 5 is a flow diagram illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 6 is a flow diagram illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 7 is a flow diagram illustrating a live interaction method in accordance with an exemplary embodiment.
Fig. 8 is a block diagram illustrating a live interaction device, according to an example embodiment.
Fig. 9 is a block diagram illustrating another live interaction device, according to an example embodiment.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 11 is a block diagram illustrating another electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should also be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are both information and data that are authorized by the user or sufficiently authorized by various parties.
A virtual scene is a scene that is displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be either a two-dimensional virtual scene or a three-dimensional virtual scene, and may be used, for example, for an engagement between at least two virtual characters in the virtual scene. In some embodiments, the virtual scene is typically generated by an application in a computer device such as a terminal and presented based on hardware (e.g., a screen) in the terminal. The terminal may be a mobile terminal such as a smart phone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
The subject object may be a movable object in the virtual scene. The subject object may be at least one of a virtual character, a virtual animal, a virtual vehicle. Illustratively, when the virtual scene is a three-dimensional virtual scene, the subject object may be a three-dimensional stereo model created based on an animated skeleton technique. Each subject object has its own shape, volume and orientation in the three-dimensional virtual scene and occupies a portion of the space in the three-dimensional virtual scene.
A virtual prop is a prop that can be used by a virtual object in the virtual environment. It may be a skin of a game character; a virtual weapon capable of harming other virtual characters, such as a pistol, rifle, sniper rifle, dagger, knife, sword, or axe; a supply prop such as ammunition; an attachment mounted on a specified virtual weapon, such as an extended magazine, a scope, or a silencer; a virtual pendant that adds attributes to a virtual weapon; or a defensive prop such as a shield, armor, or an armored vehicle.
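Purely for illustration, and not as part of the disclosed embodiments, the prop categories listed above could be modeled with a simple TypeScript union type; the category names and fields below are assumptions introduced here, not terms defined by the disclosure.

```typescript
// Hypothetical classification of virtual props (illustrative only).
type VirtualPropCategory =
  | "skin"        // skin of a game character
  | "weapon"      // pistol, rifle, sniper rifle, dagger, knife, sword, axe, ...
  | "supply"      // e.g. ammunition
  | "attachment"  // extended magazine, scope, silencer mounted on a weapon
  | "pendant"     // adds attributes to a virtual weapon
  | "defense";    // shield, armor, armored vehicle

interface CategorizedProp {
  propId: string;
  category: VirtualPropCategory;
}
```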
In conventional game live broadcasting, the anchor can sell game props on behalf of the game, and the current prop-selling scheme in game live broadcasts is as follows: the anchor selects the game props to be sold when live-broadcasting a certain game; the game props selected by the anchor are displayed in a vending panel in the live broadcast room; and a spectator can purchase a game prop in the vending panel into his or her own game account with one tap. Illustratively, anchor A live-broadcasts a certain game and configures the live broadcast room to support viewers in purchasing props such as the skins, heroes, and charms of that game. When spectator B enters the live broadcast room to watch the live broadcast, spectator B can see the prop-selling entrance, find the desired hero or hero skin, and purchase it with one tap. Spectator B can bind game account information in advance; after a successful purchase, the corresponding product appears in the game account, and the anchor obtains a certain benefit. Further research shows that, in the conventional technology, this live broadcast interaction mode is similar to an unmanned supermarket: the interactivity between spectators and game props is low, the spectators' enthusiasm for purchasing cannot be improved, the game props in the vending panel are not targeted, and their relevance to the video content live-broadcast by the anchor is weak.
Based on this, the live interaction method provided by the present disclosure may be applied in the application environment shown in fig. 1. At least one viewer end 110 communicates with the live broadcast server 120 via a network, and the anchor end 130, the live broadcast server 120, and the cloud game server 140 communicate with one another via the network. The viewer end 110 runs an application program that can be used for watching live broadcasts, and the anchor end 130 runs an application program that can be used for live broadcasting; it is understood that the application for watching live broadcasts and the application for live broadcasting may be the same type of application or different types of applications.
The anchor end 130 may run a cloud game, which refers to a game based on cloud computing. In the cloud game operation mode, the main body that runs the game application is separated from the main body that presents the game picture: the game picture can be displayed on the anchor end 130, the anchor end 130 sends game operation instructions to the cloud game server 140, and the cloud game server 140 performs video rendering according to the game operation instructions and returns the game video content to the anchor end 130. The anchor end 130 may further send cloud-game-related information to the live broadcast server 120, where the cloud-game-related information may include the game name selected by the anchor, the game character selected by the anchor, the anchor's game account, and the like. The live broadcast server 120 obtains game video content from the cloud game server 140 according to the cloud-game-related information, and each viewer end 110 obtains first video content from the live broadcast server 120. For any viewer end 110, the obtained first video content is displayed in a live interface, where the first video content is generated in a target virtual scene. The viewer end 110 determines the subject object in the first video content. In response to a viewing operation of the first account on the prop trading interface, the prop trading interface of the live broadcast room is displayed in the live interface, and the virtual prop associated with the subject object in the first video content is displayed in the prop trading interface, where the virtual prop is a prop used in the target virtual scene. The viewer end 110 further receives a trial operation of the first account on the virtual prop, and the live broadcast server 120 obtains a trial request of the first account for the first virtual prop, where the trial request carries the identifier of the first virtual prop. The live broadcast server 120 acquires second video content rendered with the first virtual prop according to the identifier of the first virtual prop and sends the second video content to the viewer end 110. The viewer end 110 displays the second video content in the live interface.
The viewer end 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, and tablet computers; the live broadcast server 120 and the cloud game server 140 may each be implemented by an independent server or by a server cluster composed of a plurality of servers; and the anchor end 130 may be, but is not limited to, various personal computers, notebook computers, smart phones, and tablet computers.
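For readers who prefer a concrete picture of the fig. 1 data flow, the following TypeScript sketch models the pieces of information the description above says are exchanged among the viewer end 110, the live broadcast server 120, the anchor end 130, and the cloud game server 140. All type names, field names, and shapes are assumptions made for illustration and are not part of the disclosure.

```typescript
// Hypothetical message shapes for the fig. 1 data flow (illustrative only).

// Sent by the anchor end 130 to the live broadcast server 120 when a cloud game session starts.
interface CloudGameInfo {
  gameName: string;        // game selected by the anchor
  gameCharacterId: string; // game character (subject object) selected by the anchor
  anchorGameAccount: string;
}

// First video content pulled by a viewer end 110 from the live broadcast server 120.
interface FirstVideoContent {
  liveRoomId: string;
  streamUrl: string;       // video generated in the target virtual scene
  subjectObjectId: string; // subject object controlled by the anchor
}

// Virtual prop entry shown in the prop trading interface of the live broadcast room.
interface VirtualProp {
  propId: string;
  name: string;
  price: number;
  associatedSubjectObjectId: string; // prop usable by this subject object
}
```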
Fig. 2 is a flowchart illustrating a live interaction method according to an exemplary embodiment. As shown in fig. 2, the live interaction method is used in the viewer end 110 in the application environment shown in fig. 1 and includes the following steps.
In step S210, in the live interface, the first video content generated in the target virtual scene is presented.
The target virtual scene may be the virtual scene used by the anchor. Specifically, any viewer end obtains the first video content from the live broadcast server, where the first video content may be video data generated by the anchor operating the subject object in the target virtual scene. The viewer end displays the first video content in the live interface.
In step S220, in response to the viewing operation of the first account on the item transaction interface, an item transaction interface in the live broadcast room is displayed in the live broadcast interface.
Wherein the first account is an account of a viewer user watching a live broadcast at a viewer end.
Specifically, an entrance to the prop trading interface of the live broadcast room may be provided in the live interface. When the viewer end detects a triggering operation of the first account on the entrance, the viewing operation on the prop trading interface is received, and the prop trading interface of the live broadcast room is displayed at the viewer end.
In step S230, in the item transaction interface, a virtual item associated with the subject object in the first video content is displayed, where the virtual item is an item used in the target virtual scene.
The virtual props are props used in the target virtual scene.
Specifically, in some embodiments, the anchor end may send cloud-game-related information to the live broadcast server, and the cloud-game-related information may include the game character selected at the anchor end, so that the subject object in the first video content can be determined according to the game character. The live broadcast server may obtain the virtual props associated with the subject object from the cloud game server and send them to the viewer end, which thereby obtains the virtual props associated with the subject object. In some embodiments, the viewer end may itself identify the first video content, determine the subject object in the first video content, and then obtain the virtual props associated with the subject object from the live broadcast server according to the subject object in the first video content. After obtaining the virtual props associated with the subject object, the viewer end displays the obtained virtual props in the prop trading interface.
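As a rough illustration of how a viewer end might carry out this step, the sketch below fetches the virtual props associated with the subject object and hands them to the prop trading interface. The endpoint URL, query parameter, and renderPropTradingPanel helper are hypothetical; the disclosure does not specify any particular API.

```typescript
// Minimal sketch of step S230 at the viewer end (assumed API, not the disclosed implementation).
interface TradablePropEntry {
  propId: string;
  name: string;
  price: number;
}

async function showPropTradingInterface(
  liveRoomId: string,
  subjectObjectId: string
): Promise<void> {
  // Ask the live broadcast server for the virtual props associated with the
  // subject object appearing in the first video content.
  const resp = await fetch(
    `https://live.example.com/rooms/${liveRoomId}/props?subjectObject=${subjectObjectId}`
  );
  const props: TradablePropEntry[] = await resp.json();

  // Show the returned props in the prop trading interface of the live broadcast room.
  renderPropTradingPanel(props);
}

// Placeholder for the UI layer; a real client would bind this to its view framework.
declare function renderPropTradingPanel(props: TradablePropEntry[]): void;
```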
According to the live broadcast interaction method, the first video content generated in the target virtual scene is displayed in the live broadcast interface, the item transaction interface of the live broadcast room is displayed in the live broadcast interface in response to the viewing operation of the first account on the item transaction interface, so that the virtual item associated with the main body object in the first video content is displayed in the item transaction interface of the live broadcast room, the association between the video content in the live broadcast interface and the virtual item is realized, the interactivity between audiences and game items can be improved, and the audiences viewing the live broadcast room are guided to trade the virtual item.
In an exemplary embodiment, as shown in fig. 3, the method further comprises the steps of:
in step S310, in response to the trial operation of the virtual item by the first account, a first virtual item is determined among the virtual items.
In step S320, a second video content rendered by using the first virtual item is obtained.
In step S330, the second video content is presented in the live interface.
Wherein the first virtual item may be a virtual item selected by the first account.
Specifically, the viewer end displays the virtual props in the prop trading interface of the live broadcast room, and the viewer can select any of the displayed virtual props, so that the viewer end, in response to the trial operation of the first account on the virtual props, determines the first virtual prop among the virtual props. The first virtual prop may be provided with an identifier. The viewer end sends a trial request for the first virtual prop to the live broadcast server, where the trial request carries the identifier of the first virtual prop. According to the identifier of the first virtual prop, the live broadcast server may obtain the second video content rendered with the first virtual prop; the live broadcast server may also obtain the second video content rendered with the first virtual prop from the cloud game server according to the identifier of the first virtual prop. The viewer end acquires the second video content from the live broadcast server and displays the second video content in the live interface.
It should be noted that the second video content is shown on viewer end A, which triggered the trial operation, while the video content shown on viewer end B, which did not trigger the trial operation, is identical to the anchor end's current video content; if the virtual prop being tried in the second video content is different from the virtual prop currently used at the anchor end, the second video content is different from the anchor end's current video content. The anchor end's current video content is sent to the anchor end by the cloud game server, the second video content is sent to the live broadcast server by the cloud game server, and the viewer end pulls the second video content from the live broadcast server. The cloud game server may include at least two cloud game hosts, and the anchor end's current video content and the second video content may be provided by different cloud game hosts.
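The per-viewer nature of the trial can be illustrated with the following sketch of the viewer-end flow: a trial request carrying the prop identifier goes to the live broadcast server, and only the requesting viewer switches to the returned trial stream. The endpoint, payload, and switchPlayerSource helper are assumptions, not the disclosed implementation.

```typescript
// Sketch of the trial flow from the viewer end's point of view (all endpoints hypothetical).
async function tryPropOnStream(liveRoomId: string, propId: string): Promise<void> {
  // 1. Send a trial request carrying the identifier of the first virtual prop.
  const resp = await fetch(`https://live.example.com/rooms/${liveRoomId}/prop-trial`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ propId }),
  });

  // 2. The live broadcast server returns the second video content, i.e. a stream
  //    rendered by a cloud game host with the first virtual prop applied.
  const { trialStreamUrl } = (await resp.json()) as { trialStreamUrl: string };

  // 3. Only this viewer switches to the trial stream; other viewers keep watching
  //    the anchor's current video content.
  switchPlayerSource(trialStreamUrl);
}

declare function switchPlayerSource(url: string): void; // provided by the player layer
```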
In this embodiment, in response to the trial operation of the first account on the virtual props, the second video content rendered with the first virtual prop is obtained and displayed in the live interface, so that the trial effect of the virtual prop is shown to the spectator intuitively through the second video content. The spectator can thus learn the trial effect of a virtual prop before purchasing it, which improves the spectator's purchase experience.
In an exemplary embodiment, after presenting the first video content generated in the target virtual scene, the method further comprises: and when detecting that the control result of the second account on the main body object in the target virtual scene meets the preset condition, displaying a reminding mark corresponding to the control result in the live broadcast interface.
The second account is the account of the anchor user who provides live video content by performing live broadcasting through the anchor end. Specifically, the second account controls the subject object in the target virtual scene; if the control result of the second account on the subject object meets the preset condition, it indicates that the second account's control performance is a highlight. The live broadcast server may detect the control result of the second account, and if the control result meets the preset condition, the live broadcast server may send the reminder mark corresponding to the control result to the viewer end, so that the viewer end displays the reminder mark corresponding to the control result in the live interface.
In some embodiments, the reminder mark may be a static special-effect prompt such as a bubble or a red dot, or a dynamic special-effect prompt such as a highlight animation.
In some embodiments, the preset condition may be a predefined highlight moment in the game. The cloud game server may identify the highlight moment; when the cloud game server identifies a highlight moment, it sends a reminder message for the highlight moment to the live broadcast server, so that the reminder mark is sent to the viewer end through the live broadcast server. In other embodiments, the live broadcast server may identify the highlight moment in the game through speech recognition or image recognition, and send the reminder to the viewer end when the highlight moment is identified.
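One way the reminder-mark push described above could look in code is sketched below, assuming a WebSocket-style push channel between the live broadcast server and the viewer ends of a room; the event shape, message format, and transport are all assumptions for illustration.

```typescript
// Sketch of how the live broadcast server might forward a highlight notification
// from the cloud game server to viewer ends (names and transport are assumptions).
interface HighlightEvent {
  liveRoomId: string;
  anchorAccountId: string;   // the second account
  usedPropId?: string;       // the second virtual prop, if the highlight involved one
  clipUrl: string;           // third video content: clip of the highlight moment
}

function onCloudGameHighlight(event: HighlightEvent, roomSockets: Set<WebSocket>): void {
  // Push only a lightweight reminder; the clip itself is fetched on demand
  // when a viewer taps the reminder mark.
  const reminder = JSON.stringify({
    type: "highlight-reminder",
    liveRoomId: event.liveRoomId,
    usedPropId: event.usedPropId ?? null,
  });
  for (const socket of roomSockets) {
    socket.send(reminder);
  }
}
```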
In this embodiment, the reminder mark corresponding to the control result is displayed in the live interface, so that the spectator is reminded that a highlight moment has occurred, which makes it convenient for the user to view the highlight-moment video and improves the user experience.
In an exemplary embodiment, the method further comprises: and displaying the third video content in the live interface in response to the viewing operation of the reminding mark.
The third video content includes a video clip generated when the control result meets the preset condition. Specifically, the viewer can click the reminder mark, so that the viewer end responds to the viewing operation on the reminder mark and sends a viewing request to the live broadcast server. The live broadcast server responds to the viewing request and sends the third video content to the viewer end, so that the viewer end displays the third video content in the live interface. In some embodiments, the third video content on the live broadcast server may be a video clip delivered to the live broadcast server when the cloud game server detects that the control result of the second account meets the preset condition.
In this embodiment, the third video content is displayed in the live interface in response to the viewing operation on the reminder mark, which makes it convenient for the user to view the highlight-moment video and reduces the user's operation cost.
In an exemplary embodiment, the manipulation result is generated when the second account adopts the second virtual prop; while presenting the third video content, the method further comprises: and displaying the detail information of the second virtual prop in a live interface.
Specifically, the viewer can click the reminder mark, so that the viewer end responds to the viewing operation on the reminder mark and sends a viewing request to the live broadcast server. The live broadcast server responds to the viewing request and sends the third video content and the detail information of the second virtual prop to the viewer end, so that the viewer end displays the third video content and the detail information of the second virtual prop in the live interface. In some embodiments, the third video content on the live broadcast server may be a video clip delivered to the live broadcast server when the cloud game server detects that the control result of the second account meets the preset condition, and the detail information of the second virtual prop may be detail information delivered to the live broadcast server when the cloud game server detects that the control result of the second account meets the preset condition.
In some embodiments, the detail information of the second virtual prop may be an introduction to the gain the second account obtained by using the second virtual prop. The gain introduction may describe in detail the gain brought by the second virtual prop at the highlight moment, so that the user can clearly see the practical combat value of the second virtual prop.
In this embodiment, in response to the viewing operation on the reminder mark, the detail information of the second virtual prop is displayed in the live interface, which makes it convenient for the user to learn the detail information of the second virtual prop; this can increase the user's enthusiasm for purchasing and improve the experience of purchasing virtual props.
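A minimal sketch of the viewer-end handling of a tap on the reminder mark follows, combining the two embodiments above: the highlight clip (third video content) and the detail information of the second virtual prop are fetched together and shown in the live interface. The endpoint, response shape, and UI helpers are hypothetical.

```typescript
// Sketch of the viewer-end handling when the reminder mark is tapped (hypothetical API).
interface PropDetails {
  propId: string;
  name: string;
  gainIntroduction: string; // gain the second account obtained from the second virtual prop
}

async function onReminderMarkTapped(liveRoomId: string): Promise<void> {
  // Request the highlight clip (third video content) and the detail information
  // of the second virtual prop in one round trip.
  const resp = await fetch(`https://live.example.com/rooms/${liveRoomId}/highlight`);
  const { clipUrl, propDetails } = (await resp.json()) as {
    clipUrl: string;
    propDetails: PropDetails | null;
  };

  playClipInLiveInterface(clipUrl);                 // show the third video content
  if (propDetails) showPropDetailCard(propDetails); // show details of the second virtual prop
}

declare function playClipInLiveInterface(url: string): void;
declare function showPropDetailCard(details: PropDetails): void;
```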
Fig. 4 is a flowchart illustrating a live interaction method according to an exemplary embodiment. As shown in fig. 4, the live interaction method is used in the viewer end 110 in the application environment shown in fig. 1 and includes the following steps.
In step S402, in the live interface, the first video content generated in the target virtual scene is presented.
In step S404, a subject object in the first video content is determined.
In step S406, a virtual item associated with the subject object is obtained, where the virtual item is an item used in the target virtual scene.
In step S408, in response to the viewing operation of the first account on the item transaction interface, the virtual item is displayed in the item transaction interface of the live broadcast room.
In step S410, when it is detected that the control result of the second account on the subject object in the target virtual scene meets the preset condition, a prompt mark corresponding to the control result is displayed in the live interface.
And the control result is generated when the second account adopts the second virtual prop.
In step S412, in response to the viewing operation of the reminder mark, the third video content and the detail information of the second virtual item are displayed in the live interface.
The third video content comprises a video clip generated when the control result meets the preset condition.
In step S414, in response to the trial operation of the first account on the virtual item, determining a first virtual item in the virtual item;
in step S416, a second video content rendered by using the first virtual item is obtained.
In step S418, the second video content is presented in the live interface.
Fig. 5 is a flowchart illustrating a live interaction method according to an exemplary embodiment. As shown in fig. 5, the live interaction method is used in the live broadcast server 120 in the application scenario described above and includes the following steps.
In step S510, a first video content generated in the target virtual scene is sent to the viewer, where the first video content is used for displaying in a live interface of the viewer.
In step S520, a subject object in the first video content is determined.
In step S530, a virtual item associated with the subject object is obtained, where the virtual item is an item used in the target virtual scene.
In step S540, the virtual item is sent to the audience, and the virtual item is used for displaying in an item transaction interface of the live broadcast room.
In an exemplary embodiment, obtaining the virtual prop associated with the subject object includes: and searching the corresponding relation between the main body object and the virtual prop according to the main body object to obtain the virtual prop corresponding to the main body object.
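The correspondence lookup described in this embodiment can be illustrated with a small in-memory map; the identifiers below are made up for illustration, and a real system would presumably back this with the cloud game server's data rather than a hard-coded table.

```typescript
// Minimal sketch of the subject-object-to-props correspondence lookup (data is illustrative).
const subjectObjectToProps: ReadonlyMap<string, string[]> = new Map([
  ["hero-101", ["skin-a", "skin-b", "weapon-x"]],
  ["hero-102", ["skin-c", "pendant-y"]],
]);

// Given the subject object determined from the first video content, return the
// identifiers of the virtual props associated with it.
function lookupAssociatedProps(subjectObjectId: string): string[] {
  return subjectObjectToProps.get(subjectObjectId) ?? [];
}
```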
In an exemplary embodiment, as shown in fig. 6, the method further comprises:
in step S610, a trial request of the first account for the first virtual item is obtained, and the selection request carries an identifier of the first virtual item.
In step S620, according to the identifier of the first virtual item, a second video content rendered by using the first virtual item is obtained.
In step S630, the second video content is transmitted to the viewer.
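Putting steps S610 to S630 together, a sketch of the live broadcast server's handling of a trial request might look as follows; the cloud game rendering endpoint and its parameters are assumptions, since the disclosure only states that the second video content is obtained according to the prop identifier.

```typescript
// Sketch of the live broadcast server's handling of a trial request (S610 to S630);
// the cloud game API and its parameters are assumptions.
interface TrialRequest {
  viewerAccountId: string; // the first account
  propId: string;          // identifier of the first virtual prop
  liveRoomId: string;
}

async function handleTrialRequest(req: TrialRequest): Promise<{ trialStreamUrl: string }> {
  // Ask a cloud game host to render video content with the first virtual prop applied,
  // on a session separate from the one driving the anchor's current video content.
  const rendered = await fetch("https://cloudgame.example.com/render-with-prop", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ propId: req.propId, liveRoomId: req.liveRoomId }),
  });
  const { streamUrl } = (await rendered.json()) as { streamUrl: string };

  // The second video content is returned only to the requesting viewer end.
  return { trialStreamUrl: streamUrl };
}
```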
In an exemplary embodiment, after transmitting the first video content generated in the target virtual scene, the method further includes: and when detecting that the control result of the second account on the main body object in the target virtual scene meets the preset condition, sending the control result to the audience, wherein the control result is used for indicating the audience to display the reminding mark in the live broadcast interface.
In an exemplary embodiment, after sending the manipulation result to the viewer, the method further includes: and sending third video content to the audience, wherein the third video content comprises a video clip generated when the control result meets the preset condition.
In an exemplary embodiment, the manipulation result is generated when the second account adopts the second virtual prop; while transmitting the third video content to the viewer side, the method further comprises: and sending the detail information of the second virtual prop to the audience.
Fig. 7 is a flowchart illustrating a live interaction method according to an exemplary embodiment. As shown in fig. 7, the live interaction method is used in the live broadcast server 120 in the application scenario described above and includes the following steps.
In step S702, first video content generated in the target virtual scene is sent to the viewer end, where the first video content is used for displaying in a live interface of the viewer end.
In step S704, a subject object in the first video content is determined.
In step S706, a virtual item associated with the subject object is acquired.
The virtual props are props used in the target virtual scene. Specifically, according to the main body object, the corresponding relation between the main body object and the virtual prop is searched, and the virtual prop corresponding to the main body object is obtained.
In step S708, the virtual item is sent to the audience, and the virtual item is used for displaying in an item transaction interface of the live broadcast room.
In step S710, a trial request of the first account for the first virtual item is obtained, where the trial request carries an identifier of the first virtual item.
In step S712, according to the identifier of the first virtual item, a second video content obtained by rendering using the first virtual item is obtained.
In step S714, the second video content is transmitted to the viewer.
In step S716, when it is detected that the manipulation result of the second account on the subject object in the target virtual scene meets the preset condition, the manipulation result is sent to the viewer.
And the control result is used for indicating the audience to display the reminding mark in the live broadcast interface. The control result is generated when the second account adopts the second virtual prop.
In step S718, the third video content and the detail information of the second virtual item are sent to the audience.
The third video content comprises a video clip generated when the control result meets the preset condition.
Regarding the server-side live broadcast interaction method in the above embodiment, the specific manner of each step has been described in detail in the embodiment of the viewer-side live broadcast interaction method, and will not be elaborated here.
It should be understood that, although the steps in the above flowcharts are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to that order, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It is understood that the same/similar parts between the embodiments of the method described above in this specification can be referred to each other, and each embodiment focuses on the differences from the other embodiments, and it is sufficient that the relevant points are referred to the descriptions of the other method embodiments.
Fig. 8 is a block diagram illustrating a live interaction device 800, according to an example embodiment. Referring to fig. 8, the apparatus 800 includes a first content presentation module 802, a trading interface presentation module 804, and a virtual item presentation module 806.
A first content presentation module 802 configured to perform presentation of first video content generated in a target virtual scene in a live interface; a trading interface presentation module 804 configured to display, in response to a viewing operation of the first account on the prop trading interface, the prop trading interface of the live broadcast room in the live interface; and a virtual item presentation module 806 configured to perform presentation, in the item trading interface, of a virtual item associated with a subject object in the first video content, the virtual item being an item used in the target virtual scene.
In an exemplary embodiment, the apparatus 800 further comprises: a virtual prop determination module configured to determine, in response to a trial operation of the first account on the virtual props, a first virtual prop among the virtual props; a video content acquisition module configured to execute acquisition of second video content obtained by rendering with the first virtual prop; and a second content presentation module configured to execute presentation of the second video content in the live interface.
In an exemplary embodiment, the apparatus 800 further comprises: and the prompt mark display module is configured to display a prompt mark corresponding to the control result in the live broadcast interface when the control result of the second account on the main body object in the target virtual scene is detected to meet the preset condition.
In an exemplary embodiment, the apparatus 800 further comprises: and the third content display module is configured to execute a viewing operation responding to the reminding mark and display third video content in the live broadcast interface, wherein the third video content comprises a video clip generated when the control result meets a preset condition.
In an exemplary embodiment, the manipulation result is generated when the second account adopts the second virtual prop; the apparatus 800 further comprises: and the prop information display module is configured to display the detail information of the second virtual prop in the live broadcast interface.
Fig. 9 is a block diagram illustrating a live interaction device 900, according to an example embodiment. Referring to fig. 9, the apparatus 900 includes a first content transmitting module 902, a subject object determining module 904, a virtual item acquiring module 906, and a virtual item transmitting module 908.
A first content sending module 902, configured to execute sending a first video content generated in a target virtual scene to a viewer, where the first video content is used for displaying in a live interface of the viewer; a subject object determination module 904 configured to perform determining a subject object in the first video content; a virtual item obtaining module 906, configured to perform obtaining of a virtual item associated with the subject object, where the virtual item is an item used in a target virtual scene; and a virtual item sending module 908 configured to send the virtual item to the audience, where the virtual item is used for displaying in an item trading interface of the live broadcast room.
In an exemplary embodiment, the virtual item obtaining module 906 is configured to perform searching in the corresponding relationship between the main object and the virtual item according to the main object, so as to obtain the virtual item corresponding to the main object.
In an exemplary embodiment, the apparatus 900 further comprises: a request acquisition module configured to acquire a trial request of the first account for a first virtual item, wherein the trial request carries an identifier of the first virtual item; a second content acquisition module configured to execute acquisition, according to the identifier of the first virtual item, of the second video content obtained by rendering with the first virtual item; and a first content sending module configured to execute sending of the second video content to the audience.
In an exemplary embodiment, the apparatus 900 further comprises: and the control result sending module is configured to send a control result to the audience terminal when detecting that the control result of the second account on the main object in the target virtual scene meets a preset condition, wherein the control result is used for indicating the audience terminal to display the reminding mark in the live broadcast interface.
In an exemplary embodiment, the apparatus 900 further comprises: and the second content sending module is configured to execute sending of third video content to the audience, wherein the third video content comprises a video clip generated when the control result meets the preset condition.
In an exemplary embodiment, the manipulation result is generated when the second account adopts the second virtual prop; the apparatus 900 further comprises: and the item information sending module is configured to execute sending of detail information of the second virtual item to the audience.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating an electronic device 1000 for live interaction in accordance with an example embodiment. For example, the electronic device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to fig. 10, electronic device 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, interface to input/output (I/O) 1012, sensor component 1014, and communications component 1016.
The processing component 1002 generally controls the overall operation of the electronic device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1002 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1002 may include one or more modules that facilitate interaction between processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operation at the electronic device 1000. Examples of such data include instructions for any application or method operating on the electronic device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1004 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
The power supply component 1006 provides power to the various components of the electronic device 1000. The power components 1006 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the electronic device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1008 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1000 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a microphone (MIC) configured to receive external audio signals when the electronic device 1000 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, the audio component 1010 also includes a speaker for outputting audio signals.
The I/O interface 1012 provides an interface between the processing component 1002 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. The buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1014 includes one or more sensors for providing status assessments of various aspects of the electronic device 1000. For example, the sensor component 1014 may detect an open/closed state of the electronic device 1000 and the relative positioning of components, such as a display and a keypad of the electronic device 1000. The sensor component 1014 may also detect a change in position of the electronic device 1000 or a component of the electronic device 1000, the presence or absence of user contact with the electronic device 1000, the orientation or acceleration/deceleration of the electronic device 1000, and a change in temperature of the electronic device 1000. The sensor component 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the electronic device 1000 and other devices. The electronic device 1000 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
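Purely as an illustration (not part of the claimed subject matter), the following Kotlin sketch, assuming standard Android APIs and hypothetical function names, shows one way a client might check whether two of the transports mentioned above are available:

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities
import android.nfc.NfcAdapter

// Hypothetical availability check for NFC.
fun hasNfc(context: Context): Boolean =
    NfcAdapter.getDefaultAdapter(context) != null

// Hypothetical check for whether the active network uses the WiFi transport.
fun isOnWifi(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    return caps.hasTransport(NetworkCapabilities.TRANSPORT_WIFI)
}
```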
In an exemplary embodiment, the electronic device 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 1004 comprising instructions, executable by the processor 1020 of the electronic device 1000 to perform the above-described method is also provided. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which includes instructions executable by the processor 1020 of the electronic device 1000 to perform the above-described method.
Fig. 11 is a block diagram illustrating an electronic device 1100 for live interaction in accordance with an example embodiment. For example, the electronic device 1100 may be a server. Referring to fig. 11, electronic device 1100 includes a processing component 1120 that further includes one or more processors, and memory resources, represented by memory 1122, for storing instructions, such as application programs, that are executable by processing component 1120. The application programs stored in memory 1122 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1120 is configured to execute instructions to perform the above-described methods.
The electronic device 1100 may further include a power component 1124 configured to perform power management of the electronic device 1100, a wired or wireless network interface 1126 configured to connect the electronic device 1100 to a network, and an input/output (I/O) interface 1128. The electronic device 1100 may operate based on an operating system stored in the memory 1122, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 1122 comprising instructions, executable by a processor of the electronic device 1100 to perform the above-described method is also provided. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which includes instructions executable by a processor of the electronic device 1100 to perform the above-described method.
It should be noted that the apparatus, the electronic device, the computer-readable storage medium, the computer program product, and the like described above in accordance with the method embodiments may also include other embodiments; for specific implementations, reference may be made to the descriptions of the related method embodiments, which are not detailed herein.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

1. A live interaction method, comprising:
displaying, in a live interface, first video content generated in a target virtual scene;
in response to a viewing operation of a first account for a prop trading interface, displaying the prop trading interface of a live broadcast room in the live interface;
displaying, in the prop trading interface, a virtual prop associated with a subject object in the first video content, wherein the virtual prop is a prop used by the subject object in the target virtual scene and is obtained by searching, according to the subject object, a correspondence relationship between subject objects and virtual props; and the subject object is an object currently operated in the target virtual scene by a second account of an anchor.
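As a non-limiting illustration of claim 1, the following Kotlin sketch shows one possible way a viewer client could look up, from a correspondence map, the virtual props associated with the subject object and show them in the prop trading interface; all type and function names here are hypothetical assumptions, not taken from this disclosure:

```kotlin
// Hypothetical data type for the sketch.
data class VirtualProp(val id: String, val name: String, val price: Int)

class PropTradingPanel(
    // Correspondence relationship between a subject object id and the props
    // it uses in the target virtual scene (assumed to be provided elsewhere).
    private val propsBySubject: Map<String, List<VirtualProp>>
) {
    // Called when the first account performs the viewing operation on the
    // prop trading interface of the live broadcast room.
    fun onViewPropsClicked(subjectObjectId: String): List<VirtualProp> {
        // Search the correspondence relationship according to the subject object.
        val props = propsBySubject[subjectObjectId].orEmpty()
        render(props)
        return props
    }

    private fun render(props: List<VirtualProp>) {
        // A real client would populate the prop trading interface UI here.
        props.forEach { println("show prop ${it.name} (${it.price} coins)") }
    }
}
```

A lookup keyed by the subject object's identifier is only one possible encoding of the recited correspondence relationship; a database query or a server-maintained mapping would serve equally well.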
2. The live interaction method of claim 1, further comprising:
in response to a trial operation of the first account on the virtual prop, determining a first virtual prop from the virtual props;
acquiring second video content rendered with the first virtual prop;
and displaying the second video content in the live interface.
3. The live interaction method of claim 1, wherein after the displaying of the first video content generated in the target virtual scene, the method further comprises:
and when it is detected that a manipulation result of the second account on the subject object in the target virtual scene meets a preset condition, displaying, in the live interface, a reminder mark corresponding to the manipulation result.
4. The live interaction method of claim 3, further comprising:
and in response to a viewing operation on the reminder mark, displaying third video content in the live interface, wherein the third video content comprises a video clip generated when the manipulation result meets the preset condition.
5. The live interaction method according to claim 4, wherein the manipulation result is generated when the second account adopts a second virtual prop; while displaying the third video content, the method further comprises:
and displaying the detail information of the second virtual prop in the live interface.
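Continuing the illustration (and reusing the hypothetical VirtualProp type from the previous sketch), claims 2-5 could map onto viewer-side handlers roughly as follows; the API interface, field names, and rendering stubs are assumptions, not the patented implementation:

```kotlin
// Minimal supporting types assumed by this sketch.
data class ManipulationResult(
    val meetsPresetCondition: Boolean,
    val clipId: String,
    val secondProp: VirtualProp?   // the prop that produced the highlight, if any
)

interface LiveApi {
    suspend fun fetchRenderedVideo(propId: String): ByteArray   // second video content
    suspend fun fetchHighlightClip(clipId: String): ByteArray   // third video content
}

class LiveRoomClient(private val api: LiveApi) {

    // Claim 2: the first account tries a prop; the client fetches video content
    // re-rendered with that prop and shows it in the live interface.
    suspend fun onTryProp(firstProp: VirtualProp) {
        val secondVideo = api.fetchRenderedVideo(firstProp.id)
        showInLiveInterface(secondVideo)
    }

    // Claim 3: show a reminder mark when the manipulation result meets the
    // preset condition.
    fun onManipulationResult(result: ManipulationResult) {
        if (result.meetsPresetCondition) showReminderMark(result)
    }

    // Claims 4-5: tapping the reminder mark plays the highlight clip and, when
    // a second prop produced it, shows that prop's detail information.
    suspend fun onReminderMarkClicked(result: ManipulationResult) {
        showInLiveInterface(api.fetchHighlightClip(result.clipId))
        result.secondProp?.let { showPropDetails(it) }
    }

    private fun showInLiveInterface(video: ByteArray) { /* render the video */ }
    private fun showReminderMark(result: ManipulationResult) { /* show the mark */ }
    private fun showPropDetails(prop: VirtualProp) { /* show prop details */ }
}
```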
6. A live interaction method, comprising:
sending first video content generated in a target virtual scene to a viewer end, wherein the first video content is to be displayed in a live interface of the viewer end;
determining a subject object in the first video content, wherein the subject object is an object currently operated in the target virtual scene by a second account of an anchor;
searching, according to the subject object, a correspondence relationship between subject objects and virtual props to obtain a virtual prop corresponding to the subject object, wherein the virtual prop is a prop used by the subject object in the target virtual scene;
and sending the virtual prop to the viewer end, wherein the virtual prop is to be displayed in a prop trading interface of a live broadcast room.
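On the server side, claim 6 can be pictured with a sketch like the one below, reusing the hypothetical VirtualProp type from the earlier sketch; the push interface and the in-memory correspondence map are illustrative assumptions only:

```kotlin
// Hypothetical push channel from the server to viewer ends in a live room.
interface ViewerPush {
    fun sendVideo(roomId: String, video: ByteArray)
    fun sendProps(roomId: String, props: List<VirtualProp>)
}

class LivePropService(
    // Correspondence relationship between subject objects and the virtual
    // props they use in the target virtual scene.
    private val propsBySubject: Map<String, List<VirtualProp>>,
    private val push: ViewerPush
) {
    fun onFrameBroadcast(roomId: String, firstVideoContent: ByteArray, subjectObjectId: String) {
        // Send the first video content for display in the live interface.
        push.sendVideo(roomId, firstVideoContent)

        // Search the correspondence relationship according to the subject
        // object currently operated by the anchor's second account.
        val props = propsBySubject[subjectObjectId].orEmpty()

        // Send the props so the viewer end can show them in the prop trading
        // interface of the live broadcast room.
        push.sendProps(roomId, props)
    }
}
```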
7. The live interaction method of claim 6, further comprising:
acquiring a selection request of a first account for a first virtual prop, wherein the selection request carries an identifier of the first virtual prop;
acquiring, according to the identifier of the first virtual prop, second video content rendered with the first virtual prop;
and sending the second video content to the viewer end.
8. The live interaction method of claim 6, wherein after the sending of the first video content generated in the target virtual scene, the method further comprises:
and when it is detected that a manipulation result of the second account on the subject object in the target virtual scene meets a preset condition, sending the manipulation result to the viewer end, wherein the manipulation result instructs the viewer end to display a reminder mark in the live interface.
9. The live interaction method of claim 8, wherein after the sending of the manipulation result to the viewer end, the method further comprises:
and sending third video content to the viewer end, wherein the third video content comprises a video clip generated when the manipulation result meets the preset condition.
10. The live interaction method according to claim 9, wherein the manipulation result is generated when the second account adopts a second virtual prop; while sending the third video content to the viewer end, the method further comprises:
and sending the detail information of the second virtual prop to the viewer end.
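Claims 7-10 could similarly be sketched as server-side handlers; the renderer, result type, and push channel below are illustrative assumptions and again reuse the hypothetical VirtualProp type from the earlier sketches:

```kotlin
// Minimal supporting types assumed by this sketch.
data class ServerManipulationResult(
    val meetsPresetCondition: Boolean,
    val highlightClip: ByteArray,     // third video content
    val secondProp: VirtualProp?      // prop that produced the result, if any
)

interface SceneRenderer {
    fun renderWithProp(propId: String): ByteArray   // second video content
}

interface RoomPush {
    fun sendVideo(roomId: String, video: ByteArray)
    fun sendManipulationResult(roomId: String, result: ServerManipulationResult)
    fun sendPropDetails(roomId: String, prop: VirtualProp)
}

class LiveHighlightService(
    private val renderer: SceneRenderer,
    private val push: RoomPush
) {
    // Claim 7: the first account selects a first virtual prop; the server
    // renders second video content with it and sends it to the viewer end.
    fun onPropSelection(roomId: String, firstPropId: String) {
        push.sendVideo(roomId, renderer.renderWithProp(firstPropId))
    }

    // Claims 8-10: when the second account's manipulation result meets the
    // preset condition, notify the viewer end, then send the highlight clip
    // and the details of the prop that produced it.
    fun onManipulation(roomId: String, result: ServerManipulationResult) {
        if (!result.meetsPresetCondition) return
        push.sendManipulationResult(roomId, result)                 // claim 8
        push.sendVideo(roomId, result.highlightClip)                // claim 9
        result.secondProp?.let { push.sendPropDetails(roomId, it) } // claim 10
    }
}
```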
11. A live interaction device, comprising:
a first content display module configured to display, in a live interface, first video content generated in a target virtual scene;
a trading interface display module configured to display, in response to a viewing operation of a first account for a prop trading interface, the prop trading interface of a live broadcast room in the live interface;
a virtual prop display module configured to display, in the prop trading interface, a virtual prop associated with a subject object in the first video content, wherein the virtual prop is a prop used by the subject object in the target virtual scene and is obtained by searching, according to the subject object, a correspondence relationship between subject objects and virtual props; and the subject object is an object currently operated in the target virtual scene by a second account of an anchor.
12. The live interaction device of claim 11, wherein the device further comprises:
a virtual prop determination module configured to determine, in response to a trial operation of the first account on the virtual prop, a first virtual prop from the virtual props;
a video content acquisition module configured to acquire second video content rendered with the first virtual prop;
a second content presentation module configured to display the second video content in the live interface.
13. The live interaction device of claim 11, wherein the device further comprises:
a reminder mark display module configured to display, in the live interface, a reminder mark corresponding to a manipulation result when it is detected that the manipulation result of the second account on the subject object in the target virtual scene meets a preset condition.
14. The live interaction device of claim 13, wherein the device further comprises:
a third content display module configured to display, in response to a viewing operation on the reminder mark, third video content in the live interface, wherein the third video content comprises a video clip generated when the manipulation result meets the preset condition.
15. The live interaction device according to claim 14, wherein the manipulation result is generated when the second account adopts a second virtual prop; the device further comprises:
a prop information display module configured to display the detail information of the second virtual prop in the live interface.
16. A live interaction device, comprising:
a first content sending module configured to send first video content generated in a target virtual scene to a viewer end, wherein the first video content is to be displayed in a live interface of the viewer end;
a subject object determination module configured to determine a subject object in the first video content, the subject object being an object currently operated in the target virtual scene by a second account of an anchor;
a virtual prop obtaining module configured to search, according to the subject object, a correspondence relationship between subject objects and virtual props to obtain a virtual prop corresponding to the subject object, wherein the virtual prop is a prop used by the subject object in the target virtual scene;
and a virtual prop sending module configured to send the virtual prop to the viewer end, wherein the virtual prop is to be displayed in a prop trading interface of a live broadcast room.
17. The live interaction device of claim 16, wherein the device further comprises:
a request acquisition module configured to acquire a selection request of a first account for a first virtual prop, wherein the selection request carries an identifier of the first virtual prop;
a second content acquisition module configured to acquire, according to the identifier of the first virtual prop, second video content rendered with the first virtual prop;
and the first content sending module is further configured to send the second video content to the viewer end.
18. The live interaction device of claim 16, wherein the device further comprises:
a manipulation result sending module configured to send a manipulation result to the viewer end when it is detected that the manipulation result of the second account on the subject object in the target virtual scene meets a preset condition, wherein the manipulation result instructs the viewer end to display a reminder mark in the live interface.
19. The live interaction device of claim 18, wherein the device further comprises:
a second content sending module configured to send third video content to the viewer end, wherein the third video content comprises a video clip generated when the manipulation result meets the preset condition.
20. The live interaction device according to claim 19, wherein the manipulation result is generated when the second account adopts a second virtual prop; the device further comprises:
a prop information sending module configured to send the detail information of the second virtual prop to the viewer end.
21. An electronic device, comprising:
a processor; and a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the live interaction method of any of claims 1-10.
22. A computer-readable storage medium having instructions thereon that, when executed by a processor of an electronic device, enable the electronic device to perform the live interaction method of any of claims 1-10.
CN202111114147.XA 2021-09-23 2021-09-23 Live broadcast interaction method and device, electronic equipment, storage medium and program product Active CN113727135B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111114147.XA CN113727135B (en) 2021-09-23 2021-09-23 Live broadcast interaction method and device, electronic equipment, storage medium and program product
PCT/CN2022/077169 WO2023045235A1 (en) 2021-09-23 2022-02-22 Live stream interaction method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111114147.XA CN113727135B (en) 2021-09-23 2021-09-23 Live broadcast interaction method and device, electronic equipment, storage medium and program product

Publications (2)

Publication Number Publication Date
CN113727135A CN113727135A (en) 2021-11-30
CN113727135B true CN113727135B (en) 2023-01-20

Family

ID=78684774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111114147.XA Active CN113727135B (en) 2021-09-23 2021-09-23 Live broadcast interaction method and device, electronic equipment, storage medium and program product

Country Status (2)

Country Link
CN (1) CN113727135B (en)
WO (1) WO2023045235A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727135B (en) * 2021-09-23 2023-01-20 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment, storage medium and program product
CN115174950A (en) * 2022-07-01 2022-10-11 网易(杭州)网络有限公司 Interaction control method and device in live broadcast, storage medium and electronic equipment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10632372B2 (en) * 2015-06-30 2020-04-28 Amazon Technologies, Inc. Game content interface in a spectating system
CN108924576A (en) * 2018-07-10 2018-11-30 武汉斗鱼网络科技有限公司 A kind of video labeling method, device, equipment and medium
CN109246441B (en) * 2018-09-30 2021-03-16 武汉斗鱼网络科技有限公司 Method, storage medium, device and system for automatically generating wonderful moment video
US11050977B2 (en) * 2019-06-18 2021-06-29 Tmrw Foundation Ip & Holding Sarl Immersive interactive remote participation in live entertainment
CN110354507A (en) * 2019-07-12 2019-10-22 网易(杭州)网络有限公司 Interaction control method and device in game live streaming
CN112399200B (en) * 2019-08-13 2023-01-06 腾讯科技(深圳)有限公司 Method, device and storage medium for recommending information in live broadcast
CN110856001B (en) * 2019-09-04 2021-12-28 广州方硅信息技术有限公司 Game interaction method, live broadcast system, electronic equipment and storage device
CN111782101B (en) * 2020-07-08 2022-02-25 网易(杭州)网络有限公司 Display control method of live broadcast room, electronic device and storage medium
CN112675537B (en) * 2020-12-25 2024-06-25 网易(杭州)网络有限公司 Game prop interaction method and system in live broadcast
CN113132787A (en) * 2021-03-15 2021-07-16 北京城市网邻信息技术有限公司 Live content display method and device, electronic equipment and storage medium
CN113727135B (en) * 2021-09-23 2023-01-20 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment, storage medium and program product

Also Published As

Publication number Publication date
WO2023045235A1 (en) 2023-03-30
CN113727135A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN110662083B (en) Data processing method and device, electronic equipment and storage medium
US20220303605A1 (en) Method for switching live-streaming rooms and electronic device
CN111970533B (en) Interaction method and device for live broadcast room and electronic equipment
CN106028166B (en) Live broadcast room switching method and device in live broadcast process
CN107483973B (en) Method and device for executing activity in live broadcast room
CN105450736B (en) Method and device for connecting with virtual reality
CN106534994B (en) Live broadcast interaction method and device
CN113727135B (en) Live broadcast interaction method and device, electronic equipment, storage medium and program product
CN112492339B (en) Live broadcast method, device, server, terminal and storage medium
CN110415083B (en) Article transaction method, device, terminal, server and storage medium
CN112118477B (en) Virtual gift display method, device, equipment and storage medium
CN112738544B (en) Live broadcast room interaction method and device, electronic equipment and storage medium
CN109309843A (en) Video distribution method, terminal and server
CN114025180A (en) Game operation synchronization system, method, device, equipment and storage medium
CN111815779A (en) Object display method and device, positioning method and device and electronic equipment
CN109754275B (en) Data object information providing method and device and electronic equipment
CN113365086A (en) Live data interaction method and device, electronic equipment, server and storage medium
CN113518240A (en) Live broadcast interaction method, virtual resource configuration method, virtual resource processing method and device
CN108346179B (en) AR equipment display method and device
CN111405310B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN114025181A (en) Information display method and device, electronic equipment and storage medium
CN114302160B (en) Information display method, device, computer equipment and medium
CN110636318A (en) Message display method, message display device, client device, server and storage medium
CN114268823A (en) Video playing method and device, electronic equipment and storage medium
CN113988021A (en) Content interaction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant