CN115225923B - Method and device for rendering gift special effects, electronic equipment and live broadcast server


Info

Publication number: CN115225923B
Authority: CN (China)
Prior art keywords: gift, special effect, scene, virtual, information
Legal status: Active (granted)
Application number: CN202210653955.1A
Other languages: Chinese (zh)
Other versions: CN115225923A
Inventor: 庄宇轩
Current Assignee: Guangzhou Boguan Information Technology Co Ltd
Original Assignee: Guangzhou Boguan Information Technology Co Ltd
Application filed by Guangzhou Boguan Information Technology Co Ltd


Classifications

    • H04N 21/2187: Live feed (selective content distribution; servers for content distribution; source of audio or video content)
    • H04N 21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection


Abstract

The invention provides a method and device for rendering gift special effects, an electronic device, and a live broadcast server. The method includes: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift, where the segmented special effect data includes first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and, in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and rendering and displaying the second special effect data in the general scene. In this way, gift special effect rendering is combined with the scene of the live broadcast room, and scene switching and subsequent special effect rendering are driven by the specified condition, so that the gift special effect is deeply integrated with the live broadcast content: the viewer's attention to the live content is not abruptly interrupted, and the viewer's sense of immersion is improved.

Description

Method and device for rendering gift special effects, electronic equipment and live broadcast server
Technical Field
The invention relates to the technical field of live broadcasting, in particular to a method and a device for rendering a gift special effect, electronic equipment and a live broadcasting server.
Background
In a live streaming scenario, a viewer can watch the host's live content after entering a live broadcast room, and can send messages and gifts to the host to interact with the host. In the related art, after a viewer sends a gift to the host, a gift special effect animation is usually played in the live broadcast room as an overlay on top of the live content. The gift special effect and the live content are thus disconnected from each other, which easily interrupts the viewer's viewing of the live content and reduces the viewer's sense of immersion.
Disclosure of Invention
Accordingly, an object of the invention is to provide a method and a device for rendering a gift special effect, an electronic device and a live broadcast server, so as to improve the display effect of virtual gift special effects and the viewer's sense of immersion when watching a live broadcast.
In a first aspect, an embodiment of the present invention provides a method for rendering a gift special effect, where the method is applied to a terminal device; the method includes: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift, where the segmented special effect data includes first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and, in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
The step of acquiring the segmented special effect data of the target gift includes: acquiring gift special effect information of the target gift; and generating special effect segmentation information of the target gift based on the gift special effect information and a preset special effect prerendering template, so that the live broadcast server obtains the segmented special effect data of the target gift based on the special effect segmentation information and returns the segmented special effect data to the terminal device.
The gift special effect information includes at least a gift special effect duration and special effect image data; the step of generating the special effect segmentation information of the target gift based on the gift special effect information and the preset special effect prerendering template includes: determining, based on the special effect prerendering template, the time period to which the gift special effect duration belongs; extracting shot information from the special effect image data based on that time period; determining gift effect deduction rhythm data of the target gift based on the shot information; and generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template.
The shot information includes: the number of shot switches, the shortest single-shot duration and the longest single-shot duration; the step of determining the gift effect deduction rhythm data of the target gift based on the shot information includes: performing a weighted sum of the number of shot switches, the shortest single-shot duration and the longest single-shot duration using preset weighting parameters to obtain the gift effect deduction rhythm score of the target gift.
The step of generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template includes: determining, based on the special effect prerendering template, the number of segments corresponding to the gift effect deduction rhythm data; and determining segmentation timestamps based on the number of segments together with the shortest and longest single-shot durations in the shot information, thereby generating the special effect segmentation information of the target gift.
After the step of generating the special effect segmentation information of the target gift based on the gift special effect information and the preset special effect prerendering template, the method further includes: determining a specified condition based on the special effect prerendering template and the special effect style of the target gift; where the specified condition includes the trigger condition of a specified segment of the segmented special effect data, and the trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room performs a preset gesture, the virtual live broadcast room receives specified information, the virtual live broadcast room completes a specified task, and the user side that sent the gift sending instruction performs a specified behavior.
Before the step of obtaining the segmented special effect data of the target gift, the method further comprises the following steps: acquiring gift special effect information of a target gift; determining scene configuration information of a target gift based on the gift special effect information; wherein, the scene configuration information includes: special effect style, special effect playing duration and scene style of the first virtual scene; rendering a preset scene basic model and illumination information based on scene configuration information to obtain prerendering data of a general scene of a target gift; the prerendering data is used for displaying the general scene.
The step of determining scene configuration information of the target gift based on the gift special effect information includes: acquiring a gift identification of a target gift from the gift special effect information, and determining a special effect style of the target gift based on the gift identification; acquiring gift special effect duration of a target gift from the gift special effect information, and determining special effect playing duration of the target gift based on the gift special effect duration and a preset special effect prerendering template; and acquiring live broadcasting room information of the virtual live broadcasting room, extracting scene identification of the first virtual scene from the live broadcasting room information, and determining scene style of the first virtual scene based on the scene identification.
After the step of rendering the preset scene basic model and the illumination information based on the scene configuration information to obtain the prerendered data of the general scene of the target gift, the method further includes: storing, in the terminal device, prerendered data of the general scenes of a plurality of candidate gifts of the virtual live broadcast room, where a candidate gift is a gift supported by the virtual live broadcast room and the candidate gifts include the target gift; and, on receiving the deliverable gifts supported by a designated user side, deleting from the terminal device the prerendered data of the general scenes of gifts other than the deliverable gifts.
The step of rendering and displaying the first special effect data in the first virtual scene includes: rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene and to perform a preset interaction operation with the anchor object in the virtual live broadcast room, so as to obtain an interaction result.
The step of controlling the virtual live broadcasting room to switch from the first virtual scene to the general scene of the target gift in response to the specified condition being triggered, and rendering and displaying the second special effect data in the general scene, comprises the following steps: in response to the specified condition being triggered, stitching the first virtual scene and the general scene; controlling the virtual camera to move so as to display a general scene in the virtual live broadcasting room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and after the general scene is displayed, rendering and displaying the second special effect data in the general scene.
The step of rendering and displaying the second special effect data in the general scene includes: rendering and displaying, in the general scene, the interaction result corresponding to the first special effect data; and, in response to a specified operation performed by the anchor object in the virtual live broadcast room on the interaction result, displaying the operation result of the specified operation in the general scene.
In a second aspect, an embodiment of the present invention provides a method for rendering a gift special effect, where the method is applied to a live broadcast server; the method includes: receiving a gift sending instruction for a target gift, acquiring gift special effect information of the target gift, and sending the gift special effect information to a terminal device, so that the terminal device generates special effect segmentation information of the target gift based on the gift special effect information and returns the special effect segmentation information to the live broadcast server; obtaining segmented special effect data of the target gift based on the special effect segmentation information, where the segmented special effect data includes first special effect data rendered in a first virtual scene and second special effect data rendered in a general scene of the target gift; and returning the segmented special effect data to the terminal device, so that the terminal device renders and displays the first special effect data in the first virtual scene and, in response to a specified condition being triggered, controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and renders and displays the second special effect data in the general scene.
Before the step of receiving the gift sending instruction for the target gift, acquiring the gift special effect information of the target gift, and sending the gift special effect information to the terminal device, the method further includes: acquiring the live broadcast room information of a virtual live broadcast room that is currently streaming, where the live broadcast room information includes at least the scene identifier of the first virtual scene and the gift identifiers of the gifts supported by the virtual live broadcast room; and providing the live broadcast room information to the terminal device.
Before the step of receiving the gift sending instruction for the target gift, acquiring the gift special effect information of the target gift, and sending the gift special effect information to the terminal device, the method further includes: detecting an operation of calling up the gift sending panel at a designated user side, obtaining from the gift sending panel the deliverable gifts supported by the designated user side, and providing the deliverable gifts to the terminal device.
The method further includes: after the first special effect data is displayed, receiving information data from the anchor terminal, determining, based on the information data, whether the specified condition is triggered, and, if so, sending a condition identifier of the specified condition to the terminal device to indicate that the specified condition has been triggered.
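As a minimal, non-authoritative sketch of this server-side check (the class names, message fields and the `notify_terminal` callback below are assumptions made for illustration, not part of the patent), the condition check and notification could look like:

```python
# Illustrative sketch of the server-side check described in the paragraph above;
# all names (InfoData, SpecifiedCondition, notify_terminal, ...) are hypothetical.
from dataclasses import dataclass, field
from typing import Optional, Set


@dataclass
class InfoData:
    """Information data reported by the anchor terminal after the first effect is shown."""
    gesture: Optional[str] = None                            # e.g. "raise_hands"
    bullet_keywords: Set[str] = field(default_factory=set)   # keywords seen in bullet screens
    task_completed: bool = False                             # e.g. the "100 likes" task finished


@dataclass
class SpecifiedCondition:
    condition_id: str
    required_gesture: Optional[str] = None
    required_keyword: Optional[str] = None
    requires_task: bool = False

    def is_triggered(self, info: InfoData) -> bool:
        # Any one of the configured trigger conditions is sufficient.
        if self.required_gesture and info.gesture == self.required_gesture:
            return True
        if self.required_keyword and self.required_keyword in info.bullet_keywords:
            return True
        return self.requires_task and info.task_completed


def on_info_data(condition: SpecifiedCondition, info: InfoData, notify_terminal) -> None:
    """If the specified condition is met, send its condition identifier to the terminal device."""
    if condition.is_triggered(info):
        notify_terminal({"type": "condition_triggered",
                         "condition_id": condition.condition_id})
```

Any one satisfied trigger is enough to notify the terminal device, matching the "one or more of the following" wording above.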
In a third aspect, an embodiment of the present invention provides a rendering device for gift special effects, where the device is disposed in a terminal device; the device includes: a first acquisition module, configured to construct a first virtual scene of a virtual live broadcast room and acquire segmented special effect data of a target gift, where the segmented special effect data includes first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift; a first display module, configured to render and display the first special effect data in the first virtual scene; and a second display module, configured to, in response to a specified condition being triggered, control the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and render and display the second special effect data in the general scene.
In a fourth aspect, an embodiment of the present invention provides another rendering device for gift special effects, where the device is disposed in a live broadcast server; the device includes: an information return module, configured to receive a gift sending instruction for a target gift, acquire gift special effect information of the target gift, and send the gift special effect information to a terminal device, so that the terminal device generates special effect segmentation information of the target gift based on the gift special effect information and returns the special effect segmentation information to the live broadcast server; a data acquisition module, configured to obtain segmented special effect data of the target gift based on the special effect segmentation information, where the segmented special effect data includes first special effect data rendered in a first virtual scene and second special effect data rendered in a general scene of the target gift; and a data display module, configured to return the segmented special effect data to the terminal device, so that the terminal device renders and displays the first special effect data in the first virtual scene and, in response to a specified condition being triggered, controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and renders and displays the second special effect data in the general scene.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores machine-executable instructions executable by the processor, and the processor executes the machine-executable instructions to implement the method for rendering the gift effect.
In a sixth aspect, an embodiment of the present invention provides a live broadcast server, including a processor and a memory, where the memory stores machine executable instructions executable by the processor, and the processor executes the machine executable instructions to implement the method for rendering a gift effect described above.
In a seventh aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method of rendering a gift effect as described above.
The embodiment of the invention has the following beneficial effects:
the method and device for rendering gift special effects, the electronic device and the live broadcast server construct a first virtual scene of the virtual live broadcast room and acquire segmented special effect data of the target gift, where the segmented special effect data includes first special effect data rendered in the first virtual scene and second special effect data rendered in the general scene of the target gift; the first special effect data is rendered and displayed in the first virtual scene; and, in response to a specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift and the second special effect data is rendered and displayed in the general scene. In this way, the front segment of the gift special effect is rendered and presented in the first virtual scene; once the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second segment of the gift special effect is rendered and presented in the general scene.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for rendering gift special effects according to an embodiment of the present invention;
Fig. 2 is a flowchart of another method for rendering gift special effects according to an embodiment of the present invention;
Fig. 3 is a multi-terminal interaction schematic diagram of the prerendering stage of a method for rendering gift special effects according to an embodiment of the present invention;
Fig. 4 is a multi-terminal interaction schematic diagram of segmentation and effect assembly of a method for rendering gift special effects according to an embodiment of the present invention;
Fig. 5 is a multi-terminal interaction schematic diagram of completing the gift special effect rendering according to a method for rendering gift special effects provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a rendering device for gift special effects according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another rendering device for gift special effects according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device or a live broadcast server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In a live streaming scenario, the anchor streams in a virtual live broadcast room and viewer users watch the anchor's live content on their clients. To increase interactivity between the anchor and users, a user may select a specific gift to present to the anchor; after the gift is sent, an animation is usually played in the live broadcast room as the gift special effect display. In the related art, this display process blocks the anchor's live content and has little relevance to it, which easily interrupts viewers' viewing of the live content and reduces their sense of immersion; the gift special effect display also blocks the view of other viewer users. This lowers users' willingness to send gifts, which in turn affects users' willingness to pay and the platform's overall revenue.
Based on the above, the method, the device, the electronic equipment and the live broadcast server for rendering the gift special effect provided by the embodiment of the invention can be applied to a live broadcast room scene, and particularly can be applied to a virtual live broadcast room scene.
For ease of understanding, a detailed description is first given of a method for rendering gift special effects disclosed in this embodiment. As shown in Fig. 1, the method is applied to a terminal device. Here, the terminal device may run an anchor-side UE (Unreal Engine) instance, or may run another rendering engine; the anchor-side UE instance is a rendering engine running on the anchor client or a cloud server, in which a plurality of special effect rendering templates and rendering logic are prestored, and the rendering logic can be invoked to render the corresponding rendering templates so as to realize the rendering and presentation of virtual scenes and effects. The method for rendering gift special effects includes the following steps:
Step 102, constructing a first virtual scene of a virtual live broadcast room, and acquiring segmented special effect data of a target gift; where the segmented special effect data includes: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift;
As for the virtual live broadcast room, it can be understood that the background behind the anchor, who stands in front of a green screen, is actually a virtual scene rendered by the UE engine. The virtual scene can be a simulation of a real-world environment and can contain various prop elements such as weather, objects and buildings, and in principle the elements in the scene can trigger interactions through different logic events. It should be noted that, before the gift special effect is displayed, the first virtual scene is already rendered and displayed in the virtual live broadcast room.
It will be appreciated that, in the virtual live broadcast room, a viewer user may call up the gift panel and select a virtual item to send to the anchor as a virtual gift. One virtual gift corresponds to one gift identifier (e.g., a gift ID), so that a virtual gift with the same tag information can be found in the gift material library through the gift ID; the virtual gift that the viewer user selects to send to the anchor is the target gift.
Here, the display scenes of the target gift include the first virtual scene and the general scene of the target gift. The first virtual scene is the virtual scene that a viewer user sees after entering the virtual live broadcast room, and the anchor of the live broadcast room is usually located in the first virtual scene. The general scene of the target gift serves as the scene that carries the gift special effect after the first virtual scene; it can be understood as a scene level (checkpoint) sequence containing a plurality of parameter configuration items, and different types of general scenes can be obtained from this sequence through different scene configuration information. The general scene can be further understood as follows: for a target gift, once the specified condition is triggered, the display switches from the first virtual scene to the general scene, regardless of what the first virtual scene is. The general scenes corresponding to different gifts may be the same or different.
The segmented special effect data of the target gift includes: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the gift. In the initial state, the virtual live broadcasting room already renders and displays a first virtual scene, and first special effect data is rendered in the first virtual scene, wherein the first special effect data can be specifically virtual objects in the first virtual scene, such as virtual characters, virtual animals, virtual articles and the like; the first special effect data also comprises movement, gesture, language, interaction data with live broadcast and the like of the virtual object. The first special effect data may also be a local scene special effect added in the first virtual scene, such as a cloud special effect, a snowing special effect, and the like, where the first special effect data further includes data such as a display area, a display duration, and the like of the local special effect.
In this embodiment, the first special effect data is used to display part of the gift special effect in the current scene of the virtual live broadcast room, so that the display of the gift special effect is combined with the scene of the live broadcast room. On this basis, the scene of the virtual live broadcast room is switched from the first virtual scene to the general scene of the target gift by triggering the specified condition.
The second special effect data is used to display the gift special effect in the general scene. The general scene can be rendered in advance; after the scene switch is triggered, the general scene is displayed, and the subsequent gift special effect is then displayed based on the second special effect data. The second special effect data may include data such as the movement and posture of the virtual object, and may also include interaction data between the virtual object and the anchor.
In this step, the initial special effect data of the gift is segmented to obtain the segmented special effect data of the target gift, which prepares for the subsequent segment-by-segment rendering of the special effect of the target gift.
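To make the data flow concrete, the following is a hypothetical sketch of how the segmented special effect data described above could be organized on the terminal device; the field names are illustrative assumptions rather than the patent's actual format.

```python
# Hypothetical sketch of how the segmented special effect data of a target gift
# could be organized on the terminal device; field names are illustrative only.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class EffectSegment:
    scene: str                      # "first_virtual_scene" or "general_scene"
    duration_s: float               # playing duration of this segment
    objects: List[str]              # virtual objects rendered in this segment
    trigger_condition: str = ""     # condition that starts this segment, if any
    extra: Dict[str, Any] = field(default_factory=dict)  # e.g. display area, poses, interactions


@dataclass
class SegmentedEffectData:
    gift_id: str
    first_effect: EffectSegment     # rendered inside the current first virtual scene
    second_effect: EffectSegment    # rendered after switching to the general scene
```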
Step 104, rendering and displaying first special effect data in the first virtual scene;
The gift special effect is rendered and displayed in the first virtual scene according to the first special effect data in the segmented special effect data of the target gift. Specifically, a target object corresponding to the target gift is rendered and displayed in the first virtual scene; the target object is an object whose movement can be controlled in the virtual scene and may be a virtual character, a virtual animal, a virtual plant, etc. The target object is controlled to move in the first virtual scene and, based on the first special effect data, to perform a preset interaction operation with the anchor object in the virtual live broadcast room, so as to obtain an interaction result, for example: the virtual character hugs, dances or takes a photo with the anchor.
In this step, according to the acquired segmented special effect data of the target gift, the first special effect data is rendered and displayed in the first virtual scene, showing the front segment of the gift special effect. The first special effect data can include interaction with the anchor, which gives the user a more immersive gift interaction effect, avoids the impact on the live broadcast of the gift directly covering the live content, and at the same time improves the display effect of the virtual gift special effect.
Step 106, in response to the specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
Here, the specified condition may be triggered by a body movement of the anchor, for example extending the hands or lowering the head; or it may be triggered by viewer-user behavior or business logic, such as sending a bullet-screen message containing a keyword. When the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift. The terminal device may splice the first virtual scene and the general scene in various ways. In one way, the virtual camera is controlled to move so that the general scene is displayed in the virtual live broadcast room, and a preset scene-linking special effect is rendered and displayed while the virtual camera is moving. Alternatively, the switch can be linked through the anchor's gestures or other business logic; for example, after the anchor performs a specified action, the general scene is cut into the live broadcast room directly, creating an instant scene-switch effect.
After the general scene is displayed, the second special effect data is rendered and displayed in the general scene. In one way, the interaction result corresponding to the first special effect data is rendered and displayed in the general scene, and, in response to a specified operation performed by the anchor object in the virtual live broadcast room on the interaction result, the operation result of the specified operation is displayed in the general scene. For example: the specified condition is that the anchor makes the gesture of raising both hands, and the general scene is a virtual wall; when the anchor raises both hands in the first virtual scene to hang a photo, the specified condition is triggered, the scene switches to a pink virtual wall, and a picture of a pair of hands hanging the photo on the wall appears.
In this step, in response to the specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second special effect data is rendered and displayed in the general scene. In this way, after the specified condition is triggered, the interaction result corresponding to the first special effect data can be rendered and displayed in the general scene, and association with the anchor side's behavior is supported, which provides the user with a more immersive gift interaction effect, improves the display effect of the virtual gift special effect, and improves the viewer's sense of immersion when watching the live broadcast.
The above method for rendering gift special effects is applied to a terminal device and includes: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift, where the segmented special effect data includes first special effect data rendered in the first virtual scene and second special effect data rendered in the general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and, in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and rendering and displaying the second special effect data in the general scene. In this way, the segmented special effect data of the target gift is acquired, the front segment of the gift special effect is rendered and displayed in the first virtual scene, and, after the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, where the second segment of the effect is rendered and displayed; the gift special effect rendering is thus deeply combined with the content of the live broadcast room, so that the viewer's attention to the live content is not abruptly interrupted and the viewer's sense of immersion is improved.
In this embodiment, before the segmented special effect data of the target gift is acquired, some pre-rendering operations are required to obtain the rendering data of the general scene, so that the excessive data rendering pressure in the live broadcast process is avoided, and specific implementation modes are described below.
Acquiring gift special effect information of a target gift; determining scene configuration information of a target gift based on the gift special effect information; wherein, the scene configuration information includes: special effect style, special effect playing duration and scene style of the first virtual scene; rendering a preset scene basic model and illumination information based on scene configuration information to obtain prerendering data of a general scene of a target gift; wherein the prerendered data is used for displaying a general scene.
After the virtual live broadcasting room is opened, the prerendered data of the general scene of each gift supported by the virtual live broadcasting room can be obtained through the mode. Taking a target gift as an example, first, gift special effect information of the target gift is acquired.
In one embodiment, the live broadcast server sends the gift special effect information to the terminal device, and the terminal device receives it. The gift special effect information of the target gift may include the gift identifier, the gift special effect duration, the special effect image data, and so on. The gift identifier is the unique identifier used to look up the gift and may be a gift ID, a gift name, etc. As for the gift special effect duration, the whole special effect of a gift can be understood as being spliced from a plurality of special effect segments of the same or different lengths, and the gift special effect duration is the duration of one special effect interval; the special effect image data is the image data corresponding to a given gift special effect duration.
Then, the scene configuration information of the target gift is determined based on the gift special effect information; the scene configuration information includes the special effect style, the special effect playing duration and the scene style of the first virtual scene. Notably, before a viewer user triggers a specific gift, the terminal device can acquire in advance all the gift identifiers supported by the live broadcast room and quickly determine the scene configuration information of the target gift based on a prestored general prerendering template and rendering logic. Specifically, the scene configuration information may be determined as follows:
1) Classifying the special effect styles of the gifts. The gift identifier of the target gift is obtained from the gift special effect information, and the special effect style of the target gift is determined based on the gift identifier. According to the gift identifiers, the terminal device classifies the special effect styles of all gifts into categories in advance, including categories such as warm, dynamic and outdoor, so that the special effect style of a gift can be obtained quickly from its gift identifier;
2) Determining the special effect playing duration. The gift special effect duration of the target gift is obtained from the gift special effect information, and the special effect playing duration of the target gift is determined based on the gift special effect duration and the preset special effect prerendering template. The gift special effect duration is the playing duration of the original gift special effect corresponding to the target gift; this playing duration is matched against the different duration intervals defined by the pre-designed special effect prerendering template, and the matched interval determines the final special effect playing duration;
3) Classifying the style of the first virtual scene. The live broadcast room information of the virtual live broadcast room is acquired, the scene identifier of the first virtual scene is extracted from the live broadcast room information, and the scene style of the first virtual scene is determined based on the scene identifier. Based on the prerendering judgment logic of the first virtual scene, the scene styles used for gift special effect display are classified in advance according to the scene identifiers, so that the style of the first virtual scene can be obtained from its scene identifier.
Therefore, after receiving the gift special effect information of the target gift, the scene configuration information of the target gift can be determined quickly from the corresponding data alone: the gift special effect style is obtained from the target gift identifier; the playing duration of the original gift special effect corresponding to the gift is obtained from the gift identifier and matched against the different duration intervals defined by the pre-designed general scene prerendering template to determine the final special effect playing duration; and the live broadcast room information sent by the live broadcast server is received, the scene identifier of the first virtual scene is extracted from it, and the first virtual scene style of the target gift is obtained from the scene identifier. After classifying and combining along these three dimensions, the general scene configuration information of the target gift can be determined, for example: a warm special effect style, a 30-second playing duration and a realistic scene style.
Finally, the preset scene basic model and illumination information are rendered according to the acquired scene configuration information to obtain the prerendered data of the general scene of the target gift, where the prerendered data is used to display the general scene of the target gift. A scene basic model can be understood here as a model without textures or colors that provides information such as the shape and structure of the scene. The illumination information is used to render the brightness, contrast, darkness and shading of colors in the scene.
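A minimal sketch of this three-dimension mapping is given below, assuming made-up style tables and duration intervals; in practice these lookups would come from the prerendering template, so every table entry here is a placeholder.

```python
# Illustrative sketch of determining scene configuration information from the gift
# special effect information; tables, interval boundaries and IDs are assumptions.
STYLE_BY_GIFT_ID = {"gift_rose": "warm", "gift_rocket": "dynamic"}            # hypothetical
SCENE_STYLE_BY_SCENE_ID = {"scene_001": "realistic", "scene_002": "cartoon"}  # hypothetical
DURATION_INTERVALS = [(0, 15, 15), (15, 30, 30), (30, float("inf"), 60)]      # (lo_s, hi_s, play_duration_s)


def determine_scene_config(gift_id: str, effect_duration_s: float, scene_id: str) -> dict:
    """Map the gift identifier, gift effect duration and scene identifier to scene configuration."""
    effect_style = STYLE_BY_GIFT_ID.get(gift_id, "default")
    play_duration = next(d for lo, hi, d in DURATION_INTERVALS if lo <= effect_duration_s < hi)
    scene_style = SCENE_STYLE_BY_SCENE_ID.get(scene_id, "default")
    return {
        "effect_style": effect_style,        # e.g. "warm"
        "play_duration_s": play_duration,    # duration interval matched in the prerendering template
        "scene_style": scene_style,          # style of the first virtual scene
    }


config = determine_scene_config("gift_rose", effect_duration_s=28.0, scene_id="scene_001")
# -> {'effect_style': 'warm', 'play_duration_s': 30, 'scene_style': 'realistic'}
```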
In addition, prerendered data of the general scenes of a plurality of candidate gifts of the virtual live broadcast room can be stored in the terminal device, where a candidate gift is a gift supported by the virtual live broadcast room and the candidate gifts include the target gift; once the deliverable gifts supported by the designated user side are received, the prerendered data of the general scenes of gifts other than the deliverable gifts is deleted from the terminal device.
That is, the terminal device may store prerendered data of the general scenes of a plurality of candidate gifts, including the target gift, of the virtual live broadcast room. To reduce the performance pressure on the anchor side and prepare for the subsequent gift special effect display, after receiving the deliverable gifts supported by the designated user side, the terminal device may delete the prerendered data of the general scenes of gifts other than the deliverable gifts. In one embodiment, when a viewer user opens the gift panel to browse gifts, the terminal device acquires the related gift information and filters the prerendered data of the general scenes of the multiple gifts, retaining only the prerendered data of the general scenes that match the gifts on the current panel.
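A short sketch of this pruning step, assuming a hypothetical in-memory cache layout keyed by gift identifier:

```python
# Sketch of pruning the prerendered general-scene cache once the deliverable gifts
# of the designated user side are known; the cache layout is hypothetical.
def prune_prerender_cache(prerender_cache: dict, deliverable_gift_ids: set) -> dict:
    """Keep only prerendered general-scene data for gifts shown on the user's gift panel."""
    return {gift_id: data
            for gift_id, data in prerender_cache.items()
            if gift_id in deliverable_gift_ids}


# Example: the live room supports three candidate gifts, but the user's panel
# only offers two of them, so the third entry is dropped.
cache = {"gift_rose": b"...", "gift_rocket": b"...", "gift_castle": b"..."}
cache = prune_prerender_cache(cache, {"gift_rose", "gift_rocket"})
```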
In the above manner, the scene configuration information of the target gift is determined according to the special effect information of the target gift, and the prerendered data for displaying the general scene of the target gift is obtained based on the scene configuration information of the target gift, so that preparation work is made for rendering the display scene of the gift.
The following embodiments provide specific implementations for obtaining segmented special effect data for a target gift.
Gift special effect information of the target gift is acquired; special effect segmentation information of the target gift is generated based on the gift special effect information and a preset special effect prerendering template, so that the live broadcast server obtains the segmented special effect data of the target gift based on the special effect segmentation information and returns the segmented special effect data to the terminal device.
The special effect prerendering template defines how the gift special effect information is processed and how the corresponding special effect segmentation information is generated. The special effect segmentation information indicates how to segment the initial special effect data of the target gift, that is, it contains the data related to segmentation; the live broadcast server segments the initial special effect data of the target gift based on the special effect segmentation information to obtain several segments of segmented special effect data of the target gift, and returns the segmented special effect data to the terminal device for storage.
In one form, the gift special effect information includes at least the gift special effect duration and the special effect image data. The time period to which the gift special effect duration belongs is determined based on the special effect prerendering template; shot information is extracted from the special effect image data based on that time period; the gift effect deduction rhythm data of the target gift is determined based on the shot information; and the special effect segmentation information of the target gift is generated based on the gift effect deduction rhythm data and the special effect prerendering template.
Specifically, the gift special effect duration is first matched against the special effect prerendering template in the terminal device to determine the time period to which it belongs; for example, the durations are divided into three intervals: less than 5 s, 5 s to 10 s, and more than 10 s.
Then, shot information is extracted from the special effect image data corresponding to the time period to which the gift special effect duration belongs; the shot information may include the types of shots contained in the special effect image data, the duration of each shot, and so on. The gift effect deduction rhythm data of the target gift is determined from the shot information; this rhythm data evaluates the playback rhythm of the gift special effect, and its score affects the number of segments of the gift special effect: in general, the higher the score, the more segments. The gift effect deduction rhythm score is related to the number of shot switches and the shot durations. In one implementation, the shot information includes the number of shot switches, the shortest single-shot duration and the longest single-shot duration, and these can be summed with preset weighting parameters to obtain the gift effect deduction rhythm score of the target gift. This embodiment can provide a formula for calculating the gift effect deduction rhythm data of the target gift, namely M = a×X + b×Y + c×Z, where M is the gift effect deduction rhythm data; a, b and c are weighting parameters; X is the number of shot switches; Y is the shortest single-shot duration; and Z is the longest single-shot duration.
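For concreteness, a small numeric sketch of this weighted sum follows; the weighting parameters are arbitrary example values, not taken from the patent.

```python
# The weighted score M = a*X + b*Y + c*Z from the text, evaluated with made-up
# weighting parameters purely for illustration.
def rhythm_score(shot_switches: int, min_shot_s: float, max_shot_s: float,
                 a: float = 1.0, b: float = 0.5, c: float = 0.2) -> float:
    """Gift effect deduction rhythm score: a weighted sum of the shot-switch count
    and the shortest and longest single-shot durations."""
    return a * shot_switches + b * min_shot_s + c * max_shot_s


# e.g. 6 shot switches, shortest shot 1.5 s, longest shot 4.0 s
M = rhythm_score(shot_switches=6, min_shot_s=1.5, max_shot_s=4.0)   # = 7.55
```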
Finally, the number of segments corresponding to the gift effect deduction rhythm data is determined based on the special effect prerendering template, and the segmentation timestamps are determined based on the number of segments together with the shortest and longest single-shot durations in the shot information, thereby generating the special effect segmentation information of the target gift. For example, when the calculated gift effect deduction rhythm score falls into score interval A, it is determined, according to the segmentation rule defined in the special effect prerendering template, that the gift special effect needs to be divided into two segments, and a timestamp is added between the shortest single-shot duration Y and the longest single-shot duration Z to obtain two segments of gift special effect information.
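Continuing the sketch, the score-to-segment-count table and the timestamp placement rule below are assumptions used only to illustrate the segmentation step just described.

```python
# Illustrative sketch of turning the rhythm score into segmentation information;
# the score intervals and the timestamp rule are assumptions, not the patent's rules.
SEGMENTS_BY_SCORE = [(0, 5, 1), (5, 10, 2), (10, float("inf"), 3)]   # (lo, hi, number of segments)


def build_segmentation_info(score: float, min_shot_s: float, max_shot_s: float,
                            total_duration_s: float) -> dict:
    """Pick a segment count from the score interval and place split timestamps
    somewhere between the shortest and longest single-shot durations."""
    n_segments = next(n for lo, hi, n in SEGMENTS_BY_SCORE if lo <= score < hi)
    split_point = (min_shot_s + max_shot_s) / 2.0   # one possible placement rule
    timestamps = [min(i * split_point, total_duration_s) for i in range(1, n_segments)]
    return {"n_segments": n_segments, "split_timestamps_s": timestamps}


# With M = 7.55, the score falls in the 5-10 interval, so the effect is split
# into 2 segments with a single timestamp between the shot-duration bounds.
info = build_segmentation_info(7.55, min_shot_s=1.5, max_shot_s=4.0, total_duration_s=12.0)
# -> {'n_segments': 2, 'split_timestamps_s': [2.75]}
```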
In the above manner, according to the gift special effect information and the special effect pre-rendering template preset in the terminal device, the target gift initial special effect data is segmented by means of the segmentation rule defined by the special effect pre-rendering template, and finally the segmented special effect data of the target gift is obtained.
In addition, the specified condition used to trigger the scene switch needs to be determined in advance. Specifically, the specified condition is determined based on the special effect prerendering template and the special effect style of the target gift; the specified condition includes the trigger condition of a specified segment of the segmented special effect data, and the trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room performs a preset gesture, the virtual live broadcast room receives specified information, the virtual live broadcast room completes a specified task, and the user side that sent the gift sending instruction performs a specified behavior.
Specifically, the special effect prerendering template can include the specified conditions corresponding to the various special effect styles and the related parameters for generating those conditions; based on the special effect style of the target gift, the corresponding specified condition can be determined from the special effect prerendering template. The specified condition includes the trigger condition of the specified segment of the segmented special effect data, for example the condition for switching from the aforementioned first virtual scene to the general scene of the target gift. In addition, there may be multiple sub-effects in the first special effect data or the second special effect data, and each sub-effect may also require a specified trigger condition, for example an interaction operation performed by the anchor.
In a specific implementation, the trigger condition controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift so as to complete the play-out of the subsequent special effects. The trigger condition may specifically include (see the sketch after this list): 1) the anchor object in the virtual live broadcast room performs a preset gesture, such as clapping hands, extending a hand or other body key points; 2) a business-logic trigger in which the virtual live broadcast room receives specified information or completes a specified task, for example viewer users in the anchor's room sending bullet-screen messages containing a certain keyword, or completing a task of 100 likes for the anchor; 3) the user side that sent the gift sending instruction performs a specified behavior, for example the user tilting the mobile phone by a certain angle or making a certain facial expression, which then triggers the subsequent segment of the special effect. There may be one or more trigger conditions.
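As an illustrative sketch only, a style-to-condition lookup of the kind described above might look like the following; the trigger table and enum values are assumptions, not the patent's actual template configuration.

```python
# Sketch of deriving the specified condition from the effect style via the
# prerendering template; the style-to-condition table is invented for illustration.
from enum import Enum
from typing import List, Tuple


class TriggerType(Enum):
    ANCHOR_GESTURE = "anchor_gesture"    # anchor performs a preset gesture
    ROOM_MESSAGE = "room_message"        # live room receives specified information
    ROOM_TASK = "room_task"              # live room completes a specified task
    SENDER_BEHAVIOR = "sender_behavior"  # gift sender's client performs a specified behavior


# Hypothetical template entries: each effect style maps to one or more trigger conditions.
TEMPLATE_CONDITIONS = {
    "warm": [(TriggerType.ANCHOR_GESTURE, "raise_hands")],
    "dynamic": [(TriggerType.ROOM_MESSAGE, "keyword:fire"),
                (TriggerType.SENDER_BEHAVIOR, "tilt_phone")],
}


def specified_conditions_for(effect_style: str) -> List[Tuple[TriggerType, str]]:
    """Look up the trigger condition(s) of the specified segment for this effect style."""
    return TEMPLATE_CONDITIONS.get(effect_style, [(TriggerType.ANCHOR_GESTURE, "wave")])
```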
In the above manner of acquiring the segmented special effect data of the target gift, the initial special effect data of the gift is segmented, the segmented special effect data of the target gift is obtained and returned to the terminal device, which prepares for the subsequent segment-by-segment rendering of the target gift's special effect.
The following embodiments provide specific implementations for rendering and displaying first effect data in a first virtual scene.
A target object corresponding to the target gift is rendered and displayed in the first virtual scene; the target object is controlled to move in the first virtual scene and to perform a preset interaction operation with the anchor object in the virtual live broadcast room, so as to obtain an interaction result.
Specifically, according to the first special effect data in the segmented special effect data, the terminal device generates and renders the gift special effect of the first virtual scene, and the viewer user can see the corresponding gift effect in the first virtual scene. The target object is controlled to move in the first virtual scene and to perform a preset interaction operation with the anchor object in the virtual live broadcast room, producing a corresponding interaction result, such as the target object hugging or dancing with the anchor object.
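A toy, engine-free sketch of this first-segment flow is given below, assuming placeholder object and function names; no actual engine API is implied.

```python
# Toy sketch of the first-segment flow described above: the target object moves
# toward the anchor and a preset interaction produces a result that is later
# reused in the general scene. All names are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class AnchorObject:
    position: tuple


@dataclass
class TargetObject:
    name: str
    position: tuple = (0.0, 0.0)


def play_first_segment(target: TargetObject, anchor: AnchorObject, interaction: str) -> dict:
    """Move the target object to the anchor and return the interaction result."""
    target.position = anchor.position                  # simplified "move towards the anchor"
    return {"interaction": interaction,                # e.g. "photo", "hug", "dance"
            "result_item": f"{interaction}_of_{target.name}_and_anchor"}


result = play_first_segment(TargetObject("virtual_cat"), AnchorObject((1.0, 2.0)), "photo")
# result["result_item"] might later be displayed in the general scene (e.g. hung on a wall).
```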
In the method, first special effect data is rendered and displayed in a first virtual scene according to segmented special effect data of the acquired target gift, and a front effect in the gift special effect is displayed. The first special effect data can comprise interaction with the host, can provide more immersive gift interaction effect for the user, avoid the influence of the direct coverage of the gift on the live content on the live effect, and simultaneously improve the display effect of the special effect of the virtual gift,
The following embodiments provide specific implementations for rendering and displaying the second special effect data in the general scene.
In response to the specified condition being triggered, stitching the first virtual scene and the general scene; controlling the virtual camera to move so as to display a general scene in the virtual live broadcasting room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and after the general scene is displayed, rendering and displaying the second special effect data in the general scene.
Specifically, after receiving a message that the specified condition has been triggered, the terminal device acquires the pre-rendered general scene of the target gift and splices the first virtual scene currently being broadcast with that general scene; in the initial state, the virtual camera still shoots the first virtual scene. The movement of the virtual camera and the display of the special effects can then be controlled through the time point information in the second special effect data.
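The following hedged sketch illustrates how the scene splicing and the time-point-driven camera movement might be organized; the scene and camera objects, their methods, and the cue fields are assumptions made for the example only.

```python
# Minimal sketch of the scene switch driven by the second segment's time points;
# scene/camera objects and their methods are assumptions made for illustration.
def switch_to_general_scene(camera, first_scene, general_scene, second_effect_data):
    # Attach the pre-rendered general scene next to the live scene (here: to its
    # left, matching the right-to-left camera sweep described in the embodiment).
    first_scene.attach_left(general_scene)
    # Drive the camera and the transition effects off the segment's time points.
    for cue in second_effect_data["time_points"]:
        if cue["type"] == "camera_move":
            camera.move_to(cue["target_pose"], duration=cue["duration"])
        elif cue["type"] == "transition_effect":
            # Preset scene-linking effect shown while the camera is moving.
            general_scene.play_effect(cue["effect_id"])
    # Once the sweep finishes, the general scene fills the view and the second
    # special effect data is rendered inside it.
```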
Further, after the general scene is displayed, the second special effect data is rendered and displayed in the general scene. Rendering and displaying the second special effect data in the general scene may specifically include: rendering and displaying the interaction result corresponding to the first special effect data in the general scene; and in response to a specified operation performed by the anchor object in the virtual live broadcast room on the interaction result, displaying the operation result of the specified operation in the general scene.
The interaction result may be a virtual item, for example a photograph; the anchor object performs a specified operation on the interaction result, such as moving it, editing it, or hanging it on the wall, and the operation result of the specified operation is displayed in the general scene at the same time.
The interaction result corresponding to the first special effect data is rendered and displayed in the general scene, and the operation result of the specified operation on the interaction result is displayed in the general scene based on the second special effect data, completing the rendering and presentation of a complete special effect in the virtual scene. For example: in the first virtual scene, the target object approaches the anchor object and takes a group photo, so the interaction result is that group photo; in the general scene, the specified operation is the anchor making a gesture of raising a hand upwards, and the general scene is a virtual photo wall, so when the anchor raises a hand in the general scene, a picture of a hand hanging the photo on the wall appears.
In the above manner, in response to the specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second special effect data is rendered and displayed in the general scene. After the specified condition is triggered, the interaction result corresponding to the first special effect data can be rendered and displayed in the general scene, and association with the related behavior of the anchor side is supported, so that a more immersive gift interaction effect can be provided for the user, the display effect of the virtual gift special effect is improved, and the viewer's sense of immersion when watching the live broadcast is enhanced.
The following continues the description of the gift special effect rendering method of this embodiment with the live broadcast server as the executing entity. A flowchart of the method is shown in fig. 2; the method is applied to a live broadcast server and comprises the following steps:
Step S202, receiving a gift-sending instruction for a target gift, acquiring the gift special effect information of the target gift, sending the gift special effect information to the terminal device, generating the special effect segmentation information of the target gift based on the gift special effect information through the terminal device, and returning the special effect segmentation information to the live broadcast server;
specifically, after entering a living broadcast room, a user of a spectator can call out a gift delivery panel to screen gifts, when the user clicks a certain gift in the living broadcast room, after triggering a specific gift delivery behavior, a client side of a living broadcast platform sends a gift ID to a server side of the living broadcast platform to acquire gift special effect information required to be rendered and forwards the gift special effect information to a terminal device, after acquiring the gift special effect information, the terminal device generates special effect segmentation information of a target gift according to the gift special effect information and a special effect prerendering template preset by the terminal device, the special effect segmentation information comprises data such as segmentation number, and the special effect segmentation information is returned to the living broadcast server.
Step S204, based on the special effect segmentation information, obtaining segmented special effect data of the target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift;
After acquiring the special effect segmentation information, the live broadcast server segments the initial special effect data of the target gift to obtain the first special effect data rendered in the first virtual scene and the second special effect data rendered in the general scene of the target gift.
Step S206, returning the segmented special effect data to the terminal device so as to render and display the first special effect data in the first virtual scene through the terminal device; and in response to the specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
The live broadcast server returns the segmented gift special effect data (namely, the segmented special effect data) to the terminal device. The terminal device generates and renders the gift special effect of the first virtual scene, and the viewer side sees the first part of the special effect as normal. When the specified condition is triggered, the live broadcast server controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and the next segment of the gift special effect is rendered and presented, completing the rendering and presentation of a complete special effect in the virtual scene.
The above gift special effect rendering method comprises: receiving a gift-sending instruction for a target gift, acquiring the gift special effect information of the target gift, sending the gift special effect information to the terminal device, generating the special effect segmentation information of the target gift based on the gift special effect information through the terminal device, and returning the special effect segmentation information to the live broadcast server; obtaining the segmented special effect data of the target gift based on the special effect segmentation information, wherein the segmented special effect data comprises first special effect data rendered in the first virtual scene and second special effect data rendered in the general scene of the target gift; returning the segmented special effect data to the terminal device so as to render and display the first special effect data in the first virtual scene through the terminal device; and in response to the specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene. In this manner, the live broadcast server and the terminal device exchange special effect information multiple times to render and present the gift effect; through this information interaction process, the gift special effect rendering and the live content in the live broadcast room are integrated with each other, avoiding the gift effect blocking the display of the live content, and enhancing the viewer's sense of immersion when watching the live broadcast.
Before receiving the gift-sending instruction for the target gift, the live broadcast server can acquire the live broadcast room information of the virtual live broadcast room in the on-air state; the live broadcast room information at least comprises the scene identifier of the first virtual scene and the gift identifiers of the gifts supported by the virtual live broadcast room; and the live broadcast room information is provided to the terminal device.
Specifically, when a live broadcast room of the live broadcast platform goes on air, the relevant information of the live broadcast, the gift IDs supported by the room, the first scene identifier and other such information are sent to the live broadcast platform server; the live broadcast platform server thus obtains the information of the live broadcast room currently in virtual broadcast, including the scene identifier of the first virtual scene and the gift identifiers of the gifts supported by the virtual live broadcast room, and sends this information to the terminal device.
Before receiving the gift-sending instruction for the target gift, the live broadcast server can also acquire the operation of the specified user side calling out the gift-sending panel, obtain the deliverable gifts supported by that user side from the gift-sending panel, and provide the deliverable gifts to the terminal device.
The user enters a virtual live broadcast room that is on air and calls out the gift-sending panel; the live broadcast platform client (the user side) obtains the identification information of all gifts contained in the current panel and sends it to the live broadcast platform server, and the live broadcast platform server forwards the gift information to the terminal device after receiving it.
And after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered.
Specifically, after the special effect data in the first virtual scene has been displayed, real-time information data from the anchor side is received and used to determine whether the specified condition is triggered. When the specified condition is triggered, the condition identifier of the corresponding specified condition is sent to the terminal device to notify the anchor-side UE. For example, the picture captured by the anchor's camera is detected in real time; if the anchor is detected making the specified gesture, the condition identifier is sent to the terminal device so that the follow-up special effect can be rendered.
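A minimal sketch of this server-side check is given below, assuming a generic gesture detector and a simple notification message; neither is a specific library or protocol prescribed by this embodiment.

```python
# Hedged sketch of the server-side check described above: the gesture-detection
# call and the message shape are placeholders, not a specific detection library.
def monitor_anchor_stream(anchor_frames, detect_gesture, expected_gesture,
                          condition_id, send_to_terminal):
    """Scan anchor camera frames; once the expected gesture appears,
    notify the terminal device with the condition identifier."""
    for frame in anchor_frames:              # real-time frames from the anchor side
        gesture = detect_gesture(frame)      # assumed detector, e.g. pose estimation
        if gesture == expected_gesture:      # e.g. "raise_hand" for hanging the photo
            send_to_terminal({"type": "condition_triggered",
                              "condition_id": condition_id})
            return True
    return False
```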
The following provides a specific implementation manner of the method for rendering the gift special effect in a virtual live scene.
In a virtual live broadcast scene, the virtual live broadcast system comprises an anchor side and a user side: the anchor side goes on air virtually through the virtual broadcasting function of the live broadcast platform, and the viewer user enters the on-air virtual live broadcast room through the user side.
In this embodiment, the gift special effect rendering method is described with the following case: a user sends a little-bear gift in the virtual live broadcast room, then sees a little bear in the live broadcast room (the first virtual scene) poke its head in through the outdoor scene window, walk to the anchor's side, hug the anchor and take a group photo with the anchor; afterwards, the anchor hangs the freshly taken group photo on a general photo wall (the general scene).
In this embodiment, the rendering and presentation of the gift special effect in the virtual live broadcast room are completed mainly through data exchange among the live broadcast platform client, the live broadcast server and the terminal device, which is specifically divided into three multi-terminal interaction processes: pre-rendering, segmentation and effect assembly, and completion of the gift special effect rendering.
Before a viewer user sends a specific gift in the live broadcast room, the live broadcast platform needs to prepare for the user's gift-sending process and pre-render the general follow-up scene of the gift special effect. For ease of understanding, fig. 3 provides a schematic diagram of the pre-rendering multi-terminal interaction in the gift special effect rendering method of this case; the process is implemented interactively by the live broadcast platform client, the live broadcast server and the terminal device, and includes the following steps:
step S302, a live broadcast platform client transmits live broadcast room information to a live broadcast server;
The live broadcast room information of the room currently in virtual broadcast is collected, mainly including the scene identifier, the gift identifiers supported by the room and the anchor's related information; the live broadcast platform client transmits this live broadcast room information to the live broadcast server.
Step S304, the live broadcast server obtains the gift special effect information by parsing the live broadcast room information and sends it to the terminal device, and the anchor-side UE determines the scene configuration information of the target gift and the pre-rendered data of the gift's general scene.
After receiving the data, the live broadcast server finds the corresponding gift special effect information by parsing the gift identifiers and sends the data to the terminal device. After receiving the data, the terminal device, in combination with the pre-rendering decision logic for the general follow-up scene, analyzes and classifies the types of the gift special effects to obtain configuration parameters, and renders the basic model elements and illumination information of the general follow-up scene according to those parameters, without showing them to the user; they are reserved for subsequent processes.
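The following sketch illustrates the pre-rendering step under the assumption of a simple configuration dictionary and a renderer object with a render_base call; these names are placeholders for whatever engine the terminal device actually uses.

```python
# Illustrative sketch of the pre-rendering step; the configuration fields and the
# renderer call are assumptions used to show the flow, not an actual engine API.
def prerender_general_scene(gift_effect_info, room_info, renderer):
    # Derive the scene configuration from the gift effect information and room info.
    config = {
        "effect_style": gift_effect_info["style"],       # e.g. "interactive_pet"
        "play_duration": gift_effect_info["duration"],   # seconds
        "first_scene_style": room_info["scene_style"],   # style of the live scene
    }
    # Render only the base model elements and lighting of the general scene;
    # the result is cached and not yet shown to the viewer.
    prerendered = renderer.render_base(models=config["effect_style"],
                                       lighting=config["first_scene_style"])
    return config, prerendered
```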
After the audience user clicks the gift to send, the gift effect needs to be segmented and assembled, and for convenience of understanding, fig. 4 provides a multi-terminal interaction schematic diagram of segmentation and effect assembly in the rendering method of the gift effect in this embodiment, which includes the following steps:
step S402, a live broadcast platform client sends a target gift identifier to a live broadcast server, and the live broadcast server acquires gift special effect information to be rendered based on the target gift identifier;
step S404, the live broadcast server acquires gift special effect information to be rendered and forwards the gift special effect information to the terminal equipment;
step S406, the terminal equipment generates special effect segmentation information of a target gift based on the gift special effect information and a preset special effect prerendering template, and sends the special effect segmentation information to the live broadcast server;
In this embodiment, the number of segments in the special effect segmentation information is two.
Step S408, the live broadcast server segments the initial special effect data of the target gift based on the special effect segmentation information to obtain the segmented special effect data, and returns the segmented special effect data to the terminal device.
After the live broadcast server obtains the segmented special effect data, the rendering of the gift special effect is completed under the cooperation of the terminal equipment. For easy understanding, fig. 5 provides a multi-terminal interaction diagram for completing the rendering of the gift effect in the rendering method of the gift effect according to the present embodiment, including the following steps:
Step S502, the terminal device renders and displays, in the first virtual scene, the first special effect data of the segmented special effect data;
In one example, the target object in the first special effect data is a little bear; the picture rendered in the first virtual scene shows the little bear poking its head in through the outdoor scene window of the virtual live broadcast room, walking to the anchor's side, hugging the anchor and taking a group photo with the anchor.
Step S504, after the first special effect data is displayed, the anchor terminal sends information data to the live broadcast server;
step S506, the live broadcast server receives information data from the anchor terminal and determines whether a specified condition is triggered or not based on the information data; if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment;
In this case, the specified condition is the anchor making a hanging gesture, which triggers the second segmented special effect of hanging the photo on the photo wall.
In step S508, the terminal device receives the condition identifier, controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and renders the second special effect data in that general scene.
In this case, the transition is made by moving the virtual camera from the first virtual scene of the original live broadcast to the general scene; since the lens is required to sweep from right to left, the general scene needs to be seamlessly attached to the left side of the original scene.
Corresponding to the above method embodiment, referring to fig. 6, a schematic diagram of a gift special effect rendering apparatus is shown; the apparatus is arranged to run on a terminal device and comprises:
a first obtaining module 602, configured to construct a first virtual scene of a virtual live room; acquiring sectional special effect data of a target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift;
a first display module 604 for rendering and displaying the first special effects data in the first virtual scene;
And a second display module 606, configured to control the virtual living room to switch from the first virtual scene to the general scene of the target gift in response to the specified condition being triggered, and render and display the second special effect data in the general scene.
The above gift special effect rendering apparatus is applied to a terminal device and operates as follows: a first virtual scene of the virtual live broadcast room is constructed; the segmented special effect data of the target gift is acquired, wherein the segmented special effect data comprises first special effect data rendered in the first virtual scene and second special effect data rendered in the general scene of the target gift; the first special effect data is rendered and displayed in the first virtual scene; and in response to the specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second special effect data is rendered and displayed in the general scene. In this manner, the segmented special effect data of the target gift is acquired, the front gift special effect is rendered and displayed in the first virtual scene, and after the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, where the second part of the effect is rendered and displayed.
The first acquisition module is further used for acquiring the gift special effect information of the target gift, and generating the special effect segmentation information of the target gift based on the gift special effect information and a preset special effect prerendering template, so that the live broadcast server obtains the segmented special effect data of the target gift based on the special effect segmentation information and returns the segmented special effect data to the terminal device.
The gift special effect information at least comprises a gift special effect duration and special effect image data, and the first acquisition module is further used for determining, based on the special effect prerendering template, the time period to which the gift special effect duration belongs; extracting shot information from the special effect image data based on that time period; determining gift effect deduction rhythm data of the target gift based on the shot information; and generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template.
The shot information includes the number of shot switches, the shortest single-shot duration and the longest single-shot duration; the first acquisition module is further configured to perform a weighted addition of the number of shot switches, the shortest single-shot duration and the longest single-shot duration based on preset weighting parameters, so as to obtain a gift effect deduction rhythm score of the target gift.
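A minimal sketch of this weighted addition is given below; the weight values are arbitrary placeholders, since this embodiment only requires that preset weighting parameters be used.

```python
# Minimal sketch of the weighted rhythm score; the weights are placeholders.
def rhythm_score(shot_switch_count, min_shot_duration, max_shot_duration,
                 weights=(1.0, 0.5, 0.5)):
    w_switch, w_min, w_max = weights
    # Weighted addition of the three shot statistics gives the deduction rhythm score.
    return (w_switch * shot_switch_count
            + w_min * min_shot_duration
            + w_max * max_shot_duration)

# Example: 6 shot switches, shortest shot 0.8 s, longest shot 3.5 s.
score = rhythm_score(6, 0.8, 3.5)
```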
The first acquisition module is further configured to determine the number of segments corresponding to the gift effect deduction rhythm data based on the special effect prerendering template, determine the segmentation timestamps based on the number of segments and the shortest and longest single-shot durations in the shot information, and generate the special effect segmentation information of the target gift.
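The sketch below shows one possible way to derive segmentation timestamps from the number of segments and the shot durations; the even-split rule bounded by the shortest single-shot duration is an assumption, as no concrete formula is fixed here.

```python
# Hedged sketch of deriving segmentation timestamps; the split rule is an assumption.
def segmentation_info(total_duration, num_segments, min_shot, max_shot):
    # Place boundaries at an even split, but never closer to either end than the
    # shortest single shot, so each segment can hold at least one full shot.
    timestamps = []
    for i in range(1, num_segments):
        boundary = total_duration * i / num_segments
        boundary = min(max(boundary, min_shot), total_duration - min_shot)
        timestamps.append(round(boundary, 2))
    return {"num_segments": num_segments, "timestamps": timestamps,
            "shot_bounds": (min_shot, max_shot)}

# Example: a 12-second effect split into two segments.
info = segmentation_info(12.0, 2, min_shot=0.8, max_shot=3.5)
```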
The device further comprises a first determining module, wherein the first determining module is used for determining specified conditions based on the special effect prerendering template and the special effect style of the target gift, and the specified conditions comprise: in the segmented special effect data, designating triggering conditions of the special effect data of the segment; the trigger condition includes one or more of the following: the method comprises the steps that a main broadcasting object in a virtual living broadcasting room executes a preset gesture, the virtual living broadcasting room receives appointed information, the virtual living broadcasting room completes an appointed task, and a user side sending a gift sending instruction executes appointed behaviors.
The device also comprises a second acquisition module for acquiring gift special effect information of the target gift; determining scene configuration information of a target gift based on the gift special effect information; wherein, the scene configuration information includes: special effect style, special effect playing duration and scene style of the first virtual scene; rendering a preset scene basic model and illumination information based on scene configuration information to obtain prerendering data of a general scene of a target gift; wherein the prerendered data is used for displaying the general scene.
The second acquisition module is further configured to acquire a gift identifier of the target gift from the gift special effect information, and determine a special effect style of the target gift based on the gift identifier; acquiring gift special effect duration of a target gift from the gift special effect information, and determining special effect playing duration of the target gift based on the gift special effect duration and a preset special effect prerendering template; and acquiring live broadcasting room information of the virtual live broadcasting room, extracting scene identification of the first virtual scene from the live broadcasting room information, and determining scene style of the first virtual scene based on the scene identification.
The apparatus further comprises a first deletion module, configured to store, in the terminal device, the pre-rendered data of the general scenes of a plurality of candidate gifts in the virtual live broadcast room, wherein a candidate gift is a gift supported by the virtual live broadcast room and the candidate gifts include the target gift; and to receive the deliverable gifts supported by the specified user side and delete, from the terminal device, the pre-rendered data of the general scenes of gifts other than the deliverable gifts.
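For illustration, the pruning of the pre-render cache can be sketched as follows, modelling the cache as a dictionary keyed by gift identifier (an assumption of the example).

```python
# Sketch of the pre-render cache pruning described above; the cache is modelled
# as a plain dict from gift identifier to pre-rendered scene data (an assumption).
def prune_prerender_cache(cache: dict, deliverable_gift_ids: set) -> dict:
    """Keep only the general-scene pre-render data for gifts the specified
    user side can actually send; drop everything else."""
    return {gift_id: data for gift_id, data in cache.items()
            if gift_id in deliverable_gift_ids}

# Usage: the room supports three candidate gifts, the panel offers only two.
cache = {"bear": "...", "rocket": "...", "castle": "..."}
cache = prune_prerender_cache(cache, {"bear", "rocket"})
```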
The first display module is further configured to render and display a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object to execute preset interaction operation with the anchor object in the virtual living room to obtain an interaction result.
The second display module is further configured to splice the first virtual scene and the general scene in response to the specified condition being triggered; controlling the virtual camera to move so as to display a general scene in the virtual live broadcasting room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and after the general scene is displayed, rendering and displaying the second special effect data in the general scene.
The second display module is further used for rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the appointed operation of the main broadcasting object in the virtual living broadcasting room aiming at the interaction result, and displaying the operation result of the appointed operation in the general scene.
Corresponding to the above method embodiment, referring to fig. 7, a schematic diagram of a rendering device of a gift effect is shown, where the device is disposed on a live broadcast server; the device comprises:
the information return module 702 is configured to receive a gift sending instruction for a target gift, obtain gift special effect information of the target gift, send the gift special effect information to a terminal device, generate special effect segment information of the target gift based on the gift special effect information through the terminal device, and return the special effect segment information to the live broadcast server;
The data obtaining module 704 is configured to obtain segmented special effect data of the target gift based on the special effect segmentation information; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift;
the data display module 706 is configured to return the segmented special effect data to the terminal device, so that the first special effect data is rendered and displayed in the first virtual scene through the terminal device; and responding to the designated condition to be triggered, controlling the virtual live room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
The above gift special effect rendering apparatus receives the gift-sending instruction for the target gift, acquires the gift special effect information of the target gift, sends the gift special effect information to the terminal device, has the terminal device generate the special effect segmentation information of the target gift based on the gift special effect information, and receives the special effect segmentation information returned to the live broadcast server; based on the special effect segmentation information, the segmented special effect data of the target gift is obtained, wherein the segmented special effect data comprises first special effect data rendered in the first virtual scene and second special effect data rendered in the general scene of the target gift; the segmented special effect data is returned to the terminal device so that the first special effect data is rendered and displayed in the first virtual scene through the terminal device; and in response to the specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second special effect data is rendered and displayed in the general scene. In this manner, the live broadcast server and the terminal device exchange special effect information multiple times to render and present the gift effect; through this information interaction process, the gift special effect rendering and the live content in the live broadcast room are integrated with each other, avoiding the gift effect blocking the display of the live content, and enhancing the viewer's sense of immersion when watching the live broadcast.
The device also comprises an information providing module for acquiring the information of the live broadcasting room of the virtual live broadcasting room in the on-air state; the live broadcasting room information at least comprises a scene identifier of a first virtual scene and a gift identifier of a gift supported by the virtual live broadcasting room; and providing the live broadcasting room information to the terminal equipment.
The apparatus further comprises a gift providing module, configured to acquire the operation of the specified user side calling out the gift-sending panel, obtain the deliverable gifts supported by the specified user side from the gift-sending panel, and provide the deliverable gifts to the terminal device.
The device further comprises a condition triggering module, wherein the condition triggering module is used for receiving the information data from the anchor terminal after the first special effect data is displayed, determining whether the specified condition is triggered or not based on the information data, and sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered if the specified condition is triggered.
The embodiment also provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the rendering method of the gift special effect.
The embodiment also provides a live broadcast server, which comprises a processor and a memory, wherein the memory stores machine executable instructions which can be executed by the processor, and the processor executes the machine executable instructions to realize the rendering method of the gift special effect.
Referring to fig. 8, the electronic device or the live server includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, and the processor 100 executing the machine executable instructions to implement the method for rendering the gift special effects.
Further, the electronic device shown in fig. 8 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one bidirectional arrow is shown in Fig. 8, but this does not mean there is only one bus or only one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: constructing a first virtual scene of a virtual live broadcasting room; acquiring sectional special effect data of a target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and responding to the designated condition to be triggered, controlling the virtual live room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In this manner, the front gift effect is rendered and presented in the first virtual scene; once the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second part of the gift effect is rendered and presented in the general scene.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: acquiring gift special effect information of a target gift; generating special effect segmentation information of the target gift based on the gift special effect information and a preset special effect prerendering template, so that the live broadcast server obtains segmented special effect data of the target gift based on the special effect segmentation information, and returning the segmented special effect data to the terminal equipment.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: the gift special effect information at least comprises a gift special effect duration and special effect image data; determining a time period of the gift special effect duration based on the special effect prerendering template; extracting lens information from the special effect image data based on the belonging time period; determining gift effect deduction rhythm data of the target gift based on the shot information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: the lens information includes: shot switching times, single shot shortest duration and single shot longest duration; and carrying out weighted addition on the shot switching times, the single shot shortest time length and the single shot longest time length based on preset weighted parameters to obtain the gift effect deduction rhythm score of the target gift.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: determining the number of segments corresponding to the gift effect deduction rhythm data based on the special effect prerendering template; determining a segmentation time stamp based on the segmentation number, the single shot shortest time length and the single shot longest time length in the shot information, and generating special effect segmentation information of the target gift.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: determining a specified condition based on the special effect prerendering template and the special effect style of the target gift; wherein the specified conditions include: in the segmented special effect data, designating triggering conditions of the special effect data of the segment; the trigger condition includes one or more of the following: the method comprises the steps that a main broadcasting object in a virtual living broadcasting room executes a preset gesture, the virtual living broadcasting room receives appointed information, the virtual living broadcasting room completes an appointed task, and a user side sending a gift sending instruction executes appointed behaviors.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: acquiring gift special effect information of a target gift; determining scene configuration information of a target gift based on the gift special effect information; wherein, the scene configuration information includes: special effect style, special effect playing duration and scene style of the first virtual scene; rendering a preset scene basic model and illumination information based on scene configuration information to obtain prerendering data of a general scene of a target gift; wherein the prerendered data is used for displaying the general scene.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: acquiring a gift identification of a target gift from the gift special effect information, and determining a special effect style of the target gift based on the gift identification; acquiring gift special effect duration of a target gift from the gift special effect information, and determining special effect playing duration of the target gift based on the gift special effect duration and a preset special effect prerendering template; and acquiring live broadcasting room information of the virtual live broadcasting room, extracting scene identification of the first virtual scene from the live broadcasting room information, and determining scene style of the first virtual scene based on the scene identification.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: pre-rendering data of a general scene of a plurality of candidate gifts in the virtual live broadcasting room are stored in the terminal equipment; wherein the candidate gift is a gift supported by the virtual living broadcast room; the alternative gift includes a target gift; and receiving the deliverable gifts supported by the appointed user side, and deleting the prerendered data of the general scenes of the gifts except the deliverable gifts in the terminal equipment.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object to execute preset interaction operation with the anchor object in the virtual living room to obtain an interaction result.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: in response to the specified condition being triggered, stitching the first virtual scene and the general scene; controlling the virtual camera to move so as to display a general scene in the virtual live broadcasting room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and after the general scene is displayed, rendering and displaying the second special effect data in the general scene.
The processor in the electronic device may implement the following operations of the method for rendering the gift effect by executing machine executable instructions: rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the appointed operation of the main broadcasting object in the virtual living broadcasting room aiming at the interaction result, and displaying the operation result of the appointed operation in the general scene.
The processor in the live broadcast server may implement the following operations of the gift effect rendering method by executing machine executable instructions: receiving a gift sending instruction aiming at a target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to terminal equipment, generating special effect segmentation information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect segmentation information to a live broadcast server; based on the special effect segmentation information, obtaining segmented special effect data of the target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift; returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and responding to the designated condition to be triggered, controlling the virtual live room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In this manner, the live broadcast server and the terminal device exchange special effect information multiple times to render and present the gift effect; through this information interaction process, the gift special effect rendering and the live content in the live broadcast room are integrated with each other, avoiding the gift effect blocking the display of the live content, and enhancing the viewer's sense of immersion when watching the live broadcast.
The processor in the live broadcast server may implement the following operations of the gift effect rendering method by executing machine executable instructions: acquiring information of a live broadcasting room of a virtual live broadcasting room in an on-stream state; the live broadcasting room information at least comprises a scene identifier of a first virtual scene and a gift identifier of a gift supported by the virtual live broadcasting room; and providing the live broadcasting room information to the terminal equipment.
The processor in the live broadcast server may implement the following operations of the gift effect rendering method by executing machine executable instructions: and acquiring the operation of calling out the gift sending panel from the designated user side, acquiring the deliverable gift supported by the designated user side from the gift sending panel, and providing the deliverable gift for the terminal equipment.
The processor in the live broadcast server may implement the following operations of the gift effect rendering method by executing machine executable instructions: and after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered.
The embodiment also provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method for rendering gift effects described above.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: constructing a first virtual scene of a virtual live broadcasting room; acquiring sectional special effect data of a target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and responding to the designated condition to be triggered, controlling the virtual live room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In this manner, the front gift effect is rendered and presented in the first virtual scene; once the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second part of the gift effect is rendered and presented in the general scene.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: acquiring gift special effect information of a target gift; generating special effect segmentation information of the target gift based on the gift special effect information and a preset special effect prerendering template, so that the live broadcast server obtains segmented special effect data of the target gift based on the special effect segmentation information, and returning the segmented special effect data to the terminal equipment.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: the gift special effect information at least comprises a gift special effect duration and special effect image data; determining a time period of the gift special effect duration based on the special effect prerendering template; extracting lens information from the special effect image data based on the belonging time period; determining gift effect deduction rhythm data of the target gift based on the shot information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: the lens information includes: shot switching times, single shot shortest duration and single shot longest duration; and carrying out weighted addition on the shot switching times, the single shot shortest time length and the single shot longest time length based on preset weighted parameters to obtain the gift effect deduction rhythm score of the target gift.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: determining the number of segments corresponding to the gift effect deduction rhythm data based on the special effect prerendering template; determining a segmentation time stamp based on the segmentation number, the single shot shortest time length and the single shot longest time length in the shot information, and generating special effect segmentation information of the target gift.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: determining a specified condition based on the special effect prerendering template and the special effect style of the target gift; wherein the specified conditions include: in the segmented special effect data, designating triggering conditions of the special effect data of the segment; the trigger condition includes one or more of the following: the method comprises the steps that a main broadcasting object in a virtual living broadcasting room executes a preset gesture, the virtual living broadcasting room receives appointed information, the virtual living broadcasting room completes an appointed task, and a user side sending a gift sending instruction executes appointed behaviors.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: acquiring gift special effect information of a target gift; determining scene configuration information of a target gift based on the gift special effect information; wherein, the scene configuration information includes: special effect style, special effect playing duration and scene style of the first virtual scene; rendering a preset scene basic model and illumination information based on scene configuration information to obtain prerendering data of a general scene of a target gift; wherein the prerendered data is used for displaying the general scene.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: acquiring a gift identification of a target gift from the gift special effect information, and determining a special effect style of the target gift based on the gift identification; acquiring gift special effect duration of a target gift from the gift special effect information, and determining special effect playing duration of the target gift based on the gift special effect duration and a preset special effect prerendering template; and acquiring live broadcasting room information of the virtual live broadcasting room, extracting scene identification of the first virtual scene from the live broadcasting room information, and determining scene style of the first virtual scene based on the scene identification.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: pre-rendering data of a general scene of a plurality of candidate gifts in the virtual live broadcasting room are stored in the terminal equipment; wherein the candidate gift is a gift supported by the virtual living broadcast room; the alternative gift includes a target gift; and receiving the deliverable gifts supported by the appointed user side, and deleting the prerendered data of the general scenes of the gifts except the deliverable gifts in the terminal equipment.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object to execute preset interaction operation with the anchor object in the virtual living room to obtain an interaction result.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: in response to the specified condition being triggered, stitching the first virtual scene and the general scene; controlling the virtual camera to move so as to display a general scene in the virtual live broadcasting room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and after the general scene is displayed, rendering and displaying the second special effect data in the general scene.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the appointed operation of the main broadcasting object in the virtual living broadcasting room aiming at the interaction result, and displaying the operation result of the appointed operation in the general scene.
The machine-executable instructions stored in the machine-readable storage medium may implement the following operations in the gift-effect rendering method by executing the machine-executable instructions: receiving a gift sending instruction aiming at a target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to terminal equipment, generating special effect segmentation information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect segmentation information to a live broadcast server; based on the special effect segmentation information, obtaining segmented special effect data of the target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in the general scene of the target gift; returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and responding to the designated condition to be triggered, controlling the virtual live room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In this manner, the live broadcast server and the terminal device exchange special effect information multiple times to render and present the gift effect. Through this information exchange, the gift special effect rendering and the live content in the live broadcast room are fused with each other, the live content is prevented from being blocked by the gift effect, and the viewer's sense of immersion when watching the live broadcast is improved.
By executing the machine-executable instructions stored in the machine-readable storage medium, the following operations of the gift-effect rendering method may be implemented: acquiring live broadcast room information of a virtual live broadcast room that is currently streaming, wherein the live broadcast room information includes at least a scene identifier of the first virtual scene and gift identifiers of the gifts supported by the virtual live broadcast room; and providing the live broadcast room information to the terminal device.
By executing the machine-executable instructions stored in the machine-readable storage medium, the following operations of the gift-effect rendering method may be implemented: acquiring, from a designated user side, an operation of calling up the gift-sending panel; acquiring, from the gift-sending panel, the deliverable gifts supported by the designated user side; and providing the deliverable gifts to the terminal device.
By executing the machine-executable instructions stored in the machine-readable storage medium, the following operations of the gift-effect rendering method may be implemented: after the first special effect data is displayed, receiving information data from the anchor end; determining, based on the information data, whether the specified condition is triggered; and, if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal device to indicate that the specified condition has been triggered.
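For illustration, a hedged sketch of how the server might evaluate the specified condition from the anchor end's information data; the TRIGGERS table, the field names in info_data, and the condition identifiers are all assumptions, not values given in the patent.

```python
# Hypothetical check of the specified condition on the server side: after the
# first effect segment has been shown, information data from the anchor end is
# inspected and, if any registered trigger matches, the corresponding
# condition identifier is sent to the terminal device.

TRIGGERS = {
    "cond_gesture": lambda info: info.get("anchor_gesture") == "preset_gesture",
    "cond_message": lambda info: "designated_message" in info.get("messages", []),
    "cond_task":    lambda info: info.get("task_completed", False),
}

def check_specified_condition(info_data: dict, notify_terminal) -> str | None:
    """Return the identifier of the first triggered condition, if any."""
    for condition_id, predicate in TRIGGERS.items():
        if predicate(info_data):
            notify_terminal(condition_id)  # tell the terminal it may switch scenes
            return condition_id
    return None


# Example: the anchor performed the preset gesture, so cond_gesture fires.
check_specified_condition({"anchor_gesture": "preset_gesture"}, print)
```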
The method, apparatus, electronic device, and live broadcast server for rendering gift special effects provided by the embodiments of the present invention include a computer-readable storage medium storing program code. The instructions included in the program code may be used to perform the method described in the foregoing method embodiments; for specific implementations, reference may be made to the method embodiments, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the system and apparatus described above, which are not repeated here.
In addition, in the description of the embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted" and "connected" are to be construed broadly; for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In the description of the present invention, it should be noted that orientations or positional relationships indicated by terms such as "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," and "outer" are based on the orientations or positional relationships shown in the drawings. They are used merely for convenience and simplicity of description and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and should not be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations of the present invention used to illustrate its technical solutions, and are not intended to limit its scope of protection. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent replacements of some of their technical features within the technical scope disclosed herein; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

1. A method for rendering a gift special effect, characterized in that the method is applied to a terminal device and comprises the following steps:
constructing a first virtual scene of a virtual live broadcast room, and acquiring segmented special effect data of a target gift, wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in a general scene of the target gift;
rendering and displaying the first special effect data in the first virtual scene; and
in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene;
wherein the step of, in response to the specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene comprises:
in response to the specified condition being triggered, stitching the first virtual scene and the general scene, wherein the specified condition comprises a trigger condition of a designated segment of special effect data in the segmented special effect data, and the trigger condition comprises one or more of the following: an anchor object in the virtual live broadcast room performs a preset gesture, the virtual live broadcast room receives designated information, the virtual live broadcast room completes a designated task, and a user side that sends a gift-sending instruction performs a designated behavior;
controlling a virtual camera to move so as to display the general scene in the virtual live broadcast room, and rendering and displaying a preset scene-linking special effect while the virtual camera moves; and
rendering and displaying the second special effect data in the general scene after the general scene is displayed.
2. The method of claim 1, wherein the step of acquiring segmented special effect data of the target gift comprises:
acquiring gift special effect information of the target gift; and
generating special effect segmentation information of the target gift based on the gift special effect information and a preset special effect pre-rendering template, so that a live broadcast server obtains the segmented special effect data of the target gift based on the special effect segmentation information and returns the segmented special effect data to the terminal device.
3. The method of claim 2, wherein the gift special effect information includes at least a gift special effect duration and special effect image data;
the step of generating the special effect segmentation information of the target gift based on the gift special effect information and the preset special effect pre-rendering template comprises:
determining, based on the special effect pre-rendering template, a time period to which the gift special effect duration belongs, and extracting shot information from the special effect image data based on the time period;
determining gift effect deduction rhythm data of the target gift based on the shot information; and
generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template.
4. The method of claim 3, wherein the shot information comprises: a number of shot switches, a single-shot shortest duration, and a single-shot longest duration;
the step of determining the gift effect deduction rhythm data of the target gift based on the shot information comprises:
performing weighted addition on the number of shot switches, the single-shot shortest duration, and the single-shot longest duration based on preset weighting parameters to obtain a gift effect deduction rhythm score of the target gift.
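As a hedged illustration of the weighted addition in claim 4, the sketch below computes a rhythm score as w1·(number of shot switches) + w2·(single-shot shortest duration) + w3·(single-shot longest duration); the weight values are placeholders, not values disclosed in the patent.

```python
# Hypothetical weighted-sum rhythm score; the default weights are placeholders.

def rhythm_score(shot_switch_count: int,
                 shortest_shot_s: float,
                 longest_shot_s: float,
                 weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    w1, w2, w3 = weights
    return w1 * shot_switch_count + w2 * shortest_shot_s + w3 * longest_shot_s


# Example: 8 shot switches, shortest shot 0.5 s, longest shot 3.0 s.
score = rhythm_score(8, 0.5, 3.0)   # 0.5*8 + 0.3*0.5 + 0.2*3.0 = 4.75
```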
5. The method of claim 3, wherein the step of generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template comprises:
determining, based on the special effect pre-rendering template, the number of segments corresponding to the gift effect deduction rhythm data; and
determining segmentation timestamps based on the number of segments and on the single-shot shortest duration and single-shot longest duration in the shot information, and generating the special effect segmentation information of the target gift.
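A possible reading of claim 5, sketched under the assumption that the effect is cut into equal segments whose length is clamped to the single-shot duration bounds; the patent does not specify this policy, so the function is illustrative only.

```python
# Hypothetical derivation of segmentation timestamps: the total effect duration
# is divided into the given number of segments, and the segment length is
# clamped so it is no shorter than the single-shot shortest duration and no
# longer than the single-shot longest duration.  The clamping policy is assumed.

def segmentation_timestamps(total_duration_s: float,
                            num_segments: int,
                            shortest_shot_s: float,
                            longest_shot_s: float) -> list[float]:
    raw = total_duration_s / num_segments
    segment_len = min(max(raw, shortest_shot_s), longest_shot_s)
    # Cut points fall at multiples of the clamped segment length.
    return [round(i * segment_len, 3) for i in range(1, num_segments)]


# Example: a 12 s effect cut into 3 segments with shots bounded to [0.5 s, 5 s]
# yields cut points at 4.0 s and 8.0 s.
print(segmentation_timestamps(12.0, 3, 0.5, 5.0))   # [4.0, 8.0]
```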
6. The method of claim 2, wherein, after the step of generating the special effect segmentation information of the target gift based on the gift special effect information and the preset special effect pre-rendering template, the method further comprises:
determining the specified condition based on the special effect pre-rendering template and a special effect style of the target gift.
7. The method of claim 1, wherein, before the step of acquiring the segmented special effect data of the target gift, the method further comprises:
acquiring gift special effect information of the target gift;
determining scene configuration information of the target gift based on the gift special effect information, wherein the scene configuration information includes: a special effect style, a special effect playing duration, and a scene style of the first virtual scene; and
rendering a preset scene base model and illumination information based on the scene configuration information to obtain pre-rendered data of the general scene of the target gift, wherein the pre-rendered data is used for displaying the general scene.
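The scene configuration information of claim 7 can be pictured as a small record feeding the pre-rendering step; the dataclass below mirrors the claim wording, while prerender_general_scene is an invented stand-in for a renderer that combines the preset scene base model with the illumination information.

```python
# Hypothetical representation of the scene configuration information and of
# the pre-rendering step it drives; field and function names are assumptions.

from dataclasses import dataclass

@dataclass
class SceneConfig:
    effect_style: str               # special effect style of the target gift
    effect_play_duration_s: float   # special effect playing duration
    first_scene_style: str          # scene style of the first virtual scene

def prerender_general_scene(config: SceneConfig, base_model, lighting) -> dict:
    """Return pre-rendered data later used to display the general scene."""
    return {
        "model": base_model,
        "lighting": lighting,
        "style": config.effect_style,
        "duration": config.effect_play_duration_s,
        "matches_first_scene": config.first_scene_style,
    }
```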
8. The method of claim 7, wherein the step of determining the scene configuration information of the target gift based on the gift special effect information comprises:
acquiring a gift identifier of the target gift from the gift special effect information, and determining the special effect style of the target gift based on the gift identifier;
acquiring a gift special effect duration of the target gift from the gift special effect information, and determining the special effect playing duration of the target gift based on the gift special effect duration and a preset special effect pre-rendering template; and
acquiring live broadcast room information of the virtual live broadcast room, extracting a scene identifier of the first virtual scene from the live broadcast room information, and determining the scene style of the first virtual scene based on the scene identifier.
9. The method of claim 7, wherein, after the step of rendering the preset scene base model and the illumination information based on the scene configuration information to obtain the pre-rendered data of the general scene of the target gift, the method further comprises:
storing, in the terminal device, pre-rendered data of the general scenes of a plurality of candidate gifts in the virtual live broadcast room, wherein a candidate gift is a gift supported by the virtual live broadcast room, and the candidate gifts include the target gift; and
receiving the deliverable gifts supported by a designated user side, and deleting, from the terminal device, the pre-rendered data of the general scenes of gifts other than the deliverable gifts.
10. The method of claim 1, wherein the step of rendering and displaying the first special effect data in the first virtual scene comprises:
rendering and displaying, in the first virtual scene, a target object corresponding to the target gift; and
controlling the target object to move in the first virtual scene, and controlling the target object and the anchor object in the virtual live broadcast room to perform a preset interaction operation to obtain an interaction result.
11. The method of claim 1, wherein the step of rendering and displaying the second special effect data in the general scene comprises:
rendering and displaying, in the general scene, an interaction result corresponding to the first special effect data; and
in response to a designated operation performed by the anchor object in the virtual live broadcast room on the interaction result, displaying the result of the designated operation in the general scene.
12. A method for rendering a gift special effect, characterized in that the method is applied to a live broadcast server and comprises the following steps:
receiving a gift-sending instruction for a target gift, acquiring gift special effect information of the target gift, and sending the gift special effect information to a terminal device, so that the terminal device generates special effect segmentation information of the target gift based on the gift special effect information and returns the special effect segmentation information to the live broadcast server;
obtaining segmented special effect data of the target gift based on the special effect segmentation information, wherein the segmented special effect data comprises: first special effect data rendered in a first virtual scene, and second special effect data rendered in a general scene of the target gift; and
returning the segmented special effect data to the terminal device, so that the first special effect data is rendered and displayed in the first virtual scene through the terminal device; and, in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene;
wherein the step of, in response to the specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene comprises:
in response to the specified condition being triggered, stitching the first virtual scene and the general scene, wherein the specified condition comprises a trigger condition of a designated segment of special effect data in the segmented special effect data, and the trigger condition comprises one or more of the following: an anchor object in the virtual live broadcast room performs a preset gesture, the virtual live broadcast room receives designated information, the virtual live broadcast room completes a designated task, and a user side that sends the gift-sending instruction performs a designated behavior;
controlling a virtual camera to move so as to display the general scene in the virtual live broadcast room, and rendering and displaying a preset scene-linking special effect while the virtual camera moves; and
rendering and displaying the second special effect data in the general scene after the general scene is displayed.
13. The method of claim 12, wherein, before the step of receiving the gift-sending instruction for the target gift, acquiring the gift special effect information of the target gift, and sending the gift special effect information to the terminal device, the method further comprises:
acquiring live broadcast room information of a virtual live broadcast room in an on-air state, wherein the live broadcast room information includes at least a scene identifier of the first virtual scene and gift identifiers of the gifts supported by the virtual live broadcast room; and
providing the live broadcast room information to the terminal device.
14. The method of claim 12, wherein, before the step of receiving the gift-sending instruction for the target gift, acquiring the gift special effect information of the target gift, and sending the gift special effect information to the terminal device, the method further comprises:
acquiring, from a designated user side, an operation of calling up a gift-sending panel; acquiring, from the gift-sending panel, the deliverable gifts supported by the designated user side; and providing the deliverable gifts to the terminal device.
15. The method of claim 12, wherein the method further comprises:
after the first special effect data is displayed, receiving information data from an anchor end; determining, based on the information data, whether the specified condition is triggered; and, if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal device to indicate that the specified condition has been triggered.
16. A device for rendering a gift special effect, the device being disposed in a terminal device and comprising:
a first acquisition module, configured to construct a first virtual scene of a virtual live broadcast room and acquire segmented special effect data of a target gift, wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in a general scene of the target gift;
a first display module, configured to render and display the first special effect data in the first virtual scene; and
a second display module, configured to, in response to a specified condition being triggered, control the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and render and display the second special effect data in the general scene;
wherein the second display module is further configured to, in response to the specified condition being triggered, stitch the first virtual scene and the general scene; control a virtual camera to move so as to display the general scene in the virtual live broadcast room; render and display a preset scene-linking special effect while the virtual camera moves; and render and display the second special effect data in the general scene after the general scene is displayed; and the specified condition comprises a trigger condition of a designated segment of special effect data in the segmented special effect data, the trigger condition comprising one or more of the following: an anchor object in the virtual live broadcast room performs a preset gesture, the virtual live broadcast room receives designated information, the virtual live broadcast room completes a designated task, and a user side that sends a gift-sending instruction performs a designated behavior.
17. A device for rendering a gift special effect, the device being disposed on a live broadcast server and comprising:
an information return module, configured to receive a gift-sending instruction for a target gift, acquire gift special effect information of the target gift, and send the gift special effect information to a terminal device, so that the terminal device generates special effect segmentation information of the target gift based on the gift special effect information and returns the special effect segmentation information to the live broadcast server;
a data acquisition module, configured to obtain segmented special effect data of the target gift based on the special effect segmentation information, wherein the segmented special effect data comprises: first special effect data rendered in a first virtual scene, and second special effect data rendered in a general scene of the target gift; and
a data display module, configured to return the segmented special effect data to the terminal device, so that the first special effect data is rendered and displayed in the first virtual scene through the terminal device and, in response to a specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift and the second special effect data is rendered and displayed in the general scene;
wherein the data display module is further configured to, in response to the specified condition being triggered, stitch the first virtual scene and the general scene; control a virtual camera to move so as to display the general scene in the virtual live broadcast room; render and display a preset scene-linking special effect while the virtual camera moves; and render and display the second special effect data in the general scene after the general scene is displayed; and the specified condition comprises a trigger condition of a designated segment of special effect data in the segmented special effect data, the trigger condition comprising one or more of the following: an anchor object in the virtual live broadcast room performs a preset gesture, the virtual live broadcast room receives designated information, the virtual live broadcast room completes a designated task, and a user side that sends the gift-sending instruction performs a designated behavior.
18. An electronic device, comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, wherein the processor executes the machine-executable instructions to implement the method for rendering a gift special effect of any one of claims 1-11.
19. A live broadcast server, comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, wherein the processor executes the machine-executable instructions to implement the method for rendering a gift special effect of any one of claims 12-15.
20. A machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method for rendering a gift special effect of any one of claims 1-11 or of any one of claims 12-15.
CN202210653955.1A 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server Active CN115225923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210653955.1A CN115225923B (en) 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server

Publications (2)

Publication Number Publication Date
CN115225923A CN115225923A (en) 2022-10-21
CN115225923B true CN115225923B (en) 2024-03-22

Family

ID=83608157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210653955.1A Active CN115225923B (en) 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server

Country Status (1)

Country Link
CN (1) CN115225923B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116456131B (en) * 2023-03-13 2023-12-19 北京达佳互联信息技术有限公司 Special effect rendering method and device, electronic equipment and storage medium
CN117119259B (en) * 2023-09-07 2024-03-08 北京优贝在线网络科技有限公司 Scene analysis-based special effect self-synthesis system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109218796A (en) * 2017-06-30 2019-01-15 武汉斗鱼网络科技有限公司 A kind of method and apparatus showing virtual present special efficacy
CN111225231A (en) * 2020-02-25 2020-06-02 广州华多网络科技有限公司 Virtual gift display method, device, equipment and storage medium
CN111277854A (en) * 2020-03-04 2020-06-12 网易(杭州)网络有限公司 Display method and device of virtual live broadcast room, electronic equipment and storage medium
CN111314730A (en) * 2020-02-25 2020-06-19 广州华多网络科技有限公司 Virtual resource searching method, device, equipment and storage medium for live video
CN112533002A (en) * 2020-11-17 2021-03-19 南京邮电大学 Dynamic image fusion method and system for VR panoramic live broadcast
CN113329234A (en) * 2021-05-28 2021-08-31 腾讯科技(深圳)有限公司 Live broadcast interaction method and related equipment
CN113395533A (en) * 2021-05-24 2021-09-14 广州博冠信息科技有限公司 Virtual gift special effect display method and device, computer equipment and storage medium
CN113840156A (en) * 2021-09-22 2021-12-24 广州方硅信息技术有限公司 Live broadcast interaction method and device based on virtual gift and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant