CN112218108B - Live broadcast rendering method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN112218108B
CN112218108B (application CN202010987760.1A)
Authority
CN
China
Prior art keywords
rendering, rendered, video, effect, target
Prior art date
Legal status
Active
Application number
CN202010987760.1A
Other languages
Chinese (zh)
Other versions
CN112218108A
Inventor
杨呈
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd
Priority application: CN202010987760.1A
Publication of application: CN112218108A
Application granted
Publication of grant: CN112218108B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application provides a live broadcast rendering method and apparatus, an electronic device, and a storage medium, relating to the technical field of live broadcast. In response to a received video rendering request, an object to be rendered corresponding to the request is obtained from a video frame to be rendered; the object to be rendered is then rendered using the obtained video rendering parameters to obtain a target rendering result; finally, the target rendering result is fused with the video frame to be rendered to obtain a live video frame. In this way, only the designated object in the video frame is rendered, rather than the entire video frame, which improves the flexibility of the rendering operation.

Description

Live broadcast rendering method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of live broadcast, in particular to a live broadcast rendering method and device, electronic equipment and a storage medium.
Background
In a scenario such as a live webcast, a live video frame may be rendered, for example by adding a live special effect to it, so as to enrich the picture content of the broadcast.
However, some rendering schemes can only render the entire picture content of a live video frame, so their rendering flexibility is poor.
Disclosure of Invention
An object of the present application is to provide a live broadcast rendering method, apparatus, electronic device, and storage medium that improve the flexibility of rendering operations.
To achieve this object, the present application adopts the following technical solutions:
in a first aspect, the present application provides a live rendering method, including:
responding to a received video rendering request, and acquiring an object to be rendered corresponding to the video rendering request in a video frame to be rendered;
rendering the object to be rendered by using the obtained video rendering parameters to obtain a target rendering result;
and fusing the target rendering result and the video frame to be rendered to obtain a live video frame.
In a second aspect, the present application provides a live broadcast rendering apparatus, the apparatus comprising:
the processing module is used for responding to the received video rendering request and acquiring an object to be rendered corresponding to the video rendering request in a video frame to be rendered;
the rendering module is used for rendering the object to be rendered by using the acquired video rendering parameters to obtain a target rendering result;
and the processing module is further used for fusing the target rendering result and the video frame to be rendered to obtain a live video frame.
In a third aspect, the present application provides an electronic device comprising a memory for storing one or more programs and a processor; when the one or more programs are executed by the processor, the live rendering method described above is implemented.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the live rendering method described above.
According to the live broadcast rendering method and apparatus, electronic device, and storage medium provided herein, in response to a received video rendering request, an object to be rendered corresponding to the request is obtained from a video frame to be rendered; the object to be rendered is then rendered using the obtained video rendering parameters to obtain a target rendering result; finally, the target rendering result is fused with the video frame to be rendered to obtain a live video frame. In this way, only the designated object in the video frame is rendered, rather than the entire video frame, which improves the flexibility of the rendering operation.
In order to make the aforementioned objects, features and advantages of the present application comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solution of the present application more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic view illustrating an interactive scene of a live broadcast system provided in the present application;
FIG. 2 is a block diagram of a schematic configuration of an electronic device provided in the present application;
FIG. 3 illustrates an exemplary flow chart of a live rendering method provided herein;
fig. 4 shows a video frame schematic diagram after rendering by using the live broadcast rendering method provided by the present application;
FIG. 5 illustrates an exemplary flow chart of sub-steps of step 203 of FIG. 3;
fig. 6 shows an exemplary structural block diagram of a live rendering apparatus provided in the present application.
In the figure: 100-an electronic device; 101-a memory; 102-a processor; 103-a communication interface; 300-live rendering means; 301-a processing module; 302-rendering module.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the technical solutions in the present application are described below clearly and completely with reference to the accompanying drawings of some embodiments. The described embodiments are evidently only some, not all, of the embodiments of the present application; the components of the present application, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations.
Thus, the following detailed description of the embodiments, as presented in the figures, is not intended to limit the scope of the claimed application but is merely representative of selected embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic view of an interactive scene of a live broadcast system provided in the present application, which in some embodiments may be an internet live streaming platform. The live broadcast system may comprise a server, a live broadcast initiating terminal, and a live broadcast receiving terminal; the server can communicate with both terminals and provide live broadcast services to them. For example, the anchor may provide a live stream online in real time through the live initiator and transmit it to the server, and the live receiver may pull the live stream from the server for online viewing or playback.
In some implementations, the live receiver and the live initiator may be used interchangeably. For example, a anchor of a live originator may use the live originator to provide live video services to viewers, or as viewers to view live video provided by other anchors. For another example, a viewer at a live receiver may also use the live receiver to watch live video provided by a concerned anchor, or serve as the anchor to provide live video services to other viewers.
In some embodiments, the live receiver and the live initiator may include, but are not limited to, a mobile device, a tablet computer, a laptop computer, or any combination of two or more thereof. In some embodiments, the mobile device may include, but is not limited to, a wearable device, a smart mobile device, an augmented reality device, and the like, or any combination thereof. In some embodiments, the smart mobile device may include, but is not limited to, a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, or a point of sale (POS) device, or the like, or any combination thereof.
In addition, in some possible implementations, there may be zero, one, or more live receivers and live initiators accessing the server; only one of each is shown in fig. 1. The live broadcast receiving end and the live broadcast initiating end may be provided with internet products for providing internet live broadcast services; for example, such a product may be an application (APP), a Web page, an applet, or the like used in a computer or smartphone in connection with internet live broadcast services.
In some embodiments, the server may be a single physical server or a server group consisting of a plurality of physical servers for performing different data processing functions. The set of servers can be centralized or distributed (e.g., the servers can be a distributed system). In some possible embodiments, such as where the server employs a single physical server, the physical server may be assigned different logical server components based on different live service functions.
It will be appreciated that the live system shown in fig. 1 is only one possible example, and that in other possible embodiments of the present application, the live system may also include only some of the components shown in fig. 1 or may also include other components.
In a live broadcast scene shown in fig. 1, for example, in the process of performing video live broadcast, a main broadcast on the live broadcast initiating end side may select to add some rendering special effects in a live broadcast picture in combination with some live broadcast requirements, so as to enrich the picture content during live broadcast and improve the viewing experience of a viewer.
For example, in some rendering schemes, the live broadcast initiating terminal responds to a rendering request from the anchor, renders the picture content of a live video frame, and sends the rendered frame to the server; the live broadcast receiving terminal then pulls the rendered frame from the server and plays it, so that a viewer at the receiving end can watch the rendered live picture.
However, in such schemes in which the live broadcast initiator renders the picture content of the live video frame, the rendering object is the entire picture content of the frame; specific objects within the frame cannot be rendered individually, which results in poor rendering flexibility.
Based on this, in order to improve at least some of the defects in the rendering scheme, one possible implementation manner provided by the present application is: responding to a received video rendering request, and acquiring an object to be rendered corresponding to the video rendering request in a video frame to be rendered; then, rendering the object to be rendered by using the obtained video rendering parameters to obtain a target rendering result; fusing the target rendering result with the video frame to be rendered to obtain a live video frame; therefore, only the designated object in the video frame can be rendered, the whole video frame does not need to be rendered, and the flexibility in the rendering operation is improved.
Referring to fig. 2, fig. 2 shows a schematic block diagram of an electronic device 100 provided in the present application, and in some embodiments, the electronic device 100 may serve as a live broadcast initiator in fig. 1, and may also serve as a server in fig. 1.
Additionally, in some embodiments, electronic device 100 may include memory 101, processor 102, and communication interface 103, with memory 101, processor 102, and communication interface 103 being electrically connected to one another, directly or indirectly, to enable the transfer or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 101 may be configured to store software programs and modules, such as program instructions/modules corresponding to the live broadcast rendering apparatus provided in the present application, and the processor 102 executes the software programs and modules stored in the memory 101 to execute various functional applications and data processing, thereby executing the steps of the live broadcast rendering method provided in the present application. The communication interface 103 may be used for communicating signaling or data with other node devices.
The memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in fig. 2 is merely illustrative and that electronic device 100 may include more or fewer components than shown in fig. 2 or may have a different configuration than shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
The electronic device 100 shown in fig. 2 is taken as an exemplary execution subject, and a live rendering method provided by the present application is exemplarily described below; it is understood that, in some embodiments, the electronic device 100 may serve as the live broadcast initiating terminal in fig. 1, and execute the live broadcast rendering method provided by the present application by receiving rendering parameters input by the anchor broadcast on the live broadcast initiating terminal side; of course, in some other embodiments, the electronic device 100 may also serve as the server in fig. 1, and execute the live rendering method provided by the present application by receiving the rendering parameter sent by the live initiator.
Referring to fig. 3, fig. 3 shows an exemplary flowchart of a live rendering method provided in the present application, and in some embodiments, the live rendering method may include the following steps:
step 201, in response to the received video rendering request, acquiring an object to be rendered corresponding to the video rendering request in the video frame to be rendered.
And step 203, rendering the object to be rendered by using the obtained video rendering parameters to obtain a target rendering result.
And step 205, fusing the target rendering result with the video frame to be rendered to obtain a live video frame.
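As a rough illustration, steps 201, 203, and 205 above can be sketched as follows. This is a minimal sketch, assuming a toy frame represented as a mapping from pixel positions to grayscale values; the object detector and the rendering parameters are hypothetical stand-in callables, not anything specified by the present application.

```python
def extract_object(frame, request):
    # Step 201: pick out the pixels belonging to the requested object.
    return {pos: px for pos, px in frame.items() if request(pos, px)}

def render_object(obj, params):
    # Step 203: apply the configured rendering parameters to each pixel.
    return {pos: params(px) for pos, px in obj.items()}

def fuse(frame, rendered):
    # Step 205: overwrite only the rendered object's pixels in the frame.
    live = dict(frame)
    live.update(rendered)
    return live

# Toy frame: positions mapped to grayscale values; the "object" here is
# simply the bright pixels, standing in for e.g. a detected character.
frame = {(0, 0): 10, (0, 1): 200, (1, 0): 15, (1, 1): 220}
obj = extract_object(frame, lambda pos, px: px > 100)
rendered = render_object(obj, lambda px: min(px + 30, 255))
live_frame = fuse(frame, rendered)
```

The dark "background" pixels pass through untouched; only the designated object is changed, which is the flexibility the method aims for.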
In some embodiments, taking the electronic device as the live broadcast initiating terminal in fig. 1 as an example, the display screen of the live broadcast initiating terminal may provide a rendering operation interface configured with a number of rendering options, such as the rendered object, the rendering effect, and the rendering parameters. During a live video broadcast, the anchor may operate this interface to generate a video rendering request for a specified rendering object (such as a character in the live picture) and configure the corresponding video rendering parameters, such as the rendering effect, rendering duration, and rendering position.
Correspondingly, when the live broadcast initiating end receives a video rendering request input by the anchor during a live video broadcast, it may respond to the request by taking a video frame in the live code stream generated during the broadcast as the video frame to be rendered, so as to obtain from that frame the object to be rendered corresponding to the video rendering request.
For example, assuming that a video rendering request input by the anchor is to render a character object in a video frame, the live broadcast initiator may identify the character object included in the video frame by using a scheme such as image recognition, so as to use the character object as an object to be rendered; or, assuming that the video rendering request input by the anchor is to render the background area in the video frame, the live broadcast initiating end may identify the background area in the video frame and use the identified background area as the object to be rendered.
In some embodiments, in the process of executing step 201, the live broadcast initiating end may apply image segmentation to the video frame to be rendered, splitting it into the object to be rendered and the remaining image area. In other possible embodiments of the present application, the live broadcast initiating end may instead adopt an image extraction scheme: the target object to be rendered is extracted from the video frame to be rendered, while the complete video frame is retained.
Next, the live broadcast initiating terminal may render the obtained object to be rendered based on, for example, the video rendering parameters received from the anchor, so as to obtain a target rendering result, i.e., the rendering result that the live broadcast initiating terminal generates for the object to be rendered in response to the video rendering request. In some embodiments, the received video rendering parameters may indicate the operation content for rendering the object, such as the configured number of special effects, special effect speed, special effect size, and special effect offset.
Then, based on the target rendering result obtained by the live broadcast initiating terminal, the live broadcast initiating terminal may fuse the target rendering result with the video frame to be rendered, so as to obtain the live broadcast video frame which is rendered only for the object to be rendered as shown in fig. 4.
In some embodiments, in the process of executing step 205, the live broadcast initiating end may fuse the obtained target rendering result with the remaining image area after the object to be rendered is segmented, or may fuse the obtained target rendering result with the stored complete video frame to be rendered, which is not limited in this application.
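The two fusion modes just described can be illustrated with a minimal sketch, reusing the toy pixel-map representation (all names are hypothetical): whether the target rendering result is merged with the segmented remainder or with the stored complete frame, the resulting live video frame is the same, because the rendered pixels take precedence at overlapping positions.

```python
def fuse(base, rendered):
    # The target rendering result always wins at overlapping positions.
    live = dict(base)
    live.update(rendered)
    return live

full_frame = {(0, 0): 10, (0, 1): 200}   # stored complete frame to be rendered
rendered = {(0, 1): 230}                 # target rendering result for the object
remainder = {(0, 0): 10}                 # frame minus the segmented object

via_remainder = fuse(remainder, rendered)
via_full_frame = fuse(full_frame, rendered)
```

This is why the application does not need to limit which of the two modes is used.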
Therefore, based on the scheme provided by the application, the received video rendering request is responded, and the object to be rendered corresponding to the video rendering request in the video frame to be rendered is obtained; then, rendering the object to be rendered by using the obtained video rendering parameters to obtain a target rendering result; fusing the target rendering result with the video frame to be rendered to obtain a live video frame; therefore, only the designated object in the video frame can be rendered, the whole video frame does not need to be rendered, and the flexibility in the rendering operation is improved.
It is understood that, in some possible scenarios, depending on the content of the live broadcast, the anchor may set a different number of rendering effects for the object to be rendered; for example, only one rendering effect may be set, or several may be set.
For example, in the above scenario in which the live broadcast initiator in fig. 1 is used as the electronic device for executing the live broadcast rendering method of the present application, the live broadcast initiator may directly receive the video rendering parameters input by the anchor to execute the live broadcast rendering method of the present application.
For example, in a scenario in which the server in fig. 1 executes the live rendering method of the present application, the video rendering parameters input by the anchor must be forwarded through the live initiator. To make the acquisition of video rendering parameters more general, the live initiator may, after receiving the video rendering parameters input by the anchor, store them in a preset configuration file template so as to generate a parameter configuration file.
Thus, when the live broadcast initiating end in fig. 1 serves as the execution subject of the live broadcast rendering method provided herein, it may parse the parameter configuration file before executing step 203 to obtain the video rendering parameters for the object to be rendered, and then perform step 203 based on the parsed parameters.
When the server in fig. 1 serves as the execution subject of the live rendering method provided herein, the live broadcast initiating end may send the generated parameter configuration file to the server; before executing step 203, the server parses the configuration file to obtain the video rendering parameters for the object to be rendered, and then performs step 203.
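A minimal sketch of such a parameter configuration file and its parsing is shown below. The JSON layout and all field names are purely illustrative assumptions, since the application does not specify the template format.

```python
import json

# Hypothetical parameter configuration file generated by the live
# initiator; "target", "effects", and the effect fields are invented.
CONFIG_TEXT = """
{
  "target": "person",
  "effects": [
    {"name": "random_shake", "number": 0.07, "speed": 0.8},
    {"name": "multi_ghost_split", "copies": 3}
  ]
}
"""

def parse_render_config(text):
    # Parse the configuration file into the object to render and the
    # list of effect specifications it carries.
    config = json.loads(text)
    return config["target"], config["effects"]

target, effects = parse_render_config(CONFIG_TEXT)
```

Packing the parameters into a file like this is what lets either the initiator or the server perform step 203 with the same parsing code.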
In some possible scenarios, a rendering effect library may be preset at the live broadcast initiating end, in which the function implementation modules of a number of rendering effects are stored in advance, for example modules for effects such as "random shaking", "multi-ghost split", "channel split", and "particle special effect". The live broadcast initiating terminal can call and run the function implementation module corresponding to each rendering effect to realize the various effects.
In some embodiments, the anchor may configure at least one rendering effect based on some personalized selections in configuring the video rendering parameters.
On this basis, so that the live broadcast initiating end can meet different rendering requirements when rendering an object to be rendered, referring to fig. 5 (which, building on fig. 3, shows an exemplary flowchart of the sub-steps of step 203), step 203 may, as one possible implementation, include the following sub-steps:
step 203-1, configuring a rendering effect object set corresponding to the video rendering parameter from a preset rendering effect library.
And 203-3, rendering the object to be rendered based on the rendering effect object set to obtain a target rendering result.
In some embodiments, during the execution of step 203, the live broadcast initiating end may first analyze the video rendering parameters and, based on them, configure a rendering effect object set from the preset rendering effect library; for example, at least one rendering effect corresponding to the video rendering parameters is added to a pre-constructed queue, yielding the rendering effect object set corresponding to the video rendering request. The set contains at least one rendering effect.
Next, the live broadcast initiating end may render the object to be rendered based on the rendering effect object set; for example, the live broadcast initiating end may invoke a rendering engine shader to render each rendering effect in the set of rendering effect objects to the object to be rendered based on the set of rendering effect objects, so as to obtain a target rendering result.
It can be understood that, since the rendering effect object set includes at least one rendering effect, the number of rendering effects it contains is greater than or equal to one. In some possible scenarios, when the set includes only one rendering effect, the live video frame displays only that effect; in other scenarios, when the set includes multiple rendering effects, the live video frame may display several effects, such as "random shaking" and "multi-ghost split", as shown in fig. 4.
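The idea of queuing up rendering effects from a preset library and applying them in order can be sketched as follows. The library contents and the per-pixel effects are hypothetical stand-ins for the stored function implementation modules, again using the toy pixel-map representation.

```python
# Hypothetical effect library: each entry stands in for a stored
# "function implementation module" and operates on one pixel value.
EFFECT_LIBRARY = {
    "brighten": lambda px: min(px + 40, 255),
    "invert": lambda px: 255 - px,
}

def build_effect_set(names):
    # Sub-step 203-1: queue up the requested effects, in order,
    # from the preset rendering effect library.
    return [EFFECT_LIBRARY[name] for name in names]

def apply_effects(obj, effect_set):
    # Sub-step 203-3: run every pixel of the object to be rendered
    # through each effect in the set, in turn.
    out = dict(obj)
    for effect in effect_set:
        out = {pos: effect(px) for pos, px in out.items()}
    return out

obj = {(0, 0): 100, (0, 1): 240}
effect_set = build_effect_set(["brighten", "invert"])
result = apply_effects(obj, effect_set)
```

Because the set is an ordered queue, a single effect and a chain of effects are handled by the same code path.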
It should be noted that, in some possible scenarios, in order to improve generality across the anchor's different rendering requirements, the preset rendering effect library may store only the function implementation modules of the rendering effects, not their implementation parameters (such as rendering size, rendering position, and rendering speed); the rendering parameters for the different effects may instead be input by the anchor together with the video rendering request.
Thus, in some embodiments, the video rendering parameters acquired by the live broadcast initiating terminal may include video rendering contents and display parameters corresponding to each video rendering content; for example, the video rendering content may indicate a corresponding rendering effect, and the display parameter may be used to indicate an implementation parameter of the corresponding rendering effect, such as a rendering size, a rendering position, a rendering speed, and the like.
Based on this, in some embodiments, when executing step 203-1 the live broadcast initiating end may determine, based on the video rendering content in the obtained video rendering parameters, all rendering effect objects corresponding to that content from the preset rendering effect library to obtain an initial rendering effect set; that is, the live broadcast initiating end first adds the rendering effect templates indicated in the video rendering parameters to the initial rendering effect set.
Next, the live broadcast initiating end may configure, based on the display parameter corresponding to each video rendering content included in the video rendering parameters, the corresponding display parameter for all rendering effect objects in the initial rendering effect set, so as to obtain a rendering effect object set.
For example, suppose the initial rendering effect set includes the rendering effect "random shaking", and the display parameters corresponding to "random shaking" in the video rendering parameters are "amount: 0.07, speed: 0.8". The live broadcast initiating end can then fill these display parameters into the function implementation module corresponding to "random shaking", so that when executing step 203-3 it can realize the "random shaking" effect based on "amount: 0.07, speed: 0.8".
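A hedged sketch of this parameter filling (the class and field names are hypothetical, not from the patent): each effect object pairs a function implementation module from the library with the display parameters supplied in the request.

```python
# Hypothetical sketch of step 203-1 with display parameters: the preset
# library stores only function implementation modules; the per-request
# display parameters (e.g. amount/speed for "random shaking") are filled
# in when the rendering effect object set is configured.

class EffectObject:
    def __init__(self, name):
        self.name = name          # which function implementation module
        self.display_params = {}  # filled from the video rendering parameters

    def configure(self, display_params):
        self.display_params.update(display_params)
        return self

def build_effect_set(video_rendering_params):
    """Maps each video rendering content to an effect object carrying the
    display parameters the anchor input with the video rendering request."""
    return [EffectObject(content).configure(display)
            for content, display in video_rendering_params.items()]

effect_set = build_effect_set({"random_shake": {"amount": 0.07, "speed": 0.8}})
print(effect_set[0].display_params)  # {'amount': 0.07, 'speed': 0.8}
```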
In addition, it can be understood that the implementation described above takes rendering of a single object to be rendered as an example. In some possible scenarios of the present application, given the anchor's personalized configuration, there may be multiple objects to be rendered, and the video rendering parameters obtained by the live broadcast initiating end may then include video rendering content corresponding to each object to be rendered.
For example, as shown in fig. 4, the anchor may configure some rendering effects not only for the character object in fig. 4, but also for the background area in fig. 4; alternatively, in some other possible scenarios in the present application, the anchor may also configure some rendering effects for some decorative pendants of the video frame.
For this reason, in a scene with multiple objects to be rendered, the live broadcast initiating end may, for example, perform layered rendering for the different objects to be rendered.
For example, when there are multiple objects to be rendered, the live broadcast initiating terminal obtains the rendering effects corresponding to all video rendering contents from the preset rendering effect library to obtain an initial rendering effect object set during the process of executing step 203-1.
For example, assuming that the plurality of objects to be rendered include an object a and an object B, the rendering effect corresponding to the object a includes effect 1 and effect 2, and the rendering effect included in the object B includes effect 3, effect 4 and effect 5, the initial rendering effect object set may be expressed as { effect 1; effect 2; effect 3; effect 4; effect 5 }.
Next, to distinguish which object to be rendered each rendering effect targets, the live broadcast initiating end may add, in the initial rendering effect object set, a sentinel identifier pair corresponding to each object to be rendered, so as to obtain the rendering effect object set.
In some embodiments, each sentinel identifier pair may include a start sentinel identifier, used to indicate the first rendering effect of the corresponding object to be rendered, and an end sentinel identifier, used to indicate its last rendering effect.
For example, continuing the previous example, after the live broadcast initiating end adds the sentinel identifier pair for each of object A and object B to the initial rendering effect object set, the resulting rendering effect object set may be represented as: { Start A; effect 1; effect 2; End A; Start B; effect 3; effect 4; effect 5; End B }; this set expresses that effects 1 and 2 are used to render object A, while effects 3, 4, and 5 are used to render object B.
In this way, when executing step 203-3, the live broadcast initiating end may render each corresponding object to be rendered based on all the sentinel identifier pairs in the rendering effect object set, so as to obtain the target rendering result.
For example, with the rendering effect object set { Start A; effect 1; effect 2; End A; Start B; effect 3; effect 4; effect 5; End B }: while traversing the set, when the live broadcast initiating end reaches "Start A" it renders object A in the video frame with the subsequently obtained effect 1 and effect 2, until it reaches "End A", completing the rendering of object A in the video frame; when it then reaches "Start B" it renders object B with the subsequently obtained effect 3, effect 4, and effect 5, until it reaches "End B", completing the rendering of object B. Performing this rendering operation on each video frame in the video bitstream achieves cyclic rendering of the video bitstream.
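The traversal described above can be sketched as follows (a minimal illustration with hypothetical names; real effects would be shader passes rather than strings):

```python
# Hypothetical sketch of layered rendering driven by sentinel identifier
# pairs: "start X" switches effect application to object X in the frame,
# "end X" closes it, and every item in between is an effect applied to X.

def render_frame(effect_object_set, apply_effect):
    current = None  # which object to be rendered is active
    for item in effect_object_set:
        if item.startswith("start "):
            current = item.split(" ", 1)[1]
        elif item.startswith("end "):
            current = None
        else:
            apply_effect(current, item)  # render this effect onto `current`

effect_object_set = [
    "start A", "effect 1", "effect 2", "end A",
    "start B", "effect 3", "effect 4", "effect 5", "end B",
]
applied = []
render_frame(effect_object_set, lambda obj, eff: applied.append((obj, eff)))
print(applied[:2])  # [('A', 'effect 1'), ('A', 'effect 2')]
```

Running `render_frame` on every frame of the bitstream reproduces the per-frame traversal the paragraph above walks through.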
In addition, in some possible scenarios, as live video quality improves, the data volume of a video frame is generally large, and the live broadcast initiating end needs to consume certain processing resources and processing time to render an object to be rendered obtained from a video frame.
Based on this, to speed up rendering, in other possible scenarios the anchor may also input a scaling ratio together with the video rendering parameters, so that the live broadcast initiating end can scale the object to be rendered before rendering it, reducing the processing resources and processing time consumed.
To this end, as another possible implementation, when executing step 203 the live broadcast initiating end may first reduce the object to be rendered according to the scaling ratio in the video rendering parameters, so as to reduce its data volume.
Then, the live broadcast initiating end may render the reduced object to be rendered according to the display parameters in the video rendering parameters to obtain an initial rendering result.
Finally, the live broadcast initiating end may enlarge the initial rendering result according to the scaling ratio, that is, restore the reduced rendering object to its size before reduction, to obtain the target rendering result.
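Under the assumption that rendering cost scales with pixel count, the reduce → render → enlarge path can be sketched as follows (the sizes and the render callback are illustrative, not taken from the patent):

```python
# Hypothetical sketch of step 203 with a scaling ratio: reduce the object
# to be rendered, render the smaller version (cheaper), then enlarge the
# initial rendering result back to the pre-reduction size.

def render_with_scaling(obj_size, scale, render_fn):
    w, h = obj_size
    reduced_size = (round(w * scale), round(h * scale))  # reduction step
    initial_result = render_fn(reduced_size)             # render fewer pixels
    # amplification step: restore the result to the size before reduction
    return {"size": (w, h), "content": initial_result["content"]}

target = render_with_scaling(
    (1920, 1080), 0.5,
    lambda size: {"size": size, "content": "shaken"},
)
print(target["size"])  # (1920, 1080)
```

With a 0.5 ratio the render callback touches a quarter of the pixels, which is where the saving in processing resources and time comes from.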
Moreover, it should be noted that the foregoing embodiments take the live broadcast initiating end in fig. 1 as the example execution subject; in some other possible embodiments of the present application, the live rendering scheme provided by the present application may also be executed by the server in fig. 1, which is not limited by the present application.
In addition, based on the same inventive concept as the live rendering method provided in the present application, fig. 6 shows an exemplary structural block diagram of a live rendering apparatus 300 provided in the present application; the live rendering apparatus 300 may include a processing module 301 and a rendering module 302.
The processing module 301 is configured to, in response to the received video rendering request, obtain an object to be rendered corresponding to the video rendering request in a video frame to be rendered;
the rendering module 302 is configured to render the object to be rendered by using the obtained video rendering parameter, so as to obtain a target rendering result;
the processing module 301 is further configured to fuse the target rendering result with the video frame to be rendered to obtain a live video frame.
Optionally, as a possible implementation manner, when the rendering module 302 renders the object to be rendered by using the obtained video rendering parameter and obtains the target rendering result, it is specifically configured to:
configuring a rendering effect object set corresponding to the video rendering parameters from a preset rendering effect library; wherein the set of rendering effect objects comprises at least one rendering effect;
and rendering the object to be rendered based on the rendering effect object set to obtain a target rendering result.
Optionally, as a possible implementation manner, the video rendering parameters include video rendering contents and a display parameter corresponding to each video rendering content;
when configuring a rendering effect object set corresponding to the video rendering parameter from a preset rendering effect library, the rendering module 302 is specifically configured to:
determining all rendering effect objects corresponding to video rendering contents from a preset rendering effect library to obtain an initial rendering effect set;
and configuring corresponding display parameters for all rendering effect objects in the initial rendering effect set to obtain a rendering effect object set.
Optionally, as a possible implementation, there are a plurality of objects to be rendered; the video rendering parameters comprise video rendering contents corresponding to each object to be rendered;
when configuring a rendering effect object set corresponding to the video rendering parameter from a preset rendering effect library, the rendering module 302 is specifically configured to:
acquiring rendering effects corresponding to all video rendering contents from a preset rendering effect library to obtain an initial rendering effect object set;
adding a sentinel identifier pair corresponding to each object to be rendered in the initial rendering effect object set to obtain a rendering effect object set; each sentinel identifier pair includes a start sentinel identifier, used to indicate the first rendering effect of the corresponding object to be rendered, and an end sentinel identifier, used to indicate its last rendering effect;
the rendering module 302, when rendering the object to be rendered based on the rendering effect object set to obtain the target rendering result, is specifically configured to:
and rendering each corresponding object to be rendered based on all the sentinel identifier pairs in the rendering effect object set to obtain a target rendering result.
Optionally, as a possible implementation manner, when rendering an object to be rendered based on a rendering effect object set and obtaining a target rendering result, the rendering module 302 is specifically configured to:
and, based on the rendering effect object set, invoking a rendering engine shader to render the object to be rendered so as to obtain a target rendering result.
Optionally, as a possible implementation manner, when the rendering module 302 renders the object to be rendered by using the obtained video rendering parameter and obtains the target rendering result, it is specifically configured to:
carrying out reduction processing on the object to be rendered according to the scaling in the video rendering parameters;
rendering the reduced object to be rendered according to display parameters in the video rendering parameters to obtain an initial rendering result;
and amplifying the initial rendering result according to the scaling to obtain a target rendering result.
Optionally, as a possible implementation manner, before rendering the object to be rendered by using the obtained video rendering parameters and obtaining the target rendering result, the processing module 301 is further configured to:
and analyzing the acquired parameter configuration file to obtain video rendering parameters for the object to be rendered.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to some embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in some embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the portions of the technical solution of the present application that substantially contribute over the prior art may be embodied as a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to some embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The above description is only a few examples of the present application and is not intended to limit the present application, and those skilled in the art will appreciate that various modifications and variations can be made in the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. A live rendering method, the method comprising:
responding to a received video rendering request, and acquiring an object to be rendered corresponding to the video rendering request in a video frame to be rendered;
rendering the object to be rendered by using the obtained video rendering parameters to obtain a target rendering result;
fusing the target rendering result and the video frame to be rendered to obtain a live video frame;
the rendering the object to be rendered by using the obtained video rendering parameters to obtain a target rendering result, including:
configuring a rendering effect object set corresponding to the video rendering parameters from a preset rendering effect library; wherein the set of rendering effect objects comprises at least one rendering effect;
rendering the object to be rendered based on the rendering effect object set to obtain a target rendering result;
a plurality of objects to be rendered exist; the video rendering parameters comprise video rendering contents corresponding to each object to be rendered;
configuring a rendering effect object set corresponding to the video rendering parameter from a preset rendering effect library, including:
acquiring rendering effects corresponding to all the video rendering contents from a preset rendering effect library to obtain an initial rendering effect object set;
adding a sentinel identifier pair corresponding to each object to be rendered in the initial rendering effect object set to obtain a rendering effect object set; each sentinel identifier pair comprises a start sentinel identifier and an end sentinel identifier, wherein the start sentinel identifier is used for indicating the first rendering effect of the corresponding object to be rendered, and the end sentinel identifier is used for indicating the last rendering effect of the corresponding object to be rendered;
rendering the object to be rendered based on the rendering effect object set to obtain a target rendering result, including:
and rendering each corresponding object to be rendered based on all the sentinel identifier pairs in the rendering effect object set to obtain a target rendering result.
2. The method of claim 1, wherein the video rendering parameters comprise video rendering content and a display parameter corresponding to each video rendering content;
configuring a rendering effect object set corresponding to the video rendering parameter from a preset rendering effect library, including:
determining all rendering effect objects corresponding to the video rendering content from a preset rendering effect library to obtain an initial rendering effect set;
and configuring the corresponding display parameters for all rendering effect objects in the initial rendering effect set to obtain a rendering effect object set.
3. The method of claim 1, wherein the rendering the object to be rendered based on the set of rendering effect objects to obtain a target rendering result, comprises:
and invoking a rendering engine shader to render the object to be rendered based on the rendering effect object set so as to obtain a target rendering result.
4. The method of claim 1, wherein the rendering the object to be rendered using the obtained video rendering parameters to obtain a target rendering result comprises:
reducing the object to be rendered according to the scaling in the video rendering parameters;
rendering the reduced object to be rendered according to display parameters in the video rendering parameters to obtain an initial rendering result;
and amplifying the initial rendering result according to the scaling to obtain a target rendering result.
5. The method of claim 1, wherein before the rendering the object to be rendered using the obtained video rendering parameters to obtain a target rendering result, the method further comprises:
and analyzing the acquired parameter configuration file to obtain the video rendering parameters for the object to be rendered.
6. A live rendering apparatus, the apparatus comprising:
the processing module is used for responding to the received video rendering request and acquiring an object to be rendered corresponding to the video rendering request in a video frame to be rendered;
the rendering module is used for rendering the object to be rendered by using the acquired video rendering parameters to obtain a target rendering result;
the processing module is further used for fusing the target rendering result with the video frame to be rendered to obtain a live video frame;
the rendering module, when rendering an object to be rendered by using the obtained video rendering parameters and obtaining a target rendering result, is specifically configured to:
configuring a rendering effect object set corresponding to the video rendering parameters from a preset rendering effect library; wherein the set of rendering effect objects comprises at least one rendering effect;
rendering the object to be rendered based on the rendering effect object set to obtain a target rendering result;
a plurality of objects to be rendered exist; the video rendering parameters comprise video rendering contents corresponding to each object to be rendered;
when the rendering module configures a rendering effect object set corresponding to the video rendering parameters from a preset rendering effect library, the rendering module is specifically configured to:
acquiring rendering effects corresponding to all video rendering contents from a preset rendering effect library to obtain an initial rendering effect object set;
adding a sentinel identifier pair corresponding to each object to be rendered in the initial rendering effect object set to obtain a rendering effect object set; each sentinel identifier pair comprises a start sentinel identifier and an end sentinel identifier, wherein the start sentinel identifier is used for indicating the first rendering effect of the corresponding object to be rendered, and the end sentinel identifier is used for indicating the last rendering effect of the corresponding object to be rendered;
the rendering module, when rendering an object to be rendered based on the rendering effect object set to obtain a target rendering result, is specifically configured to:
and rendering each corresponding object to be rendered based on all the sentinel identifier pairs in the rendering effect object set to obtain a target rendering result.
7. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
the one or more programs, when executed by the processor, implement the method of any of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN202010987760.1A 2020-09-18 2020-09-18 Live broadcast rendering method and device, electronic equipment and storage medium Active CN112218108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010987760.1A CN112218108B (en) 2020-09-18 2020-09-18 Live broadcast rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112218108A CN112218108A (en) 2021-01-12
CN112218108B true CN112218108B (en) 2022-07-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant