CN116503498A - Picture rendering method and related device - Google Patents

Picture rendering method and related device

Info

Publication number
CN116503498A
Authority
CN
China
Prior art keywords
rendering
sub
rendered
picture
completion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211192246.4A
Other languages
Chinese (zh)
Inventor
李锐
李想
周鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211192246.4A priority Critical patent/CN116503498A/en
Publication of CN116503498A publication Critical patent/CN116503498A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

An embodiment of the present application discloses a picture rendering method and a related device. A target renderer acquires target sub-content to be rendered, which is one of the N sub-contents to be rendered included in the content to be rendered. When the target sub-content is rendered into a rendered sub-picture, the target renderer generates a rendering completion identifier and sends it to the other renderers. If the N-1 rendering completion identifiers respectively sent by the other renderers are acquired, all N renderers have finished rendering the sub-contents they are responsible for, and a unified completion identifier is generated according to the N rendering completion identifiers. The unified completion identifier and the rendered sub-picture are sent to the director, so that the director synthesizes the corresponding virtual picture according to the unified completion identifier and the rendered sub-pictures. In this way, the success rate of synthesizing the multiple rendered sub-pictures that belong to one virtual picture is improved.

Description

Picture rendering method and related device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for rendering a picture.
Background
In general, the higher the resolution of a picture, the better the image quality and the more detail it can represent; however, because more information has to be recorded, the picture also takes longer to render.
In order to increase the rendering speed of a picture to be rendered, in the related art the picture to be rendered is divided into a plurality of sub-pictures, the sub-pictures are distributed to a plurality of renderers to be rendered separately, and finally the sub-pictures rendered by the plurality of renderers are synthesized by a director to obtain the rendered picture.
However, in the above manner, it may be difficult for the director to synthesize together the plurality of sub-pictures belonging to one picture to be rendered, resulting in a poor final rendered picture.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a picture rendering method and a related device, which are used to improve the success rate of synthesizing together a plurality of sub-pictures belonging to one picture to be rendered, thereby improving the quality of the rendered picture.
The embodiment of the application discloses the following technical scheme:
In a first aspect, an embodiment of the present application provides a picture rendering method. A virtual picture is rendered from N sub-contents to be rendered, the N sub-contents to be rendered are rendered by N renderers respectively, and N is an integer greater than 1. For a target renderer among the N renderers, the method includes:
acquiring target sub-content to be rendered in the content to be rendered, where the target sub-content to be rendered is one of the N sub-contents to be rendered;
generating a rendering completion identifier when the target sub-content to be rendered is rendered into a rendered sub-picture;
sending the rendering completion identifier to other renderers, where the other renderers are the N-1 renderers among the N renderers other than the target renderer;
if the N-1 rendering completion identifiers from the other renderers are acquired, generating a unified completion identifier according to the N rendering completion identifiers, where the unified completion identifier is used to identify the order in which virtual pictures complete rendering;
and sending the unified completion identifier and the rendered sub-picture to the director, so that the director synthesizes the virtual picture according to the unified completion identifier and the rendered sub-picture.
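As an illustrative, non-limiting sketch (the helper names and the max-of-marks rule below are assumptions for illustration, not part of the claimed method), the renderer-side steps of the first aspect can be condensed into a single-process simulation:

```python
def run_round(sub_contents):
    """One virtual picture: every renderer renders its sub-content, the N
    completion marks are exchanged, and each renderer derives the same
    unified completion identifier before sending to the director."""
    pictures = [c.upper() for c in sub_contents]   # stand-in for the N renders
    marks = [len(p) for p in pictures]             # N rendering-completion marks
    unified_id = max(marks)                        # same deterministic rule on every renderer
    return [(unified_id, p) for p in pictures]     # (identifier, sub-picture) pairs for the director
```

Because every renderer applies the same rule over the same N marks, all N sub-pictures of one virtual picture carry the same identifier without any central coordinator.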
In a second aspect, an embodiment of the present application provides a picture rendering method, where the method includes:
receiving M rendered sub-pictures sent by N renderers and M unified completion identifiers respectively corresponding to the M rendered sub-pictures, where a unified completion identifier is used to identify the order in which virtual pictures complete rendering, and N is a positive integer less than or equal to M;
acquiring, from the M rendered sub-pictures, N rendered sub-pictures having the same unified completion identifier;
and synthesizing the N rendered sub-pictures to obtain the virtual picture.
In a third aspect, an embodiment of the present application provides a picture rendering apparatus. A virtual picture is rendered from N sub-contents to be rendered, the N sub-contents to be rendered are rendered by N renderers respectively, and N is an integer greater than 1. A target renderer among the N renderers includes the picture rendering apparatus, and the apparatus includes: an acquiring unit, a first generating unit, a first sending unit, a second generating unit, and a second sending unit;
the acquiring unit is configured to acquire target sub-content to be rendered in the content to be rendered, where the target sub-content to be rendered is one of the N sub-contents to be rendered;
the first generating unit is configured to generate a rendering completion identifier when the target sub-content to be rendered is rendered into a rendered sub-picture;
the first sending unit is configured to send the rendering completion identifier to other renderers, where the other renderers are the N-1 renderers among the N renderers other than the target renderer;
the second generating unit is configured to generate a unified completion identifier according to the N rendering completion identifiers if the N-1 rendering completion identifiers from the other renderers are acquired, where the unified completion identifier is used to identify the order in which virtual pictures complete rendering;
the second sending unit is configured to send the unified completion identifier and the rendered sub-picture to the director, so that the director synthesizes the virtual picture according to the unified completion identifier and the rendered sub-picture.
In a fourth aspect, an embodiment of the present application provides a picture rendering apparatus, including: a receiving unit, an acquiring unit, and a synthesizing unit;
the receiving unit is configured to receive M rendered sub-pictures sent by N renderers and M unified completion identifiers respectively corresponding to the M rendered sub-pictures, where a unified completion identifier is used to identify the order in which virtual pictures complete rendering, and N is a positive integer less than or equal to M;
the acquiring unit is configured to acquire, from the M rendered sub-pictures, N rendered sub-pictures having the same unified completion identifier;
and the synthesizing unit is configured to synthesize the N rendered sub-pictures to obtain the virtual picture.
In a fifth aspect, an embodiment of the present application provides a picture rendering system, where the system includes a plurality of renderers and a director;
the renderer is configured to perform the method of the first aspect;
the director is configured to perform the method of the second aspect.
In another aspect, embodiments of the present application provide a computer device comprising a processor and a memory:
the memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is configured to perform the method of the above aspect according to instructions in the computer program.
In another aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program for performing the method described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method described in the above aspect.
According to the above technical solution, the content to be rendered corresponding to one virtual picture is divided into N sub-contents to be rendered, and the N sub-contents are rendered by N renderers respectively. Taking a target renderer among the N renderers as an example, the target renderer acquires target sub-content to be rendered, which is one of the N sub-contents to be rendered included in the content to be rendered. When the target sub-content is rendered into a rendered sub-picture, the target renderer generates a rendering completion identifier and sends it to the other renderers. If the N-1 rendering completion identifiers respectively sent by the other renderers are acquired, all N renderers have finished rendering their respective sub-contents, and a unified completion identifier is then generated according to the N rendering completion identifiers. The unified completion identifier is used to identify the order in which virtual pictures complete rendering; that is, different virtual pictures use different unified completion identifiers. The unified completion identifier and the rendered sub-picture are sent to the director, so that the director synthesizes the corresponding virtual picture according to the unified completion identifier and the rendered sub-pictures.
Therefore, although the N sub-contents belonging to one virtual picture are rendered by N renderers respectively to obtain N rendered sub-pictures, the N rendered sub-pictures share one unified completion identifier. The director can acquire the N rendered sub-pictures of a virtual picture according to the unified completion identifier and synthesize them to obtain the virtual picture, which improves the success rate of synthesizing the rendered sub-pictures belonging to one virtual picture and further improves the rendering effect of the virtual picture.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scene schematic diagram of a picture rendering method provided in an embodiment of the present application;
FIG. 2 is an interaction diagram of a method for rendering a frame according to an embodiment of the present application;
fig. 3 is a schematic diagram of a transmission path of a rendered sprite according to an embodiment of the present application;
fig. 4 is a schematic diagram of a transmission path of a rendered sprite according to an embodiment of the present application;
fig. 5 is a schematic diagram of a transmission path of a rendered sprite according to an embodiment of the present application;
fig. 6 is a schematic diagram of a data streaming bandwidth of a sending data stream according to an embodiment of the present application;
fig. 7 is a schematic diagram of synthesizing N rendered sprites according to an embodiment of the present application;
fig. 8 is a schematic diagram of obtaining a rendered sub-frame according to an embodiment of the present application;
Fig. 9 is a schematic view of a scene of a picture rendering method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a picture rendering device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a picture rendering device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a picture rendering device according to an embodiment of the present application;
fig. 13 is an application scene schematic diagram of a picture rendering system according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
Because signal delays are not fixed, in the related art the director continuously receives multiple sub-pictures belonging to different pictures, and the sub-pictures obtained by the director cannot be aligned in time. As a result, the sub-pictures belonging to one picture are difficult to match together, and the quality of the finally rendered picture is poor.
Based on this, an embodiment of the present application provides a picture rendering method that generates a unified completion identifier for the multiple rendered sub-pictures belonging to one virtual picture, so that the director can use the unified completion identifier to gather the rendered sub-pictures belonging to the same virtual picture for synthesis. This improves the success rate of synthesizing the multiple rendered sub-pictures belonging to one virtual picture and further improves the rendering effect of the virtual picture.
The picture rendering method provided by the embodiments of the present application can be applied to picture rendering devices with data processing capability, such as terminal devices and servers. The terminal device may be, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, and the like; the server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms. The terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited herein.
The picture rendering method provided by the embodiments of the present application can be implemented based on cloud computing technology. Cloud computing refers to a delivery and usage mode of IT infrastructure, meaning that required resources are obtained in an on-demand, easily scalable manner over a network; generalized cloud computing refers to the delivery and usage mode of services, meaning that required services are obtained in an on-demand, easily scalable manner over a network. Such services may be IT, software, or internet related, or other services. Cloud computing is a product of the fusion of traditional computer and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, and load balancing.
With the development of the internet, real-time data flow and diversification of connected devices, and the promotion of demands of search services, social networks, mobile commerce, open collaboration and the like, cloud computing is rapidly developed. Unlike the previous parallel distributed computing, the generation of cloud computing will promote the revolutionary transformation of the whole internet mode and enterprise management mode in concept. The embodiments of the present application may be applied to various scenarios including, but not limited to, cloud technology, artificial intelligence, intelligent transportation, assisted driving, and the like.
In order to facilitate understanding of the image rendering method provided in the embodiments of the present application, an application scenario of the image rendering method is exemplarily described below by taking an execution body of the image rendering method as a server.
Refer to fig. 1, which is a schematic diagram of an application scenario of the picture rendering method provided in an embodiment of the present application. As shown in fig. 1, the application scenario includes a server 110, a server 120, and a server 130, where the three servers can communicate through a network. Server 110 and server 120 act as renderers, and server 130 acts as the director.
In the application scenario shown in fig. 1, the content to be rendered corresponding to one virtual picture includes 2 sub-contents to be rendered, and the 2 sub-contents to be rendered are respectively picture-rendered by the server 110 and the server 120, and the server 110 is taken as a target renderer for illustration.
The server 110 acquires 1 of the 2 sub-contents to be rendered; for convenience of explanation, the sub-content acquired by the server 110 is hereinafter referred to as the target sub-content to be rendered. The server 110 performs rendering according to the target sub-content to be rendered, generates a rendering completion identifier when the rendered sub-picture is obtained, and sends the rendering completion identifier to the server 120. Similarly, the server 120 can also act as a target renderer: it acquires the other sub-content to be rendered for rendering, generates a rendering completion identifier when its rendered sub-picture is obtained, and sends the rendering completion identifier to the server 110.
If the server 110 obtains the rendering completion identifier sent by the server 120, which indicates that the server 120 completes rendering, and at this time, both the two renderers complete rendering, the server 110 generates a unified completion identifier according to the obtained 2 rendering completion identifiers. At this time, the same unified completion identifier is shared by the rendered sub-frame rendered by the server 110 and the rendered sub-frame rendered by the server 120.
The server 110 sends the unified completion identification and the rendering sub-picture to the server 130 as the director, and similarly, the server 120 sends the unified completion identification and the rendering sub-picture to the server 130 as the director, and the server 130 synthesizes the corresponding virtual picture according to the unified completion identification and the rendering sub-picture.
Therefore, although the N rendering sub-pictures belonging to one virtual picture are respectively rendered through N rendering machines to obtain N rendering sub-pictures, the N rendering sub-pictures share a unified completion identifier, and the director can acquire the N rendering sub-pictures of the virtual picture according to the unified completion identifier to synthesize the N rendering sub-pictures to obtain the virtual picture, so that the synthesis rate of the rendering sub-pictures belonging to one virtual picture is improved, and the rendering effect of the virtual picture is further improved.
The picture rendering method provided by the embodiment of the application can be executed by a server. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to perform the picture rendering method provided in the embodiments of the present application, or the terminal device and the server may jointly perform the picture rendering method provided in the embodiments of the present application, which is not limited in this embodiment.
In connection with the above description, a picture rendering system provided by the present application will be described below, which includes a plurality of renderers and a director. For ease of description, the following embodiments will be described with the renderer and director as servers.
Referring to fig. 2, the diagram is an interaction diagram of a picture rendering method provided in an embodiment of the present application.
S201: and the target rendering machine acquires target sub-content to be rendered in the content to be rendered.
A renderer can render virtual elements to obtain a virtual picture. As the detail included in a virtual picture increases, the time required to render it becomes longer and longer. To shorten the rendering time, the content to be rendered corresponding to one virtual picture is split across a plurality of renderers, so that each renderer renders part of the content to obtain a sub-picture, and finally the director synthesizes the sub-pictures to obtain the virtual picture.
The virtual picture is a picture obtained by rendering the content to be rendered. The content to be rendered is a set of graphics that are turned into elements of the picture through rendering, and it can be split into N sub-contents to be rendered, where N is an integer greater than 1. Each sub-content is rendered by a different renderer; that is, the N sub-contents to be rendered are rendered by N renderers respectively. For example, the number of renderers may be predetermined, and the content to be rendered is then divided into a number of parts equal to the number of renderers.
For convenience of explanation, the embodiments of the present application take one of the N renderers as an example, hereinafter referred to as the target renderer. The target renderer acquires target sub-content to be rendered, which is one of the N sub-contents to be rendered, and renders it.
S202: and when the target renderer renders the sub-content to be rendered according to the target to obtain a rendered sub-picture, generating a rendering completion mark.
The target rendering machine renders the target to-be-rendered content to obtain rendering sub-pictures, and therefore N rendering machines respectively render the N to-be-rendered content to obtain N rendering sub-pictures, and the subsequent director synthesizes the N rendering sub-pictures to obtain the virtual picture.
When the target renderer renders the sub-content to be rendered according to the target to obtain a rendered sub-picture, generating a rendering completion identifier, wherein the rendering completion identifier is used for identifying that the rendering operation is completed. It should be noted that, the rendering sub-picture can be generated by successfully obtaining the rendering sub-picture according to the target sub-content to be rendered, and the rendering completion identification can also be generated by the rendering failure, because the rendering failure can also prove that the execution of the current rendering operation is completed. As one possible implementation, the rendering completion identification may include two types, a rendering success type and a rendering failure type, respectively.
Specifically, if the target renderer successfully renders the target sub-content to be rendered to obtain the rendered sub-picture, a rendering completion identifier is generated, and the type of the rendering completion identifier is that the rendering is successful. If the target renderer does not obtain the sub-content to be rendered in the process of rendering the sub-content according to the target sub-content to be rendered, the execution failure of the current rendering operation is indicated, a rendering completion identifier is generated, and the type of the rendering completion identifier is the rendering failure. The size of the preset time is not particularly limited in this application, and may be set by those skilled in the art according to actual needs. For example, the preset time may be a time when one rendering is successful, and if no sub-picture is rendered beyond the time, the execution failure of the current rendering operation is indicated. For another example, the preset time may be a time when multiple rendering is successful, and if no sub-picture is rendered beyond the time, it indicates that not only one rendering operation fails, but also retrying the rendering operation fails. Therefore, if the sub-picture is not rendered within the preset time, the rendering completion identification can be generated, and the situation that N rendering completion identifications cannot be collected, so that unified completion identifications cannot be generated, and virtual picture rendering fails is avoided.
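The success/failure completion identifier described above can be sketched as follows (a minimal illustration; the function names, field names, exception handling, and timeout check are assumptions, not part of the patent text):

```python
import time

RENDER_SUCCESS, RENDER_FAILURE = "success", "failure"

def completion_identifier(render_fn, sub_content, preset_time_s):
    """Return a rendering-completion identifier whether rendering succeeds,
    raises, or overruns the preset time, so the group of N identifiers can
    always be collected."""
    start = time.monotonic()
    try:
        sub_picture = render_fn(sub_content)
    except Exception:
        sub_picture = None  # rendering failure still finishes the operation
    finished_in_time = sub_picture is not None and (time.monotonic() - start) <= preset_time_s
    return {
        "type": RENDER_SUCCESS if finished_in_time else RENDER_FAILURE,
        "sub_picture": sub_picture if finished_in_time else None,
    }
```

The point of the design is that a mark is emitted in every case, so the collection of N marks never stalls on one failed renderer.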
S203: the target renderer sends a rendering completion identification to the other renderers.
The other renderers are N-1 renderers except the target rendering machine in the N renderers. It should be noted that, each of the other renderers may generate a rendering completion identifier as the target renderer and transmit the rendering completion identifier to N-1 renderers other than itself.
S204: and if the target rendering machine acquires the N-1 rendering completion identifications from the other rendering machines, generating a unified completion identification according to the N rendering completion identifications.
The target renderer sends the rendering completion identification to other renderers, and similarly, the other renderers also send the rendering completion identification to the target renderer, so that the target renderer can receive N-1 rendering completion identifications sent by N-1 renderers. At this time, the target renderer has N-1 rendering completion identifications and one rendering completion identification generated by itself, and N total rendering completion identifications, so the target renderer can generate a unified completion identification according to the N rendering completion identifications.
Wherein the unified completion identification identifies an order in which the virtual pictures are to be rendered. That is, the N rendering sub-frames included in the virtual frame all share one unified completion identifier, and the unified completion identifiers corresponding to the plurality of rendering sub-frames included in different virtual frames are different, for example, the unified completion identifier shared by the N rendering sub-frames of the first virtual frame is 001, and the unified completion identifier shared by the N rendering sub-frames included in the second virtual frame is 002. For another example, the present application does not specifically limit this according to the time of the last rendered sub-picture in each virtual picture as a unified completion identifier.
The method is that the mode of generating the unified completion identification by the N renderers is consistent, so that the consistency of the unified completion identifications respectively generated by the N renderers is ensured.
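A minimal sketch of such a deterministic rule (the mark structure and the latest-finish-time policy are assumptions drawn from the example above, not the only possible rule):

```python
def unified_completion_id(completion_marks):
    """Derive the unified completion identifier from the N completion marks.
    Example policy: the latest finish time among the N marks. Any rule works
    as long as every renderer applies the same one to the same N marks."""
    return max(m["finished_at"] for m in completion_marks)
```

Because the function depends only on the set of marks and not on the order in which a renderer received them, all N renderers compute identical unified identifiers independently.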
S205: the target renderer sends the unified completion identification and rendering sub-pictures to the broadcaster.
It should be noted that, each of the other renderers may also be used as a target renderer to generate a unified completion identifier and a rendered sub-frame, and send the rendered sub-frame and the unified completion identifier to the director.
S206: the guiding and broadcasting machine receives M rendering sub-pictures sent by the N rendering machines and M unified completion identifications corresponding to the M rendering sub-pictures respectively.
And the target rendering machine is used for rendering to obtain the rendering sub-picture, generating a corresponding unified completion mark, and then sending the unified completion mark and the rendering sub-picture to the guiding and broadcasting machine. Similarly, other renderers also transmit the generated unified completion identifier and the rendered sub-picture to the director. Therefore, the director receives N rendering sub-pictures sent by N renderers respectively and N unified completion identifications corresponding to the N rendering sub-pictures respectively.
Because the renderer renders one rendered sub-picture of one virtual picture and then renders one rendered sub-picture of another virtual picture, the director also receives the rendered sub-pictures, namely the director receives M rendered sub-pictures sent by N renderers and M unified completion identifiers respectively corresponding to the M rendered sub-pictures, wherein N is a positive integer less than or equal to M.
S207: the director acquires, from the M rendered sub-pictures, the N rendered sub-pictures having the same unified completion identifier.
For example, the director divides the M rendered sub-pictures into a plurality of groups according to the unified completion identifier, where each group includes N rendered sub-pictures sharing one unified completion identifier. For another example, after the director obtains a unified completion identifier, it acquires the N rendered sub-pictures corresponding to that identifier from the M rendered sub-pictures.
S208: the director synthesizes the N rendered sub-pictures to obtain the virtual picture.
After obtaining the N rendered sub-pictures sharing one unified completion identifier, the director synthesizes them into the virtual picture. For example, the positions of the N rendered sub-pictures may be determined by techniques such as image recognition, or a header file identifying the position of each rendered sub-picture in the virtual picture may be acquired together with the rendered sub-picture.
Therefore, even though the N rendered sub-pictures belonging to one virtual picture are rendered separately by N renderers, they share a unified completion identifier, and the director can acquire the N rendered sub-pictures of the virtual picture according to that identifier and synthesize them into the virtual picture. This avoids the situation in which the rendered sub-pictures cannot be aligned in time because the signal delay of each renderer is not fixed, making it difficult for the director to match together the sub-pictures belonging to one picture; the rendering effect is thus improved along with the rendering speed.
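A minimal sketch of the synthesis in S208, assuming the grid position of each rendered sub-picture is already known (e.g., from a header file) and that all sub-pictures are equal-sized blocks of pixels; the function name and data layout are illustrative only:

```python
def compose(tiles, rows, cols):
    """tiles maps (row, col) -> 2-D pixel block; all blocks are the same
    size. Returns the full virtual picture as a 2-D list of pixels."""
    th = len(tiles[(0, 0)])        # tile height in pixels
    tw = len(tiles[(0, 0)][0])     # tile width in pixels
    picture = [[None] * (cols * tw) for _ in range(rows * th)]
    for (r, c), block in tiles.items():
        for y in range(th):
            for x in range(tw):
                picture[r * th + y][c * tw + x] = block[y][x]
    return picture

# A 2x2 grid of 1x1 rendered sub-pictures composed into one picture.
tiles = {(0, 0): [[1]], (0, 1): [[2]], (1, 0): [[3]], (1, 1): [[4]]}
assert compose(tiles, 2, 2) == [[1, 2], [3, 4]]
```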
According to the above technical solution, the content to be rendered corresponding to one virtual picture is divided into N sub-contents to be rendered, and picture rendering is performed on the N sub-contents by N renderers respectively. Taking a target renderer among the N renderers as an example, the target renderer acquires the target sub-content to be rendered, which is one of the N sub-contents included in the content to be rendered. When the target sub-content has been rendered into a rendered sub-picture, a rendering completion identifier is generated and sent to the other renderers. If the N-1 rendering completion identifiers respectively sent by the other renderers are obtained, all N renderers have completed rendering their sub-contents, and a unified completion identifier is generated from the N rendering completion identifiers. The unified completion identifier identifies the order in which the virtual pictures complete rendering; in other words, different virtual pictures use different unified completion identifiers. The unified completion identifier and the rendered sub-picture are sent to the director so that the director can synthesize the corresponding virtual picture from them.
Therefore, although the N rendered sub-pictures belonging to one virtual picture are rendered separately by N renderers, they share a unified completion identifier, and the director can acquire the N rendered sub-pictures of the virtual picture according to that identifier and synthesize them into the virtual picture. This improves the rate at which the rendered sub-pictures belonging to one virtual picture are synthesized and further improves the rendering effect of the virtual picture.
The manner in which the target renderer obtains the target sub-content to be rendered is not particularly limited. For example, the content to be rendered may be divided in advance according to the number N of renderers to obtain N sub-contents to be rendered, which are then respectively distributed to the N renderers. For another example, a correspondence between identifiers and clipping regions may be pre-established; each renderer acquires the content to be rendered and then determines, based on the correspondence, the sub-content it is responsible for rendering. This is described below through A1-A3.
A1: acquire the content to be rendered, a target identifier identifying the target renderer, and the correspondence between identifiers and clipping regions.
Each renderer acquires the content to be rendered and then determines, according to its identifier, the sub-content it is responsible for rendering. Each renderer has a corresponding identifier, by which the current renderer can be determined; for example, the identifier of the target renderer is the target identifier.
The correspondence between identifiers and clipping regions describes the clipping region each renderer is responsible for. For example, suppose there are currently 9 renderers and the virtual picture is to be divided into a 3×3 grid of rendered sub-pictures. The content to be rendered is divided into 9 sub-contents according to position in the virtual picture, each renderer is responsible for one sub-content, and together these form the correspondence between identifiers and clipping regions. For example, renderer identifier 1 corresponds to the sub-content in the upper-left corner of the virtual picture, and renderer identifier 9 corresponds to the sub-content in the lower-right corner.
After the correspondence between identifiers and clipping regions is established, the content to be rendered and the correspondence may be preset in each renderer, or the correspondence may be issued to each renderer, and so on; the present application does not specifically limit this.
A2: determine the target clipping region corresponding to the target identifier according to the correspondence between identifiers and clipping regions.
According to the correspondence between identifiers and clipping regions, the target clipping region corresponding to the target identifier, i.e., the region that the target renderer should render, is determined.
A3: acquire the target sub-content to be rendered from the content to be rendered according to the target clipping region.
After the region that the target renderer should render is determined, the target sub-content to be rendered required by the target clipping region is acquired from the content to be rendered.
Thus, when many virtual pictures need to be rendered, splitting each virtual picture and distributing the pieces to the corresponding renderers takes more time and is error-prone. If the correspondence between identifiers and clipping regions is established in advance, each renderer acquires the content to be rendered and then determines the sub-content it is responsible for rendering according to the correspondence; this reduces distribution time, speeds up rendering, and avoids the poor rendering effect caused by distribution errors.
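For the 9-renderer, 3×3 example above, the correspondence between identifiers and clipping regions might be computed as follows (a sketch; the picture size and the row-major numbering of renderer identifiers are assumptions for illustration):

```python
def clip_region_for(renderer_id, grid=3, width=1920, height=1080):
    """Map renderer identifiers 1..grid*grid to rectangular clipping
    regions (x, y, w, h) laid out row-major over the virtual picture."""
    index = renderer_id - 1
    row, col = divmod(index, grid)
    w, h = width // grid, height // grid
    return (col * w, row * h, w, h)

# Identifier 1 -> upper-left region; identifier 9 -> lower-right region.
assert clip_region_for(1) == (0, 0, 640, 360)
assert clip_region_for(9) == (1280, 720, 640, 360)
```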
As a possible implementation, the size of the clipping region each renderer should be responsible for can be determined according to the graphics card performance of each renderer, so as to establish the correspondence between identifiers and clipping regions. The graphics card performance of the N renderers may differ, and the sizes of their respective clipping regions differ accordingly; that is, a renderer with higher graphics card performance corresponds to a larger clipping region, so that the time the N renderers need to render their respective sub-pictures is approximately the same. This reduces the time the target renderer spends waiting to receive the rendering completion identifiers sent by the other renderers and further increases the rendering speed of the virtual picture.
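A hedged sketch of that performance-proportional partitioning: the virtual picture is split into vertical strips whose widths follow each renderer's graphics card performance score, so rendering times come out roughly equal (the scoring scheme and strip layout are assumptions, not fixed by the application):

```python
def partition_by_performance(total_width, scores):
    """Split the picture width into vertical strips whose widths are
    proportional to each renderer's performance score; the last strip
    absorbs integer-rounding remainders so the widths sum exactly."""
    total = sum(scores)
    widths, used = [], 0
    for i, s in enumerate(scores):
        if i < len(scores) - 1:
            w = (total_width * s) // total
        else:
            w = total_width - used
        widths.append(w)
        used += w
    return widths

# A renderer with score 2 gets twice the strip of a score-1 renderer.
assert partition_by_performance(1920, [1, 1, 2]) == [480, 480, 960]
assert sum(partition_by_performance(1920, [3, 5, 7])) == 1920
```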
As a possible implementation manner, the rendering content corresponding to the virtual picture may be determined, and the objects in the rendering content are split according to the number of the renderers, so that each renderer is responsible for rendering the sub-content to be rendered corresponding to one or more objects.
As a possible implementation manner, the number of layers included in the virtual picture may be determined, and the layers of the virtual picture are split according to the number of the renderers, so that each renderer is responsible for rendering sub-content to be rendered corresponding to one layer, multiple layers or part of layers.
As a possible implementation manner, the size information of the virtual picture can be determined, the virtual picture is divided equally according to the number of the renderers, and a plurality of rendering sub-pictures are obtained, so that each rendering machine is responsible for rendering sub-content to be rendered corresponding to one rendering sub-picture.
Further, when the sub-contents to be rendered are divided in advance, each sub-content may include not only the content corresponding to its own sub-picture but also the content corresponding to the picture area surrounding that sub-picture. For example, the target sub-content to be rendered includes the content corresponding to the sub-picture to be rendered and the content corresponding to the picture located around it. When the target renderer renders the target sub-content, the resulting pending rendered sub-picture is therefore larger than the required rendered sub-picture. At this point, the size information of the rendered sub-picture can be obtained, and the pending rendered sub-picture is cropped according to that size information to obtain the rendered sub-picture.
In rendering, the value of a pixel is often computed from that pixel together with the surrounding pixels. If the sub-content to be rendered included only the content corresponding to its own sub-picture, the edge definition of the rendered sub-picture would be lower, and the rendering effect of the virtual picture obtained from it would be poorer. By also including the content corresponding to the surrounding picture, the probability of low edge definition in the rendered sub-picture is reduced after rendering, improving the rendering effect of the virtual picture.
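The crop step can be sketched as follows: the pending rendered sub-picture carries an extra margin of surrounding pixels so that its edge pixels were computed from their true neighbours, and the margin is then cut away to recover the sub-picture at its target size (the margin width is an illustrative assumption):

```python
def crop_to_target(pending, margin):
    """pending: 2-D pixel block rendered with `margin` extra pixels on
    every side; crop the margin away to obtain the rendered sub-picture."""
    return [row[margin:len(row) - margin]
            for row in pending[margin:len(pending) - margin]]

# A 4x4 pending block with a 1-pixel margin yields the 2x2 sub-picture.
pending = [[0, 0, 0, 0],
           [0, 1, 2, 0],
           [0, 3, 4, 0],
           [0, 0, 0, 0]]
assert crop_to_target(pending, 1) == [[1, 2], [3, 4]]
```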
As a possible implementation manner, the last time of generating the rendering completion identifier in the N renderers may be used as a unified completion identifier, which is specifically described below in connection with B1-B4.
B1: a timestamp characterizing a rendering completion identification generation time is obtained.
When the rendering completion identifier is generated, a timestamp characterizing its generation time may also be obtained. For example, a time generator constantly sends timestamps to the N renderers so that the clocks of the N renderers are consistent. The target renderer then generates the rendering completion identifier and takes the most recently received timestamp as the generation time of that identifier.
B2: and sending the rendering completion identification and a timestamp used for representing the generation time of the rendering completion identification to other renderers.
When sending the rendering completion identifier to the other renderers, the target renderer may also send the timestamp identifying its generation time; likewise, in addition to the N-1 rendering completion identifiers, the target renderer receives the timestamps corresponding to those N-1 identifiers.
B3: and determining the timestamp with the latest time from the timestamps corresponding to the N rendering completion identifications.
The target renderer receives the N-1 rendering completion identifiers and their corresponding timestamps; adding its own rendering completion identifier and timestamp, it holds N rendering completion identifiers and their corresponding timestamps, i.e., the rendering completion identifier and timestamp generated by every renderer. The target renderer determines the latest timestamp among the timestamps corresponding to the N rendering completion identifiers and uses it as the unified completion identifier.
As a possible implementation manner, any one of the N timestamps may also be used as a unified completion identifier, so as to ensure that the N rendered subpictures have a unified completion identifier.
B4: and taking the timestamp with the latest time as a unified completion identifier.
In this way, the timestamps generated by the time generator are continuously distributed to each renderer, ensuring the time consistency of the N renderers, and taking the latest of the N timestamps as the unified completion identifier ensures the time consistency of the N rendered sub-pictures. The problem that multiple sub-pictures cannot be aligned in time is avoided, and the rendering quality of the virtual picture is improved.
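Steps B3 and B4 reduce to taking the maximum over the N timestamps, sketched below (the numeric timestamp representation is an assumption):

```python
def unified_completion_id(timestamps):
    """B3/B4 sketch: given the N rendering-completion timestamps (one per
    renderer, from a shared time generator), the latest one becomes the
    unified completion identifier for the virtual picture."""
    return max(timestamps)

# The target renderer's own timestamp plus the N-1 received ones.
assert unified_completion_id([1001, 1003, 1002]) == 1003
```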
In the related art, the communication link between the renderer and the director is long, so the director takes a long time to display a virtual picture. Referring to fig. 3, a schematic diagram of a transmission path of a rendered sub-picture according to an embodiment of the present application is shown. To transmit a rendered sub-picture to the director, the renderer must first read it from the video memory of its graphics card into system memory; it is then transmitted from system memory to the system memory of the director over the renderer's network card using Remote Direct Memory Access (RDMA), and finally the director fetches it from system memory into the video memory of its graphics card for display.
However, this makes the communication link between the renderer and the director longer, resulting in a longer rendering time for the virtual picture. Based on this, in the embodiment of the present application the rendered sub-picture is not read from the graphics card into system memory; instead, the renderer's network card acquires the rendered sub-picture directly from video memory and sends it to video memory through the director's network card, as shown in fig. 4. This shortens the transmission path of the rendered sub-picture, increases its transmission speed, and further increases the rendering speed of the virtual picture.
As a possible implementation, if the renderer and the director each use a graphics processing unit (GPU), the renderer's network card can use GPUDirect RDMA, an inter-machine GPU communication technology, to accelerate transmission of rendered sub-pictures without going through system memory, reducing the number of data copies in GPU communication and thereby reducing communication delay.
As a possible implementation, both the network card of the director and the network card of the renderer may be a Data Processing Unit (DPU), so as to increase the transmission speed of the rendered sub-picture and the rendering speed of the virtual picture.
As one possible implementation, the rendered sub-picture and the unified completion identifier may be sent to the director together with a header file. This is specifically described below through C1-C3.
C1: and the target rendering machine generates a header file according to the rendering sub-picture and the unified completion identification.
The header file is used to instruct the director how to synthesize the virtual picture. For example, the header file may include the unified completion identifier and the position of the rendered sub-picture in the virtual picture. As a possible implementation, this position may be represented by a two-dimensional array, where the first element represents the horizontal index of the rendered sub-picture and the second element represents the vertical index. Taking as an example a virtual picture comprising 9 rendered sub-pictures arranged three across and three down, (3, 1) represents the first column of the third row, i.e., the rendered sub-picture located in the lower-left corner of the virtual picture.
As a possible implementation, the header file may further include one or more of the following in combination: the size information of the virtual picture corresponding to the rendered sub-picture, the size information of the rendered sub-picture, the data stream in which the rendered sub-picture is located, and a custom instruction. The size information of the virtual picture describes the length and width of the virtual picture, the size information of the rendered sub-picture describes the length and width of the rendered sub-picture, the data stream can identify the renderer that generated the rendered sub-picture, and the custom instruction may be a heartbeat detection instruction so that the director can determine through heartbeat detection whether the header file has been received.
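The application does not fix a byte layout for the header file; purely as an illustration, a fixed binary layout covering the fields above could look like this (all field names and widths are assumptions):

```python
import struct

# Hypothetical little-endian layout: unified completion id, tile position
# (row, col), virtual-picture size, rendered sub-picture size, renderer id.
HEADER_FMT = "<IBBHHHHB"

def pack_header(completion_id, row, col, vw, vh, sw, sh, renderer_id):
    return struct.pack(HEADER_FMT, completion_id, row, col,
                       vw, vh, sw, sh, renderer_id)

def unpack_header(data):
    return struct.unpack(HEADER_FMT, data)

# Round-trip: sub-picture at row 2, column 0 of a 1920x1080 virtual picture.
hdr = pack_header(1, 2, 0, 1920, 1080, 640, 360, 7)
assert unpack_header(hdr) == (1, 2, 0, 1920, 1080, 640, 360, 7)
```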
C2: and the target rendering machine generates a data stream according to the header file and the rendering sub-picture.
And C3: the target renderer sends the data stream to the broadcaster.
As one possible implementation, the data stream is transmitted directly to the renderer's network card over the Peripheral Component Interconnect Express (PCIe) bus and sent to the director via GPUDirect RDMA.
After receiving the data stream, the director may store it in system memory for subsequent processing. As a possible implementation, just as the renderer need not store the rendered sub-picture in system memory, the director likewise need not store the rendered sub-picture in system memory; continuing from C1-C3, this is described below through C4-C6.
And C4: the director receives M data streams sent by N renderers.
Each data stream comprises a header file and a rendered sub-picture, and the M data streams correspond to the M rendered sub-pictures and their respective header files. Taking one data stream as an example, the director can separate it to obtain the header file and the rendered sub-picture.
C5: the director stores the header file into system memory.
C6: the director stores the rendered sub-picture into video memory.
The header file stored in system memory can then control how the rendered sub-pictures in video memory are synthesized into a virtual picture.
As one possible implementation, the network card of the director may separate the data stream to obtain the header file and the rendered sub-picture. Referring to fig. 5, the director's network card stores the header file in system memory and stores the rendered sub-picture in video memory, so the rendered sub-picture need not be stored in system memory first and then read from system memory into video memory. The number of copies of the rendered sub-picture is reduced, and the rendering time of the virtual picture is shortened.
As a possible implementation, when sending a data stream to the director, the streaming bandwidth, i.e., the bandwidth occupied by sending the data stream, may be insufficient. This is described in detail below with reference to fig. 6.
Referring to fig. 6, a schematic diagram of the streaming bandwidth of a sent data stream according to an embodiment of the present application is shown. In fig. 6, the horizontal axis is time and the vertical axis is streaming bandwidth. Fig. 6 (a) shows the case of an unpaced burst data stream: if a data stream is transmitted as one large burst, the combined data exceeds the line speed; that is, the streaming bandwidth is insufficient in some time periods while in other time periods it sits idle.
Based on this, the embodiment of the present application divides the data stream into multiple sub-data streams according to a preset fixed bit rate and sends the sub-data streams to the director one by one. The fixed bit rate is determined from the duration over which the renderer sends one data stream to the director and the size of that data stream, so that the sub-data streams are sent uniformly over that duration and insufficient streaming bandwidth in some time periods is avoided. For example, with the hardware-paced data stream in fig. 6 (b), the data stream is split into multiple sub-data streams according to the set fixed bit rate, so the line speed is never exceeded. Thus, by dividing the data stream, insufficient streaming bandwidth due to data peaks can be avoided.
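The pacing step can be sketched as follows: one frame's data stream is cut into equal sub-data streams so that sending one sub-stream per fixed tick holds a constant bit rate over the frame period (the tick count is an illustrative assumption):

```python
def pace_stream(data, ticks_per_frame):
    """Split one frame's data stream into ticks_per_frame sub-data streams
    of (nearly) equal size; sending one per tick spreads the data evenly
    over the frame period instead of bursting it all at once."""
    chunk = -(-len(data) // ticks_per_frame)   # ceiling division
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

stream = bytes(range(10))
subs = pace_stream(stream, ticks_per_frame=4)
assert len(subs) == 4
assert b"".join(subs) == stream   # nothing lost, order preserved
```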
As one possible implementation, the director may compose N rendered sprites from the header file, as described below in connection with fig. 7. Referring to fig. 7, a schematic diagram of synthesizing N rendered sub-frames by a director according to an embodiment of the present application is shown.
S701: a plurality of data streams is received.
S702: size information of the virtual picture is determined.
For example, according to the header files included in the data streams, the size information of the N rendered sub-pictures included in one virtual picture is acquired, and the size information of the virtual picture is determined from it. For another example, the director directly obtains the size information of the virtual picture from the header file in the data stream. For another example, the director obtains the size information of one rendered sub-picture from the header file in the data stream; since the N rendered sub-pictures are the same size, the size information of the virtual picture is then determined from the size information of one rendered sub-picture and the arrangement of the rendered sub-pictures.
S703: the director obtains the size information of the rendered sub-picture included in the i-th data stream and the position of that rendered sub-picture in the virtual picture.
Similarly, for the (i-1)-th and (i+1)-th data streams, the director likewise obtains the size information of the rendered sub-picture each includes and its position in the virtual picture.
S704: queue buffering and time stamp alignment.
Because one virtual picture comprises multiple rendered sub-pictures while one data stream carries a single rendered sub-picture, all rendered sub-pictures can be placed in a queue for buffering. After the N rendered sub-pictures of one virtual picture have been collected according to the timestamp, i.e., one form of unified completion identifier, the N rendered sub-pictures are synthesized together to obtain the virtual picture.
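The queue buffering and timestamp alignment of S704 can be sketched as follows (class and method names are hypothetical): incoming rendered sub-pictures are buffered per unified completion identifier, and a virtual picture is released only once all N sub-pictures with the same identifier have arrived, even if sub-pictures of later frames arrive early:

```python
from collections import defaultdict

class AlignmentBuffer:
    """Buffer rendered sub-pictures keyed by unified completion identifier;
    release a complete set of N once the last one arrives."""

    def __init__(self, n):
        self.n = n
        self.buffer = defaultdict(list)

    def push(self, completion_id, sub_picture):
        self.buffer[completion_id].append(sub_picture)
        if len(self.buffer[completion_id]) == self.n:
            return self.buffer.pop(completion_id)  # complete set, ready to compose
        return None

buf = AlignmentBuffer(n=2)
assert buf.push("001", "left") is None
assert buf.push("002", "left") is None           # a later frame arrives early
assert buf.push("001", "right") == ["left", "right"]
```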
S705: and synthesizing to obtain a virtual picture.
As one possible implementation, data backup may also be performed. The following is a detailed description of D1-D2.
D1: the target renderer acquires the identification of the backup renderer.
It should be noted that, to provide disaster-recovery backup of the data, a backup renderer may be provided for the target renderer. The backup renderer performs the same picture rendering method as the target renderer, that is, it obtains a rendered sub-picture from the target sub-content to be rendered. Therefore, when the target renderer encounters a rendering problem, the rendered sub-picture produced by the backup renderer can be used by the director for synthesis into the virtual picture.
For example, after setting the backup renderer of the target renderer, a correspondence relationship between the two may be established, so that the target renderer can obtain the identifier of the backup renderer according to the correspondence relationship. Alternatively, the identification of the backup renderer may be built into the target renderer for acquisition by the target renderer. It should be noted that, the backup renderer may be configured for each renderer, and the backup renderer may also be configured for a part of the renderers, which is not limited in this application.
D2: and generating a header file according to the identification of the backup rendering machine, the rendering sub-picture and the unified completion identification.
Therefore, the header file not only can comprise relevant information of the rendering sub-picture and unified completion identification, but also can comprise identification of the backup rendering machine.
After the target renderer generates the data stream in the manner described above and sends it to the director, the header file received by the director includes the identifier of the backup renderer. When the target renderer encounters a rendering problem, the director can synthesize the virtual picture from the rendered sub-picture produced by the backup renderer, as specifically described below through D3-D6.
Referring to fig. 8, a schematic diagram of acquiring a rendered sprite according to an embodiment of the present application is shown.
D3: an abnormal data stream is determined from the M data streams.
D4: and acquiring a header file in the abnormal data stream.
The header file includes an identification of the backup renderer.
D5: and acquiring the identification of the backup rendering machine from the header file in the abnormal data stream.
D6: and acquiring a rendering sub-picture rendered by the backup renderer according to the identification of the backup renderer.
The backup renderer can use the picture rendering method provided by the embodiment of the present application to render the target sub-content to be rendered and obtain a rendered sub-picture. Therefore, when the data sent by the target renderer is abnormal, the rendered sub-picture produced by the backup renderer can be used instead, improving the rendering effect of the virtual picture.
The embodiment of the present application does not specifically limit how the director obtains the rendered sub-picture produced by the backup renderer according to the identifier of the backup renderer. Two modes are described below.
Mode one: the backup renderer does not send the rendered sub-picture to the director; the director receives only the data stream sent by the target renderer. If that data stream is abnormal, the identifier of the backup renderer is obtained from its header file, and the director communicates with the backup renderer to obtain the rendered sub-picture from it.
Mode two: the backup renderer sends the rendered sub-picture to the director, and the director receives both the data stream sent by the target renderer and the data stream sent by the backup renderer, the two data streams being held in different queues. The director first reads the data stream sent by the target renderer; if it is abnormal, the identifier of the backup renderer is obtained from the header file of that data stream. The identifier not only identifies the corresponding backup renderer but also identifies the data stream the backup renderer generated, so the corresponding data stream, and thus the rendered sub-picture, can be acquired according to the identifier of the backup renderer.
It should be noted that, when the backup renderer also sends the data stream to the director, a unified completion identifier needs to be generated according to the rendering completion identifiers generated by the target renderer and other renderers, and the rendering completion identifiers generated by the backup renderer.
D7: if the acquisition succeeds, the virtual picture synthesized from the rendered sub-picture is subsequently played.
D8: if the acquisition fails, the standby safety picture is used as the rendered sub-picture, and the virtual picture is subsequently played.
The standby safety picture is preset in the director and guards against the situation in which the data streams generated by the target renderer and the backup renderer are both abnormal.
D9: if all M data streams are normal, the virtual picture is subsequently played.
Thus, by configuring a backup renderer for a renderer so that the two jointly generate rendered sub-pictures from the same sub-content to be rendered, the rendered sub-picture produced by the backup renderer can be synthesized into the virtual picture when the renderer's own rendered sub-picture cannot be obtained, improving the rendering effect of the virtual picture.
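The fallback order of D7-D8 can be sketched as a simple selection (function and argument names are hypothetical): prefer the target renderer's rendered sub-picture, then the backup renderer's, then the preset standby safety picture:

```python
def pick_sub_picture(primary, backup, safety):
    """D3-D8 sketch: choose the target renderer's sub-picture when its
    data stream is normal; otherwise fall back to the backup renderer's
    sub-picture, and finally to the preset standby safety picture."""
    if primary is not None:
        return primary
    if backup is not None:
        return backup
    return safety

assert pick_sub_picture("tile", None, "safe") == "tile"
assert pick_sub_picture(None, "backup-tile", "safe") == "backup-tile"
assert pick_sub_picture(None, None, "safe") == "safe"
```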
To facilitate further understanding of the technical solution provided by the embodiments of the present application, an end-to-end description of the picture rendering method is given below, taking as an example the case where both the renderer and the director involved in the method are servers.
It should be noted that the picture rendering method provided in the embodiments of the present application may be applied to any scene in which a virtual picture needs to be rendered, such as virtual production, photo retouching, virtual shooting, and Extended Reality (XR).
The foregoing terms are separately described below.
(1) Virtual production: the virtual image and the performance of real actors are fused together, so that the picture with added special effects can be presented visually, in real time, at the shooting site. For example, after a performer acts in front of a green screen, virtual production renders the performance picture together with the special-effect picture, so that the final composited picture can be seen in real time, the background can be replaced at any time, scene-effect interaction can be realized, and so on.
(2) Photo retouching: elements such as sticker patterns are added to an existing picture (such as a photo), which is then rendered to obtain a beautified picture.
(3) Virtual shooting: in movie shooting, all shots are performed in a virtual scene inside a computer according to the shooting actions required by the director. The various elements required to take a shot, including scenes, characters, lights, and so on, are all integrated into the computer, after which the director can "command" the performance and action of the characters on the computer and move the shot to any angle according to his own intent.
(4) Extended Reality: the real and the virtual are combined through a computer to create a virtual environment capable of human-machine interaction; XR is also a collective term for technologies such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). By integrating the visual interaction technologies of these three, it brings the experiencer a sense of immersion with seamless transition between the virtual world and the real world.
It should be noted that the present application mainly concerns rendering the virtual picture. The virtual picture may be combined with the real picture in either of two ways: the synthesized virtual picture is combined with the real picture, or the real picture is split as part of the content to be rendered and then rendered into rendered sub-pictures for subsequent synthesis; this application does not limit which way is used.
The rendering of the virtual picture will be described below.
Refer to fig. 9, which is a schematic view of a scene of a picture rendering method according to an embodiment of the present application. The renderer executes three steps before generating the data stream, as follows:
(1) Obtain the sub-content to be rendered.
(2) Render according to the sub-content to be rendered to obtain a rendered sub-picture.
(3) Generate a header file according to the rendered sub-picture and the unified completion identifier.
Specifically, when a rendered sub-picture is obtained by rendering the target sub-content to be rendered, a rendering completion identifier is generated and sent to the other renderers; if N-1 rendering completion identifiers are acquired from the other renderers, a unified completion identifier is generated according to the N rendering completion identifiers.
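The collection step (wait until all N rendering completion identifiers, the local one plus N-1 remote ones, have arrived) can be sketched as below. The class and method names are assumptions for illustration; how the N identifiers are then turned into one unified identifier is covered by a later embodiment:

```python
class CompletionCollector:
    """Collects the rendering completion identifiers of all N renderers (sketch)."""

    def __init__(self, n):
        self.n = n            # total number of renderers for one virtual picture
        self.received = []    # identifiers seen so far (local + remote)

    def report(self, completion_id):
        """Record one identifier; return all N once the set is complete, else None."""
        self.received.append(completion_id)
        if len(self.received) == self.n:
            return list(self.received)   # ready to derive the unified identifier
        return None                      # still waiting on other renderers
```

A real renderer would key such a collector per frame, since identifiers for different virtual pictures must not be mixed.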
The DPU in the renderer acquires the generated data stream and sends it to the DPU of the director via the switch. The data stream may comply with the SMPTE ST 2110 protocol standard, carrying lossless audio and video together with Precision Time Protocol (PTP) time information. The sub-content to be rendered may include information defining the transmission content format, size information of the rendered sub-picture, size information of the virtual picture, video resolution, frame rate, color space, and so on. The transmission content format may be an RGB (red green blue) color arrangement, and the color space may be a range of RGB colors.
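The metadata fields just listed can be pictured as a simple header structure. The field names below are illustrative assumptions only; they are not the SMPTE ST 2110 wire format, which defines its own packetization:

```python
from dataclasses import dataclass

@dataclass
class StreamHeader:
    """Assumed sketch of the per-stream header metadata described above."""
    unified_completion_id: int   # e.g. a PTP-derived timestamp shared by all N sub-pictures
    frame_width: int             # size information of the rendered sub-picture
    frame_height: int
    virtual_width: int           # size information of the full virtual picture
    virtual_height: int
    frame_rate: float
    color_space: str             # e.g. a range of RGB colors
```

The director only needs this small header in system memory to decide where each sub-picture tile belongs; the pixel payload itself can go straight to video memory.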
The director separates the data stream to obtain a header file and a rendered sub-picture, synthesizes a virtual picture according to the header file, and displays the virtual picture.
For the above-described picture rendering method, the present application further provides a corresponding picture rendering device, so that the above-described picture rendering method is practically applied and implemented.
Referring to fig. 10, the structure of a picture rendering device according to an embodiment of the present application is shown. The content to be rendered includes N sub-contents to be rendered and is used for rendering to obtain a virtual picture; the N sub-contents to be rendered are picture-rendered by N renderers respectively. A target renderer among the N renderers includes the picture rendering device, N being an integer greater than 1. As shown in fig. 10, the picture rendering device 1000 includes: an acquisition unit 1001, a first generation unit 1002, a first transmission unit 1003, a second generation unit 1004, and a second transmission unit 1005;
the obtaining unit 1001 is configured to obtain a target sub-content to be rendered in the content to be rendered, where the target sub-content to be rendered is one of the N sub-contents to be rendered;
the first generating unit 1002 is configured to generate a rendering completion identifier when a rendered sub-picture is obtained by rendering the target sub-content to be rendered;
The first sending unit 1003 is configured to send the rendering completion identifier to other renderers, where the other renderers are N-1 renderers except the target renderer in the N renderers;
the second generating unit 1004 is configured to generate, if N-1 rendering completion identifiers are obtained from the other renderers, a unified completion identifier according to the N rendering completion identifiers, where the unified completion identifier is used to identify an order in which the virtual frame completes rendering;
the second sending unit 1005 is configured to send the unified completion identifier and the rendered sub-picture to a director, so that the director synthesizes the virtual picture according to the unified completion identifier and the rendered sub-picture.
As a possible implementation manner, the obtaining unit 1001 is specifically configured to:
acquiring the content to be rendered, a target identifier for identifying the target rendering machine, and a corresponding relation between the identifier and a cutting area;
determining a target clipping region corresponding to the target identifier according to the corresponding relation between the identifier and the clipping region;
and acquiring the target sub-content to be rendered from the content to be rendered according to the target clipping region.
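The three steps this acquisition unit performs can be sketched minimally as follows. Representing the content as a 2-D grid and all names used here are illustrative assumptions:

```python
def target_clip_region(content, target_id, region_by_id):
    """Slice the target renderer's sub-content out of the full content (sketch).

    `content` is the content to be rendered, modeled as a list of rows;
    `region_by_id` is the correspondence between identifier and clipping
    region, each region given as (top, left, height, width).
    """
    top, left, h, w = region_by_id[target_id]      # step 2: find the target clipping region
    return [row[left:left + w]                     # step 3: cut the target sub-content out
            for row in content[top:top + h]]
```

With the correspondence table shared by all N renderers, each renderer independently slices its own tile and no coordinator is needed at this stage.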
As a possible implementation manner, the correspondence between the identifier and the clipping region is determined according to the graphics card performance of the N renderers, where the clipping region corresponding to the rendering machine with higher graphics card performance is larger.
As a possible implementation manner, the first generating unit 1002 is further configured to:
during rendering according to the target sub-content to be rendered, if the rendered sub-picture is not obtained within a preset time, a rendering completion identifier is still generated.
As a possible implementation manner, the obtaining unit 1001 is further configured to:
acquiring a time stamp used for representing the generation time of the rendering completion identification;
the first transmitting unit 1003 is specifically configured to:
sending the rendering completion identification and a timestamp used for representing the generation time of the rendering completion identification to other renderers;
the second generating unit 1004 is specifically configured to:
determining the timestamp with the latest time from the timestamps corresponding to the N rendering completion identifiers;
and taking the timestamp with the latest time as the unified completion identification.
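The two steps above reduce to a single maximum over the N timestamps. A minimal sketch, assuming the rendering completion identifiers carry numeric (e.g. PTP-derived) generation times:

```python
def unified_completion_identifier(timestamps):
    """Take the latest of the N completion timestamps as the unified identifier.

    `timestamps` holds one generation-time value per rendering completion
    identifier; the latest one is, by construction, the moment the whole
    virtual picture finished rendering.
    """
    return max(timestamps)
```

Because every renderer applies the same rule to the same N values, all N renderers derive an identical unified identifier without any extra round of communication.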
As a possible implementation manner, the target sub-content to be rendered includes content to be rendered corresponding to the rendered sub-picture and content to be rendered corresponding to pictures located around the rendered sub-picture, and the apparatus further includes a rendering unit configured to:
rendering according to the target sub-content to be rendered to obtain an intermediate sub-picture;
acquiring size information of the rendered sub-picture;
and cutting the intermediate sub-picture according to the size information of the rendered sub-picture to obtain the rendered sub-picture.
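Since the target sub-content includes a margin of surrounding content (useful, for example, when effects bleed across tile borders), the render comes out larger than the final tile and is then cut down. A minimal sketch; centre-cropping and a 2-D list-of-rows representation are assumptions, as the text only says to cut according to the size information:

```python
def crop_rendered_sub_picture(padded, out_h, out_w):
    """Cut the final rendered sub-picture out of the padded intermediate render.

    `padded` is the intermediate sub-picture including the surrounding
    margin; (out_h, out_w) is the size information of the rendered
    sub-picture. The margin is assumed symmetric, so the centre is kept.
    """
    pad_h = (len(padded) - out_h) // 2
    pad_w = (len(padded[0]) - out_w) // 2
    return [row[pad_w:pad_w + out_w]
            for row in padded[pad_h:pad_h + out_h]]
```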
As a possible implementation manner, the second sending unit 1005 is specifically configured to:
and sending the unified completion identifier and the rendered sub-picture to the director by using a multi-machine GPU-acceleration communication technology.
As a possible implementation manner, the second sending unit 1005 is specifically configured to:
generating a header file according to the rendered sub-picture and the unified completion identifier, where the header file is used for instructing the director how to synthesize the virtual picture;
generating a data stream according to the header file and the rendered sprite;
and sending the data stream to the director.
As a possible implementation manner, the second sending unit 1005 is specifically configured to:
dividing the data stream into a plurality of sub-data streams according to a fixed bit rate, the fixed bit rate being determined according to a time length and a size of the data stream;
and sending the multiple sub-data streams to the director one by one.
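The fixed-bit-rate split described above can be sketched as below. Deriving the per-interval chunk size as size divided by time length, and the byte-string representation, are assumptions for illustration:

```python
def split_stream(data, duration_s):
    """Divide one data stream into sub-data streams at a fixed bit rate (sketch).

    The fixed rate is determined from the stream's size and its time
    length; one sub-data stream is produced per one-second interval and
    the sub-streams are then sent to the director one by one.
    """
    chunk = max(1, int(len(data) / duration_s))   # bytes per interval at the fixed rate
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]
```

Pacing the stream this way keeps the instantaneous bandwidth flat instead of bursting a whole frame at once, which is friendlier to the switch and the director's ingest queue.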
As a possible implementation manner, the apparatus further includes a backup unit, configured to:
acquiring an identifier of a backup renderer, where the backup renderer is configured to render the target sub-content to be rendered to obtain a rendered sub-picture;
the generating a header file according to the rendering sub-picture and the unified completion identification includes:
and generating a header file according to the identifier of the backup rendering machine, the rendering sub-picture and the unified completion identifier.
According to the technical scheme, the content to be rendered corresponding to one virtual picture is divided into N sub-contents to be rendered, which are picture-rendered by N renderers respectively. Taking a target renderer among the N renderers as an example, the target renderer acquires the target sub-content to be rendered, which is one of the N sub-contents to be rendered included in the content to be rendered. When a rendered sub-picture is obtained by rendering the target sub-content to be rendered, a rendering completion identifier is generated and sent to the other renderers. If the N-1 rendering completion identifiers respectively sent by the other renderers are obtained, all N renderers have completed rendering their sub-content to be rendered, and a unified completion identifier is then generated according to the N rendering completion identifiers. The unified completion identifier identifies the order in which virtual pictures complete rendering; in other words, different virtual pictures use different unified completion identifiers. The unified completion identifier and the rendered sub-picture are sent to the director, so that the director can synthesize the corresponding virtual picture according to the unified completion identifier and the rendered sub-pictures.
Therefore, although the content belonging to one virtual picture is rendered by N renderers respectively to obtain N rendered sub-pictures, those N rendered sub-pictures share one unified completion identifier. The director can locate the N rendered sub-pictures of a virtual picture according to the unified completion identifier and synthesize them into the virtual picture, which improves the synthesis rate of the rendered sub-pictures belonging to one virtual picture and hence the rendering effect of the virtual picture.
Referring to fig. 11, the structure of a picture rendering device according to an embodiment of the present application is shown. As shown in fig. 11, the picture rendering device 1100 includes: a receiving unit 1101, an acquiring unit 1102, and a synthesizing unit 1103;
the receiving unit 1101 is configured to receive M rendered sub-pictures sent by N renderers and M unified completion identifiers respectively corresponding to the M rendered sub-pictures, where the unified completion identifiers are used to identify the order in which virtual pictures complete rendering, and N is a positive integer less than or equal to M;
the obtaining unit 1102 is configured to obtain, from the M rendered sub-pictures, N rendered sub-pictures having the same unified completion identifier;
the synthesizing unit 1103 is configured to synthesize the N rendered sub-pictures to obtain the virtual picture.
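The director-side grouping these units describe can be sketched as follows. Representing the input as (identifier, sub-picture) pairs and all names are assumptions for illustration:

```python
def synthesize_frames(sub_pictures, n):
    """Group M rendered sub-pictures by unified completion identifier (sketch).

    `sub_pictures` is an iterable of (unified_id, sub_picture) pairs as
    received from the renderers; once N sub-pictures share one unified
    identifier, they form one complete virtual picture.
    """
    groups = {}
    frames = []
    for uid, pic in sub_pictures:
        groups.setdefault(uid, []).append(pic)
        if len(groups[uid]) == n:
            # all N tiles of this virtual picture are present: synthesize it
            frames.append((uid, groups.pop(uid)))
    return frames
```

Because grouping is by identifier rather than by arrival order, sub-pictures of different virtual pictures may interleave on the wire without ever being composited into the wrong frame.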
As a possible implementation manner, the receiving unit 1101 is specifically configured to:
receiving M data streams sent by N renderers, wherein the data streams comprise a header file and a rendering sub-picture, and the header file is generated according to the rendering sub-picture and the unified completion identification;
storing the header file into a system memory;
and storing the rendered sub-picture into a video memory.
As a possible implementation manner, the header file further includes an identifier of a backup renderer, where the backup renderer is configured to render the target sub-content to be rendered to obtain a rendered sub-picture, and the apparatus further includes a standby unit configured to:
determine an abnormal data stream from among the M data streams;
acquiring a header file in the abnormal data stream;
acquiring the identification of the backup rendering machine from the header file in the abnormal data stream;
and acquiring a rendering sub-picture rendered by the backup rendering machine according to the identification of the backup rendering machine.
According to the technical scheme, the content to be rendered corresponding to one virtual picture is divided into N sub-contents to be rendered, which are picture-rendered by N renderers respectively. Taking a target renderer among the N renderers as an example, the target renderer acquires the target sub-content to be rendered, which is one of the N sub-contents to be rendered included in the content to be rendered. When a rendered sub-picture is obtained by rendering the target sub-content to be rendered, a rendering completion identifier is generated and sent to the other renderers. If the N-1 rendering completion identifiers respectively sent by the other renderers are obtained, all N renderers have completed rendering their sub-content to be rendered, and a unified completion identifier is then generated according to the N rendering completion identifiers. The unified completion identifier identifies the order in which virtual pictures complete rendering; in other words, different virtual pictures use different unified completion identifiers. The unified completion identifier and the rendered sub-picture are sent to the director, so that the director can synthesize the corresponding virtual picture according to the unified completion identifier and the rendered sub-pictures.
Therefore, although the content belonging to one virtual picture is rendered by N renderers respectively to obtain N rendered sub-pictures, those N rendered sub-pictures share one unified completion identifier. The director can locate the N rendered sub-pictures of a virtual picture according to the unified completion identifier and synthesize them into the virtual picture, which improves the synthesis rate of the rendered sub-pictures belonging to one virtual picture and hence the rendering effect of the virtual picture.
For the picture rendering method described above, the application also provides a corresponding picture rendering system, so that the picture rendering method can be practically applied and realized.
Referring to fig. 12, the structure of a picture rendering system according to an embodiment of the present application is shown. As shown in fig. 12, the picture rendering system 1200 includes a plurality of renderers 1201 and a director 1202;
the content to be rendered includes N sub-contents to be rendered and is used for rendering to obtain a virtual picture; the N sub-contents to be rendered are picture-rendered by the N renderers respectively, and for a target renderer among the N renderers, N is an integer greater than 1.
The renderer is configured to execute any one of the screen rendering methods executed by the target renderer, for example:
acquiring target sub-content to be rendered in the content to be rendered, wherein the target sub-content to be rendered is one of N sub-contents to be rendered;
generating a rendering completion mark when rendering the target sub-content to be rendered to obtain a rendering sub-picture;
sending a rendering completion identification to other renderers, wherein the other renderers are N-1 renderers except the target rendering machine;
If N-1 rendering completion identifications are obtained from other rendering machines, generating unified completion identifications according to the N rendering completion identifications, wherein the unified completion identifications are used for identifying the order in which virtual pictures are rendered;
and sending the unified completion identification and the rendering sub-picture to the director so that the director synthesizes the virtual picture according to the unified completion identification and the rendering sub-picture.
It should be noted that, the renderer may also execute any of the above-described image rendering methods executed by the target renderer.
The director is configured to perform any one of the picture rendering methods performed by the aforementioned director, for example:
receiving M rendering sub-pictures sent by N rendering machines and M unified completion identifiers respectively corresponding to the M rendering sub-pictures, wherein the unified completion identifiers are used for identifying the order in which virtual pictures are rendered, and N is a positive integer less than or equal to M;
acquiring N rendering sub-pictures with the same unified completion identifier from the M rendering sub-pictures;
and synthesizing the N rendering sub-pictures to obtain a virtual picture.
As one possible implementation, the picture rendering system further includes a switch. Because the director generally has only one network port, it is difficult to connect it directly to multiple renderers; the switch has multiple ports that can be connected to multiple renderers, so the renderers send their rendered sub-pictures to the director through the switch. For example, the DPU in the renderer divides the data stream and sends it to the switch, and the switch forwards the data stream to the director for synthesis to obtain the virtual picture.
As one possible implementation, the renderer sends data to the switch via a dedicated NVIDIA BlueField DPU. The NVIDIA BlueField DPU is on-chip data center infrastructure that can be used to offload, accelerate, and isolate various software-defined infrastructure services running on the host CPU, breaking through certain bottlenecks in performance, scalability, and security.
As a possible implementation manner, to cope with scenarios with a large amount of data, a switch with a bandwidth of 25 Gbit/s or above may be selected.
According to the technical scheme, the content to be rendered corresponding to one virtual picture is divided into N sub-contents to be rendered, which are picture-rendered by N renderers respectively. Taking a target renderer among the N renderers as an example, the target renderer acquires the target sub-content to be rendered, which is one of the N sub-contents to be rendered included in the content to be rendered. When a rendered sub-picture is obtained by rendering the target sub-content to be rendered, a rendering completion identifier is generated and sent to the other renderers. If the N-1 rendering completion identifiers respectively sent by the other renderers are obtained, all N renderers have completed rendering their sub-content to be rendered, and a unified completion identifier is then generated according to the N rendering completion identifiers. The unified completion identifier identifies the order in which virtual pictures complete rendering; in other words, different virtual pictures use different unified completion identifiers. The unified completion identifier and the rendered sub-picture are sent to the director, so that the director can synthesize the corresponding virtual picture according to the unified completion identifier and the rendered sub-pictures.
Therefore, although the content belonging to one virtual picture is rendered by N renderers respectively to obtain N rendered sub-pictures, those N rendered sub-pictures share one unified completion identifier. The director can locate the N rendered sub-pictures of a virtual picture according to the unified completion identifier and synthesize them into the virtual picture, which improves the synthesis rate of the rendered sub-pictures belonging to one virtual picture and hence the rendering effect of the virtual picture.
Referring to fig. 13, the application scene of a picture rendering system according to an embodiment of the present application is shown.
The time generator continuously sends a timestamp to each renderer so that the times of the plurality of renderers are synchronized and the unified completion identifier can be generated from the timestamps. Three of the six renderers serve as backup renderers. According to the picture rendering method provided by the embodiments of the present application, each frame of the live picture is rendered by the renderers to obtain rendered sub-pictures and a unified completion identifier, which are sent to the director through the switch.
The director receives all the audio and the rendered sub-pictures, synthesizes the virtual pictures, and sends them to the stream pusher. The stream pusher encodes the received virtual pictures and uploads them to the cloud, so that other terminal devices (such as a notebook computer, a personal computer, a mobile phone, a tablet computer, a handheld computer, and the like) can acquire the live virtual pictures from the cloud.
In this way, a disaster-recovery backup and picture rendering scheme is realized in a live-broadcast environment: low-delay, high-resolution synthesis is achieved, availability is high, horizontal scaling of equipment is supported, and a better overall effect is obtained.
The embodiments of the present application further provide a computer device, which is the computer device described above. The computer device may be a server or a terminal device, and the foregoing picture rendering device may be built into the server or the terminal device. The computer device provided in the embodiments of the present application is described below from the perspective of hardware implementation. Fig. 14 is a schematic structural diagram of a server, and fig. 15 is a schematic structural diagram of a terminal device.
Referring to fig. 14, which is a schematic diagram of a server structure provided in an embodiment of the present application, the server 1400 may vary considerably in configuration or performance, and may include one or more processors 1422 (e.g., central processing units, CPUs), a memory 1432, and one or more storage media 1430 (e.g., one or more mass storage devices) storing application programs 1442 or data 1444. The memory 1432 and the storage medium 1430 may be transitory or persistent storage. The program stored in the storage medium 1430 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the processor 1422 may be configured to communicate with the storage medium 1430 and execute, on the server 1400, the series of instruction operations in the storage medium 1430.
Server 1400 may also include one or more power supplies 1426, one or more wired or wireless network interfaces 1450, one or more input/output interfaces 1458, and/or one or more operating systems 1441, e.g., Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 14.
Wherein, the CPU 1422 is configured to perform the following steps:
acquiring target sub-content to be rendered in the content to be rendered, wherein the target sub-content to be rendered is one of the N sub-contents to be rendered;
generating a rendering completion mark when rendering the target sub-content to be rendered to obtain a rendering sub-picture;
sending the rendering completion identification to other renderers, wherein the other renderers are N-1 renderers except the target rendering machine in the N renderers;
if N-1 rendering completion identifications are obtained from the other rendering machines, generating unified completion identifications according to the N rendering completion identifications, wherein the unified completion identifications are used for identifying the order in which the virtual pictures are rendered;
and sending the unified completion identifier and the rendered sub-picture to the director so that the director synthesizes the virtual picture according to the unified completion identifier and the rendered sub-picture, where the content to be rendered corresponding to the virtual picture includes the N sub-contents to be rendered, and the N sub-contents to be rendered are picture-rendered by the N renderers respectively.
Optionally, the CPU 1422 may further perform method steps of any specific implementation of the image rendering method in the embodiments of the present application.
Referring to fig. 15, the structure of a terminal device provided in an embodiment of the present application is shown schematically. Fig. 15 is a block diagram illustrating a part of a structure of a smart phone related to a terminal device provided in an embodiment of the present application, where the smart phone includes: radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (WiFi) module 1570, processor 1580, power supply 1590, and the like. Those skilled in the art will appreciate that the smartphone structure shown in fig. 15 is not limiting of the smartphone and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes each component of the smart phone in detail with reference to fig. 15:
the RF circuit 1510 may be used for receiving and transmitting signals during a message or a call, and particularly, after receiving downlink information of a base station, the signal is processed by the processor 1580; in addition, the data of the design uplink is sent to the base station.
The memory 1520 may be used to store software programs and modules, and the processor 1580 implements various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 1520.
The input unit 1530 may be used to receive input numerical or character information and generate key signal inputs related to user settings and function control of the smart phone. In particular, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, may collect touch operations on or near the user and drive the corresponding connection device according to a predetermined program. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 1540 may be used to display information input by a user or information provided to the user and various menus of the smart phone. The display unit 1540 may include a display panel 1541, and optionally, the display panel 1541 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The smartphone may also include at least one sensor 1550, such as a light sensor, a motion sensor, and other sensors. Other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the smart phone are not described in detail herein.
The audio circuit 1560, the speaker 1561, and the microphone 1562 may provide an audio interface between the user and the smartphone. The audio circuit 1560 may transmit an electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; conversely, the microphone 1562 converts collected sound signals into electrical signals, which the audio circuit 1560 receives and converts into audio data. The audio data is then processed by the processor 1580 and sent, for example, to another smartphone via the RF circuit 1510, or output to the memory 1520 for further processing.
Processor 1580 is a control center of the smartphone, connects various parts of the entire smartphone with various interfaces and lines, performs various functions of the smartphone and processes data by running or executing software programs and/or modules stored in memory 1520, and invoking data stored in memory 1520. In the alternative, processor 1580 may include one or more processing units.
The smartphone also includes a power supply 1590 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 1580 via a power management system, so as to manage charging, discharging, and power consumption.
Although not shown, the smart phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In an embodiment of the present application, the memory 1520 included in the smart phone may store program code and transmit the program code to the processor.
The processor 1580 included in the smart phone may execute the picture rendering method provided in the above embodiments according to the instructions in the program code.
The embodiment of the application also provides a computer readable storage medium for storing a computer program for executing the picture rendering method provided by the above embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the picture rendering methods provided in various alternative implementations of the above aspects.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware instructed by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the storage medium may be at least one of the following media: Read-Only Memory (ROM), RAM, magnetic disk, or optical disk.
It should be noted that the embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without undue burden.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. The implementations provided in the above aspects may be further combined to provide further implementations. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A picture rendering method, wherein content to be rendered comprising N sub-contents to be rendered is used for rendering to obtain a virtual picture, the N sub-contents to be rendered are respectively rendered by N renderers, and N is an integer greater than 1; for a target renderer among the N renderers, the method comprises:
acquiring target sub-content to be rendered in the content to be rendered, wherein the target sub-content to be rendered is one of the N sub-contents to be rendered;
generating a rendering completion identifier when a rendered sub-picture is obtained by rendering according to the target sub-content to be rendered;
sending the rendering completion identifier to other renderers, wherein the other renderers are the N-1 renderers other than the target renderer among the N renderers;
if N-1 rendering completion identifiers are obtained from the other renderers, generating a unified completion identifier according to the N rendering completion identifiers, wherein the unified completion identifier is used for identifying the order in which the virtual picture completes rendering;
and sending the unified completion identifier and the rendered sub-picture to a director, so that the director synthesizes the virtual picture according to the unified completion identifier and the rendered sub-picture.
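Purely as an illustration of the handshake in claim 1, and not as the patent's implementation, the target renderer's side could be sketched as follows. The identifier format, `send`, and `inbox` are all hypothetical names introduced for this sketch:

```python
import hashlib

def on_sub_picture_rendered(my_id, peer_ids, send, inbox):
    """Sketch of the target renderer's side of claim 1 (names hypothetical).
    send(peer, msg) delivers a completion identifier to one peer renderer;
    inbox() blocks until one peer's completion identifier arrives."""
    completion_id = f"{my_id}-done"           # rendering completion identifier
    for peer in peer_ids:
        send(peer, completion_id)             # notify the other N-1 renderers
    received = {inbox() for _ in peer_ids}    # collect the N-1 peer identifiers
    # Derive the unified identifier deterministically from all N identifiers,
    # sorting first so every renderer computes the same value for this frame.
    payload = "|".join(sorted(received | {completion_id}))
    return hashlib.sha1(payload.encode()).hexdigest()
```

Because the combination is deterministic, every renderer arrives at the same unified completion identifier without a central coordinator.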
2. The method of claim 1, wherein the obtaining the target sub-content to be rendered in the content to be rendered comprises:
acquiring the content to be rendered, a target identifier for identifying the target renderer, and a correspondence between identifiers and clipping regions;
determining a target clipping region corresponding to the target identifier according to the correspondence between identifiers and clipping regions;
and acquiring the target sub-content to be rendered from the content to be rendered according to the target clipping region.
3. The method of claim 2, wherein the correspondence between identifiers and clipping regions is determined according to the graphics card performance of the N renderers, and a renderer with higher graphics card performance corresponds to a larger clipping region.
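As a non-authoritative illustration of claim 3's idea only, clipping regions sized in proportion to graphics-card performance could be assigned as below; the vertical-strip layout and all names are assumptions of this sketch, not taken from the patent:

```python
def assign_clip_regions(width, height, perf_scores):
    """Split a width x height frame into vertical strips, one per renderer,
    with strip width proportional to each renderer's graphics-card score."""
    total = sum(perf_scores)
    regions, x = [], 0
    for i, score in enumerate(perf_scores):
        # The last strip absorbs rounding so the strips tile the frame exactly.
        w = width - x if i == len(perf_scores) - 1 else round(width * score / total)
        regions.append((x, 0, w, height))  # (x, y, w, h)
        x += w
    return regions

regions = assign_clip_regions(1920, 1080, [3, 2, 1])
```

Here the fastest renderer receives half the frame and the slowest one sixth, matching the "higher performance, larger region" rule.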
4. The method according to claim 1, wherein the method further comprises:
and in the process of rendering according to the target sub-content to be rendered, if the rendered sub-picture is not obtained within a preset time, generating the rendering completion identifier.
5. The method according to claim 1, wherein the method further comprises:
acquiring a timestamp representing the generation time of the rendering completion identifier;
the sending the rendering completion identifier to other renderers comprises:
sending the rendering completion identifier and the timestamp representing its generation time to the other renderers;
the generating a unified completion identifier according to the N rendering completion identifiers comprises:
determining the latest timestamp among the timestamps corresponding to the N rendering completion identifiers;
and taking the latest timestamp as the unified completion identifier.
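Claim 5's rule reduces to taking the maximum over the N generation-time timestamps, which every renderer can compute identically once all identifiers are exchanged. A minimal sketch:

```python
def unify_completion(timestamps):
    """Per claim 5: the unified completion identifier is the latest of the
    generation-time timestamps attached to the N rendering completion
    identifiers, so all N renderers agree without further negotiation."""
    return max(timestamps)
```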
6. The method of claim 1, wherein the target sub-content to be rendered comprises content to be rendered corresponding to the rendered sub-picture and content to be rendered corresponding to pictures located around the rendered sub-picture, and the method further comprises:
rendering according to the target sub-content to be rendered to obtain a sub-picture to be cropped;
acquiring size information of the rendered sub-picture;
and cropping the sub-picture to be cropped according to the size information of the rendered sub-picture to obtain the rendered sub-picture.
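As one hypothetical reading of the cropping step in claim 6 (the centre-crop policy and row-major pixel layout are assumptions of this sketch, not stated in the patent), discarding the surrounding margin rendered for seam continuity might look like:

```python
def crop_center(pixels, out_w, out_h):
    """Crop a row-major 2D pixel grid to out_w x out_h around its centre,
    discarding the margin that was rendered only for continuity at seams."""
    h, w = len(pixels), len(pixels[0])
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return [row[left:left + out_w] for row in pixels[top:top + out_h]]
```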
7. The method of claim 1, wherein the sending the unified completion identifier and the rendered sub-picture to a director comprises:
and sending the unified completion identifier and the rendered sub-picture to the director by using a multi-machine graphics processor acceleration communication technology.
8. The method of claim 1, wherein the sending the unified completion identifier and the rendered sub-picture to a director comprises:
generating a header file according to the rendered sub-picture and the unified completion identifier, wherein the header file is used for instructing the director to synthesize the virtual picture;
generating a data stream according to the header file and the rendered sub-picture;
and sending the data stream to the director.
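The patent does not specify the header layout. As one hypothetical sketch only, a length-prefixed JSON header carrying the unified completion identifier, and optionally a backup renderer identifier as in claim 10, could frame the data stream; the field names are invented for this example:

```python
import json
import struct

def build_stream(sub_picture, unified_id, backup_id=None):
    """Frame a rendered sub-picture (bytes) with a small header so the
    director can split metadata from pixel data (layout hypothetical)."""
    header = {"unified_id": unified_id}
    if backup_id is not None:
        header["backup_renderer"] = backup_id  # claim 10: backup renderer id
    blob = json.dumps(header).encode()
    # Length-prefix the header (big-endian uint32) so parsing is unambiguous.
    return struct.pack(">I", len(blob)) + blob + sub_picture
```

On the director side, the header would go to system memory and the trailing pixel payload to video memory, as claim 12 describes.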
9. The method of claim 8, wherein the sending the data stream to the director comprises:
dividing the data stream into a plurality of sub-data streams according to a fixed bit rate, the fixed bit rate being determined according to the time length and size of the data stream;
and sending the plurality of sub-data streams to the director one by one.
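A minimal sketch of the splitting rule in claim 9, under the assumption (not stated in the patent) that each sub-stream carries one fixed sending interval's worth of data:

```python
def split_stream(data, duration_s, interval_s=0.1):
    """Split a byte stream into sub-streams at a fixed rate. The rate is
    derived from the stream's size and time length (claim 9), so each
    sub-stream carries interval_s seconds' worth of data."""
    rate_bytes_per_s = len(data) / duration_s           # fixed (byte) rate
    chunk = max(1, int(rate_bytes_per_s * interval_s))  # bytes per sub-stream
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]
```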
10. The method of claim 8, wherein the method further comprises:
acquiring an identifier of a backup renderer, wherein the backup renderer is used for rendering according to the target sub-content to be rendered to obtain the rendered sub-picture;
the generating a header file according to the rendered sub-picture and the unified completion identifier comprises:
and generating the header file according to the identifier of the backup renderer, the rendered sub-picture, and the unified completion identifier.
11. A method of picture rendering, the method comprising:
receiving M rendered sub-pictures sent by N renderers and M unified completion identifiers respectively corresponding to the M rendered sub-pictures, wherein a unified completion identifier is used for identifying the order in which a virtual picture completes rendering, and N is a positive integer less than or equal to M;
acquiring, from the M rendered sub-pictures, N rendered sub-pictures having the same unified completion identifier;
and synthesizing the N rendered sub-pictures to obtain the virtual picture.
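On the director side, the grouping step of claim 11 amounts to bucketing arriving sub-pictures by their unified completion identifier and releasing a frame once all N buckets fill. A sketch under that reading, with hypothetical names:

```python
from collections import defaultdict

def collect_frames(received, n):
    """Group (unified_id, sub_picture) pairs by identifier; once all N
    sub-pictures sharing one identifier have arrived, that frame is ready
    for synthesis into the virtual picture (claim 11 reading, names ours)."""
    by_id = defaultdict(list)
    complete = []
    for uid, sub in received:
        by_id[uid].append(sub)
        if len(by_id[uid]) == n:
            complete.append((uid, by_id.pop(uid)))
    return complete
```

Matching by identifier rather than arrival order is what keeps sub-pictures from different frames from being composited together.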
12. The method of claim 11, wherein the receiving M rendered sub-pictures sent by N renderers comprises:
receiving M data streams sent by the N renderers, wherein each data stream comprises a header file and a rendered sub-picture, and the header file is generated according to the rendered sub-picture and the unified completion identifier;
storing the header file in system memory;
and storing the rendered sub-picture in video memory.
13. The method of claim 12, wherein the header file further comprises an identifier of a backup renderer, the backup renderer being used for rendering according to the target sub-content to be rendered to obtain the rendered sub-picture, and the method further comprises:
if an abnormal data stream is determined among the M data streams, acquiring the header file in the abnormal data stream;
acquiring the identifier of the backup renderer from the header file in the abnormal data stream;
and acquiring, according to the identifier of the backup renderer, a rendered sub-picture rendered by the backup renderer.
14. A picture rendering apparatus, wherein content to be rendered comprising N sub-contents to be rendered is used for rendering to obtain a virtual picture, the N sub-contents to be rendered are respectively rendered by N renderers, a target renderer among the N renderers comprises the picture rendering apparatus, and N is an integer greater than 1; the apparatus comprises: an acquisition unit, a first generation unit, a first sending unit, a second generation unit, and a second sending unit;
the acquisition unit is configured to acquire a target sub-content to be rendered in the content to be rendered, wherein the target sub-content to be rendered is one of the N sub-contents to be rendered;
the first generation unit is configured to generate a rendering completion identifier when a rendered sub-picture is obtained by rendering according to the target sub-content to be rendered;
the first sending unit is configured to send the rendering completion identifier to other renderers, wherein the other renderers are the N-1 renderers other than the target renderer among the N renderers;
the second generation unit is configured to generate a unified completion identifier according to N rendering completion identifiers if N-1 rendering completion identifiers are obtained from the other renderers, wherein the unified completion identifier is used for identifying the order in which the virtual picture completes rendering;
and the second sending unit is configured to send the unified completion identifier and the rendered sub-picture to a director, so that the director synthesizes the virtual picture according to the unified completion identifier and the rendered sub-picture.
15. A picture rendering apparatus, the apparatus comprising: a receiving unit, an acquiring unit and a synthesizing unit;
the receiving unit is configured to receive M rendered sub-pictures sent by N renderers and M unified completion identifiers respectively corresponding to the M rendered sub-pictures, wherein a unified completion identifier is used for identifying the order in which a virtual picture completes rendering, and N is a positive integer less than or equal to M;
the acquisition unit is configured to acquire, from the M rendered sub-pictures, N rendered sub-pictures having the same unified completion identifier;
and the synthesis unit is configured to synthesize the N rendered sub-pictures to obtain the virtual picture.
16. A picture rendering system, the system comprising a plurality of renderers and a director;
the renderer for performing the method of any of claims 1-10;
the director being adapted to perform the method of any one of claims 11-13.
17. The system of claim 16, further comprising a switch, wherein the renderers send the rendered sub-pictures to the director via the switch.
18. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing a computer program and transmitting the computer program to the processor;
The processor is configured to perform the method of any of claims 1-10 or the method of any of claims 11-13 according to instructions in the computer program.
19. A computer readable storage medium for storing a computer program for performing the method of any one of claims 1-10 or for performing the method of any one of claims 11-13.
20. A computer program product comprising a computer program, wherein the computer program, when run on a computer device, causes the computer device to perform the method of any one of claims 1-10 or the method of any one of claims 11-13.
CN202211192246.4A 2022-09-28 2022-09-28 Picture rendering method and related device Pending CN116503498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211192246.4A CN116503498A (en) 2022-09-28 2022-09-28 Picture rendering method and related device


Publications (1)

Publication Number Publication Date
CN116503498A true CN116503498A (en) 2023-07-28

Family

ID=87325482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211192246.4A Pending CN116503498A (en) 2022-09-28 2022-09-28 Picture rendering method and related device

Country Status (1)

Country Link
CN (1) CN116503498A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116866621A (en) * 2023-09-05 2023-10-10 湖南马栏山视频先进技术研究院有限公司 Cloud synchronization method and system for video real-time rendering
CN116866621B (en) * 2023-09-05 2023-11-03 湖南马栏山视频先进技术研究院有限公司 Cloud synchronization method and system for video real-time rendering

Similar Documents

Publication Publication Date Title
JP6310073B2 (en) Drawing system, control method, and storage medium
US12022160B2 (en) Live streaming sharing method, and related device and system
CN113209632B (en) Cloud game processing method, device, equipment and storage medium
EP3574662B1 (en) Ambisonic audio with non-head tracked stereo based on head position and time
WO2023071603A1 (en) Video fusion method and apparatus, electronic device, and storage medium
KR102564729B1 (en) Method and apparatus for transmitting information on 3D content including a plurality of viewpoints
CN104768023A (en) System and method for delivering graphics over network
Akyildiz et al. Wireless extended reality (XR): Challenges and new research directions
CN105550934A (en) System and method for pushing WeChat soft advertisement in virtual reality
CN116503498A (en) Picture rendering method and related device
WO2022073840A1 (en) Latency management with deep learning based prediction in gaming applications
CN108401163B (en) Method and device for realizing VR live broadcast and OTT service system
CN106558016B (en) 4K movie & TV cloud preparation assembly line
US11431770B2 (en) Method, system, apparatus, and electronic device for managing data streams in a multi-user instant messaging system
CN111381787A (en) Screen projection method and equipment
WO2024027611A1 (en) Video live streaming method and apparatus, electronic device and storage medium
WO2020063171A1 (en) Data transmission method, terminal, server and storage medium
US20230206575A1 (en) Rendering a virtual object in spatial alignment with a pose of an electronic device
US20230025664A1 (en) Data processing method and apparatus for immersive media, and computer-readable storage medium
KR20240007142A (en) Segmented rendering of extended reality data over 5G networks
CN114157903A (en) Redirection method, redirection device, redirection equipment, storage medium and program product
EP4373089A1 (en) Data processing method and apparatus, computer, and readable storage medium
WO2023169003A1 (en) Point cloud media decoding method and apparatus and point cloud media coding method and apparatus
EP4290868A1 (en) 3d object streaming method, device, and program
EP4202611A1 (en) Rendering a virtual object in spatial alignment with a pose of an electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40089843

Country of ref document: HK