WO2022095526A1 - Graphics engine and graphics processing method suitable for a player - Google Patents

Graphics engine and graphics processing method suitable for a player

Info

Publication number
WO2022095526A1
WO2022095526A1 · PCT/CN2021/111381 · CN2021111381W
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
player
graphics
data
engine
Prior art date
Application number
PCT/CN2021/111381
Other languages
English (en)
French (fr)
Inventor
李超然
王昊
董重
王兆政
Original Assignee
上海哔哩哔哩科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海哔哩哔哩科技有限公司 filed Critical 上海哔哩哔哩科技有限公司
Priority to US18/033,788 priority Critical patent/US20230403437A1/en
Publication of WO2022095526A1 publication Critical patent/WO2022095526A1/zh

Links

Images

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                  • H04N 21/44012: ... involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
                • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                  • H04N 21/4312: ... involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
                    • H04N 21/4314: ... for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
              • H04N 21/47: End-user applications
                • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
                  • H04N 21/4788: ... communicating with other users, e.g. chatting
                • H04N 21/488: Data services, e.g. news ticker
                  • H04N 21/4884: ... for displaying subtitles

Definitions

  • the present application relates to the technical field of graphics processing, and in particular, to a graphics engine, and a graphics processing method, system, computer device and computer-readable storage medium suitable for a player.
  • the purpose of the embodiments of the present application is to provide a graphics engine, as well as a graphics processing method, system, computer device and computer-readable storage medium suitable for a player, so as to solve the problem of adapting the graphics engine to video service scenarios.
  • An aspect of the embodiments of the present application provides a graphics engine, including: an engine scene layer configured to perform graphics processing operations according to preset logic; wherein performing the graphics processing operations according to the preset logic includes: interacting with the player based on a predetermined transmission protocol to obtain playback data, and performing the graphics processing operations according to the playback data and the preset logic.
  • the playback data includes the video playback progress of the player; performing the graphics processing operation according to the playback data and the preset logic includes: obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic.
  • the playback data includes request data for requesting that the bullet screen be closed; performing the graphics processing operation according to the playback data and the preset logic includes: hiding the bullet screen according to the request data and the preset logic.
  • the graphics engine further includes: a view layer for transmitting messages to the engine scene layer; the messages include: a frame update signal, a touch event, a scene coordination signal and/or interaction data with the player.
  • a Runtime layer for running JS code; the Runtime layer is bound with a JS application program interface, where the JS application program interface serves as the externally open interface of the graphics engine and is used for interaction between the graphics engine and third-party functional modules.
  • a shell layer, used as an interactive interface between the graphics engine and the system platform, for adapting the graphics engine to the target system platform.
  • the engine scene layer is further used for: combining multiple textures into one texture.
  • the engine scene layer is further used to: update the node data of multiple nodes in the scene; generate multiple rendering instructions for the multiple nodes according to their node data, where the multiple nodes include one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that do require one; cache the multiple rendering instructions in multiple rendering queues, where the multiple rendering queues include a first rendering queue and one or more second rendering queues, the first rendering queue caches the rendering instructions of every first rendering node, and each second rendering queue corresponds to one second rendering node and caches the rendering instructions of that node; traverse the multiple rendering queues and combine rendering commands with the same target parameters into one rendering batch to obtain multiple rendering batches; and call the graphics library or hardware according to each rendering batch.
  • Another aspect of the embodiments of the present application provides a graphics processing method suitable for a player, including: performing data interaction with the player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing graphics processing operations on the next frame of the player through the graphics engine.
  • a transmission protocol for interaction is pre-agreed between the graphics engine and the player.
  • the playback data includes the video playback progress of the player; performing a graphics processing operation on the next frame of the player through the graphics engine includes: obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic.
  • the playback data includes request data for requesting that the bullet screen be closed; performing a graphics processing operation on the next frame of the player through the graphics engine includes: hiding the bullet screen according to the request data and the preset logic.
  • performing a graphics processing operation on the next frame of the player by the graphics engine further includes: combining multiple textures into one texture.
  • performing a graphics processing operation on the next frame of the player through the graphics engine further includes: updating the node data of multiple nodes in the scene; generating multiple rendering instructions for the multiple nodes according to their node data, where the multiple nodes include one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that do require one; caching the multiple rendering instructions in multiple rendering queues, where the multiple rendering queues include a first rendering queue and one or more second rendering queues, the first rendering queue caches the rendering instructions of every first rendering node, and each second rendering queue corresponds to one second rendering node and caches the rendering instructions of that node; traversing the multiple rendering queues and combining rendering commands with the same target parameters into one rendering batch to obtain multiple rendering batches; and calling the graphics library or hardware according to each rendering batch.
  • Another aspect of the embodiments of the present application further provides a graphics processing system suitable for a player, including: an interaction module for performing data interaction with the player through a graphics engine to obtain playback data; and a graphics processing module for performing, based on the playback data and preset logic, graphics processing operations on the next frame of the player through the graphics engine.
  • Another aspect of the embodiments of the present application further provides a computer device, which includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when executing the computer-readable instructions, the processor performs the following steps: performing data interaction with the player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing graphics processing operations on the next frame of the player through the graphics engine.
  • Another aspect of the embodiments of the present application further provides a computer-readable storage medium in which computer-readable instructions are stored; the computer-readable instructions can be executed by at least one processor to cause the at least one processor to perform the following steps: performing data interaction with the player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing graphics processing operations on the next frame of the player through the graphics engine.
  • in the graphics engine provided by the embodiments of the present application, as well as the graphics processing method, system, computer device and computer-readable storage medium applicable to the player, a communication mechanism is set between the engine scene layer and the player in order to adapt to the video playback service. The communication mechanism enables the engine scene layer and the player to exchange the playback data of the video being played, so that the graphics engine can efficiently obtain the playback data for graphics processing operations and ensure the video rendering effect.
  • FIG. 1 schematically shows an architecture diagram of a graphics engine according to Embodiment 1 of the present application;
  • FIG. 2 is a video frame carrying a bullet screen;
  • FIG. 3 schematically shows a flowchart of a graphics processing method applicable to a player according to Embodiment 2 of the present application;
  • FIG. 4 is a sub-step diagram of step S302 in FIG. 3;
  • FIG. 5 is another sub-step diagram of step S302 in FIG. 3;
  • FIG. 6 is another sub-step diagram of step S302 in FIG. 3;
  • FIG. 7 is another sub-step diagram of step S302 in FIG. 3;
  • FIG. 8 schematically shows a specific flowchart of rendering a frame;
  • FIG. 9 schematically shows a block diagram of a graphics processing system suitable for a player according to Embodiment 3 of the present application.
  • FIG. 10 schematically shows a schematic diagram of a hardware architecture of a computer device suitable for implementing a graphics processing method suitable for a player according to Embodiment 4 of the present application.
  • any graphic is composed of basic or repeated patterns such as points, lines and surfaces; these patterns are called primitives.
  • a graphics engine is a functional component used for graphics rendering. With the rapid development of graphics software and hardware technology, graphics engines have also developed rapidly and have been used in animation, virtual reality, game development, simulation and other fields.
  • DirectX: a graphics interface based on Microsoft's Component Object Model;
  • OpenGL (Open Graphics Library): an open underlying graphics library;
  • graphics engines such as OSG and Unity.
  • the purpose of this application is to provide a graphics engine, or a graphics processing method, that is especially suitable for a player: one that supports cross-platform use, is lightweight and easy to use, and fully fits the company's business scenarios, interacting well with the player and providing sufficient technical support for other businesses.
  • the present application also aims to optimize the rendering process, thereby reducing the computing resources consumed by graphics rendering and improving rendering efficiency.
  • the graphics engine provided by this application can be used to add high-performance 2D/3D content with smooth animation to applications, or to create games using a set of advanced tools for 2D/3D games.
  • the graphics engine provided in this application can also be integrated into an application, which is responsible for executing the graphics engine software package.
  • FIG. 1 schematically shows an architectural diagram of a graphics engine 1 according to Embodiment 1 of the present application.
  • the graphics engine 1 may include:
  • the engine scene layer 2 is used to perform graphics processing operations according to preset logic.
  • performing the graphics processing operation according to the preset logic includes: interacting with the player based on a predetermined transmission protocol to obtain playback data, and performing the graphics processing operation according to the playback data and the preset logic.
  • the engine scene layer 2, as the core of the graphics engine 1 used to implement engine functions, internally encapsulates the preset logic (set).
  • the preset logic can include business implementation logic such as related controls, scenes, nodes or sub-nodes (Node, SpriteNode, LabelNode), actions (Action), textures (Texture), fonts (Font), shaders (Shader), physical simulation (Simulates Physics), and applied constraints (Applies Constraints).
  • a communication mechanism is set between the engine scene layer 2 and the player, so that the engine scene layer 2 and the player exchange the playback data of the video being played; the graphics engine 1 can thereby efficiently obtain playback data for graphics processing operations and ensure the video rendering effect.
  • the playback data may include the video playback progress, the player size, user-triggered events on the player, and the like.
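  • As an illustrative sketch of such playback data (all field and function names here are assumptions, not taken from the patent), the message the player passes to the engine scene layer might look like this, with the engine deciding per frame what to do with it:

```javascript
// Hypothetical playback-data message passed from the player to the engine
// scene layer over the agreed transmission protocol; field names are
// illustrative only.
const playbackData = {
  progressMs: 73250,                        // video playback progress
  viewport: { width: 1280, height: 720 },   // player size
  event: null                               // e.g. { type: "closeDanmaku" }
};

// The engine scene layer decides what to do with the data for the next frame:
function onPlaybackData(msg) {
  if (msg.event && msg.event.type === "closeDanmaku") {
    return { action: "hide" };              // hide the bullet screen
  }
  return { action: "layout", atMs: msg.progressMs }; // lay comments out by progress
}
```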
  • the playback data may be video playback progress from the player.
  • the bullet screen is a subtitle that pops up and moves in a predetermined direction while a video is watched over the network.
  • the bullet screen has no fixed English name; it is variously called comment, danmaku, barrage, bullet screen or bullet-screen comment. It allows users to post comments or thoughts, but unlike ordinary video-sharing sites that only display comments in a dedicated area under the player, these comments appear on the video screen in real time as sliding subtitles, so that all viewers can notice them.
  • the bullet screen pops up at a specific playback node and slides in a predetermined direction across the video playback interface.
  • when the engine scene layer 2 renders a video frame, it needs to determine the playback position of that frame within the video (i.e., the video playback progress), and use that position to determine where each bullet-screen comment is drawn on the frame. The engine scene layer is therefore used to: determine the position of each bullet-screen comment according to the video playback progress and the preset logic.
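  • A minimal sketch of this position calculation (the scrolling speed and layout parameters are assumptions, not from the patent): deriving a comment's x coordinate from the playback progress rather than from wall-clock time keeps the layout correct after seeking.

```javascript
// A right-to-left scrolling comment enters at the right edge of the player at
// its posting time, then slides left as playback progresses.
function danmakuX(viewportWidth, postedAtMs, progressMs, speedPxPerMs) {
  return viewportWidth - (progressMs - postedAtMs) * speedPxPerMs;
}

// A comment is off screen once its right edge (x + text width) passes 0.
function danmakuOnScreen(x, textWidth) {
  return x + textWidth > 0;
}

// A comment posted at t = 10 s, viewed at t = 12 s, moving 0.1 px/ms:
const x = danmakuX(1280, 10000, 12000, 0.1); // 1280 - 2000 * 0.1 = 1080
```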
  • the playback data may be request data from the player for requesting to close the bullet screen.
  • when the player receives a user operation to close the bullet screen, it generates request data for closing the bullet screen and passes that request data to the engine scene layer over the preset transmission protocol.
  • if the engine scene layer receives the request data for closing the bullet screen from the player while rendering a video frame, it hides all or part of the bullet-screen comments on that frame. The engine scene layer 2 is therefore used to: hide the bullet screen according to the request data and the preset logic.
  • the data obtained by the data interaction between the graphics engine 1 and the player can be used for the calculation and graphics rendering of the graphics engine 1.
  • which data of the player is used by the graphics engine 1, and how that data is used, is determined by the preset logic.
  • the view layer 3 is used to pass messages to the engine scene layer.
  • the view layer 3 can transmit messages from the player or the system platform to the engine scene layer, and can also transmit messages from the engine scene layer to the player or the system platform.
  • the message includes: a frame update signal (Vsync), a touch event (Touch Event), a scene coordination signal (View Coor) and/or interaction data with the player.
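  • An illustrative routing of these messages (the class and method names are assumptions, not from the patent): the view layer forwards each message type listed above down to the engine scene layer.

```javascript
// Hypothetical view-layer dispatcher between the platform/player and the
// engine scene layer; names are illustrative only.
class ViewLayer {
  constructor(sceneLayer) {
    this.sceneLayer = sceneLayer;
  }
  post(message) {
    switch (message.type) {
      case "vsync":    return this.sceneLayer.onFrame(message.timestamp);  // frame update signal
      case "touch":    return this.sceneLayer.onTouch(message.x, message.y); // touch event
      case "viewCoor": return this.sceneLayer.onResize(message.rect);      // scene coordination signal
      default:         return this.sceneLayer.onPlayerData(message);       // player interaction data
    }
  }
}
```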
  • the Runtime layer 4 is used for running JS code.
  • the Runtime layer 4 is bound with a JS application program interface, which serves as the externally open interface of the graphics engine for interaction between the graphics engine and third-party functional modules.
  • the shell layer 5, as an interactive interface between the graphics engine and the system platform, is used to adapt the graphics engine to the target system platform.
  • the shell layer 5 encapsulates the functions of the graphics engine so that the graphics engine can be adapted to each system platform; the system platform can be Android, iOS (Apple's mobile operating system), macOS (Apple's desktop operating system), browsers, etc.
  • the shell layer is designed to enable the graphics engine to support multi-terminal and cross-platform development.
  • Log (logging)
  • FileSystem (file system)
  • Task operation
  • Thread Manager (thread management)
  • Clock (time points)
  • Restart (message loop), etc.
  • the graphics rendering pipeline can be realized by encapsulating the graphics interface, which provides support for implementing the functions of the engine scene layer.
  • the graphics rendering pipeline is a series of ordered processing stages, from data input to the GPU through to the final rendering into graphics.
  • the graphics rendering pipeline involves rendering buffers (Renderbuffer), frame buffers (Framebuffer), shaders, textures, etc.
  • Step A: the view layer 3 receives the frame update signal and forwards it to the engine scene layer.
  • Step B: the engine scene layer 2 starts the processing operation of the next frame of the player according to the frame update signal.
  • Step C: based on the playback data, the engine scene layer 2 processes the data of scene nodes such as video frames and bullet-screen comments according to the preset logic (e.g., motion evaluation, physical simulation) to obtain updated data; based on the updated data, it can then issue a Draw Call command to the graphics processing unit (GPU) or call the underlying graphics library (e.g., OpenGL) to implement the rendering operations.
  • the motion evaluation may include various evaluations such as position change, color change, and transparency change.
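  • A hedged sketch of one motion-evaluation step: the patent names position, color and transparency changes but not their implementation, so linear interpolation over an action's duration is assumed here.

```javascript
// Evaluate a node's action at time nowMs; fields and linear easing are
// assumptions for illustration.
function evaluateAction(node, action, nowMs) {
  const t = Math.max(0, Math.min(1, (nowMs - action.startMs) / action.durationMs));
  return {
    ...node,
    x: action.fromX + (action.toX - action.fromX) * t,                  // position change
    alpha: action.fromAlpha + (action.toAlpha - action.fromAlpha) * t,  // transparency change
  };
}

// Halfway through a 1 s slide from x = 0 to x = 100 while fading out:
const mid = evaluateAction(
  { id: 1 },
  { startMs: 0, durationMs: 1000, fromX: 0, toX: 100, fromAlpha: 1, toAlpha: 0 },
  500
); // mid.x === 50, mid.alpha === 0.5
```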
  • the shell layer 5 is used for the system platform adaptation of the graphics engine.
  • the Runtime layer 4 can parse and run code such as JavaScript, for function extension of the graphics engine.
  • each node calls the Draw Call command one or more times, and data must be prepared before each Draw Call command is issued, which takes time.
  • the data prepared for multiple Draw Call commands may be similar; submitting the same data through a single Draw Call command reduces data storage, improves rendering efficiency, and reduces GPU computing resources.
  • the engine scene layer 2 is also used to combine multiple common textures into one texture, so that more nodes can share the same texture object (Texture Object). Texture objects store texture data and make the texture data more efficient to obtain.
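  • A simple shelf-packing sketch of merging several small textures into one atlas texture so that more nodes share a single texture object; the patent states the goal, but this particular packing strategy is an assumption.

```javascript
// Place rectangles left to right along "shelves"; when a shelf is full,
// open a new one below. Returns each sub-texture's position in the atlas.
function packAtlas(rects, atlasWidth) {
  let x = 0, y = 0, shelfHeight = 0;
  const placed = [];
  for (const r of rects) {
    if (x + r.w > atlasWidth) { // current shelf is full: start a new one
      x = 0;
      y += shelfHeight;
      shelfHeight = 0;
    }
    placed.push({ w: r.w, h: r.h, x, y }); // sub-texture position inside the atlas
    x += r.w;
    shelfHeight = Math.max(shelfHeight, r.h);
  }
  return { placed, height: y + shelfHeight };
}

const atlas = packAtlas([{ w: 100, h: 50 }, { w: 100, h: 60 }, { w: 100, h: 40 }], 256);
// The first two fit on the top shelf; the third starts a new shelf at y = 60.
```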
  • Step C1: update the node data of multiple nodes in the scene.
  • Step C2: generate multiple rendering instructions for the multiple nodes according to their node data; the multiple nodes include one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that do require one.
  • Step C3: cache the multiple rendering instructions in multiple rendering queues; the multiple rendering queues include a first rendering queue and one or more second rendering queues; the first rendering queue caches the rendering instructions of every first rendering node, while each second rendering queue corresponds to one second rendering node and caches the rendering instructions of that node.
  • Step C4: traverse the multiple rendering queues and combine rendering commands with the same target parameters into one rendering batch, obtaining multiple rendering batches.
  • Step C5: call the graphics library or hardware according to each rendering batch, so as to reduce the number of calls.
  • the target parameters may include texture parameters, shader parameters, and the like.
  • the one or more second rendering nodes may be CropNode (node to be cropped), EffectNode (node to be filtered), and the like.
  • Each first rendering node or each second rendering node may have one or more rendering instructions as needed.
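  • The batching in steps C3 to C5 can be sketched as follows (the data layout is assumed, with texture and shader standing in for the target parameters): consecutive rendering commands that share the same target parameters collapse into one batch, so the graphics library is called once per batch instead of once per command.

```javascript
// Traverse a rendering queue and merge consecutive commands with identical
// target parameters (texture, shader) into batches.
function buildBatches(queue) {
  const batches = [];
  for (const cmd of queue) {
    const last = batches[batches.length - 1];
    if (last && last.texture === cmd.texture && last.shader === cmd.shader) {
      last.commands.push(cmd); // same target parameters: merge into the batch
    } else {
      batches.push({ texture: cmd.texture, shader: cmd.shader, commands: [cmd] });
    }
  }
  return batches; // one graphics-library call per batch
}

const queue = [
  { texture: "atlas0", shader: "sprite" },
  { texture: "atlas0", shader: "sprite" },
  { texture: "atlas1", shader: "sprite" },
];
const batches = buildBatches(queue); // 3 commands collapse into 2 batches
```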
  • FIG. 3 schematically shows a flowchart of a graphics processing method applicable to a player according to Embodiment 2 of the present application.
  • the method can be run in a mobile terminal.
  • the method may include steps S300-S302, wherein:
  • Step S300 performing data interaction with the player through the graphics engine to obtain playback data.
  • Step S302 based on the playback data and preset logic, perform a graphics processing operation on the next frame of the player through the graphics engine.
  • the graphics engine may be the graphics engine described in the first embodiment.
  • a transmission protocol (such as a data format, etc.) for interaction is pre-agreed between the graphics engine and the player, so as to realize effective interaction between the two.
  • the playback data includes video playback progress of the player.
  • the step S302 may include step S400: obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic. That is, the interaction data between the graphics engine and the player is used to determine the position of the bullet screen.
  • the playback data includes request data for requesting to close the bullet screen.
  • the step S302 may include step S500: hiding the bullet screen according to the request data and the preset logic. That is, the interaction data between the graphics engine and the player is used to hide the bullet screen.
  • the step S302 may include step S600: merging multiple (commonly used) textures into one texture, so that more nodes can share the same texture object to save memory.
  • the step S302 may include steps S700 to S708, wherein: step S700, updating the node data of multiple nodes in the scene; step S702, generating multiple rendering instructions for the multiple nodes according to their node data, where the multiple nodes include one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that do require one; step S704, caching the multiple rendering instructions in multiple rendering queues, where the multiple rendering queues include a first rendering queue and one or more second rendering queues, the first rendering queue caches the rendering instructions of every first rendering node, and each second rendering queue corresponds to one second rendering node and caches the rendering instructions of that node; step S706, traversing the multiple rendering queues and combining rendering commands with the same target parameters into one rendering batch to obtain multiple rendering batches; and step S708, invoking the graphics library or hardware according to each rendering batch. This embodiment can reduce the number of calls.
  • Step S800: the graphics engine obtains the frame update signal (Vsync) from the system platform through the view layer.
  • Step S802: the frame data is updated, updating the node data of the multiple nodes in the scene.
  • step S802 specifically includes: updating the node data of each node in the scene based on the playback data and the preset logic; the update process for each node's data may include motion evaluation, physical simulation, and the like.
  • Step S804: the Draw Call commands are merged.
  • step S804 specifically includes: generating multiple rendering instructions for the multiple nodes according to their node data; caching the multiple rendering instructions in multiple rendering queues; and traversing the multiple rendering queues to combine rendering commands with the same target parameters into one rendering batch, obtaining multiple rendering batches.
  • step S806: it is judged whether redrawing is necessary; if so, the process proceeds to step S802C, otherwise the processing operation stops and the next Vsync is awaited asynchronously. For example, if the frame data of this frame has not changed from that of the previous frame, the frame is not redrawn.
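  • The per-frame flow S800 to S806 can be condensed into the following sketch (function and field names are assumptions): on each Vsync, update the node data, merge draw calls by shared texture, and skip the redraw entirely when nothing changed since the previous frame.

```javascript
// One frame of a hypothetical render loop driven by the playback progress.
function renderFrame(scene, progressMs, submit) {
  let dirty = false;
  for (const node of scene.nodes) {                 // S802: update node data
    const x = node.baseX - progressMs * node.speed;
    if (x !== node.x) { node.x = x; dirty = true; }
  }
  if (!dirty) return false;                         // S806: unchanged, no redraw
  const batches = new Map();                        // S804: merge draw calls
  for (const node of scene.nodes) {
    if (!batches.has(node.texture)) batches.set(node.texture, []);
    batches.get(node.texture).push(node);
  }
  for (const [texture, nodes] of batches) submit(texture, nodes); // one call per batch
  return true;
}
```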
  • FIG. 9 schematically shows a block diagram of a graphics processing system suitable for a player according to Embodiment 3 of the present application.
  • the graphics processing system suitable for a player may be divided into one or more program modules; the one or more program modules are stored in the storage medium and executed by one or more processors to complete the embodiments of the present application.
  • the program modules referred to in the embodiments of the present application refer to a series of computer-readable instruction segments capable of performing specific functions. The following description will specifically introduce the functions of each program module in this embodiment.
  • the graphics processing system 900 suitable for a player may include an interaction module 910 and a graphics processing module 920, wherein:
  • the interaction module 910 is used to perform data interaction with the player through the graphics engine to obtain playback data
  • the graphics processing module 920 is configured to perform graphics processing operations on the next frame of the player through the graphics engine based on the playback data and preset logic.
  • a transmission protocol for interaction is pre-agreed between the graphics engine and the player.
  • the playback data includes the video playback progress of the player; the graphics processing module 920 is further configured to: obtain the position data of each bullet-screen comment according to the video playback progress and the preset logic.
  • the playback data includes request data for requesting to close the bullet screen; the graphics processing module 920 is configured to hide the bullet screen according to the request data and the preset logic.
  • the graphics processing module 920 is further configured to: perform action evaluation on multiple nodes in the scene according to the preset logic; update the node data of each node according to its action evaluation; merge at least some of the multiple nodes into several merged nodes according to the node data of each node; and call the graphics library separately according to each merged node, or each unmerged node, among the multiple nodes.
  • the node data includes texture data; the graphics processing module 920 is configured to: when the multiple nodes include several nodes with the same texture, merge those nodes into a corresponding merged node.
  • the graphics processing module 920 is further configured to: combine multiple textures into one texture.
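Combining multiple textures into one texture can be pictured as packing small images into a single atlas and remembering where each one landed. Real engines use proper rectangle packers; this side-by-side "shelf" layout, and all names in it, are purely illustrative assumptions.

```typescript
// A minimal sketch of packing several small textures into one atlas texture.
// Each entry records the sub-rectangle a source texture occupies in the atlas.
interface Tex { name: string; w: number; h: number; }
interface AtlasEntry { name: string; x: number; y: number; w: number; h: number; }

function packRow(textures: Tex[]): { width: number; height: number; entries: AtlasEntry[] } {
  let x = 0;
  let height = 0;
  const entries: AtlasEntry[] = [];
  for (const t of textures) {
    entries.push({ name: t.name, x, y: 0, w: t.w, h: t.h });
    x += t.w;                       // place each texture to the right of the last
    height = Math.max(height, t.h); // atlas must fit the tallest texture
  }
  return { width: x, height, entries };
}

const atlas = packRow([
  { name: "avatar", w: 64, h: 64 },
  { name: "badge", w: 32, h: 16 },
]);
console.log(atlas.width, atlas.height); // 96 64
```

Once packed, many nodes can share the one atlas texture object and be drawn together, which is the memory and draw-call saving the description aims at.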
  • the graphics processing module 920 is further configured to: update the node data of multiple nodes in the scene; generate multiple rendering instructions for the multiple nodes according to their node data, the multiple nodes including one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment; cache the multiple rendering instructions into multiple rendering queues, the multiple rendering queues including a first rendering queue and one or more second rendering queues, the first rendering queue being used to cache the rendering instructions of each first rendering node, and each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node; traverse the multiple rendering queues and merge multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and call a graphics library or hardware according to each rendering batch.
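The queue-and-batch step above can be sketched as follows: queued render commands are traversed in order, and consecutive commands with the same target parameters (here texture plus shader) collapse into one batch, i.e. one graphics-library call. The `RenderCmd` field names are assumptions; the patent does not fix a concrete command format.

```typescript
// Hedged sketch of merging render commands with equal target parameters
// into batches while traversing the rendering queues in order.
interface RenderCmd { node: number; texture: string; shader: string; }

function batchCommands(queues: RenderCmd[][]): RenderCmd[][] {
  const batches: RenderCmd[][] = [];
  for (const queue of queues) {        // traverse every rendering queue
    for (const cmd of queue) {
      const last = batches[batches.length - 1];
      if (last && last[0].texture === cmd.texture && last[0].shader === cmd.shader) {
        last.push(cmd);                // same target parameters: extend the batch
      } else {
        batches.push([cmd]);           // parameters changed: start a new batch
      }
    }
  }
  return batches;
}

const batches = batchCommands([
  [{ node: 1, texture: "a", shader: "s" }, { node: 2, texture: "a", shader: "s" }],
  [{ node: 3, texture: "b", shader: "s" }],
]);
console.log(batches.length); // 2 batches -> 2 calls instead of 3
```

Only merging adjacent commands preserves submission order, which matters because draw order determines how overlapping comments stack on screen.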
  • FIG. 10 schematically shows the hardware architecture of a computer device 900 suitable for implementing a graphics processing method applicable to a player according to Embodiment 4 of the present application.
  • the computer device 900 may be the mobile terminal 4 or be a part of the mobile terminal 4 .
  • the computer device 900 is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions.
  • it can be a smartphone, a tablet computer, etc.
  • the computer device 900 includes at least, but is not limited to, a memory 1010, a processor 1020, and a network interface 1030 that can communicate with each other through a system bus, wherein:
  • the memory 1010 includes at least one type of computer-readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random-access memory (RAM), static random-access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc.
  • the memory 1010 may be an internal storage module of the computer device 900 , such as a hard disk or memory of the computer device 900 .
  • the memory 1010 may also be an external storage device of the computer device 900, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the computer device 900.
  • the memory 1010 may also include both an internal storage module of the computer device 900 and an external storage device thereof.
  • the memory 1010 is generally used to store the operating system installed in the computer device 900 and various application software, such as program codes suitable for the graphics processing method of the player.
  • the memory 1010 may also be used to temporarily store various types of data that have been output or will be output.
  • the processor 1020 may be a central processing unit (Central Processing Unit, CPU for short), a controller, a microcontroller, a microprocessor, or other data processing chips.
  • the processor 1020 is generally used to control the overall operation of the computer device 900 , such as performing control and processing related to data interaction or communication with the computer device 900 .
  • the processor 1020 is configured to run program codes or process data stored in the memory 1010 .
  • the network interface 1030 which may include a wireless network interface or a wired network interface, is typically used to establish a communication link between the computer device 900 and other computer devices.
  • the network interface 1030 is used to connect the computer device 900 to an external terminal through a network, and to establish a data transmission channel and a communication link between the computer device 900 and the external terminal.
  • the network may be an intranet, the Internet, a Global System for Mobile Communications (GSM) network, Wideband Code Division Multiple Access (WCDMA), a 4G or 5G network, Bluetooth, Wi-Fi, or another wireless or wired network.
  • FIG. 10 only shows a computer device having components 1010-1030, but it should be understood that implementation of all of the shown components is not required and that more or fewer components may be implemented instead.
  • the graphics processing method applicable to a player stored in the memory 1010 may also be divided into one or more program modules and executed by one or more processors (the processor 1020 in this embodiment) to implement the embodiments of the present application.
  • the present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the following steps are implemented: performing data interaction with a player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
  • the computer-readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disk, etc.
  • the computer-readable storage medium may be an internal storage unit of a computer device, such as a hard disk or memory of the computer device.
  • the computer-readable storage medium may also be an external storage device of a computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the computer device.
  • the computer-readable storage medium may also include both an internal storage unit of a computer device and an external storage device thereof.
  • the computer-readable storage medium is generally used to store the operating system and various application software installed on the computer device, for example, the program code of the graphics processing method applicable to the player in the embodiment.
  • the computer-readable storage medium can also be used to temporarily store various types of data that have been output or will be output.
  • each module or step of the above embodiments of the present application can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from the one given here, or the modules can be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them can be fabricated into a single integrated circuit module. As such, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • the graphics engine described in Embodiment 1 of the present application makes targeted improvements specifically for adaptation to the player, and is especially suitable for bullet-screen activity scenarios with fast iteration and high timeliness requirements.
  • in the graphics engine described in Embodiment 1 of the present application, based on the core requirements of the video playback service, the graphics interface and engine logic are encapsulated so that the engine functions are satisfied while the engine remains extremely "lightweight". This enables developers to quickly learn the engine and develop the programs they need, while keeping the program size as small as possible.
  • the engine logic, views, and tools of the graphics engine are adapted to different platform interfaces to unify the engine's business logic across platforms, so that it can be quickly deployed on multiple platforms; a JS application programming interface is provided to offer calling services for the graphics engine, which guarantees functional extensibility and high performance.
  • the transmission protocol and data encapsulation format between the player's play page and the graphics engine can be agreed, and the methods of the play page and the graphics engine to receive information can be encapsulated, thereby establishing a communication mechanism between the two.
  • for example, when the graphics engine computes an addition operation "6+8", the engine scene layer of the graphics engine can send an addition method request to the player with parameters 6 and 8; after receiving the request, the player returns the calculation result to the engine scene layer.
  • the above method request information can be as follows:
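The original application shows the request message only as an image (Figure PCTCN2021111381-appb-000001), so its exact fields cannot be reproduced here. A plausible JSON encoding of the "6 + 8" addition request, with entirely hypothetical field names, might look like this:

```typescript
// Hypothetical JSON encoding of the addition method request; the field names
// (method, params, callbackId) are assumptions, not taken from the patent.
const additionRequest = {
  method: "add",      // method the engine asks the player to run
  params: [6, 8],     // the two operands from the example
  callbackId: 1,      // lets the player route the result back to the engine
};

const wire = JSON.stringify(additionRequest);        // what the engine would send
const received = JSON.parse(wire) as typeof additionRequest;
const result = received.params[0] + received.params[1];
console.log(result); // 14, which the player returns to the engine scene layer
```

Whatever the concrete format, the agreed transmission protocol only needs both sides to share the envelope shape and a way to correlate requests with responses.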

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses a graphics processing method applicable to a player, including: performing data interaction with a player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine. In the present application, a communication mechanism is provided between the engine scene layer and the player, so that the engine scene layer and the player exchange playback data about the video being played. The engine scene layer can thus efficiently obtain playback data for graphics processing operations, which is particularly suitable for video playback services and ensures the video rendering effect.

Description

Graphics engine and graphics processing method applicable to a player
This application claims priority to Chinese patent application No. 202011223807.3, filed on November 5, 2020 and entitled "Graphics engine and graphics processing method applicable to a player", the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to the technical field of graphics processing, and in particular to a graphics engine, as well as a graphics processing method, system, computer device, and computer-readable storage medium applicable to a player.
Background
With the development of the Internet, various electronic devices such as portable computers, tablet computers, personal computers, and smartphones are widely used. To improve user experience, more and more operating systems or applications require the support of a graphics engine to render two-dimensional or three-dimensional graphics. Existing graphics engines include OSG (OpenSceneGraph, a 3D rendering engine), the Unity engine, and so on.
In the mobile era, business scenarios are more complex: the engine functions required differ between scenarios, and lightweight requirements are higher. Some companies have developed their own graphics engines according to their business needs, such as cocos2d-x, Apple's SpriteKit, Google's skia used in Flutter, and Alibaba's Gcanvas. Other companies also design engines for their own unique business needs. The inventors realized that none of the above graphics engines can effectively adapt to video playback services.
Summary
The purpose of the embodiments of the present application is to provide a graphics engine, as well as a graphics processing method, system, computer device, and computer-readable storage medium applicable to a player, to solve the problem of adapting a graphics engine to video business scenarios.
One aspect of the embodiments of the present application provides a graphics engine, including: an engine scene layer configured to perform graphics processing operations according to preset logic; wherein performing the graphics processing operations according to the preset logic includes: interacting with a player based on a predetermined transmission protocol to obtain playback data, and performing the graphics processing operations according to the playback data and the preset logic.
Optionally, the playback data includes the video playback progress of the player; performing the graphics processing operations according to the playback data and the preset logic includes: obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic.
Optionally, the playback data includes request data for requesting to close the bullet screen; performing the graphics processing operations according to the playback data and the preset logic includes: hiding the bullet screen according to the request data and the preset logic.
Optionally, the graphics engine further includes: a view layer configured to pass messages to the engine scene layer; the messages include: a frame update signal, a touch event, a scene coordination signal, and/or interaction data with the player.
Optionally, the graphics engine further includes: a Runtime layer configured to run JS code; the Runtime layer is bound with a JS application programming interface, which serves as the externally open interface of the graphics engine and is used for interaction between the graphics engine and third-party function modules.
Optionally, the graphics engine further includes: a shell layer, which serves as the interaction interface between the graphics engine and system platforms and is used to adapt the graphics engine to a target system platform.
Optionally, the engine scene layer is further configured to merge multiple textures into one texture.
Optionally, the engine scene layer is further configured to: update the node data of multiple nodes in the scene; generate multiple rendering instructions for the multiple nodes according to their node data, the multiple nodes including one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment; cache the multiple rendering instructions into multiple rendering queues, the multiple rendering queues including a first rendering queue and one or more second rendering queues, the first rendering queue being used to cache the rendering instructions of each first rendering node, and each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node; traverse the multiple rendering queues and merge multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and call a graphics library or hardware according to each rendering batch.
Another aspect of the embodiments of the present application provides a graphics processing method applicable to a player, including: performing data interaction with a player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
Optionally, a transmission protocol for interaction is pre-agreed between the graphics engine and the player.
Optionally, the playback data includes the video playback progress of the player; performing a graphics processing operation on the next frame of the player through the graphics engine includes: obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic.
Optionally, the playback data includes request data for requesting to close the bullet screen; performing a graphics processing operation on the next frame of the player through the graphics engine includes: hiding the bullet screen according to the request data and the preset logic.
Optionally, performing a graphics processing operation on the next frame of the player through the graphics engine further includes: merging multiple textures into one texture.
Optionally, performing a graphics processing operation on the next frame of the player through the graphics engine further includes: updating the node data of multiple nodes in the scene; generating multiple rendering instructions for the multiple nodes according to their node data, the multiple nodes including one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment; caching the multiple rendering instructions into multiple rendering queues, the multiple rendering queues including a first rendering queue and one or more second rendering queues, the first rendering queue being used to cache the rendering instructions of each first rendering node, and each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node; traversing the multiple rendering queues and merging multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and calling a graphics library or hardware according to each rendering batch.
Another aspect of the embodiments of the present application provides a graphics processing system applicable to a player, including: an interaction module configured to perform data interaction with a player through a graphics engine to obtain playback data; and a graphics processing module configured to perform, based on the playback data and preset logic, a graphics processing operation on the next frame of the player through the graphics engine.
Another aspect of the embodiments of the present application provides a computer device, including a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor; when executing the computer-readable instructions, the processor performs the following steps: performing data interaction with a player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
Another aspect of the embodiments of the present application provides a computer-readable storage medium storing computer-readable instructions; the computer-readable instructions are executable by at least one processor to cause the at least one processor to perform the following steps: performing data interaction with a player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
In the graphics engine, graphics processing method, system, computer device, and computer-readable storage medium provided by the embodiments of the present application, to adapt to video playback services a communication mechanism is provided between the engine scene layer and the player, so that the engine scene layer and the player exchange playback data about the video being played; the graphics engine can thus efficiently obtain playback data for graphics processing operations, ensuring the video rendering effect.
Brief Description of the Drawings
FIG. 1 schematically shows an architecture diagram of a graphics engine according to Embodiment 1 of the present application;
FIG. 2 is a video frame carrying bullet-screen comments;
FIG. 3 schematically shows a flowchart of a graphics processing method applicable to a player according to Embodiment 2 of the present application;
FIG. 4 is a diagram of sub-steps of step S302 in FIG. 3;
FIG. 5 is a diagram of other sub-steps of step S302 in FIG. 3;
FIG. 6 is a diagram of other sub-steps of step S302 in FIG. 3;
FIG. 7 is a diagram of other sub-steps of step S302 in FIG. 3;
FIG. 8 schematically shows a specific flowchart of rendering one frame;
FIG. 9 schematically shows a block diagram of a graphics processing system applicable to a player according to Embodiment 3 of the present application; and
FIG. 10 schematically shows a hardware architecture diagram of a computer device suitable for implementing the graphics processing method applicable to a player according to Embodiment 4 of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
It should be noted that descriptions involving "first", "second", etc. in the embodiments of the present application are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments can be combined with each other, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, it should be considered that such a combination does not exist and is not within the protection scope claimed by the present application.
In the description of the present application, it should be understood that the numerical labels before the steps do not indicate the order of execution; they are only used to facilitate the description of the present application and to distinguish the steps, and therefore cannot be understood as limiting the present application.
The following are explanations of terms used in the present application:
Primitive: any graphic is composed of basic patterns such as points, lines, and surfaces, or of repeated patterns; these patterns are primitives.
Graphics engine: a functional component used for graphics drawing. With the rapid development of graphics software and hardware, graphics engines have also developed rapidly and are applied in fields such as animation, virtual reality, game development, and simulation.
DirectX (a graphics interface based on Microsoft's common object model) and OpenGL (Open Graphics Library, an open low-level graphics library) can be used to write three-dimensional graphics. However, because designing concrete objects in a scene with them is relatively difficult and they focus more on basic primitives, they were later encapsulated into development environments that are easier to use and richer in functions, which can be called graphics engines, such as OSG and Unity.
In the mobile era, business scenarios are more complex: the engine functions required differ between scenarios, and lightweight requirements are higher. Some companies have developed their own graphics engines according to their business needs, such as cocos2d-x, Apple's SpriteKit, Google's skia used in Flutter, and Alibaba's Gcanvas. Other companies also design engines for their own unique business needs.
Among the above graphics engines, there lacks one that can both meet cross-platform and lightweight requirements and interact with a player, that is particularly suited to video business scenarios, and that at the same time has good enough performance to support various businesses.
The present application aims to provide a graphics engine, or a graphics processing method particularly suitable for a player, that supports cross-platform use, achieves lightweight design and ease of use, and fully fits the company's business scenarios: it can both interact well with the player and provide sufficient technical support for other businesses. The present application also aims to optimize the rendering process, thereby reducing the computing resources consumed by graphics rendering and improving rendering efficiency.
It should be noted that the graphics engine provided by the present application can be used to add high-performance 2D/3D content with smooth animation to an application, or to create games with a set of advanced 2D/3D game tools. The graphics engine provided by the present application can also be integrated into an application, which is responsible for executing the graphics engine software package.
Embodiment 1
FIG. 1 schematically shows an architecture diagram of a graphics engine 1 according to Embodiment 1 of the present application. The graphics engine 1 may include:
(1) An engine scene layer (Scene) 2.
The engine scene layer 2 is configured to perform graphics processing operations according to preset logic.
Performing the graphics processing operations according to the preset logic includes: interacting with a player based on a predetermined transmission protocol to obtain playback data, and performing the graphics processing operations according to the playback data and the preset logic.
The engine scene layer 2, as the kernel of the graphics engine 1, implements the engine functions and internally encapsulates the preset logic (sets). The preset logic may include various business implementation logic for control, scenes, nodes or child nodes (Node, SpriteNode, LabelNode), actions (Action), textures (Texture), fonts (Font), shading (Shader), physics simulation (Simulates Physics), applying constraints (Applies Constraints), and so on.
To adapt to video playback services, a communication mechanism is provided between the engine scene layer 2 and the player, so that the engine scene layer 2 and the player exchange playback data about the video being played; the graphics engine 1 can thus efficiently obtain playback data for graphics processing operations, ensuring the video rendering effect.
The playback data may include the video playback progress, the size, user-triggered events on the player, and so on.
As an example, the playback data may be the video playback progress from the player. Taking bullet-screen comments as an example, the role of the video playback progress in the engine scene layer is described below. While watching a video program, users usually interact with the broadcaster or other users by sending bullet-screen comments. A bullet-screen comment is a subtitle that pops up and moves in a predetermined direction while a video is watched online. There is no fixed English term for it yet; it is usually called comment, danmaku, barrage, bullet screen, bullet-screen comment, etc. Bullet screens allow users to post comments or thoughts; unlike ordinary video-sharing websites, which display them only in a dedicated comment area below the player, they appear in real time on the video picture as sliding subtitles, ensuring that all viewers notice them. A bullet-screen comment pops up at a specific playback point and slides across the video playback interface in a predetermined direction. When the engine scene layer 2 renders a video frame, it needs to determine the playback position of this frame in the video (i.e., the video playback progress) and use this position to determine where on the frame each bullet-screen comment should be drawn. Therefore, the engine scene layer is configured to determine the position of each bullet-screen comment according to the video playback progress and the preset logic.
As an example, the playback data may be request data from the player for requesting to close the bullet screen. Continuing with the bullet-screen example, when the player receives a user operation to close the bullet screen, it generates request data for closing the bullet screen and transmits this request data to the engine scene layer using the predetermined transmission protocol. When the engine scene layer is rendering a video frame and receives the close request data from the player, it hides all or part of the bullet-screen comments on this frame. Therefore, the engine scene layer 2 is configured to hide the bullet screen according to the request data and the preset logic.
The above examples show that data obtained from the interaction between the graphics engine 1 and the player can be used for the computation and graphics rendering of the graphics engine 1. Which data of the player the graphics engine 1 uses, and how it uses that data, is determined by the preset logic.
(2) A view layer (View) 3.
The view layer 3 is configured to pass messages to the engine scene layer.
The view layer 3 can pass messages from the player or the system platform to the engine scene layer, and can also pass messages from the engine scene layer to the player or the system platform. The messages include: a frame update signal (Vsync), a touch event (Touch Event), a scene coordination signal (View Coor), and/or interaction data with the player.
(3) A Runtime layer 4.
The Runtime layer 4 is configured to run JS code.
The Runtime layer 4 is bound with a JS application programming interface, which serves as the externally open interface of the graphics engine and is used for interaction between the graphics engine and third-party function modules.
Through the Runtime layer 4 and the JS API, developers can develop extended functions running on top of the graphics engine as needed, which guarantees functional extensibility and high performance. This extensibility of the graphics engine supports the growth of subsequent business requirements and the expansion of engine functions.
(4) A shell layer 5.
The shell layer 5 serves as the interaction interface between the graphics engine and system platforms, and is used to adapt the graphics engine to a target system platform.
The shell layer 5 encapsulates the functions of the graphics engine so that the graphics engine can be adapted to various system platforms, such as Android, iOS, macOS, and browsers. The shell layer is intended to enable the graphics engine to support multi-terminal, cross-platform development.
(5) Graphics engine basic tools (Base).
For example, logging (Logging), file system (FileSystem), task running (TaskRunner), thread management (Thread), clock (Timepoint), restart, message loop, and so on.
In this embodiment, the graphics rendering pipeline (graphics pipeline/rendering pipeline) can be implemented by encapsulating the graphics interface, providing support for implementing the functions of the engine scene layer. The graphics rendering pipeline is the series of ordered processing steps from data being input to the GPU to finally being rendered into graphics.
The graphics rendering pipeline involves renderbuffers (Renderbuffer), framebuffers (Framebuffer), shaders, textures, and so on.
The above lists part of the architecture of the graphics engine 1. The general workflow of the graphics engine for processing one frame is as follows:
Step A: the view layer 3 receives a frame update signal and forwards it to the engine scene layer.
Step B: the engine scene layer 2 starts the processing operation of the next frame of the player according to the frame update signal.
Step C: based on the playback data and according to the preset logic, the engine scene layer 2 processes the various data of scene nodes such as video frames and bullet-screen comments (e.g., motion evaluation, physics simulation) to obtain updated data; based on the updated data, it can issue Draw Call commands to the graphics processing unit (GPU) or call the underlying graphics library (e.g., OpenGL) to perform rendering operations. The motion evaluation may include various evaluations such as position changes, color changes, and transparency changes.
The shell layer 5 is used for the system platform adaptation of the graphics engine;
the Runtime layer 4 can parse and run code such as JavaScript, for extending the functions of the graphics engine.
In step C, each node invokes one or more Draw Call commands, and data must be prepared, at some cost in time, before each Draw Call. However, the applicant found that, in practical scenarios, the data prepared by multiple Draw Call invocations may be similar; invoking the same data through a single Draw Call can reduce data storage, improve efficiency, improve rendering performance, and reduce the GPU's computing load. As shown in FIG. 2, a large number of identical bullet-screen avatars and identical bullet-screen texts appear; merging and drawing the nodes with the same texture (such as bullet-screen avatars or bullet-screen texts) can reduce resource overhead.
However, the following problems still need to be solved during merging:
(1) The memory problem of dynamically merging textures. To solve this problem, the engine scene layer 2 is further configured to merge multiple commonly used textures into one texture, so that more nodes can share the same texture object (Texture Object). A texture object is used to store texture data and to make texture data more efficient to obtain.
(2) How to clean up the fragments produced by frequent texture changes, how to handle a large amount of frequently changing Chinese text (especially bullet-screen text), and the problem that the drawing order affects the Z-axis order. To solve these problems, the merging flow of the engine scene layer 2 is as follows:
Step C1: update the node data of multiple nodes in the scene;
Step C2: generate multiple rendering instructions for the multiple nodes according to their node data; the multiple nodes include one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment;
Step C3: cache the multiple rendering instructions into multiple rendering queues; the multiple rendering queues include a first rendering queue and one or more second rendering queues; the first rendering queue is used to cache the rendering instructions of each first rendering node; each second rendering queue corresponds to one second rendering node and is used to cache the rendering instructions of the corresponding second rendering node;
Step C4: traverse the multiple rendering queues and merge multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches;
Step C5: call the graphics library or hardware according to each rendering batch, to reduce the number of calls.
It should be noted that the target parameters may include texture parameters, shader parameters, and so on. The one or more second rendering nodes may be a CropNode (a node to be cropped), an EffectNode (a node to be filtered), etc. Each first rendering node or second rendering node may have one or more rendering instructions as needed.
Embodiment 2
FIG. 3 schematically shows a flowchart of a graphics processing method applicable to a player according to Embodiment 2 of the present application. The method may run in a mobile terminal. As shown in FIG. 3, the method may include steps S300 to S302, wherein:
Step S300: perform data interaction with a player through a graphics engine to obtain playback data.
Step S302: based on the playback data and preset logic, perform a graphics processing operation on the next frame of the player through the graphics engine.
The graphics engine may be the graphics engine described in Embodiment 1.
In an exemplary embodiment, a transmission protocol for interaction (e.g., data formats) is pre-agreed between the graphics engine and the player, to achieve effective interaction between the two.
In an exemplary embodiment, the playback data includes the video playback progress of the player.
As shown in FIG. 4, step S302 may include step S400: obtain the position data of each bullet-screen comment according to the video playback progress and the preset logic. That is, the interaction data between the graphics engine and the player is used to determine the bullet-screen positions.
In an exemplary embodiment, the playback data includes request data for requesting to close the bullet screen.
As shown in FIG. 5, step S302 may include step S500: hide the bullet screen according to the request data and the preset logic. That is, the interaction data between the graphics engine and the player is used for the bullet-screen hiding operation.
In an exemplary embodiment, as shown in FIG. 6, step S302 may include step S600: merge multiple (commonly used) textures into one texture, so that more nodes can share the same texture object, saving memory.
In an exemplary embodiment, as shown in FIG. 7, step S302 may include steps S700 to S708, wherein: step S700, update the node data of multiple nodes in the scene; step S702, generate multiple rendering instructions for the multiple nodes according to their node data, the multiple nodes including one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment; step S704, cache the multiple rendering instructions into multiple rendering queues, the multiple rendering queues including a first rendering queue and one or more second rendering queues, the first rendering queue being used to cache the rendering instructions of each first rendering node, and each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node; step S706, traverse the multiple rendering queues and merge multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and step S708, call the graphics library or hardware according to each rendering batch. This embodiment can reduce the number of calls.
For ease of understanding, an exemplary flow is provided below, as shown in FIG. 8:
Step S800: the graphics engine layer obtains the frame update signal (Vsync) from the system platform through the view layer.
Step S802: frame data update, to update the node data of multiple nodes in the scene.
Step S802 specifically includes: updating the node data of each node in the scene based on the playback data and the preset logic; the update process of the node data of each node may include motion evaluation, physics simulation, and so on.
Step S804: Draw Call command merging.
Step S804 specifically includes: generating multiple rendering instructions for the multiple nodes according to their node data; caching the multiple rendering instructions into multiple rendering queues; and traversing the multiple rendering queues and merging multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches.
Step S806: determine whether redrawing is needed; if so, proceed to step S808 (the original text reads "step S802C", an apparent typographical error); otherwise, stop the processing operation and asynchronously wait for the next Vsync. For example, if the frame data of this frame is unchanged from the previous frame, no redrawing is needed.
Step S808: frame drawing:
(1) Call a graphics library such as OpenGL per rendering batch to render the scene and, recursively, each node mounted on the scene. Scene drawing may involve camera parameter setting, background clearing, and so on.
(2) Asynchronously wait for the next Vsync.
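The per-frame flow of steps S800 to S808 (Vsync, update, Draw Call merging, redraw check, frame drawing) can be sketched as a small decision function. All names and the one-call-per-batch accounting below are illustrative assumptions, not the patent's implementation.

```typescript
// Hedged sketch of the per-frame flow: on each Vsync the engine is assumed to
// have updated node data and merged draw calls into `next.batches` (S802/S804);
// it then skips drawing when nothing changed (S806) or draws the batches (S808).
interface Frame { dirty: boolean; batches: string[][]; }

function onVsync(prev: Frame | null, next: Frame): string {
  if (prev && !next.dirty) {
    return "wait";                    // S806: no change, wait for the next Vsync
  }
  let calls = 0;
  for (const _batch of next.batches) {
    calls += 1;                       // S808: one graphics-library call per batch
  }
  return `drew ${calls} batches`;
}

console.log(onVsync(null, { dirty: false, batches: [["a"], ["b"]] })); // drew 2 batches
console.log(onVsync({ dirty: true, batches: [] }, { dirty: false, batches: [["a"]] })); // wait
```

The key property this captures is that the expensive draw path only runs when the frame data actually changed; otherwise the engine idles until the next Vsync.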
Embodiment 3
FIG. 9 schematically shows a block diagram of a graphics processing system applicable to a player according to Embodiment 3 of the present application. The graphics processing system applicable to a player may be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the embodiments of the present application. The program modules referred to in the embodiments of the present application are a series of computer-readable instruction segments capable of performing specific functions. The following description introduces the functions of each program module in this embodiment.
As shown in FIG. 9, the graphics processing system 900 applicable to a player may include an interaction module 910 and a graphics processing module 920, wherein:
the interaction module 910 is configured to perform data interaction with the player through the graphics engine, to obtain playback data;
the graphics processing module 920 is configured to perform, based on the playback data and preset logic, a graphics processing operation on the next frame of the player through the graphics engine.
In an exemplary embodiment, a transmission protocol for interaction is pre-agreed between the graphics engine and the player.
In an exemplary embodiment, the playback data includes the video playback progress of the player; the graphics processing module 920 is further configured to obtain the position data of each bullet-screen comment according to the video playback progress and the preset logic.
In an exemplary embodiment, the playback data includes request data for requesting to close the bullet screen; the graphics processing module 920 is configured to hide the bullet screen according to the request data and the preset logic.
In an exemplary embodiment, the graphics processing module 920 is further configured to: perform action evaluation on multiple nodes in the scene according to the preset logic; update the node data of each node according to its action evaluation; merge, according to the node data of each node, at least some of the multiple nodes into several merged nodes; and call the graphics library separately for each merged node and for each unmerged node among the multiple nodes.
The node data includes texture data; the graphics processing module 920 is configured to: when the multiple nodes include several nodes with the same texture, merge those nodes into a corresponding merged node.
In an exemplary embodiment, the graphics processing module 920 is further configured to merge multiple textures into one texture.
In an exemplary embodiment, the graphics processing module 920 is further configured to: update the node data of multiple nodes in the scene; generate multiple rendering instructions for the multiple nodes according to their node data, the multiple nodes including one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment; cache the multiple rendering instructions into multiple rendering queues, the multiple rendering queues including a first rendering queue and one or more second rendering queues, the first rendering queue being used to cache the rendering instructions of each first rendering node, and each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node; traverse the multiple rendering queues and merge multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and call a graphics library or hardware according to each rendering batch.
Embodiment 4
FIG. 10 schematically shows a hardware architecture diagram of a computer device 900 suitable for implementing the graphics processing method applicable to a player according to Embodiment 4 of the present application. In this embodiment, the computer device 900 may be the mobile terminal 4 or a part of the mobile terminal 4. The computer device 900 is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions; for example, it can be a smartphone, a tablet computer, etc. As shown in FIG. 10, the computer device 900 includes at least, but is not limited to, a memory 1010, a processor 1020, and a network interface 1030 that can communicate with each other through a system bus, wherein:
the memory 1010 includes at least one type of computer-readable storage medium, the readable storage medium including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random-access memory (RAM), static random-access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 1010 may be an internal storage module of the computer device 900, such as the hard disk or memory of the computer device 900. In other embodiments, the memory 1010 may also be an external storage device of the computer device 900, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the computer device 900. Of course, the memory 1010 may also include both an internal storage module of the computer device 900 and its external storage device. In this embodiment, the memory 1010 is generally used to store the operating system and various application software installed on the computer device 900, such as the program code of the graphics processing method applicable to a player. In addition, the memory 1010 may also be used to temporarily store various types of data that have been output or will be output.
The processor 1020 may, in some embodiments, be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 1020 is generally used to control the overall operation of the computer device 900, such as performing control and processing related to data interaction or communication with the computer device 900. In this embodiment, the processor 1020 is configured to run program code or process data stored in the memory 1010.
The network interface 1030 may include a wireless network interface or a wired network interface, and is typically used to establish a communication link between the computer device 900 and other computer devices. For example, the network interface 1030 is used to connect the computer device 900 to an external terminal through a network, and to establish a data transmission channel and a communication link between the computer device 900 and the external terminal. The network may be an intranet, the Internet, a Global System for Mobile Communications (GSM) network, Wideband Code Division Multiple Access (WCDMA), a 4G or 5G network, Bluetooth, Wi-Fi, or another wireless or wired network.
It should be pointed out that FIG. 10 only shows a computer device having components 1010-1030, but it should be understood that implementing all of the shown components is not required; more or fewer components may be implemented instead.
In this embodiment, the graphics processing method applicable to a player stored in the memory 1010 may also be divided into one or more program modules and executed by one or more processors (the processor 1020 in this embodiment) to implement the embodiments of the present application.
Embodiment 5
The present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the following steps are implemented: performing data interaction with a player through a graphics engine to obtain playback data; and, based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
In this embodiment, the computer-readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random-access memory (RAM), static random-access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the computer-readable storage medium may be an internal storage unit of a computer device, such as the hard disk or memory of the computer device. In other embodiments, the computer-readable storage medium may also be an external storage device of a computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the computer device. Of course, the computer-readable storage medium may also include both an internal storage unit of a computer device and its external storage device. In this embodiment, the computer-readable storage medium is generally used to store the operating system and various application software installed on the computer device, such as the program code of the graphics processing method applicable to a player in the embodiment. In addition, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or will be output.
Obviously, those skilled in the art should understand that each module or step of the above embodiments of the present application can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from the one given here, or the modules can be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them can be fabricated into a single integrated circuit module. As such, the embodiments of the present application are not limited to any specific combination of hardware and software.
It should be noted that most existing graphics engines are too "bloated", which is not conducive to developers' rapid learning.
In contrast, the graphics engine described in Embodiment 1 of the present application makes targeted improvements specifically for adaptation to the player, and is especially suitable for bullet-screen activity scenarios with fast iteration and high timeliness requirements. In the graphics engine described in Embodiment 1, based on the core requirements of the video playback service, the graphics interface and engine logic are encapsulated so that the engine functions are satisfied while the engine remains extremely "lightweight". This enables developers to quickly learn the engine and develop the programs they need, while keeping the program size as small as possible. In addition, the engine logic, views, and tools of the graphics engine are adapted to different platform interfaces to unify the engine's business logic across platforms, so that it can be quickly deployed on multiple platforms; a JS application programming interface is provided to offer calling services for the graphics engine, which guarantees functional extensibility and high performance.
For interaction with the player, the transmission protocol and data encapsulation format between the player's playback page and the graphics engine can be agreed, and the methods by which the playback page and the graphics engine each receive information can be encapsulated, thereby establishing a communication mechanism between the two. For example, when the graphics engine computes an addition operation "6+8", the engine scene layer of the graphics engine can send an addition method request to the player with parameters 6 and 8; after receiving the request, the player returns the calculation result to the engine scene layer.
The above method request information can be as follows:
(The request message is shown as Figure PCTCN2021111381-appb-000001 in the original application.)
It should be noted that the above are only preferred embodiments of the present application and do not thereby limit the patent protection scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. A graphics engine, comprising:
    an engine scene layer, configured to perform graphics processing operations according to preset logic; wherein:
    performing the graphics processing operations according to the preset logic comprises: interacting with a player based on a predetermined transmission protocol to obtain playback data, and performing the graphics processing operations according to the playback data and the preset logic.
  2. The graphics engine according to claim 1, wherein the playback data comprises the video playback progress of the player; performing the graphics processing operations according to the playback data and the preset logic comprises:
    obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic.
  3. The graphics engine according to claim 1 or 2, wherein the playback data comprises request data for requesting to close the bullet screen; performing the graphics processing operations according to the playback data and the preset logic comprises:
    hiding the bullet screen according to the request data and the preset logic.
  4. The graphics engine according to any one of claims 1 to 3, further comprising:
    a view layer, configured to pass messages to the engine scene layer;
    the messages comprising: a frame update signal, a touch event, a scene coordination signal, and/or interaction data with the player.
  5. The graphics engine according to any one of claims 1 to 4, further comprising:
    a Runtime layer, configured to run JS code;
    wherein the Runtime layer is bound with a JS application programming interface, which serves as the externally open interface of the graphics engine and is used for interaction between the graphics engine and third-party function modules.
  6. The graphics engine according to any one of claims 1 to 5, further comprising:
    a shell layer, serving as the interaction interface between the graphics engine and system platforms and configured to adapt the graphics engine to a target system platform.
  7. The graphics engine according to any one of claims 1 to 6, wherein the engine scene layer is further configured to:
    merge multiple textures into one texture.
  8. The graphics engine according to any one of claims 1 to 7, wherein the engine scene layer is further configured to:
    update node data of multiple nodes in the scene;
    generate multiple rendering instructions for the multiple nodes according to the node data of the multiple nodes; the multiple nodes comprising one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment;
    cache the multiple rendering instructions into multiple rendering queues; the multiple rendering queues comprising a first rendering queue and one or more second rendering queues; the first rendering queue being used to cache the rendering instructions of each first rendering node; each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node;
    traverse the multiple rendering queues, and merge multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and
    call a graphics library or hardware according to each rendering batch.
  9. A graphics processing method applicable to a player, comprising:
    performing data interaction with a player through a graphics engine to obtain playback data; and
    based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
  10. The graphics processing method according to claim 9, wherein a transmission protocol for interaction is pre-agreed between the graphics engine and the player.
  11. The graphics processing method according to claim 9 or 10, wherein the playback data comprises the video playback progress of the player; performing a graphics processing operation on the next frame of the player through the graphics engine comprises:
    obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic.
  12. The graphics processing method according to any one of claims 9 to 11, wherein the playback data comprises request data for requesting to close the bullet screen; performing a graphics processing operation on the next frame of the player through the graphics engine comprises:
    hiding the bullet screen according to the request data and the preset logic.
  13. The graphics processing method according to any one of claims 9 to 12, wherein performing a graphics processing operation on the next frame of the player through the graphics engine further comprises:
    merging multiple textures into one texture.
  14. The graphics processing method according to any one of claims 9 to 13, wherein performing a graphics processing operation on the next frame of the player through the graphics engine further comprises:
    updating node data of multiple nodes in the scene;
    generating multiple rendering instructions for the multiple nodes according to the node data of the multiple nodes; the multiple nodes comprising one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment;
    caching the multiple rendering instructions into multiple rendering queues; the multiple rendering queues comprising a first rendering queue and one or more second rendering queues; the first rendering queue being used to cache the rendering instructions of each first rendering node; each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node;
    traversing the multiple rendering queues, and merging multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and
    calling a graphics library or hardware according to each rendering batch.
  15. A graphics processing system applicable to a player, comprising:
    an interaction module, configured to perform data interaction with a player through a graphics engine to obtain playback data; and
    a graphics processing module, configured to perform, based on the playback data and preset logic, a graphics processing operation on the next frame of the player through the graphics engine.
  16. A computer device, comprising a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, wherein the processor performs the following steps when executing the computer-readable instructions:
    performing data interaction with a player through a graphics engine to obtain playback data; and
    based on the playback data and preset logic, performing a graphics processing operation on the next frame of the player through the graphics engine.
  17. The computer device according to claim 16, wherein a transmission protocol for interaction is pre-agreed between the graphics engine and the player.
  18. The computer device according to claim 16 or 17, wherein
    the playback data comprises the video playback progress of the player; performing a graphics processing operation on the next frame of the player through the graphics engine comprises: obtaining the position data of each bullet-screen comment according to the video playback progress and the preset logic; and/or
    the playback data comprises request data for requesting to close the bullet screen; performing a graphics processing operation on the next frame of the player through the graphics engine comprises: hiding the bullet screen according to the request data and the preset logic.
  19. The computer device according to any one of claims 16 to 18, wherein performing a graphics processing operation on the next frame of the player through the graphics engine further comprises:
    merging multiple textures into one texture; and/or
    updating node data of multiple nodes in the scene; generating multiple rendering instructions for the multiple nodes according to the node data of the multiple nodes; the multiple nodes comprising one or more first rendering nodes that do not require an independent rendering environment and one or more second rendering nodes that require an independent rendering environment; caching the multiple rendering instructions into multiple rendering queues; the multiple rendering queues comprising a first rendering queue and one or more second rendering queues; the first rendering queue being used to cache the rendering instructions of each first rendering node; each second rendering queue corresponding to one second rendering node and being used to cache the rendering instructions of the corresponding second rendering node; traversing the multiple rendering queues, and merging multiple rendering commands with the same target parameters into one rendering batch, to obtain multiple rendering batches; and calling a graphics library or hardware according to each rendering batch.
  20. A computer-readable storage medium storing computer-readable instructions, the computer-readable instructions being executable by at least one processor to cause the at least one processor to perform the steps of the graphics processing method applicable to a player according to any one of claims 9 to 14.
PCT/CN2021/111381 2020-11-05 2021-08-09 Graphics engine and graphics processing method applicable to a player WO2022095526A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/033,788 US20230403437A1 (en) 2020-11-05 2021-08-09 Graphics engine and graphics processing method applicable to player

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011223807.3A CN112423111A (zh) 2020-11-05 2020-11-05 图形引擎和适用于播放器的图形处理方法
CN202011223807.3 2020-11-05

Publications (1)

Publication Number Publication Date
WO2022095526A1 true WO2022095526A1 (zh) 2022-05-12

Family

ID=74828642

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/111381 WO2022095526A1 (zh) 2020-11-05 2021-08-09 图形引擎和适用于播放器的图形处理方法

Country Status (3)

Country Link
US (1) US20230403437A1 (zh)
CN (1) CN112423111A (zh)
WO (1) WO2022095526A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116757916A (zh) * 2023-08-21 2023-09-15 成都中科合迅科技有限公司 图形绘制引擎的变更控制方法和***

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112423111A (zh) * 2020-11-05 2021-02-26 上海哔哩哔哩科技有限公司 图形引擎和适用于播放器的图形处理方法
CN113923519B (zh) * 2021-11-11 2024-02-13 深圳万兴软件有限公司 视频渲染方法、装置、计算机设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104144351A (zh) * 2014-04-04 2014-11-12 北京泰然神州科技有限公司 应用虚拟化平台的视频播放方法和装置
US9317175B1 (en) * 2013-09-24 2016-04-19 Amazon Technologies, Inc. Integration of an independent three-dimensional rendering engine
CN105597321A (zh) * 2015-12-18 2016-05-25 武汉斗鱼网络科技有限公司 一种全屏游戏状态下的弹幕显示方法与***
CN111432276A (zh) * 2020-03-27 2020-07-17 北京奇艺世纪科技有限公司 一种游戏引擎、互动视频交互方法和电子设备
WO2020156264A1 (zh) * 2019-01-30 2020-08-06 华为技术有限公司 渲染方法及装置
US20200306631A1 (en) * 2019-03-29 2020-10-01 Electronic Arts Inc. Dynamic streaming video game client
CN112423111A (zh) * 2020-11-05 2021-02-26 上海哔哩哔哩科技有限公司 图形引擎和适用于播放器的图形处理方法
CN112995692A (zh) * 2021-03-04 2021-06-18 广州虎牙科技有限公司 互动数据处理方法、装置、设备及介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2798484A4 (en) * 2011-12-29 2015-08-26 Intel Corp BROWSER MEDIA SERVICE FOR MULTIMEDIA CONTENT
CN102662646B (zh) * 2012-03-01 2015-09-23 华为技术有限公司 传感数据处理方法及计算节点
CN106792188B (zh) * 2016-12-06 2020-06-02 腾讯数码(天津)有限公司 一种直播页面的数据处理方法、装置、***和存储介质
CN106658145B (zh) * 2016-12-27 2020-07-03 北京奇虎科技有限公司 一种直播数据处理方法和装置
CN110213636B (zh) * 2018-04-28 2023-01-10 腾讯科技(深圳)有限公司 在线视频的视频帧生成方法、装置、存储介质及设备
CN109408175B (zh) * 2018-09-28 2021-07-27 北京赛博贝斯数据科技有限责任公司 通用高性能深度学习计算引擎中的实时交互方法及***
CN109816763A (zh) * 2018-12-24 2019-05-28 苏州蜗牛数字科技股份有限公司 一种图形渲染方法
CN111408138B (zh) * 2019-01-04 2023-07-07 厦门雅基软件有限公司 基于游戏引擎的渲染方法、装置及电子设备
CN111556325A (zh) * 2019-02-12 2020-08-18 广州艾美网络科技有限公司 结合音视频的渲染方法、介质及计算机设备
CN111031400B (zh) * 2019-11-25 2021-04-27 上海哔哩哔哩科技有限公司 弹幕呈现方法和***
CN110784773A (zh) * 2019-11-26 2020-02-11 北京奇艺世纪科技有限公司 弹幕生成方法、装置、电子设备及存储介质
CN111796826B (zh) * 2020-06-30 2022-04-22 北京字节跳动网络技术有限公司 一种弹幕的绘制方法、装置、设备和存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9317175B1 (en) * 2013-09-24 2016-04-19 Amazon Technologies, Inc. Integration of an independent three-dimensional rendering engine
CN104144351A (zh) * 2014-04-04 2014-11-12 北京泰然神州科技有限公司 应用虚拟化平台的视频播放方法和装置
CN105597321A (zh) * 2015-12-18 2016-05-25 武汉斗鱼网络科技有限公司 一种全屏游戏状态下的弹幕显示方法与***
WO2020156264A1 (zh) * 2019-01-30 2020-08-06 华为技术有限公司 渲染方法及装置
US20200306631A1 (en) * 2019-03-29 2020-10-01 Electronic Arts Inc. Dynamic streaming video game client
CN111432276A (zh) * 2020-03-27 2020-07-17 北京奇艺世纪科技有限公司 一种游戏引擎、互动视频交互方法和电子设备
CN112423111A (zh) * 2020-11-05 2021-02-26 上海哔哩哔哩科技有限公司 图形引擎和适用于播放器的图形处理方法
CN112995692A (zh) * 2021-03-04 2021-06-18 广州虎牙科技有限公司 互动数据处理方法、装置、设备及介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116757916A (zh) * 2023-08-21 2023-09-15 成都中科合迅科技有限公司 图形绘制引擎的变更控制方法和***
CN116757916B (zh) * 2023-08-21 2023-10-20 成都中科合迅科技有限公司 图形绘制引擎的变更控制方法和***

Also Published As

Publication number Publication date
CN112423111A (zh) 2021-02-26
US20230403437A1 (en) 2023-12-14

Similar Documents

Publication Publication Date Title
WO2022095526A1 (zh) 图形引擎和适用于播放器的图形处理方法
WO2022116759A1 (zh) 图像渲染方法、装置、计算机设备和存储介质
CN107223264B (zh) 一种渲染方法及装置
US10026147B1 (en) Graphics scenegraph rendering for web applications using native code modules
US8675000B2 (en) Command buffers for web-based graphics rendering
EP1594091B1 (en) System and method for providing an enhanced graphics pipeline
TWI543108B (zh) 群眾外包式(crowd-sourced)視訊顯像系統
JP6062438B2 (ja) タイル単位レンダラーを用いてレイヤリングするシステムおよび方法
US10013157B2 (en) Composing web-based interactive 3D scenes using high order visual editor commands
US10207190B2 (en) Technologies for native game experience in web rendering engine
US9396564B2 (en) Atlas generation based on client video configuration
WO2022089592A1 (zh) 一种图形渲染方法及其相关设备
US20110321049A1 (en) Programmable Integrated Processor Blocks
WO2019238145A1 (zh) 一种基于WebGL的图形绘制方法、装置及***
US20130127849A1 (en) Common Rendering Framework and Common Event Model for Video, 2D, and 3D Content
WO2022033162A1 (zh) 一种模型加载方法以及相关装置
WO2023197762A1 (zh) 图像渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品
CN114564630A (zh) 一种图数据Web3D可视化的方法、***和介质
US11010863B2 (en) Bindpoint emulation
CN112700519A (zh) 动画展示方法、装置、电子设备及计算机可读存储介质
CN111796812B (zh) 图像渲染的方法、装置、电子设备及计算机可读存储介质
US9189448B2 (en) Routing image data across on-chip networks
CN116503529A (zh) 渲染、3d画面控制方法、电子设备和计算机可读存储介质
KR101586655B1 (ko) 반복순환 구조의 게임 장면 구현 및 동작 방법
WO2024011733A1 (zh) 3d图像实现方法及***

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21888225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21888225

Country of ref document: EP

Kind code of ref document: A1