CN113794887A - Method and related equipment for video coding in game engine - Google Patents

Method and related equipment for video coding in game engine

Info

Publication number
CN113794887A
CN113794887A (application CN202110942885.7A)
Authority
CN
China
Prior art keywords
video
encoding
game engine
global
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110942885.7A
Other languages
Chinese (zh)
Inventor
刘运松
周炎钧
徐林
马库斯·奥布赖恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rongming Microelectronics Jinan Co ltd
Original Assignee
Rongming Microelectronics Jinan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rongming Microelectronics Jinan Co ltd filed Critical Rongming Microelectronics Jinan Co ltd
Priority to CN202110942885.7A priority Critical patent/CN113794887A/en
Publication of CN113794887A publication Critical patent/CN113794887A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/131 Protocols for games, networked simulations or virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/146 Data rate or code amount at the encoder output
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/527 Global motion vector estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method and related equipment for video encoding in a game engine. The method comprises: sequentially acquiring the video frames rendered and output by the GPU, and acquiring video encoding parameters from the game engine; and encoding the GPU-rendered video frames based on the encoding behavior determined by those parameters. By obtaining additional encoding parameters from the game engine, the method can effectively improve the efficiency of the video encoding engine.

Description

Method and related equipment for video coding in game engine
Technical Field
The present invention relates to the field of graphics processing technologies, and in particular, to a method for video encoding in a game engine and a related device.
Background
Typically, in hardware video encoding, a hardware video encoder receives the picture of each frame in a video stream in turn and encodes it, attempting to achieve optimal video quality at a target bit rate. Because hardware encoding generally must complete in a fixed time regardless of each frame's complexity, and because chip area and other cost factors constrain the hardware design, a hardware video encoder has only a limited ability to understand each picture it encodes. For example, the motion vector search commonly used in video coding is limited in range, because a larger search range implemented in hardware costs more memory storage and memory accesses, hurts performance, and raises operating power consumption. As another example, for a video stream containing long-lasting static content, a hardware video encoder without lookahead encoding cannot discover statistical patterns of content behavior across multiple frames.
Lookahead encoding typically performs video encoding in two passes: a first pass over up to 40 frames records per-frame encoding statistics, and a second pass, the actual encoding, uses those statistics to adjust the encoding parameters for better quality and bit-rate distribution. However, lookahead encoding severely increases the delay between frame input and encoded-bitstream output.
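The two-pass scheme above can be sketched in a few lines. This is a minimal illustration, not the patent's mechanism: the per-frame "complexity" statistic and the QP adjustment rule are invented for this example.

```python
# Two-pass lookahead sketch: pass 1 records a cheap complexity statistic for
# up to 40 frames; pass 2 spends more bits (lower QP) on complex frames.
LOOKAHEAD_DEPTH = 40  # frames analysed before the actual encoding, as in the text

def analyse(frames):
    """Pass 1: sum of absolute sample differences as a complexity proxy."""
    return [sum(abs(a - b) for a, b in zip(f, f[1:])) for f in frames]

def assign_qp(stats, base_qp=30):
    """Pass 2: frames above average complexity get a lower QP (more bits)."""
    avg = sum(stats) / len(stats)
    return [base_qp - 2 if s > avg else base_qp + 2 for s in stats]

frames = [[0, 0, 0, 0], [0, 8, 0, 8], [1, 1, 1, 1]]  # toy 1-D "frames"
stats = analyse(frames)
qps = assign_qp(stats)
```

The delay cost noted above follows directly: no frame can be finally encoded until up to `LOOKAHEAD_DEPTH` later frames have been analysed.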
FIG. 1 illustrates a system block diagram of a simplified cloud gaming service. Users typically interact with a game or desktop application (local or cloud) through physical components such as a keyboard, mouse, or tablet. The game engine processes the data and issues rendering instructions to a Graphics Processing Unit (GPU), which renders into a frame buffer. Traditionally, the frames produced by GPU rendering are consumed by the display component, and apart from frame-synchronization instructions to avoid image tearing, no special handling is required. As internet video streaming and cloud gaming become mainstream, there is no physical display component: the frames are instead processed by the video encoder engine, and each rendered frame provided by the GPU is video encoded in sequence. The video encoder engine also stores a copy of the converted frame in a reference frame buffer for predictive encoding of future frames. Finally, the video encoder engine produces a compressed bitstream that is transmitted to the receiving end, which reverses the operation to reconstruct the GPU-rendered frames for display at the client.
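The dataflow just described, rendered frames going to an encoder that also keeps reference copies, can be sketched as follows. The "encoding" here is a stand-in (a label built from the frame size), not real compression, and the class and field names are this sketch's own.

```python
# Cloud-gaming dataflow sketch (cf. Fig. 1): each rendered frame is encoded
# in sequence, and a copy is kept in a bounded reference frame buffer for
# predicting future frames.
from collections import deque

class VideoEncoderEngine:
    def __init__(self, max_refs=2):
        # Reference frame buffer: oldest frames are evicted automatically.
        self.reference_frames = deque(maxlen=max_refs)

    def encode(self, frame):
        # Stand-in "bitstream chunk" records how many references were available.
        chunk = f"enc[{len(frame)}px,refs={len(self.reference_frames)}]"
        self.reference_frames.append(frame)  # stored for future prediction
        return chunk

enc = VideoEncoderEngine()
chunks = [enc.encode(f) for f in (["p"] * 4, ["p"] * 4, ["p"] * 4)]
```

Note the growing `refs=` count: the first frame has no reference (it must be intra coded), while later frames can be predicted from stored references.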
A game or desktop application engine is typically software running on an operating system on a general-purpose CPU. For games, it processes and executes game commands from different sources, including physical game input, audio and sound, network scripts, and game object management, and, most importantly, it drives rendering on the GPU engine through a direct interface.
GPU rendering projects the final objects into a two-dimensional view called the display frame buffer. There may be multiple frame buffers: some are being consumed (displayed on screen or encoded by the video encoder) while others are being rendered into. The video encoder engine is the consumer of the 2D frames produced by GPU rendering: each newly available rendered frame becomes the current frame to be encoded.
Disclosure of Invention
The embodiment of the invention provides a method and related equipment for video coding in a game engine, which are used for improving the coding efficiency and speed.
In a first aspect, an embodiment of the present invention provides a method for video coding in a game engine, including:
sequentially acquiring video frames output by the GPU, and acquiring video coding parameters from a game engine;
and encoding the video frame output by the GPU rendering based on the encoding behavior determined by the video encoding parameters.
In some embodiments, the obtained video coding parameters include at least one of: global motion offset position parameters, forced scene change parameters, time intervals for the next frame, timestamps for GPU rendered frames, and game frame stream rate control expectation parameters.
In some embodiments, where the video encoding parameters include a global motion offset location parameter, encoding the video frames of the GPU rendering output based on the encoding behavior determined by the video encoding parameters includes:
and searching based on the global motion offset position parameter outside the searching range of the video encoder to realize the motion estimation calculation between frames.
In some embodiments, outside the search range of the video encoder, searching based on the global motion offset location parameter comprises:
and configuring a target pixel block based on the global motion offset position parameter, and performing block matching by taking the current position of the target pixel block as a starting point to perform global offset search.
In some embodiments, performing the global offset search further comprises:
configuring a global offset region based on the target image in the window, and not performing a global offset search outside the global offset region.
In some embodiments, performing the global offset search further comprises:
where there is multiple window overlap, an exclusive or logic operation is utilized to selectively enable global offsets for different windows.
In some embodiments, where the video encoding parameters include a forced scene change parameter, the forced scene change parameter is configured to:
in the case that a scene change is determined from the logic of the game engine, supplied as an external parameter input, force the following scene to be encoded with an I-frame, skipping motion estimation and compensation.
In a second aspect, an embodiment of the present invention provides an apparatus for video coding in a game engine, including a video coding engine configured to:
sequentially acquiring video frames output by the GPU, and acquiring video coding parameters from a game engine;
and encoding the video frame output by the GPU rendering based on the encoding behavior determined by the video encoding parameters.
In a third aspect, an embodiment of the present invention provides a computer device, including a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the aforementioned method of video encoding.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the foregoing steps of the method for video coding.
According to the embodiment of the invention, the video coding parameters are acquired from the game engine, and the video frames output by the GPU rendering are coded based on the coding behavior determined by the video coding parameters, so that additional video coding parameters can be acquired through the game engine, and the coding efficiency of the video coding engine can be effectively improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates a system block diagram of an existing gaming service;
FIG. 2 illustrates a basic flow diagram of an embodiment of the present disclosure;
FIG. 3 illustrates a system block diagram of a gaming service of an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of motion estimation using global offset according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of inter motion estimation of a pixel block of the prior art.
Fig. 6 shows a schematic diagram of inter-frame motion estimation of a pixel block of an embodiment of the present disclosure.
Fig. 7 shows another schematic diagram of inter-frame motion estimation for pixel blocks of the disclosed embodiment.
FIG. 8 illustrates a player perspective definition diagram of an embodiment of the present disclosure.
FIG. 9 illustrates a flowchart of the operation of forcing a scene change parameter according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The first embodiment of the present invention provides a method for video encoding in a game engine, as shown in fig. 2, including:
s201, video frames output by the GPU in a rendering mode are sequentially acquired, and video coding parameters are acquired from a game engine. In some embodiments, the obtained video coding parameters include at least one of: global motion offset position parameters, forced scene change parameters, time intervals for the next frame, timestamps for GPU rendered frames, and game frame stream rate control expectation parameters. As an alternative to obtaining the video coding parameters from the game engine, the game engine may send the video coding parameters by means of an SDK interface. The video rendering engine may obtain the required parameters from the SDK interface of the game engine. The specific SDK may include a thin drawing submission interface provided by Vulkan, directx12, and Metal, and may also submit the rendering of the game engine to the GPU. Modern game engines are becoming more and more complex and capable of directly handling GPU command buffers, memory allocation and synchronization decisions, thereby mapping well to multi-core/multi-thread functions of modern CPUs, allowing parallel operation. In some embodiments, the game engine may provide parameters to the video encoder in the form of an SDK, and the parameters that can be specifically provided may include various game behaviors, such as world view conversion to rendered 2D view, physical input information, static object positions, frame rendering behaviors, and the like. The role of the SDK is to convert game related information into encoding parameters for use by the video encoder engine.
The GPU rendering engine is an ultra-high-performance, data-centric parallel pipeline engine. It takes commands from the game engine and performs real-time computer graphics rendering into a displayable frame buffer. The primary GPU hardware engines come from NVIDIA and AMD. Rendering itself proceeds through multiple data-processing stages.
S202, based on the encoding behavior determined by the video encoding parameters, encoding the video frame output by the GPU.
As described above, GPU rendering projects the final objects into a two-dimensional view, the display frame buffer; of the several frame buffers, some are consumed (displayed on screen or encoded by the video encoder) while others are rendered into. The video encoder engine consumes each newly available rendered frame as the current frame to be encoded.
According to the embodiment of the invention, the video coding parameters are acquired from the game engine, and the video frames output by the GPU rendering are coded based on the coding behavior determined by the video coding parameters, so that additional video coding parameters can be acquired through the game engine, and the coding efficiency of the video coding engine can be effectively improved.
In some embodiments, where the video encoding parameters include a global motion offset location parameter, encoding the video frames of the GPU rendering output based on the encoding behavior determined by the video encoding parameters includes:
and searching based on the global motion offset position parameter outside the searching range of the video encoder to realize the motion estimation calculation between frames.
As shown in fig. 4, in the video encoding method of this embodiment, the global motion offset position parameter may be applied in the motion estimation hardware: it indicates how far the entire frame has moved in 2D, which simplifies the motion estimation calculation for inter-frame prediction. In this example, the global motion offset parameter is applied beyond the encoder's native search range to perform a targeted search, simplifying the computation. Existing hardware video encoders can search up to 128 pixels (vertically/horizontally) during motion estimation, mainly for inter-frame prediction against the previous frame (or, for B-frames, a future frame) within the full picture. For motion scenes in a game, however, especially at high resolutions, fast motion can easily exceed the motion estimator's 128-pixel reach. The method of the present disclosure uses a global motion offset: the encoder of the video coding engine performs its 128-pixel search starting from the position indicated by the global motion offset position parameter, greatly reducing the computational load of motion estimation.
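The re-centring of a fixed ±128-pixel hardware window by the global offset can be sketched as follows. The clamping-to-frame behaviour is an assumption; real encoder hardware may handle picture borders differently.

```python
# Sketch: a fixed-reach hardware search window re-centred by the global
# motion offset, so fast motion beyond 128 pixels stays reachable.
SEARCH_RANGE = 128  # hardware motion-estimation reach in pixels (per axis)

def search_window(block_x, block_y, offset, frame_w, frame_h):
    """Return the (x0, y0, x1, y1) search rectangle in the reference frame,
    centred at the block position shifted by the global offset and clamped
    to the frame."""
    cx, cy = block_x + offset[0], block_y + offset[1]
    x0 = max(0, cx - SEARCH_RANGE)
    y0 = max(0, cy - SEARCH_RANGE)
    x1 = min(frame_w, cx + SEARCH_RANGE)
    y1 = min(frame_h, cy + SEARCH_RANGE)
    return x0, y0, x1, y1

# A block at (1000, 500) with a global offset of (300, 0): motion of 300
# pixels exceeds the raw 128-pixel reach, but the window moves with it.
win = search_window(1000, 500, (300, 0), 1920, 1080)
```

Without the offset, a 300-pixel displacement would fall entirely outside the window centred at the block itself.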
In some embodiments, outside the search range of the video encoder, searching based on the global motion offset location parameter comprises:
and configuring a target pixel block based on the global motion offset position parameter, and performing block matching by taking the current position of the target pixel block as a starting point to perform global offset search.
As shown in fig. 5, a conventional implementation performs inter prediction with pixel blocks, but because the video encoder's search range is limited, a matching block may not be found, so the existing search scheme cannot achieve the ideal result. The video encoding method of the present disclosure, as shown in fig. 6, shifts the search range according to the global motion offset position parameter. Inter-frame search is still performed with pixel blocks, matching the previous frame against the current frame, but the search for a block starts from its current position shifted by the global offset. The global offset need not be 100% accurate: as long as the true displacement falls within the encoder's maximum search range around the offset, a good match can be found, reducing the computational load of motion estimation. The global offset in this embodiment may optionally be configured for a specific window; a GUI panel, for example, may not translate at all and remain static during gameplay.
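Offset-started block matching can be shown with a tiny sum-of-absolute-differences (SAD) search. The 2x2 block size and ±1 reach are deliberately toy-sized so the example runs instantly; the principle, starting the search at the block position plus the global offset, is the one described above.

```python
# Block matching that starts from (block position + global offset), as in
# Fig. 6. Frames are plain 2-D lists of luma samples.
def sad(cur, ref, cx, cy, rx, ry, bs):
    """Sum of absolute differences between a bs x bs block in the current
    frame at (cx, cy) and one in the reference frame at (rx, ry)."""
    return sum(abs(cur[cy + j][cx + i] - ref[ry + j][rx + i])
               for j in range(bs) for i in range(bs))

def match(cur, ref, cx, cy, bs, offset, reach=1):
    """Search +/-reach around (cx, cy) + offset; return (cost, mvx, mvy)."""
    ox, oy = cx + offset[0], cy + offset[1]
    best = None
    for dy in range(-reach, reach + 1):
        for dx in range(-reach, reach + 1):
            rx, ry = ox + dx, oy + dy
            if 0 <= rx and 0 <= ry and rx + bs <= len(ref[0]) and ry + bs <= len(ref):
                cost = sad(cur, ref, cx, cy, rx, ry, bs)
                if best is None or cost < best[0]:
                    best = (cost, rx - cx, ry - cy)  # motion vector
    return best

# The 2x2 block of 9s at (0, 0) in the current frame sits at (3, 1) in the
# reference frame; the global offset (3, 1) lets a tiny +/-1 search find it.
cur = [[0] * 6 for _ in range(4)]
ref = [[0] * 6 for _ in range(4)]
for j in range(2):
    for i in range(2):
        cur[j][i] = 9
        ref[1 + j][3 + i] = 9
best = match(cur, ref, 0, 0, 2, (3, 1))
```

With a zero offset and the same ±1 reach, this block would not be found, which is exactly the failure mode of fig. 5.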
In some embodiments, performing the global offset search further comprises:
configuring a global offset region based on the target image in the window, and not performing a global offset search outside the global offset region.
FIG. 7 illustrates a windowed application of the global offset: the image portion inside the window is configured to use the global offset, while outside that region, i.e. outside the image portion, the global offset is disabled.
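The per-region rule of fig. 7 reduces to a point-in-rectangle test per block. The half-open rectangle semantics and the zero-offset fallback are assumptions of this sketch.

```python
# Windowed global offset (cf. Fig. 7): blocks inside the configured window
# use the global offset; blocks outside it fall back to (0, 0), i.e. the
# global offset is disabled there.
def effective_offset(block_xy, window, global_offset):
    """window = (x0, y0, x1, y1), half-open on the right/bottom edges."""
    x, y = block_xy
    x0, y0, x1, y1 = window
    inside = x0 <= x < x1 and y0 <= y < y1
    return global_offset if inside else (0, 0)

win = (100, 100, 500, 400)  # region whose content scrolls with the world
```

A static GUI panel would simply lie outside `win`, so its blocks are searched without any offset.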
In some embodiments, performing the global offset search further comprises:
Where multiple windows overlap, an exclusive-or (XOR) logic operation is used to selectively enable the global offset for different windows. That is, in practical applications with complex scenes in which several windows overlap, XOR logic resolves the overlap so that the global offset is enabled selectively for different windows according to the global motion offset position parameter.
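One natural reading of the XOR rule is that each window covering a block toggles the enable state, so an odd number of covering windows enables the offset and an even number disables it. That interpretation is an assumption; the patent does not spell out the truth table.

```python
# XOR-based enable for overlapping windows: each window covering the block
# toggles the global-offset enable bit.
def offset_enabled(block_xy, windows):
    x, y = block_xy
    enabled = False
    for (x0, y0, x1, y1) in windows:
        if x0 <= x < x1 and y0 <= y < y1:
            enabled ^= True  # XOR: odd cover count => enabled
    return enabled

windows = [(0, 0, 300, 300), (200, 200, 500, 500)]
```

So a block in exactly one window uses the offset, while a block in the overlap of both does not.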
The game engine typically renders the portion of its world space visible from the player's perspective. Taking a two-dimensional game engine as a simple example, as shown in fig. 8, the player's perspective defines a distance and angle relative to the camera. Many 2D games keep the relative distance fixed but allow scrolling through world space to reveal different parts of the game world. Game engines such as Godot provide screen-scrolling application programming interfaces (APIs) that scroll while keeping an object of interest locked in view. However, to support different terminal screen resolutions, the game engine may normalize the view position to a value between 0.0 and 1.0. Most game engines can already convert such normalized values to on-screen points defined in pixels. In some embodiments, the two APIs may be combined so that the resulting scroll is converted into pixels and then passed to the video encoder.
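The conversion just described, normalized scroll position to pixels to a per-frame global offset, can be sketched as below. The function names are generic placeholders, not Godot's actual API.

```python
# Normalized [0.0, 1.0] scroll position -> screen pixels -> global motion
# offset, as described above for resolution-independent game engines.
def to_pixels(norm_xy, screen_w, screen_h):
    return round(norm_xy[0] * screen_w), round(norm_xy[1] * screen_h)

def global_offset(prev_norm, cur_norm, screen_w, screen_h):
    """Pixels the view scrolled between two consecutive frames."""
    px, py = to_pixels(prev_norm, screen_w, screen_h)
    cx, cy = to_pixels(cur_norm, screen_w, screen_h)
    return cx - px, cy - py

# A 5% horizontal scroll on a 1920x1080 screen is 96 pixels.
off = global_offset((0.50, 0.25), (0.55, 0.25), 1920, 1080)
```

This `off` value is what would be handed to the encoder as the global motion offset position parameter.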
In some embodiments, where the video encoding parameters include a forced scene change parameter, the forced scene change parameter is configured to: in the case that a scene change is determined from the logic of the game engine, supplied as an external parameter input, force the following scene to be encoded with an I-frame, skipping motion estimation and compensation.
Specifically, rate control adjusts the encoding bit rate so that the result meets a desired target bit rate; it governs the frame type, quantization, and entropy coding of the hybrid encoder. As shown in fig. 9, with additional support for forced scene changes, new scenes can be forced to be encoded with I-frames during encoding, skipping the motion estimation and compensation steps.
The logic of the game engine can inform the encoder of a scene change as an external parameter input for encoding the current frame, without requiring special hardware to detect scene changes. As shown in fig. 9, the motion estimation and compensation logic hardware, unused at that moment, may be placed in a power-save mode.
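The frame-type decision of fig. 9 can be sketched as a tiny function. The periodic-GOP fallback and the return values are invented for illustration; the forced-scene-change branch is the behavior described above.

```python
# Frame-type decision (cf. Fig. 9): a game-engine scene-change flag forces an
# I-frame and skips motion estimation, whose hardware could then be
# power-gated; otherwise encode a predicted frame.
def choose_frame_type(force_scene_change, frame_index, gop_size=60):
    """Return (frame_type, run_motion_estimation)."""
    if force_scene_change or frame_index % gop_size == 0:
        return "I", False   # intra frame: no motion estimation needed
    return "P", True        # predicted frame: run motion estimation
```

The second return value is the signal that would drive the power-save mode of the motion estimation hardware.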
The disclosed video encoding method uses additional parameters to perform motion estimation within an offset search range, effectively expanding the search range of a hardware video encoder. Based on the parameter corresponding to a scene change in the game engine, the encoder can bypass the motion estimation logic entirely, saving power and improving encoding efficiency and speed.
In a second aspect, an embodiment of the present invention provides an apparatus for video coding in a game engine, including a video coding engine configured to:
sequentially acquiring video frames output by the GPU, and acquiring video coding parameters from a game engine;
and encoding the video frame output by the GPU rendering based on the encoding behavior determined by the video encoding parameters.
In a third aspect, an embodiment of the present invention provides a computer device, including a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the aforementioned method of video encoding.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the foregoing steps of the method for video coding.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises it.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method of video encoding in a game engine, comprising:
sequentially acquiring video frames output by the GPU, and acquiring video coding parameters from a game engine;
and encoding the video frame output by the GPU rendering based on the encoding behavior determined by the video encoding parameters.
2. A method of video coding in a game engine as claimed in claim 1, wherein the obtained video coding parameters comprise at least one of: global motion offset position parameters, forced scene change parameters, time intervals for the next frame, timestamps for GPU rendered frames, and game frame stream rate control expectation parameters.
3. The method of video coding in a game engine of claim 1, wherein in the case that the video coding parameters include a global motion offset location parameter, encoding the video frames rendered output by the GPU based on the encoding behavior determined by the video coding parameters comprises:
and searching based on the global motion offset position parameter outside the searching range of the video encoder to realize the motion estimation calculation between frames.
4. A method of video encoding in a game engine as recited in claim 3, wherein searching based on the global motion offset location parameter outside the search range of the video encoder comprises:
and configuring a target pixel block based on the global motion offset position parameter, and performing block matching by taking the current position of the target pixel block as a starting point to perform global offset search.
5. A method for video encoding in a game engine as recited in claim 4, wherein performing a global offset search further comprises:
configuring a global offset region based on the target image in the window, and not performing a global offset search outside the global offset region.
6. A method for video encoding in a game engine as recited in claim 5, wherein performing a global offset search further comprises:
where there is multiple window overlap, an exclusive or logic operation is utilized to selectively enable global offsets for different windows.
7. The method of video encoding in a game engine of claim 2, wherein, in the case that the video encoding parameters include a forced scene change parameter, the forced scene change parameter is configured to:
in the case that a scene change is determined from the logic of the game engine, supplied as an external parameter input, force the following scene to be encoded with an I-frame, skipping motion estimation and compensation.
8. An apparatus for video encoding in a game engine, comprising a video encoding engine configured to:
sequentially acquiring video frames output by the GPU, and acquiring video coding parameters from a game engine;
and encoding the video frame output by the GPU rendering based on the encoding behavior determined by the video encoding parameters.
9. A computer device, comprising a processor, a memory, and a communication bus;
the communication bus being configured to implement connection and communication between the processor and the memory; and
the processor being configured to execute one or more computer programs stored in the memory to implement the steps of the method of video encoding according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of video encoding according to any one of claims 1 to 7.
CN202110942885.7A 2021-08-17 2021-08-17 Method and related equipment for video coding in game engine Pending CN113794887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110942885.7A CN113794887A (en) 2021-08-17 2021-08-17 Method and related equipment for video coding in game engine

Publications (1)

Publication Number Publication Date
CN113794887A true CN113794887A (en) 2021-12-14

Family

ID=78876135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110942885.7A Pending CN113794887A (en) 2021-08-17 2021-08-17 Method and related equipment for video coding in game engine

Country Status (1)

Country Link
CN (1) CN113794887A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009031093A1 (en) * 2007-09-07 2009-03-12 Ambx Uk Limited A method for generating an effect script corresponding to a game play event
CN103716643A (en) * 2012-10-01 2014-04-09 辉达公司 System and method for improving video encoding using content information
CN104096362A (en) * 2013-04-02 2014-10-15 辉达公司 Improving the allocation of a bitrate control value for video data stream transmission on the basis of a range of player's attention
CN110959169A (en) * 2017-04-21 2020-04-03 泽尼马克斯媒体公司 System and method for game generated motion vectors
WO2019097319A1 (en) * 2017-11-17 2019-05-23 Ati Technologies Ulc Game engine application direct to video encoder rendering
US20190158704A1 (en) * 2017-11-17 2019-05-23 Ati Technologies Ulc Game engine application direct to video encoder rendering
CN111357289A (en) * 2017-11-17 2020-06-30 Ati科技无限责任公司 Game engine application for video encoder rendering
CN110227259A (en) * 2018-03-06 2019-09-13 华为技术有限公司 A kind of method, apparatus of data processing, server and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546228A (en) * 2023-07-04 2023-08-04 腾讯科技(深圳)有限公司 Plug flow method, device, equipment and storage medium for virtual scene
CN116546228B (en) * 2023-07-04 2023-09-22 腾讯科技(深圳)有限公司 Plug flow method, device, equipment and storage medium for virtual scene

Similar Documents

Publication Publication Date Title
US10827182B2 (en) Video encoding processing method, computer device and storage medium
JP4900976B2 (en) Method for switching compression level in an image streaming system, and system, server, and computer program
CN110741640B (en) Optical flow estimation for motion compensated prediction in video coding
US20110221865A1 (en) Method and Apparatus for Providing a Video Representation of a Three Dimensional Computer-Generated Virtual Environment
US7506071B2 (en) Methods for managing an interactive streaming image system
JP2016502332A (en) Predicted characteristics compensated for next-generation video content
US9984504B2 (en) System and method for improving video encoding using content information
US20170094306A1 (en) Method of acquiring neighboring disparity vectors for multi-texture and multi-depth video
US20230362388A1 (en) Systems and methods for deferred post-processes in video encoding
Semsarzadeh et al. Video encoding acceleration in cloud gaming
KR20060025099A (en) A method and device for three-dimensional graphics to two-dimensional video encoding
CN112449182A (en) Video encoding method, device, equipment and storage medium
CN113794887A (en) Method and related equipment for video coding in game engine
US20140099039A1 (en) Image processing device, image processing method, and image processing system
CN115361582B (en) Video real-time super-resolution processing method, device, terminal and storage medium
Lee et al. Fast modified region detection for mobile VNC systems
CN115761083A (en) Rendering result processing method, device, equipment and medium
CN116419032A (en) Video playing method, device, equipment and computer readable storage medium
CN117939161A (en) Image decoding method, device, electronic equipment and readable storage medium
JP2015073192A (en) Video encoder and video encoding program
CN118283298A (en) Video transmission method, processing method, apparatus, device, medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination