CN111111163B - Method and device for managing computing resources and electronic device

Method and device for managing computing resources and electronic device

Info

Publication number
CN111111163B
Authority
CN
China
Prior art keywords
virtual
synchronization signal
virtual entity
entity
computing resources
Prior art date
Legal status
Active
Application number
CN201911347261.XA
Other languages
Chinese (zh)
Other versions
CN111111163A (en
Inventor
龚志鹏
赵新达
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911347261.XA
Publication of CN111111163A
Application granted
Publication of CN111111163B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/358 Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/131 Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A method, apparatus, and electronic device for managing computing resources, and a computer-readable storage medium, are disclosed. The method comprises the following steps: acquiring the number of virtual entities sharing computing resources as a first number; for each of the first number of virtual entities, generating a dedicated synchronization signal for that virtual entity, wherein the dedicated synchronization signals of the virtual entities are offset from one another; and receiving processing requests from the first number of virtual entities, wherein the processing request of each virtual entity is sent based on that virtual entity's dedicated synchronization signal.

Description

Method and device for managing computing resources and electronic device
Technical Field
The present disclosure relates to the field of cloud computing and virtualization technologies, and more particularly, to a method, device, electronic device, and computer-readable storage medium for managing computing resources.
Background
In a cloud server providing a cloud game, a plurality of virtual entities carried on the cloud server may share the hardware computing resources of the cloud server, which creates competition for those resources. For example, multiple virtual entities may issue computing instructions to the same graphics card on the cloud server at the same time. In this case, the graphics card may not respond to the computing instructions of some of the virtual entities in time, causing those virtual entities to wait idly and, in turn, delaying the operation of the applications running on them. At present, an effective method for managing cloud hardware computing resources is still lacking, so the utilization of hardware computing resources is not high.
Disclosure of Invention
Embodiments of the present disclosure provide methods, devices, electronic devices, and computer-readable storage media for managing computing resources.
An embodiment of the present disclosure provides a method of managing computing resources, including: acquiring the number of virtual entities sharing computing resources as a first number; for each of the first number of virtual entities, generating a dedicated synchronization signal, wherein the dedicated synchronization signals of the virtual entities are offset from one another; and receiving processing requests from the first number of virtual entities, wherein the processing request of each virtual entity is sent based on that virtual entity's dedicated synchronization signal.
An embodiment of the present disclosure provides an apparatus for managing computing resources, including a synchronization signal management module and a plurality of virtual synchronization signal modules corresponding respectively to a plurality of virtual entities, wherein the synchronization signal management module is configured to provide different synchronization signal offsets to different virtual synchronization signal modules, and each virtual synchronization signal module is configured to: receive a reference synchronization signal; and set the synchronization signal of its virtual entity such that the offset of that synchronization signal with respect to the reference synchronization signal corresponds to the received synchronization signal offset.
An embodiment of the present disclosure provides a method of processing computing resources, the method comprising: performing a computation related to a first virtual entity using the computing resource based on a first dedicated synchronization signal of the first virtual entity; and performing a computation related to a second virtual entity using the computing resource based on a second dedicated synchronization signal of the second virtual entity; wherein the first dedicated synchronization signal and the second dedicated synchronization signal are offset from each other.
An embodiment of the present disclosure provides an electronic device, including: a processor; a memory storing computer instructions that, when executed by the processor, implement the above-described method.
Embodiments of the present disclosure provide a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the above-described method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly introduced below. The drawings in the following description are merely exemplary embodiments of the disclosure.
Fig. 1A is an example schematic diagram illustrating a scenario in which multiple virtual entities use computing resources of a graphics card, according to an embodiment of the present disclosure.
FIG. 1B is a diagram illustrating multiple virtual entities competing for the computing resources of a graphics card.
Fig. 1C is an architecture diagram illustrating how a cloud game running on multiple virtual entities interacts with a terminal.
FIG. 2A is a flow diagram illustrating a method of managing computing resources according to an embodiment of the present disclosure.
FIG. 2B is a block diagram illustrating an apparatus for managing computing resources in accordance with an embodiment of the present disclosure.
Fig. 2C is a flowchart illustrating an example of a virtual synchronization signal module in an apparatus for managing computing resources setting a dedicated synchronization signal of a virtual entity according to an embodiment of the present disclosure.
FIG. 3A is a flow diagram illustrating a method of processing computing resources according to an embodiment of the present disclosure.
Fig. 3B is a schematic diagram illustrating a plurality of virtual entities contending for a computing resource, according to an embodiment of the disclosure.
Fig. 4A is an example flow diagram illustrating a virtual entity registering its shared computing resources according to an embodiment of the present disclosure.
Fig. 4B is an example flow diagram illustrating a virtual entity modifying a dedicated synchronization signal according to an embodiment of the present disclosure.
Fig. 4C is an example flow diagram illustrating a virtual entity handling synchronization signal offset in accordance with an embodiment of the present disclosure.
FIG. 5 is a flow diagram illustrating a virtual entity deregistering its shared computing resources according to an embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
In the present specification and the drawings, steps and elements having substantially the same or similar characteristics are denoted by the same or similar reference numerals, and repeated description of the steps and elements will be omitted. Meanwhile, in the description of the present disclosure, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance or order.
For the purpose of describing the present disclosure, concepts related to the present disclosure are introduced below.
Cloud computing (cloud computing) is an internet-based computing method, and in this way, software and hardware resources and information of a server side can be provided to a terminal device according to requirements. Cloud computing is a comprehensive computing technology, and mainly relates to computer technologies such as distributed computing, utility computing, load balancing, parallel computing, network storage, hot backup redundancy and virtualization.
Virtualization is a process of representing the hardware resources of a computer as a plurality of logical computing resources. For example, a physical computer may be virtualized into multiple logical computers, each of which may run a different operating system. Each operating system has its own virtual hardware, and the application programs on these operating systems run on that virtual hardware independently, without affecting one another. Virtualization technology can create dedicated virtual environments for specific application purposes, and can realize dynamic allocation, flexible scheduling, and cross-domain sharing of hardware resources, thereby improving the utilization of hardware resources.
The virtualization technology breaks through the boundary of time and space, and is one of important computing technologies in the cloud computing technology. The virtualization technology enables physical resources to be isolated from the operating environment of the application program, and therefore the cloud server can provide services such as data backup, migration and expansion for a plurality of terminals.
The scheme provided by the embodiment of the disclosure relates to a cloud computing and virtualization technology, and is specifically explained by the following embodiment.
Fig. 1A is an example schematic diagram illustrating a scenario in which multiple virtual entities use computing resources, according to an embodiment of the present disclosure. FIG. 1B is a diagram illustrating multiple virtual entities competing for computing resources. Fig. 1C is an architecture diagram illustrating how a cloud game running on a plurality of virtual entities interacts with a terminal. Currently, many mobile phone or computer applications rely on a network to implement their functions, game applications in particular. The network may be an Internet of Things based on the Internet and/or a telecommunication network, and may be a wired network or a wireless network, for example a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a cellular data communication network, or another electronic network capable of exchanging information. A networked game application usually relies on a display card on a cloud server to synthesize the game picture to be displayed by the terminal or to perform hardware encoding; such a game application is also called a cloud game.
As shown in fig. 1A, there may be a variety of hardware computing resources on the cloud server, such as central processing units, communication interfaces, memory, and so forth. Taking the video card resources shown in fig. 1A as an example, a plurality of video cards (e.g., video card-1, video card-2, etc.) exist on the cloud server, and a plurality of virtual entities are loaded on the cloud server. Each of these graphics cards may perform related computations for applications in different virtual entities. Each virtual entity may correspond to a different operating system (such as OS-1 and OS-2). Meanwhile, different cloud games may run on different operating systems. The operating system may include an Android system, a Linux system, a Unix system, or a Windows system, among others.
The virtual entity may be a virtual machine. A virtual machine running on the cloud server may implement an independent and complete virtual operating system by virtualizing the hardware devices of the cloud server. Meanwhile, a plurality of application programs can be loaded on each virtual operating system. Furthermore, the virtual entity may also be a container (e.g., docker). A container is an application-level virtualization mechanism that simulates multiple isolated virtual environments (e.g., separate "user spaces") by isolating the execution of processes, enabling multiple applications (e.g., gaming applications) to share the kernel or virtual hardware of the cloud server. Of course, the virtual entities are not limited to the two virtualization schemes described above, and those skilled in the art will appreciate that the virtual entities may also include other entities that apply virtualization technologies.
An architecture diagram of a typical application running on a virtual entity, a cloud game, is shown in fig. 1C. The game client may transmit data describing the user's operation of the game to the game server via a control stream, and the game server may transmit one or more audio/video frames to the game client via a data stream (e.g., in the manner of Steam). For example, the data stream and the control stream may be transmitted by the Real Time Streaming Protocol (RTSP), the Real-time Transport Protocol (RTP), the Real-time Transport Control Protocol (RTCP), or other protocols, but the disclosure is not limited thereto.
The architecture shown in FIG. 1C allows various types of games (e.g., mobile phone games or computer games) to be supported. Multiple games may run in the same virtual entity on a cloud game server (e.g., the cloud server described above), or only one game may run per virtual entity. A game agent corresponding to the cloud game that a particular game client is running may run in the virtual entity. The game agent may be a separate process or thread running on the virtual entity. Through the game agent, the game server may obtain audio/video frames of the cloud game (e.g., audio/video frames output by a rendering process/thread), encode the audio/video frames (e.g., receive audio/video frames encoded by an encoding process/thread), and then transmit the encoded audio/video frames to the game client via the data stream. Further, when the game agent receives an input event of the user (e.g., an instruction directing a certain character in the game to go forward, go backward, shoot, jump, etc.) from the game client, the game agent may process the next audio/video frame of the cloud game or acquire a cloud game execution result (e.g., a game win/loss) by reproducing the input event. The game client can decode the audio/video frame, play it to obtain the next frame of the cloud game picture, and continue to accept user operations based on that frame. The game client encodes the user operation (i.e., the input event) and transmits it to the game server through the control stream.
An application on a cloud server may perform relevant computations according to reference synchronization signals (e.g., clock signals generated by hardware circuits) in the various hardware devices of the cloud server. For example, for a cloud game that needs a large amount of computing resources to render its pictures, it is generally necessary to drive a CPU (Central Processing Unit) according to a vertical synchronization signal (Vsync) to prepare computing instructions and submit them to multiple GPUs (Graphics Processing Units) on multiple display cards for parallel execution. Such a computing instruction may be, for example, an instruction to render the picture of the next frame of the game. The picture rendered by the GPUs on the display card is then displayed on the terminal when the subsequent vertical synchronization signal arrives. In addition, some operating systems need to composite pictures or perform encoding according to the vertical synchronization signal, which also requires the CPU to submit rendering or encoding instructions to some of the display cards.
The computing instructions for rendering may generate audio frames or video frames through a model specific to each cloud game. A model is a description of a three-dimensional object or virtual scene that is strictly defined by language or data structures and includes information such as geometry, viewpoint, texture, lighting, and shading. For example, in a cloud game, when a game character manipulated by a user travels to a location, a game event is triggered (e.g., a monster suddenly appears), and the model of the game may describe the scene of the game event (e.g., the image of the monster, including the shape of the monster as viewed from the point of view of the game character manipulated by the user, clothing texture, lighting, the sound of the monster, background sounds, etc.). Rendering calculations may convert these descriptions into audio and/or video frames, forming the images and sounds to be seen and heard by the user at the game client. The virtual entity may trigger, via the vertical synchronization signal, multiple GPU units in its shared graphics card to render the pictures in the video frames. These GPU units may sequentially perform Vertex Shading, Shape Assembly, Geometry Shading, Rasterization, and Fragment Shading operations to calculate the RGB (Red, Green, Blue) values of each pixel in a video frame, and thus obtain the picture to be displayed by the game client.
The computing instructions for encoding may instruct that a video frame or an audio frame be encoded. For example, video frames may be encoded into a video stream using H.264 encoding, or audio frames may be encoded into an audio stream using Advanced Audio Coding (AAC). The virtual entity can trigger, via the vertical synchronization signal, multiple GPU units in the shared display card to perform hardware encoding of the video frames and audio frames, so as to improve encoding efficiency and compression effect. Taking the encoding of a video frame into a video stream with H.264 as an example, multiple GPUs in a graphics card may sequentially perform the steps of inter-frame and intra-frame prediction (Estimation), transformation (Transform) and inverse transformation, quantization (Quantization) and inverse quantization, loop filtering (Loop Filter), and entropy coding (Entropy Coding), encoding the video frame into multiple transmittable data blocks. After receiving the data blocks, the cloud game client can decode them to obtain the video frames and audio frames to be displayed.
The vertical synchronization signal is the synchronization signal by which the GPU on the graphics card computes a frame of a picture; it indicates the end of the previous frame and the start of the next frame. The same cloud server may have a plurality of display cards, and each display card may have a different vertical synchronization signal or the same vertical synchronization signal. The signal is valid once before each frame scan, and the field frequency of the display card, i.e. the number of screen refreshes per second, can be determined by the signal; it can also be referred to as the refresh frequency of the display card. For example, the refresh rate of the graphics card may be 60 Hz, 120 Hz, etc., i.e., 60 refreshes, 120 refreshes, etc., per second, and one display frame time is then 1/60 second, 1/120 second, etc. The vertical synchronization signal is generated by a display driving device (e.g., a video card), and is used to synchronize the gate driving signal, the data signal, and the like required for displaying one frame of picture during the display process.
Referring to fig. 1B, virtual entities 1 to 3 share GPU resources on one graphics card. Alternatively, the vertical synchronization signals used by virtual entities 1 to 3 may be the physical vertical synchronization signal used by the graphics card. It is assumed that different applications run on virtual entities 1 to 3, and that these applications need to obtain, through the hardware computing resources of the video card, pictures that may be displayed on the terminal. The process of obtaining a picture that may be displayed on a terminal is typically split into three subtasks: a rendering task that renders the picture, an encoding task that encodes the rendered picture, and a transmission task that transmits the code stream to the terminal. Among them, the rendering task and the encoding task may require the GPU of the graphics card to participate in the computation, resulting in competition for the computational resources of the GPU.
Assume that the applications on virtual entities 1 to 3 all submit rendering instructions (i.e., rendering instructions 1 to 3 in fig. 1B) to the GPU rendering engine of the graphics card at the moment the first vertical synchronization signal (Vsync) arrives. These rendering instructions will cause the GPU to perform rendering tasks A to C. The execution results of rendering tasks A to C are the pictures displayed to the end user within one frame duration after the second vertical synchronization signal (second Vsync). At this time, the GPU rendering engine receives 3 rendering instructions simultaneously, which respectively instruct the GPU to complete rendering task A, rendering task B, and rendering task C. Because the GPU generally employs a time-sharing mechanism, the GPU rendering engine can only perform one rendering task at a time. As shown in FIG. 1B, the GPU begins executing rendering task C only after rendering task A and rendering task B are completed. Assume that virtual entity 3 requires the GPU rendering engine to execute rendering task C by submitting rendering instruction C and needs to obtain the result of rendering task C. In the case of fig. 1B, virtual entity 3 can only wait idly while the GPU executes rendering task A and rendering task B, so that virtual entity 3 experiences the delay shown by the black square in fig. 1B in obtaining the result of rendering task C.
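As a purely illustrative example with hypothetical numbers (not taken from this disclosure): if the graphics card runs at 60 Hz, the frame budget is roughly 16.7 ms; if rendering tasks A, B, and C each occupy the GPU for about 5 ms, then virtual entity 3, whose instruction arrives at the same Vsync as the other two, idles for about 10 ms while tasks A and B run and only receives the result of task C near the end of the frame budget.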
In addition, after obtaining the results of the GPU rendering engine performing rendering tasks A to C, virtual entities 1 to 3 may also need to send encoding instructions to the GPU encoding engine of the display card to request that the GPU encoding engine perform the corresponding encoding tasks A to C. Encoding tasks A to C may likewise be based on the vertical synchronization signal of the same graphics card. Similar to the GPU rendering engine executing the rendering tasks, due to the time-sharing mechanism adopted by the GPU, virtual entities 1 to 3 may experience a delay in obtaining the execution results of their corresponding encoding tasks.
The present disclosure presents a method, device, electronic device, and computer-readable storage medium for managing computing resources. According to the method, the vertical synchronization signals of all the virtual entities are centrally managed on the physical machine, so that the virtual entities can send GPU rendering instructions and coding instructions at different moments, GPU resource competition is reduced, and operation delay of cloud application is reduced.
FIG. 2A is a flow diagram illustrating a method 2000 of managing computing resources according to an embodiment of the disclosure. Fig. 2B is a block diagram illustrating an apparatus 2100 for managing computing resources according to an embodiment of the disclosure. Fig. 2C is a flowchart illustrating an example in which the virtual synchronization signal module in the apparatus 2100 for managing computing resources sets a dedicated synchronization signal of a virtual entity according to an embodiment of the present disclosure.
The method 2000 of managing computing resources according to embodiments of the present disclosure may be applied to any electronic device having computing functionality. It is understood that the electronic device may be any of various kinds of hardware devices, such as a Personal Digital Assistant (PDA), an audio/video device, a mobile phone, an MP3 player, a personal computer, a laptop computer, a cloud server, and so on. For example, the electronic device may be the device 2100 that manages computing resources in fig. 2B. In the following, the present disclosure is illustrated using the apparatus 2100 as an example, and those skilled in the art will appreciate that the present disclosure is not so limited.
The apparatus for managing computing resources 2100 is for managing computing resources for a plurality of virtual entities registered thereon. The apparatus 2100 for managing computing resources may include a synchronization signal management module 2110 and a plurality of virtual synchronization signal modules (e.g., a first virtual synchronization signal module 2131, a second virtual synchronization signal module 2132, etc. in fig. 2B) respectively corresponding to a plurality of virtual entities. Each virtual entity is provided with a virtual synchronization signal module corresponding to the virtual entity. For example, a first virtual entity has a first virtual synchronization signal module 2131 corresponding to the first virtual entity. For example, the second virtual entity has a second virtual synchronization signal module 2132 corresponding thereto. Although only the first virtual synchronization signal module, the second virtual synchronization signal module, and the nth virtual synchronization signal module are illustrated in fig. 2B, it should be understood by those skilled in the art that more virtual entities and/or other software may also be run on the electronic device, and the disclosure is not limited thereto.
The synchronization signal management module 2110 may run directly on top of the hardware resources of the device 2100. For example, when part of the plurality of virtual entities is a virtual machine, the synchronization signal management module 2110 may be a part/one component of a virtual machine management module (e.g., Hypervisor). The synchronization signal management module 2110 may also be part/component of a container Engine (e.g., Docker Engine) when some of the multiple virtual entities are containers. Of course, the synchronization signal management module 2110 may also be other management components, for example, a separate circuit for clock management in the device 2100, and the like, and the disclosure is not limited thereto.
The synchronization signal management module 2110 may monitor the activation of the virtual entities and allocate the most idle hardware resources to each virtual entity. Of course, the synchronization signal management module 2110 may also, or alternatively, allocate hardware resources according to the hardware devices specified by the virtual entity. For example, the synchronization signal management module 2110 may allocate any one of a plurality of display cards on a physical machine to a virtual entity. A cloud game server is usually provided with a plurality of display cards, which can provide more computing resources and ensure that the cloud game runs smoothly without stuttering. Compared with the approach of computing the pictures of video frames and audio frames using only a single display card at the client, a cloud game server using a plurality of display cards operates faster. Alternatively, after the synchronization signal management module 2110 allocates physical hardware resources to a started virtual entity, the synchronization signal management module 2110 may record the plurality of virtual entities sharing the same hardware in a registry of that hardware. Optionally, one or more cloud games may run on these virtual entities.
The synchronization signal management module 2110 may be configured to provide different offsets to different virtual synchronization signal modules. For example, the synchronization signal management module 2110 may perform operation 2010 of fig. 2A, obtaining the number of virtual entities sharing the computing resource as the first number.
It is assumed that the synchronization signal management module 2110 registers the first virtual entity and the second virtual entity as sharing the same hardware, for example, the same video card. Therefore, the first virtual entity and the second virtual entity can share GPU resources on the display card. At this time, the synchronization signal management module 2110 may calculate the number of virtual entities sharing the same graphics card as the first number. Suppose that N virtual entities share the same graphics card (i.e., the first number is N). The synchronization signal management module 2110 may determine an offset of a dedicated synchronization signal of each virtual entity registered on the graphics card with respect to a reference synchronization signal based on the reference synchronization signal of the graphics card and a characteristic (e.g., field frequency) of the graphics card. An example of the reference synchronization signal is a vertical synchronization signal of a graphics card shared by the first virtual entity and the second virtual entity, and the description of the reference synchronization signal is given below by taking the vertical synchronization signal of the graphics card as an example. Similarly, an example of the dedicated synchronization signal is a virtual vertical synchronization signal dedicated to the virtual entity, and the description of the dedicated synchronization signal is given below by taking the virtual vertical synchronization signal as an example.
If the field frequency of the graphics card is fps (i.e., how many times the graphics card refreshes in 1 second), the display frame time is refresh_period = 1000000/fps. The display frame time refresh_period is the number of microseconds in one period of the vertical synchronization signal. Next, the synchronization signal management module 2110 may calculate the vertical synchronization signal interval timeslot allocated to each of the N virtual entities, where timeslot = refresh_period/N. The synchronization signal management module 2110 may make the following records in the registry:
the first virtual entity is assigned a first virtual vertical synchronization signal offset VSync_offset_1, where VSync_offset_1 = timeslot * 0;
the second virtual entity is assigned a second virtual vertical synchronization signal offset VSync_offset_2, where VSync_offset_2 = timeslot * 1;
......
the N-th virtual entity is assigned an N-th virtual vertical synchronization signal offset VSync_offset_N, where VSync_offset_N = timeslot * (N-1).
The foregoing offsets and the manner of assigning them are merely examples; those skilled in the art should understand that the offsets of the virtual vertical synchronization signals may also be assigned in other manners, for example, by randomly generating a different offset for each virtual entity, and the disclosure is not limited thereto.
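As a minimal sketch of the offset assignment described above, the following Python snippet reproduces the formulas refresh_period = 1000000/fps, timeslot = refresh_period/N, and VSync_offset_i = timeslot*(i-1); the function name and the 60 Hz / 3-entity example values are illustrative assumptions, not part of the disclosure.

```python
def assign_vsync_offsets(fps: int, num_entities: int) -> list:
    """Evenly spread num_entities virtual Vsync offsets over one refresh period."""
    refresh_period_us = 1_000_000 // fps             # one Vsync period, in microseconds
    timeslot_us = refresh_period_us // num_entities  # interval allocated to each entity
    # The i-th registered virtual entity gets offset timeslot * (i - 1).
    return [timeslot_us * i for i in range(num_entities)]

# Example: a 60 Hz graphics card shared by 3 virtual entities
# refresh_period ~ 16666 us, timeslot ~ 5555 us, offsets [0, 5555, 11110]
print(assign_vsync_offsets(fps=60, num_entities=3))
```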
After the synchronization signal management module 2110 provides an offset to each of the virtual synchronization signal modules, any one of the virtual synchronization signal modules described above (e.g., the first virtual synchronization signal module 2131, the second virtual synchronization signal module 2132, etc.) may be configured to receive the reference synchronization signal and the offset provided by the synchronization signal management module 2110. In turn, each virtual synchronization signal module may be configured to set the dedicated synchronization signal of its corresponding virtual entity such that the offset of the dedicated synchronization signal relative to the reference synchronization signal corresponds to the received offset. For example, for each of the first number of virtual entities, its virtual synchronization signal module may perform operation 2020, generating a dedicated synchronization signal, wherein the dedicated synchronization signals of the virtual entities are offset from one another.
For example, referring to fig. 2C, an example method 2300 by which a virtual synchronization signal module sets the dedicated synchronization signal of a virtual entity is described, taking the first virtual synchronization signal module 2131 as an example. The first virtual synchronization signal module 2131 may connect to the synchronization signal management module 2110 at start-up to obtain the synchronization signal offset it provides. Specifically, the synchronization signal offset may be the offset obtained by the first virtual entity for adjusting its own virtual vertical synchronization signal. Next, the first virtual synchronization signal module 2131 receives a reference synchronization signal. The reference synchronization signal may be a physical vertical synchronization signal from the cloud server shown in fig. 2C (which is determined by the clock circuit of the graphics card). The reference synchronization signal may also be a virtual vertical synchronization signal, for example one generated by a virtual machine management module or a container engine that manages the plurality of virtual entities. The first virtual synchronization signal module 2131 may shift the reference synchronization signal by a period of time according to the received synchronization signal offset (e.g., VSync_offset_1 described above), thereby generating a dedicated synchronization signal (e.g., the dedicated virtual synchronization signal in fig. 2C) for the first virtual entity. The first virtual synchronization signal module 2131 may send the dedicated synchronization signal to the operating system of the first virtual entity, or generate computing instructions related to the dedicated synchronization signal, instruct the graphics card to perform the corresponding computations, and so on. Finally, if the first virtual synchronization signal module 2131 wishes to shut down, it may also send a shutdown message to the synchronization signal management module 2110, which may indicate that the first virtual entity no longer shares the computing resources. Fig. 2C only shows an example of a virtual synchronization signal module, and those skilled in the art should understand that the virtual synchronization signal module can also perform other operations to make the offset of the dedicated synchronization signal relative to the reference synchronization signal correspond to the received offset, which is not limited by the present disclosure.
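To make the flow of fig. 2C concrete, here is a rough, non-authoritative Python sketch of a virtual synchronization signal module; the class, method, and callback names are hypothetical and only mirror the steps described above (connect and obtain an offset, delay the reference Vsync, deliver the dedicated signal, notify the manager on shutdown).

```python
import threading
import time

class VirtualSyncSignalModule:
    """Illustrative sketch of one per-entity virtual synchronization signal module."""

    def __init__(self, entity_id, manager, deliver_virtual_vsync):
        self.entity_id = entity_id
        self.manager = manager                                # synchronization signal management module
        self.deliver_virtual_vsync = deliver_virtual_vsync    # hands the signal to the entity's OS
        self.offset_us = 0

    def start(self):
        # Connect to the manager at start-up and obtain this entity's offset.
        self.offset_us = self.manager.register(self.entity_id, self)

    def update_offset(self, new_offset_us):
        # Called by the manager when offsets are redistributed.
        self.offset_us = new_offset_us

    def on_reference_vsync(self):
        # Shift the reference Vsync by this entity's offset, producing the dedicated signal.
        def delayed():
            time.sleep(self.offset_us / 1_000_000)
            self.deliver_virtual_vsync()
        threading.Thread(target=delayed, daemon=True).start()

    def shutdown(self):
        # Tell the manager this entity no longer shares the computing resource.
        self.manager.deregister(self.entity_id)
```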
Finally, in operation 2030, the device 2100 may receive processing requests from one or more of the first number of virtual entities, wherein the processing request of each virtual entity is transmitted based on that virtual entity's dedicated synchronization signal. For example, a graphics card on the device 2100 may receive rendering instructions and/or encoding instructions sent by the first virtual entity and the second virtual entity, respectively. These instructions are based on the dedicated synchronization signals (e.g., virtual vertical synchronization signals) of the first virtual entity and the second virtual entity, so that the graphics card can perform the related rendering or encoding computations based on these virtual vertical synchronization signals.
According to the method 2000 and the device 2100 for managing computing resources in the embodiment of the present disclosure, vertical synchronization signals of all virtual entities can be centrally managed on a physical machine, so that a plurality of virtual entities can send instructions to a shared hardware device at different times, thereby reducing GPU resource contention and reducing operation delay of cloud application.
FIG. 3A is a flow diagram illustrating a method 3000 of processing computing resources according to an embodiment of the disclosure. Fig. 3B is a schematic diagram illustrating a plurality of virtual entities contending for a computing resource, according to an embodiment of the disclosure.
The method 3000 of processing computing resources according to an embodiment of the present disclosure may also be similarly applied in the apparatus 2100 described above. Alternatively, the shareable hardware components in device 2100 may be configured to perform any of operations 3010-3020 in method 3000. While any of the video cards in the device 2100 are described below as an example, those skilled in the art will appreciate that the method 3000 may also be applied to other hardware components having shareable functionality.
First, in operation 3010, the device 2100 performs a first virtual entity related calculation using the calculation resource based on a first dedicated synchronization signal of the first virtual entity. The apparatus 2100 then performs a second virtual entity related calculation using the computing resource based on a second dedicated synchronization signal of the second virtual entity in operation 3020. Wherein the first dedicated synchronization signal and the second dedicated synchronization signal are offset from each other.
For example, when the computing resource is a graphics card computing resource, operation 3010 may include performing a computation related to a first virtual entity using the graphics card computing resource based on a first virtual vertical synchronization signal of the first virtual entity, and operation 3020 may include performing a computation related to a second virtual entity using the graphics card computing resource based on a second virtual vertical synchronization signal of the second virtual entity. The first virtual vertical synchronization signal and the second virtual vertical synchronization signal have different offsets relative to the vertical synchronization signal of the display card shared by the first virtual entity and the second virtual entity.
Referring to fig. 3B, assume that an application of the first virtual entity submits a first rendering instruction to a GPU rendering engine of the graphics card based on the first virtual vertical synchronization signal (first virtual Vsync). At this time, the first virtual vertical synchronization signal coincides with the physical vertical synchronization signal of the display card. The first rendering instruction is to cause the GPU rendering engine to perform a first rendering task. Similarly, the application of the second virtual entity submits a second rendering instruction to the GPU rendering engine of the graphics card based on the second virtual vertical synchronization signal (second virtual Vsync).
At this point, the GPU rendering engine may receive rendering instructions at different times, so that these rendering tasks may not need to be queued for execution. For example, in fig. 3B, at the time the second virtual entity submits the second rendering instruction, the first rendering task has been performed, so the GPU rendering engine may immediately execute the second rendering task. In such a case, the second virtual entity does not need to experience a delay to obtain the results of the execution of the second rendering task. The delay experienced by each virtual entity is greatly reduced compared to the situation in fig. 1B.
In addition, after obtaining the results of the GPU rendering engine executing the first rendering task and the second rendering task, the first virtual entity and the second virtual entity may also need to send the first encoding instruction and the second encoding instruction to the GPU encoding engine of the display card to request the GPU encoding engine to execute the corresponding first encoding task and second encoding task. The first encoding task is based on a first virtual vertical synchronization signal and the second encoding task is based on a second virtual vertical synchronization signal. Similar to the GPU rendering engine executing the rendering task, the delay that each virtual entity needs to experience in obtaining the execution result of its corresponding encoding task is also greatly reduced.
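Continuing the purely hypothetical numbers used after fig. 1B: with three entities on a 60 Hz card, the assigned offsets are roughly 0 ms, 5.6 ms, and 11.1 ms; if each rendering task still takes about 5 ms, the GPU has finished (or nearly finished) the previous entity's task by the time the next instruction arrives, so the idle waits (up to about 10 ms in the earlier example) shrink to close to zero.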
Compared with the situation shown in fig. 1B, in which the virtual entities compete for GPU resources, the method 3000 reduces GPU resource contention by arranging access to the GPU resources reasonably, thereby reducing the operation delay of cloud applications.
Fig. 4A is an example flow diagram illustrating a virtual entity registering its shared computing resources according to an embodiment of the present disclosure. Fig. 4B is an example flow diagram illustrating a virtual entity modifying a dedicated synchronization signal according to an embodiment of the disclosure. Fig. 4C is an example flow diagram illustrating a virtual entity handling synchronization signal offset in accordance with an embodiment of the present disclosure.
Referring to fig. 4A, the synchronization signal management module 2110 may be configured to manage computing resources of the device 2100. For example, the synchronization signal management module 2110 may query all the graphics cards on the cloud server, determine which graphics cards still have spare GPU computing resources, which graphics cards are overloaded, and so on.
The synchronization signal management module may then wait for a registration message (or initiation message) for the virtual entity. For example, the synchronization signal management module 2110 may be configured to receive a registration message for a virtual entity (e.g., a first virtual entity) from a virtual synchronization signal module (e.g., a first virtual synchronization signal module 2131). For example, the registration message may be used to register the first virtual entity with a registry of computing resources that it is to share. For example, the registration message may indicate that the first virtual entity needs to use GPU resources for rendering computations and encoding computations. Alternatively, the registration message may indicate that the first virtual entity needs to use the communication interface to transport the codestream out.
The synchronization signal management module 2110 may be further configured to register the virtual entity in the registry of the computing resources it is to share, according to the registration message of the virtual entity. For example, the synchronization signal management module 2110 may allocate a graphics card to be used by the first virtual entity according to the registration message of the first virtual entity, in combination with the usage state of the graphics card resources that the module has learned. The synchronization signal management module 2110 may allocate the currently most idle graphics card to the first virtual entity, or may allocate GPU resources on a designated graphics card to the first virtual entity, which is not limited in this disclosure. Assuming that the synchronization signal management module 2110 allocates the video card-1 of fig. 1A to the first virtual entity, the synchronization signal management module 2110 may add the relevant information of the first virtual entity to the registry recording the virtual entities sharing video card-1. The relevant information may include, for example, a unique identifier of the virtual entity, the registration time of the virtual entity, and the like, which is not limited by the present disclosure.
At this time, since the registry has been updated, the synchronization signal management module 2110 may also update the number of virtual entities in the registry. For example, the synchronization signal management module 2110 may update the first number to a second number. Since only the first virtual entity is added to the registry at this time, the second number is only 1 more than the first number. Because the number of virtual entities sharing the computing resource has changed, the synchronization signal offsets corresponding to those virtual entities should also change. Thus, the synchronization signal management module 2110 may determine, according to the second number, the offset of the dedicated synchronization signal of each of the second number of virtual entities with respect to the reference synchronization signal, that is, the synchronization signal offset corresponding to each virtual entity. The synchronization signal management module 2110 may determine these offsets according to the method of the embodiments described in fig. 2A to 2C, which will not be repeated here.
The synchronization signal management module 2110 may then send the corresponding synchronization signal offset to each virtual entity in the registry. Finally, after the virtual synchronization signal modules of the virtual entities sharing the same computing resource receive their corresponding synchronization signal offsets (e.g., the local Vsync offsets in fig. 4A), the virtual entities may acquire or update their dedicated synchronization signals. For example, referring to fig. 4B, it is assumed that the second virtual entity is already sharing the computing resources of video card-1 before the synchronization signal management module 2110 receives the registration message of the first virtual entity. The synchronization signal management module 2110 registers the first virtual entity in the registry of video card-1 according to the registration message of the first virtual entity. At this time, the dedicated synchronization signal of the second virtual entity (e.g., the virtual vertical synchronization signal of the second virtual entity) will need to be updated. As shown in fig. 4B, the synchronization signal management module 2110 will inform all non-closed virtual entities in the registry of video card-1 of their updated synchronization signal offsets. The second virtual synchronization signal module 2132 of the second virtual entity keeps listening to the synchronization signal management module 2110; after it receives the updated synchronization signal offset, the second virtual entity updates its dedicated synchronization signal, for example by modifying the local synchronization signal offset in fig. 4B to the new value.
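The registration flow of fig. 4A and the offset update of fig. 4B can be sketched as follows; this is only an illustrative reconstruction with hypothetical names, and the notification to each still-registered entity is simplified to a direct method call.

```python
class SyncSignalManager:
    """Illustrative sketch of the synchronization signal management module's registry handling."""

    def __init__(self, fps):
        self.fps = fps
        self.registry = []    # identifiers of virtual entities sharing one graphics card
        self.modules = {}     # entity_id -> its virtual synchronization signal module

    def register(self, entity_id, module):
        # Fig. 4A: add the entity to the registry (first number -> second number)
        # and redistribute offsets among all registered entities.
        self.registry.append(entity_id)
        self.modules[entity_id] = module
        self._redistribute_offsets()
        return module.offset_us

    def deregister(self, entity_id):
        # Fig. 5: remove the entity (first number -> third number) and redistribute again.
        self.registry.remove(entity_id)
        self.modules.pop(entity_id, None)
        self._redistribute_offsets()

    def _redistribute_offsets(self):
        # Fig. 4B: inform every non-closed entity of its updated synchronization signal offset.
        if not self.registry:
            return
        timeslot_us = (1_000_000 // self.fps) // len(self.registry)
        for i, entity_id in enumerate(self.registry):
            self.modules[entity_id].update_offset(timeslot_us * i)
```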
Referring finally to fig. 4C, when each virtual entity (e.g., the first virtual entity in fig. 4B) receives a vertical synchronization signal sent by another system (e.g., its shared graphics card), the virtual entity may remain in a sleep state for its synchronization signal offset time. After the sleep expires, the virtual entity performs operations related to the vertical synchronization signal, such as sending related instructions to the GPU rendering engine or the encoding engine, and so on. Thus, the virtual entity successfully implements the offset of its own virtual vertical synchronization signal with respect to the physical vertical synchronization signal of the graphics card.
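A minimal sketch of the sleep-then-act handling of fig. 4C, under the same hypothetical naming; the callback stands in for whatever interface the virtual entity actually uses to reach the GPU rendering and encoding engines.

```python
import time

def handle_physical_vsync(offset_us, do_vsync_work):
    """On each physical Vsync: sleep for this entity's offset, then do the Vsync-related work
    (e.g. submitting rendering or encoding instructions to the GPU engines)."""
    time.sleep(offset_us / 1_000_000)   # remain idle for the assigned synchronization signal offset
    do_vsync_work()                     # the entity's own virtual Vsync effectively fires here
```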
FIG. 5 is a flow diagram illustrating a virtual entity deregistering its shared computing resources according to an embodiment of the present disclosure.
Referring to fig. 5, the synchronization signal management module 2110 may be configured to receive a deregistration message of a virtual entity (e.g., the first virtual entity) from a virtual synchronization signal module (e.g., the first virtual synchronization signal module 2131). Alternatively, the deregistration message of a virtual entity may be used to deregister the virtual entity from the registry of its shared computing resources. Assume that the first virtual entity and the second virtual entity share GPU resources on video card-1, and the synchronization signal management module 2110 receives the deregistration message sent by the first virtual synchronization signal module 2131. At this time, the synchronization signal management module 2110 may determine that the first virtual entity will no longer share or use GPU resources on video card-1. The synchronization signal management module 2110 will then deregister the virtual entity from the registry of its shared computing resources according to the deregistration message of the virtual entity. For example, the synchronization signal management module 2110 may delete the relevant information of the first virtual entity from the registry of video card-1.
Similar to the flow described in fig. 4A, the synchronization signal management module 2110 may also update the number of virtual entities in the registry of video card-1. For example, the synchronization signal management module 2110 may update the first number to a third number based on the deregistration message of the virtual entity. Since only the first virtual entity is deleted from the registry at this time, the third number is only 1 less than the first number. Because the number of virtual entities sharing the computing resource has changed, the synchronization signal offsets corresponding to those virtual entities should also change. Thus, the synchronization signal management module 2110 may determine, according to the third number, the dedicated synchronization signal of each of the third number of virtual entities, that is, the synchronization signal offset corresponding to each virtual entity. The synchronization signal management module 2110 may determine the offset of the dedicated synchronization signal of each of the third number of virtual entities with respect to the reference synchronization signal according to the method of the embodiments described in fig. 2A to 2C, which will not be repeated here.
The synchronization signal management module 2110 may then send a corresponding synchronization signal offset to each virtual entity in the registry. Finally, after the virtual synchronization signal module of each virtual entity sharing the same computing resource receives the corresponding synchronization signal offset, each virtual entity may acquire or update its dedicated synchronization signal. The method for each virtual entity to acquire or update its dedicated synchronization signal is similar to the method of the embodiment described in fig. 4A to 4C, and therefore the details of the present disclosure will not be repeated.
Fig. 6 is a block diagram illustrating an electronic device 600 according to an embodiment of the disclosure.
Referring to fig. 6, an electronic device 600 may include a processor 601 and a memory 602. The processor 601 and the memory 602 may be connected by a bus 603.
The processor 601 may perform various actions and processes according to programs stored in the memory 602. In particular, the processor 601 may be an integrated circuit chip having signal processing capabilities. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like, and may be of the X86 or ARM architecture.
The memory 602 has stored thereon computer instructions that, when executed by the processor 601, implement the method 2000. The memory 602 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. Volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), Synchronous Link Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DR RAM). It should be noted that the memory of the methods described in this disclosure is intended to comprise, without being limited to, these and any other suitable types of memory.
Embodiments of the present disclosure provide a computer-readable storage medium having stored thereon computer instructions that, when executed by a processor, implement method 2000.
Experimental data show that, in the computing resource usage example of fig. 1B, when three virtual entities invoke a GPU engine (including a rendering engine, a decoding engine, and an encoding engine), the three virtual entities may submit encoding instructions at substantially the same time, generating severe GPU resource contention and resulting in a maximum per-process delay of up to 12 ms. In contrast, in a computing resource usage example adopting the method 2000 for managing computing resources according to the embodiments of the present disclosure, because the three virtual entities stagger the times at which they submit encoding instructions, GPU resource contention is significantly reduced, and the maximum delay of the process is reduced to 3 ms.
Therefore, in the embodiment of the disclosure, the synchronization signals of all the virtual entities are centrally managed on the physical machine, so that the plurality of virtual entities can send instructions to the shared hardware device at different times, thereby reducing GPU resource contention and reducing the operation delay of the cloud application.
It is to be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In general, the various example embodiments of this disclosure may be implemented in hardware or special purpose circuits, software, firmware, logic or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of embodiments of the disclosure have been illustrated or described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The exemplary embodiments of the invention, as set forth in the foregoing detailed description, are intended to be illustrative, not limiting. It will be appreciated by those skilled in the art that various modifications and combinations of the embodiments or features thereof may be made without departing from the principles and spirit of the invention, and that such modifications are intended to be within the scope of the invention.

Claims (13)

1. A method of managing graphics card computing resources, comprising:
acquiring, as a first number, the number of virtual entities sharing graphics card computing resources on a cloud server;
for each of the first number of virtual entities, generating a virtual vertical synchronization signal for the virtual entity, wherein the virtual vertical synchronization signals of different virtual entities have different offsets relative to a vertical synchronization signal of a graphics card of the cloud server;
receiving processing requests from one or more of the first number of virtual entities, wherein the processing request for each virtual entity is sent based on the virtual vertical synchronization signal of the virtual entity;
and, based on the virtual vertical synchronization signal of each virtual entity, performing, with the graphics card computing resources, rendering computation related to the processing request of that virtual entity.
2. The method of claim 1, further comprising:
updating the first number to a second number based on registration information of a virtual entity; and
determining a virtual vertical synchronization signal for each of the second number of virtual entities based on the second number;
wherein the registration information of the virtual entity is used to register the virtual entity in a registry of the computing resources to be shared by the virtual entity.
3. The method of claim 1, further comprising:
updating the first number to a third number based on deregistration information of a virtual entity; and
determining a virtual vertical synchronization signal for each of the third number of virtual entities according to the third number;
wherein the deregistration information of the virtual entity is used to deregister the virtual entity from a registry of the computing resources shared by the virtual entity.
4. The method of claim 1, further comprising:
performing, based on the virtual vertical synchronization signal of each virtual entity, encoding calculation related to the processing request of that virtual entity.
5. The method of claim 1, wherein,
the display card is one of a plurality of display cards on the physical machine; and is
One or more cloud games run on the virtual entity.
6. An apparatus for managing graphics card computing resources, comprising a synchronization signal management module and a plurality of virtual synchronization signal modules respectively corresponding to a plurality of virtual entities, wherein:
the synchronization signal management module is configured to provide different synchronization signal offsets to different virtual synchronization signal modules;
each of the virtual synchronization signal modules is configured to:
receive a vertical synchronization signal of the graphics card and a synchronization signal offset provided by the synchronization signal management module; and
set a virtual vertical synchronization signal of the corresponding virtual entity on a cloud server so that an offset of the virtual vertical synchronization signal with respect to the vertical synchronization signal of the graphics card corresponds to the synchronization signal offset, wherein the graphics card performs rendering calculation related to a processing request of the virtual entity based on the virtual vertical synchronization signal.
7. The apparatus of claim 6, wherein the synchronization signal management module is further configured to:
receive a registration message corresponding to a virtual entity from the corresponding virtual synchronization signal module; and
register, according to the registration message of the virtual entity, the virtual entity in a registry of the computing resources to be shared.
8. The apparatus of claim 6, wherein the synchronization signal management module is further configured to:
receive a deregistration message corresponding to a virtual entity from the corresponding virtual synchronization signal module; and
deregister, according to the deregistration message of the virtual entity, the virtual entity from the registry of the shared computing resources.
9. The apparatus of claim 7 or 8, wherein the synchronization signal management module is further configured to:
update the number of virtual entities in the registry;
calculate, according to the updated number of virtual entities, the offset of the virtual vertical synchronization signal of each virtual entity registered in the registry relative to the vertical synchronization signal of the graphics card; and
transmit the corresponding synchronization signal offset to each virtual entity in the registry.
10. A method of processing graphics card computing resources, the method comprising:
based on a first virtual vertical synchronization signal of a first virtual entity on a cloud server, performing first virtual entity related computation using the graphics card computing resources;
based on a second virtual vertical synchronization signal of a second virtual entity on the cloud server, performing second virtual entity related computation using the graphics card computing resources;
wherein the first virtual vertical synchronization signal and the second virtual vertical synchronization signal have different offsets relative to a vertical synchronization signal of a graphics card, and the first virtual entity related calculation and the second virtual entity related calculation comprise rendering calculation.
11. The method of claim 10, wherein,
the first virtual entity related computation and the second virtual entity related computation further comprise an encoding computation.
12. An electronic device, comprising:
a processor;
memory storing computer instructions which, when executed by the processor, implement the method of any one of claims 1-5 and 10-11.
13. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 1-5 and 10-11.
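For illustration only, and not as part of the claims, the registry behaviour recited in claims 2, 3 and 7 to 9 (registering or deregistering a virtual entity and recomputing every entity's synchronization signal offset from the updated count) might be sketched as follows. The class name, method names, and the even-spacing rule are assumptions made for this sketch rather than the patented implementation.

class SyncSignalManager:
    # Hypothetical sketch of a synchronization signal management module:
    # it keeps a registry of virtual entities sharing one graphics card and
    # recomputes each entity's virtual vsync offset whenever the registry changes.

    def __init__(self, frame_period_s: float = 1.0 / 60.0):
        self.frame_period_s = frame_period_s
        self.registry: list[str] = []        # registered virtual entity ids
        self.offsets: dict[str, float] = {}  # entity id -> offset in seconds

    def register(self, entity_id: str) -> None:
        # Handle a registration message (cf. claims 2 and 7).
        if entity_id not in self.registry:
            self.registry.append(entity_id)
            self._recompute_offsets()

    def deregister(self, entity_id: str) -> None:
        # Handle a deregistration message (cf. claims 3 and 8).
        if entity_id in self.registry:
            self.registry.remove(entity_id)
            self._recompute_offsets()

    def _recompute_offsets(self) -> None:
        # Recompute every offset from the updated entity count (cf. claim 9);
        # each offset would then be sent to the corresponding virtual
        # synchronization signal module.
        n = len(self.registry)
        if n == 0:
            self.offsets = {}
            return
        self.offsets = {
            entity_id: i * self.frame_period_s / n
            for i, entity_id in enumerate(self.registry)
        }

if __name__ == "__main__":
    mgr = SyncSignalManager()
    for vm in ("vm-0", "vm-1", "vm-2"):
        mgr.register(vm)
    print({vm: round(off * 1000, 2) for vm, off in mgr.offsets.items()})  # 3-way split
    mgr.deregister("vm-1")
    print({vm: round(off * 1000, 2) for vm, off in mgr.offsets.items()})  # 2-way split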
CN201911347261.XA 2019-12-24 2019-12-24 Method and device for managing computing resources and electronic device Active CN111111163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911347261.XA CN111111163B (en) 2019-12-24 2019-12-24 Method and device for managing computing resources and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911347261.XA CN111111163B (en) 2019-12-24 2019-12-24 Method and device for managing computing resources and electronic device

Publications (2)

Publication Number Publication Date
CN111111163A CN111111163A (en) 2020-05-08
CN111111163B true CN111111163B (en) 2022-08-30

Family

ID=70500309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911347261.XA Active CN111111163B (en) 2019-12-24 2019-12-24 Method and device for managing computing resources and electronic device

Country Status (1)

Country Link
CN (1) CN111111163B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111744189B (en) * 2020-07-27 2022-02-01 腾讯科技(深圳)有限公司 Picture updating method and device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103309748A (en) * 2013-06-19 2013-09-18 上海交通大学 Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game
CN103338204A (en) * 2013-07-05 2013-10-02 曾德钧 Audio synchronization output method and system
CN104598292A (en) * 2014-12-15 2015-05-06 中山大学 Adaptive streaming adaptation and resource optimization method applied to cloud-game system
CN104971499A (en) * 2014-04-01 2015-10-14 索尼电脑娱乐公司 Game providing server
CN107950065A (en) * 2015-08-25 2018-04-20 IDAC Holdings, Inc. Framing, scheduling and synchronization in wireless systems

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013152453A (en) * 2011-12-27 2013-08-08 Canon Inc Image processing apparatus, image processing system, image processing method, and image processing program
US8583920B1 (en) * 2012-04-25 2013-11-12 Citrix Systems, Inc. Secure administration of virtual machines
US9806967B2 (en) * 2014-05-30 2017-10-31 Sony Corporation Communication device and data processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103309748A (en) * 2013-06-19 2013-09-18 上海交通大学 Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game
CN103338204A (en) * 2013-07-05 2013-10-02 曾德钧 Audio synchronization output method and system
CN104971499A (en) * 2014-04-01 2015-10-14 索尼电脑娱乐公司 Game providing server
CN104598292A (en) * 2014-12-15 2015-05-06 中山大学 Adaptive streaming adaptation and resource optimization method applied to cloud-game system
CN107950065A (en) * 2015-08-25 2018-04-20 IDAC Holdings, Inc. Framing, scheduling and synchronization in wireless systems

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Architecture design and technical implementation of cloud gaming; Go中国; https://www.sohu.com/a/220251670_657921; 20180201; full text *
What is vertical sync in games actually for? When should it be turned on, and when should it not?; 嘿极客; https://www.sohu.com/a/242380437_100119856; 20180720; full text *

Also Published As

Publication number Publication date
CN111111163A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
US20230032554A1 (en) Data processing method and apparatus, and storage medium
CN111882626B (en) Image processing method, device, server and medium
CN107018370B (en) Display method and system for video wall
CA2814420C (en) Load balancing between general purpose processors and graphics processors
US9940898B2 (en) Variable refresh rate video capture and playback
US10555010B2 (en) Network-enabled graphics processing module
CN111654720B (en) Video encoding method, apparatus, device and readable storage medium
CN113457160B (en) Data processing method, device, electronic equipment and computer readable storage medium
CN113542757B (en) Image transmission method and device for cloud application, server and storage medium
WO2022257699A1 (en) Image picture display method and apparatus, device, storage medium and program product
CN112169322B (en) Remote rendering method and device, electronic equipment and readable storage medium
JP7475610B2 (en) Cloud native 3D scene gaming method and system
US11288765B2 (en) System and method for efficient multi-GPU execution of kernels by region based dependencies
CN108762934B (en) Remote graphic transmission system and method and cloud server
WO2024037110A1 (en) Data processing method and apparatus, device, and medium
CN112843676B (en) Data processing method, device, terminal, server and storage medium
US20230245420A1 (en) Image processing method and apparatus, computer device, and storage medium
CN111111163B (en) Method and device for managing computing resources and electronic device
CN115018693A (en) Docker image acceleration method and system based on software-defined graphics processor
CN115920372A (en) Data processing method and device, computer readable storage medium and terminal
US9384276B1 (en) Reducing latency for remotely executed applications
CN116726488A (en) Data processing method, device, equipment and readable storage medium
WO2023020270A1 (en) Decoding processing method and apparatus, computer device, and storage medium
Zhang Optimizing Video Processing for Next-Generation Mobile Platforms
WO2024010588A1 (en) Cloud-based gaming system for supporting legacy gaming applications with high frame rate streams

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant