CN113542757B - Image transmission method and device for cloud application, server and storage medium - Google Patents


Info

Publication number
CN113542757B
CN113542757B (application CN202110819854.2A)
Authority
CN
China
Prior art keywords
engine
rendering
coding
cloud application
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110819854.2A
Other languages
Chinese (zh)
Other versions
CN113542757A
Inventor
刘玉雪 (Liu Yuxue)
陈安庆 (Chen Anqing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110819854.2A priority Critical patent/CN113542757B/en
Publication of CN113542757A publication Critical patent/CN113542757A/en
Application granted granted Critical
Publication of CN113542757B publication Critical patent/CN113542757B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of this application disclose an image transmission method and device for a cloud application, a server, and a storage medium, belonging to the technical field of cloud applications. The method comprises the following steps: in response to an image rendering instruction of the cloud application, invoking a virtual hardware abstraction layer through a runtime library layer to perform image rendering and obtain original image data; invoking the virtual hardware abstraction layer through the runtime library layer to encode the original image data into a video stream; and pushing the video stream to the terminal through the runtime library layer. Because rendering, encoding, and stream pushing are all integrated in the runtime library layer, cloud application deployment is more highly integrated, which facilitates deployment; and because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path traversed by the cloud application picture is shorter, reducing the display delay of the cloud application picture.

Description

Image transmission method and device for cloud application, server and storage medium
Technical Field
The embodiments of this application relate to the field of cloud application technology, and in particular to an image transmission method and device for a cloud application, a server, and a storage medium.
Background
Cloud applications (Cloud Apps) are an online application technology based on cloud computing. In a cloud application scenario, the application program does not run on the user-side terminal but on a server; the server renders the application's sound and pictures into an audio-video stream, which is transmitted over the network to the user-side terminal for decoding and playback.
In the related art, a server captures the application picture through a Virtual Display and then encodes the captured picture to obtain a video stream. However, this approach requires a long capture path to obtain the application picture, resulting in large display delay for the cloud application picture.
Disclosure of Invention
The embodiment of the application provides an image transmission method, device, server and storage medium of cloud application. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides an image transmission method for a cloud application, where the method includes:
responding to an image rendering instruction of the cloud application, and calling a virtual hardware abstraction layer through a runtime library layer to conduct image rendering to obtain original image data, wherein the original image data is image data of a cloud application picture;
Invoking the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream;
and pushing the video stream to a terminal through the runtime library layer.
On the other hand, an embodiment of the present application provides an image transmission device for a cloud application, where the device includes:
the rendering unit is used for responding to an image rendering instruction of the cloud application, calling a virtual hardware abstraction layer through a runtime library layer to conduct image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture;
the coding unit is used for calling the virtual hardware abstraction layer through the runtime library layer to carry out image coding on the original image data so as to obtain a video stream;
and a push unit, configured to push the video stream to the terminal through the runtime library layer.
In another aspect, embodiments of the present application provide a server including a processor and a memory; the memory stores at least one instruction for execution by the processor to implement an image transmission method for a cloud application as described in the above aspects.
In another aspect, embodiments of the present application provide a computer readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement an image transmission method of a cloud application as described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the image transmission method of the cloud application provided in various alternative implementations of the above aspects.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
in the embodiments of this application, rendering, encoding, and stream pushing of the cloud application picture are all integrated in the runtime library layer. When an image rendering instruction of the cloud application is received, the runtime library layer calls the virtual hardware abstraction layer to perform image rendering, then calls the virtual hardware abstraction layer again to encode the rendered original image data, and finally pushes the encoded video stream to the terminal, realizing image transmission for the cloud application. Because rendering, encoding, and stream pushing are all integrated in the runtime library layer, cloud application deployment in the embodiments of this application is more highly integrated, which facilitates deployment; and because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path traversed by the cloud application picture is shorter, reducing the display delay of the cloud application picture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a flowchart of an image transmission method for a cloud application provided by an exemplary embodiment of the present application;
FIG. 3 is a system architecture diagram of a containerized operating system shown in an exemplary embodiment of the present application;
fig. 4 shows a flowchart of an image transmission method of a cloud application according to another exemplary embodiment of the present application;
FIG. 5 is an interactive timing diagram of a cloud application screen transmission process shown in an exemplary embodiment of the present application;
fig. 6 shows a flowchart of an image transmission method of a cloud application according to another exemplary embodiment of the present application;
fig. 7 is a block diagram of an image transmission device of a cloud application according to an embodiment of the present application;
Fig. 8 shows a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a terminal 110 and a server 120.
A cloud application client is installed and runs in the terminal 110. Through the cloud application client, the terminal 110 can use a cloud application by means of cloud application technology without installing the application itself, saving storage space on the terminal 110 (the cloud application client is far smaller than the cloud application). The cloud application may be a game application, an instant messaging application, a shopping application, a navigation application, and the like; this embodiment does not limit the type of the cloud application. In addition, the cloud application client may support running a single cloud application or multiple cloud applications, which is likewise not limited in this embodiment.
The terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, etc. In some embodiments, the terminal 110 has a display screen, so that an application screen of the cloud application is displayed through the display screen; the terminal 110 has an audio component (external or built-in) so as to play an application sound of the cloud application through the audio component; the terminal 110 has an input component (e.g., a built-in input component such as a touch screen or an external input component such as a keyboard and a mouse), so that the cloud application is controlled through the input component.
The server 120 is a cloud device that runs the cloud application; it may be a single server, a server cluster composed of multiple servers, or a cloud computing center. In some embodiments, the server 120 is provided with hardware such as a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), an encoding card, memory, and hard disks.
Optionally, the server 120 supports a single cloud application or multiple cloud applications. In this embodiment, the cloud application is installed and runs in a containerized operating system, which may be an Android system, an iOS system, or another terminal operating system; this embodiment does not limit it. During the running of the cloud application, rendering, encoding, and pushing of cloud application pictures are all executed by the containerized operating system.
In the running process of the cloud application, the server 120 performs rendering coding on the cloud application sound and the picture, pushes the cloud application sound and the picture to the terminal 110 in the form of an audio-video stream, and decodes and plays the audio-video stream by a cloud application client in the terminal 110 (the function is similar to that of a player). When a control operation for the cloud application is received, for example, a touch operation for a screen element in a cloud application screen is received through a touch display screen of the terminal 110, the terminal 110 sends an instruction stream to the server 120 through a cloud application client. The server 120 parses the received instruction stream, controls the cloud application based on the parsing result, and continues to push the updated cloud application picture and sound to the terminal 110 in the form of an audio/video stream.
The terminal 110 and the server 120 may be directly or indirectly connected through wired or wireless communication, which is not limited herein. In addition, only one terminal is shown in fig. 1, but in different embodiments, there are multiple terminals accessing the server 120 at the same time, and the server 120 provides cloud application services for the multiple terminals at the same time, which is not limited in the embodiments of the present application.
Referring to fig. 2, a flowchart of an image transmission method of a cloud application according to an exemplary embodiment of the present application is illustrated, where the method is applied to a server in the implementation environment shown in fig. 1, and the method includes:
in step 201, in response to an image rendering instruction of the cloud application, invoking a virtual hardware abstraction layer through a runtime library layer to perform image rendering, so as to obtain original image data, wherein the original image data is image data of a cloud application picture.
In one possible implementation, a containerized operating system is deployed in the server, and the cloud application, a runtime library layer (Runtime) corresponding to the cloud application, and a virtual hardware abstraction layer (Virtual Hardware Abstraction Layer, VHAL) are deployed in the containerized operating system. The runtime library layer provides services related to the running mechanism of the system, such as the message mechanism; the virtual hardware abstraction layer is an interface layer between the operating-system kernel and the hardware circuitry, used to abstract the hardware and provide a virtual hardware platform for the operating system, giving the operating system hardware independence.
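The hardware independence provided by the VHAL can be illustrated with a minimal, hypothetical C++ sketch. The names (`VirtualHal`, `CpuVhal`, `Frame`) are illustrative, not the patent's actual code; the point is only that the runtime library layer programs against an interface while concrete backends (CPU, GPU) stay swappable underneath:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A frame of raw (rendered, unencoded) image data.
struct Frame { int width; int height; std::vector<uint8_t> pixels; };

// Mock VHAL interface: the layer the runtime library calls into.
class VirtualHal {
public:
    virtual ~VirtualHal() = default;
    virtual Frame render(int w, int h) = 0;                  // image rendering
    virtual std::vector<uint8_t> encode(const Frame& f) = 0; // image encoding
};

// One concrete VHAL might back these calls with the GPU, another with the
// CPU; the layers above remain unaware of the choice. This mock "renders"
// an RGBA frame of zeros and "encodes" by tagging a tiny payload.
class CpuVhal : public VirtualHal {
public:
    Frame render(int w, int h) override {
        return Frame{w, h,
                     std::vector<uint8_t>(static_cast<size_t>(w) * h * 4, 0)};
    }
    std::vector<uint8_t> encode(const Frame& f) override {
        std::vector<uint8_t> out{'E', 'N', 'C'};   // stand-in for a codec
        out.push_back(static_cast<uint8_t>(f.width & 0xFF));
        return out;
    }
};
```

A caller holding only a `VirtualHal&` works identically whichever backend is plugged in, which is the hardware-independence property the paragraph describes.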
Optionally, the runtime library layer in the embodiment of the present application integrates rendering, encoding, and push functions. When an image rendering instruction of the cloud application is received, namely when the cloud application picture rendering requirement exists, the server calls the virtual hardware abstraction layer through the runtime library layer to conduct image rendering, and original image data of the cloud application picture is obtained. The virtual hardware abstraction layer is based on the call of the runtime library layer, and further performs image rendering in a software or hardware mode. For example, the virtual hardware abstraction layer calls the CPU to perform software rendering, or calls the GPU to perform hardware rendering.
Optionally, the image rendering instruction is automatically triggered in the running process of the cloud application, or is triggered by an instruction stream sent by the terminal.
Since the hardware in the server is different from the hardware in the terminal, it is necessary to customize the virtual hardware abstraction layer for the operating system in the server (containerized) for the hardware in the server.
Step 202, invoking a virtual hardware abstraction layer by a runtime library layer to perform image coding on the original image data, thereby obtaining a video stream.
In the use process of the cloud application, the terminal functions like a player, so that the server needs to further encode the original image data obtained by rendering to obtain a video stream composed of encoded image data. In one possible implementation, the runtime library layer continues to call the virtual hardware abstraction layer to encode the rendered raw image data to obtain a video stream. The virtual hardware abstraction layer is used for carrying out image coding in a software or hardware mode based on the call of the runtime library layer. For example, the virtual hardware abstraction layer invokes the CPU to perform software encoding or invokes the GPU to perform hardware encoding.
Step 203, pushing the video stream to the terminal through the runtime library layer.
Further, the runtime library layer pushes the video stream to the terminal by means of a push function, so that the cloud application client side at the terminal side can perform image decoding on the coded image data in the video stream, and a cloud application picture is restored.
In some embodiments, cloud application audio in the cloud application running process is encoded into an audio stream, the audio stream and the video stream are synthesized to obtain an audio-video stream, and the audio-video stream is pushed to the terminal by the runtime library layer, so that the terminal plays cloud application sound while displaying a cloud application picture.
Alternatively, the server may push the video stream to the terminal directly through the network, or may push the video stream to the terminal through an additional push server, which is not limited in this embodiment.
Obviously, with the scheme provided by the embodiments of this application, rendering and encoding of the cloud application picture are completed between the runtime library layer and the virtual hardware abstraction layer, shortening the processing path of the cloud application picture and reducing its display delay on the terminal side; for latency-sensitive applications such as games, this improves the user experience. Moreover, rendering, encoding, and stream pushing of the cloud application picture are all integrated in the runtime library layer, which raises the integration level of deployment and facilitates deploying cloud applications.
In summary, in the embodiments of this application, rendering, encoding, and stream pushing of the cloud application picture are integrated in the runtime library layer, so that when an image rendering instruction of the cloud application is received, the runtime library layer calls the virtual hardware abstraction layer to perform image rendering, further calls the virtual hardware abstraction layer to encode the rendered original image data, and finally pushes the video stream to the terminal, realizing image transmission for the cloud application. Because rendering, encoding, and stream pushing are all integrated in the runtime library layer, cloud applications in the embodiments of this application are deployed with a higher degree of integration, which facilitates deployment; and because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path traversed by the cloud application picture is shorter, reducing its display delay.
In one possible implementation, as shown in fig. 3, the containerized operating system includes a cloud application 31, a runtime library layer 32, and a virtual hardware abstraction layer 33, where the virtual hardware abstraction layer 33 serves as an interface layer for hardware, and provides related services by calling hardware (such as GPU, coding card, etc.) of the server through a plug-in (such as rendering plug-in, audio plug-in, coding plug-in, etc.).
The runtime library layer 32 includes a rendering engine 321 (Render Engine) for image rendering, an encoding engine 322 (Encode Engine) for image encoding, and a push engine 323 (Flow Engine) for stream pushing. During the running of the cloud application 31, after receiving an image rendering instruction from the cloud application 31, the rendering engine 321 calls the virtual hardware abstraction layer 33 to render the picture; the encoding engine 322 then encodes the rendered original image data; and finally the push engine 323 pushes the resulting video stream.
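The render-encode-push chain of the three engines can be sketched as a hypothetical C++ mock. The VHAL calls are stubbed with function objects, and the network is stood in for by an in-memory list; none of this is the patent's actual implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Mock of the rendering engine: delegates to a VHAL render callback.
class RenderEngine {
    std::function<Bytes()> vhal_render_;
public:
    explicit RenderEngine(std::function<Bytes()> r) : vhal_render_(std::move(r)) {}
    Bytes render() { return vhal_render_(); }  // raw image data of one frame
};

// Mock of the encoding engine: delegates to a VHAL encode callback.
class EncodeEngine {
    std::function<Bytes(const Bytes&)> vhal_encode_;
public:
    explicit EncodeEngine(std::function<Bytes(const Bytes&)> e)
        : vhal_encode_(std::move(e)) {}
    Bytes encode(const Bytes& raw) { return vhal_encode_(raw); }
};

// Mock of the push engine: records pushed frames instead of networking.
class FlowEngine {
    std::vector<Bytes> sent_;
public:
    void push(const Bytes& encoded) { sent_.push_back(encoded); }
    size_t frames_pushed() const { return sent_.size(); }
};

// One frame through the whole path described in the text:
// render -> encode -> push.
inline void transmit_frame(RenderEngine& r, EncodeEngine& e, FlowEngine& f) {
    f.push(e.encode(r.render()));
}
```

Because all three engines live in the same runtime library layer, the frame never leaves that layer between rendering and pushing, which is the short-path property the patent emphasizes.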
The image transmission process of the cloud application is described below through exemplary embodiments, with reference to the rendering engine, the encoding engine, and the push engine.
Referring to fig. 4, a flowchart of an image transmission method of a cloud application according to another exemplary embodiment of the present application is shown, where the embodiment of the present application uses a server in the implementation environment shown in fig. 1 as an example, and the method includes:
in step 401, an encoding service is registered by an encoding engine, and a push service is registered by a push engine.
Because the rendering engine, the encoding engine, and the push engine run as different processes, and in order to reduce the overhead of inter-process communication and thereby further shorten the image transmission delay, in one possible implementation the encoding engine and the push engine in the runtime library layer register themselves as services, after which inter-process communication can be performed through the Binder inter-process communication mechanism.
Optionally, upon receiving a cloud application start instruction, the encoding engine registers an encoding service (Encode Service) and the push engine registers a push service (Flow Service).
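The registration step can be sketched with a hypothetical service-manager mock. Android's real Binder/ServiceManager API differs from this; the `Service`, `ServiceManager`, and `on_cloud_app_start` names are illustrative only:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Stand-in for a registered service endpoint.
struct Service { std::string name; };

// Mock service manager: a name -> service registry, so peers can look a
// service up by name instead of establishing new IPC channels.
class ServiceManager {
    std::map<std::string, std::shared_ptr<Service>> services_;
public:
    void add_service(const std::string& name, std::shared_ptr<Service> s) {
        services_[name] = std::move(s);
    }
    std::shared_ptr<Service> get_service(const std::string& name) const {
        auto it = services_.find(name);
        return it == services_.end() ? nullptr : it->second;
    }
};

// On cloud-application start, the encoding engine registers its encoding
// service and the push engine registers its push service (as in the text).
inline void on_cloud_app_start(ServiceManager& sm) {
    sm.add_service("encode_service",
                   std::make_shared<Service>(Service{"EncodeService"}));
    sm.add_service("flow_service",
                   std::make_shared<Service>(Service{"FlowService"}));
}
```

After registration, a later `get_service` lookup (steps 403 and 405 below in the original text) returns the existing endpoint rather than creating a new one, which is how the design saves IPC setup cost.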
Step 402, responding to an image rendering instruction of the cloud application, and calling a virtual hardware abstraction layer through a rendering engine to conduct image rendering to obtain original image data.
When the runtime library layer receives an image rendering instruction of the cloud application, the rendering engine calls the virtual hardware abstraction layer to render the image. In one possible implementation, the virtual hardware abstraction layer includes a gralloc module (gralloc_vhal) and an hw module (hw_vhal) for implementing image rendering: the gralloc module is responsible for allocating layers (surfaces), including frame-buffer allocation, and the hw module composites the layers and delivers them to the display device. Specifically, the image rendering process may include the following steps.
1. The buffer is applied to the gralloc module by the rendering engine.
Optionally, when receiving the image rendering instruction, the rendering engine applies to the gralloc module in the virtual hardware abstraction layer for a buffer (used to store layers). Correspondingly, after receiving the application, the gralloc module invokes the user-mode driver of the hardware to apply for memory or video memory for the cloud application.
2. And rendering the layers in the buffer area through a rendering engine.
After the application to the buffer, the rendering engine performs layer rendering in the buffer. In one possible implementation manner, the rendering engine supports multiple rendering modes, and the rendering engine can select a corresponding rendering mode to perform layer rendering according to the requirements of the cloud application. Illustratively, as shown in FIG. 3, the rendering engine 321 supports OpenGL ES rendering and Vulkan rendering. The embodiments of the present application are not limited to a specific rendering mode supported by the rendering engine.
In the layer rendering process, the virtual hardware abstraction layer calls hardware through the rendering plug-in, and layer rendering is performed in a software or hardware mode. Wherein the hardware rendering speed (e.g., by GPU rendering) is faster than the software rendering speed (e.g., by CPU rendering).
Optionally, when the remaining amount of the hardware resource is lower than the threshold value, software rendering is adopted, and when the remaining amount of the hardware resource is higher than the threshold value, hardware rendering is adopted. The embodiments of the present application are not limited to the specific rendering strategy employed.
3. The rendering engine requests the hw module to perform layer composition, and the original image data is obtained after the hw module completes the composition.
Because the cloud application picture is composed of a plurality of layers, after each image of the cloud application picture is subjected to layer rendering, each layer needs to be further synthesized to obtain original image data. In one possible implementation manner, after the rendering engine finishes layer rendering, the hw module of the virtual hardware abstraction layer is requested to perform layer composition, and correspondingly, the hw module synthesizes a plurality of layers of the same cloud application picture to obtain original image data.
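The three sub-steps above (buffer allocation via gralloc, layer rendering, layer composition via the hw module) can be sketched as a hypothetical mock. Real gralloc/hwcomposer interfaces are far richer; here a buffer is just a byte vector and "opacity" is modeled as any non-zero pixel:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using Buffer = std::vector<uint8_t>;

// 1. gralloc module: allocate a buffer (memory / video memory) for one layer.
inline Buffer gralloc_alloc(size_t w, size_t h) { return Buffer(w * h, 0); }

// 2. Rendering engine: draw a layer into its buffer (a flat fill here).
inline void render_layer(Buffer& buf, uint8_t value) {
    for (uint8_t& px : buf) px = value;
}

// 3. hw module: composite the layers of one picture, back to front, into
//    the final raw image data. Assumes at least one layer; a non-zero pixel
//    in an upper layer is treated as opaque and covers what is below.
inline Buffer hw_compose(const std::vector<Buffer>& layers) {
    Buffer out = layers.front();
    for (size_t i = 1; i < layers.size(); ++i)
        for (size_t p = 0; p < out.size(); ++p)
            if (layers[i][p] != 0) out[p] = layers[i][p];
    return out;
}
```

The output of `hw_compose` corresponds to the "original image data" that step 402 hands on to the encoding engine.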
In response to completing the image rendering, the encoding service is acquired through a binder mechanism, step 403.
Since the encoding service registration is performed in advance, after the image rendering is completed, the encoding service can be further acquired through a binder mechanism so that the encoding engine encodes the original image data.
In one possible implementation, after the virtual hardware abstraction layer finishes image rendering, the coding service is obtained through a binder mechanism. For example, in combination with the above example, after the hw module completes layer synthesis, the hw module obtains the coding service through a binder mechanism.
And step 404, based on the obtained coding service, invoking a virtual hardware abstraction layer to perform image coding on the original image data through a coding engine to obtain a video stream.
After the coding service is obtained, the server further calls a virtual hardware abstraction layer through a coding engine to code images. In one possible implementation, the virtual hardware abstraction layer includes an encoding module (encoding_vhal) for implementing image encoding, and the encoding engine performs image encoding by calling the encoding module.
In one possible implementation manner, the encoding engine supports multiple encoding modes, and the encoding engine can select a corresponding encoding mode to encode the image according to the requirements of the cloud application. Illustratively, as shown in FIG. 3, the encoding engine 322 supports FFmpeg as well as MediaCodec. The embodiment of the present application does not limit the specific coding modes supported by the coding engine.
Optionally, after receiving the coding instruction of the coding engine, the encoding module invokes the hardware to perform software coding or hardware coding. For example, the CPU may be invoked to software encode the raw image data, or the GPU may be invoked to hardware encode the raw image data (with hardware encoding speeds faster than software encoding speeds). In some embodiments, the software code is used when the remaining amount of hardware resources is below a threshold, and the hardware code is used when the remaining amount of hardware resources is above the threshold. The embodiment of the application does not limit the adopted specific coding strategy.
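The resource-based choice between software and hardware encoding described above can be sketched in a few lines. The 0-to-1 remaining-capacity model and the default threshold value are illustrative assumptions, not values from the patent:

```cpp
#include <cassert>

enum class Codec { Software, Hardware };

// If the remaining hardware (e.g. GPU) capacity is below the threshold,
// fall back to CPU software encoding; otherwise use hardware encoding.
inline Codec choose_codec(double hw_remaining, double threshold = 0.2) {
    return hw_remaining < threshold ? Codec::Software : Codec::Hardware;
}
```

The same shape of rule applies to the rendering-strategy choice mentioned earlier in the text (software vs. hardware rendering by remaining resource).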
In one possible implementation, the server captures the rendered data at the virtual hardware abstraction layer, so that the captured original image data is transmitted directly to the encoding engine for encoding; this shortens the processing path of the image data and improves encoding speed. Optionally, after the virtual hardware abstraction layer completes layer composition through the hw module, the hw module sends the original image data to the encoding engine.
In response to the completion of the image encoding, a push service is obtained via a binder mechanism, step 405.
Because the push service registration is performed in advance, after the image coding is completed, the push service can be further acquired through a binder mechanism, so that the push engine pushes the video stream to the terminal. In one possible implementation, after the image encoding is completed, the encoding engine obtains the push service through a binder mechanism.
And step 406, pushing the video stream to the terminal through a push engine based on the obtained push service.
In one possible implementation, the encoding engine sends the encoded video stream to the push engine through a push interface in the push service, and the push engine further pushes the video stream to the terminal.
Illustratively, as shown in fig. 3, the encoding engine 322 sends the video stream to the Media module of the push engine 323, which pushes the video stream.
Alternatively, the push engine may integrate multiple push methods, such as WebRTC and live555, and may select a corresponding push method based on the network status, which is not limited in this embodiment.
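A minimal sketch of such network-based selection follows. The thresholds, field names, and the mapping of conditions to methods are assumptions; the embodiment deliberately leaves the selection policy open.

```python
# Hypothetical selection sketch: choose a streaming stack from current
# network conditions, in the spirit of the optional behaviour above.

def select_push_method(rtt_ms: float, packet_loss: float) -> str:
    """Pick a push method from round-trip time and packet-loss rate.

    WebRTC targets low-latency interactive streaming and tolerates some
    loss; an RTSP-style stack (e.g. live555) suits stabler links.
    Thresholds here are illustrative only.
    """
    if rtt_ms < 100 and packet_loss < 0.05:
        return "webrtc"
    return "rtsp"
```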
In some embodiments, because a buffer is allocated during image rendering, the virtual hardware abstraction layer needs to be notified to reclaim the buffer resource after the video push is completed, so as to avoid the buffer being occupied for a long time.
In one possible implementation manner, the server sends a push completion response to the encoding engine through the push engine, the encoding engine further sends an encoding completion response to the hw module based on the push completion response, the hw module sends a release instruction to the rendering engine based on the encoding completion response, and finally the rendering engine instructs the gralloc module to release the buffer.
In this embodiment, the coding engine and the push engine in the runtime library layer register as services, so that after image rendering is completed, the coding service is obtained by using a binder mechanism to perform image coding, and after coding is completed, the push service is obtained by using the binder mechanism to perform video push, so that time consumption of inter-process communication can be reduced, and transmission delay of cloud application pictures is further reduced.
In addition, the rendered image data is captured in the gralloc and hw modules and transmitted directly to the coding engine for encoding, which shortens the processing path of the image and reduces the transmission delay of the cloud application picture.
In an illustrative example, a screen transfer process of a cloud application is shown in fig. 5.
In step 501, the encoding engine registers the encoding service with the service manager (ServiceManager).
In step 502, the push engine registers the push service with the ServiceManager.
In step 503, the cloud application sends an image rendering instruction to the rendering engine.
At step 504, the rendering engine applies for a buffer to the gralloc module.
In step 505, the gralloc module returns a file descriptor (fd) of the buffer to the rendering engine.
At step 506, the rendering engine performs layer rendering in the buffer.
In step 507, the rendering engine sends a composition instruction to the hw module.
In step 508, the hw module creates a map for fd.
The hw module updates the reference count for fd from 0 to 1 (indicating that it is being used) and adds fd to the local map.
In step 509, the hw module sends a composite response to the rendering engine.
At step 510, the rendering engine notifies the cloud application that rendering is complete.
In step 511, the hw module requests the encoding service (GetEncodeService) from the ServiceManager through the binder mechanism.
In step 512, the ServiceManager returns the encoding service handle (BpEncodeService) to the hw module.
In step 513, the hw module sends fd to the encoding engine instructing the encoding engine to encode.
In step 514, the encoding engine invokes the hardware via the encode module to encode the image.
The encoding module encapsulates the hardware encoding and software encoding capabilities so that the underlying GPU or encoding card can be invoked for encoding.
In step 515, the encode module notifies the encoding engine that encoding is complete.
In step 516, the encoding engine requests the push service (GetFlowService) from the ServiceManager through the binder mechanism.
In step 517, the ServiceManager returns the push service handle (BpFlowService) to the encoding engine.
In step 518, the encoding engine sends the video stream to the push engine.
Optionally, the encoding engine invokes a push interface in BpFlowService to send the video stream to the push engine.
In step 519, the push engine pushes the video stream to the terminal.
In step 520, the push engine notifies the encoding engine that push is complete.
In step 521, the encoding engine sends an encoding completion response to the hw module.
In step 522, the hw module instructs the rendering engine to release fd.
After receiving the encoding completion response, the hw module sets the reference count of fd to 0 and deletes fd from the local map.
At step 523, the rendering engine instructs the gralloc module to release the buffer.
At step 524, the gralloc module notifies the release to completion.
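The fd bookkeeping in steps 508 and 522 above can be sketched as a small reference-count map. The class and method names are illustrative; the patent only describes the 0-to-1 and 1-to-0 transitions and the removal from the local map.

```python
# Sketch of the hw module's local map from buffer file descriptors to
# reference counts, following steps 508 and 522 above.

class FdMap:
    def __init__(self):
        self._refs = {}

    def acquire(self, fd: int):
        # step 508: create the mapping and mark fd as in use (count 0 -> 1)
        self._refs[fd] = self._refs.get(fd, 0) + 1

    def release(self, fd: int):
        # step 522: on the encoding completion response, drop the count to 0
        # and remove fd from the local map so the buffer can be reclaimed
        if fd in self._refs:
            self._refs[fd] -= 1
            if self._refs[fd] == 0:
                del self._refs[fd]

    def in_use(self, fd: int) -> bool:
        return self._refs.get(fd, 0) > 0
```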
In one possible implementation manner, in addition to pushing the stream, the push engine is used to identify the instruction stream sent by the terminal, so as to determine, according to the identification result, whether the instruction stream is used to control the cloud application or to adjust display parameters of the cloud application. Illustratively, as shown in fig. 3, after receiving the instruction stream through the Input module, the push engine 323 identifies the instruction stream, and accordingly either injects an operation event into the cloud application 31 or sends an encoding parameter adjustment instruction to the encoding engine 322.
Optionally, on the basis of fig. 4, as shown in fig. 6, step 401 may include the following steps:
step 4011, receiving an instruction stream sent by a terminal.
In step 4012, the instruction type of the instructions in the instruction stream is identified by the push engine.
In a possible implementation manner, the instructions in the instruction stream include operation instructions and encoding control instructions. An operation instruction is used to control application elements in the cloud application; for example, it is triggered when a user performs a touch operation on a control in a cloud game picture. An encoding control instruction is used to adjust encoding parameters while the cloud application is running, for example, the resolution or frame rate of the cloud application. After the push engine acquires the instruction stream, it identifies the type of each instruction in the stream.
When the instruction type is identified as an operation instruction, the push engine injects an operation event into the cloud application (step 4013); when the instruction type is identified as an encoding control instruction, the push engine sends an adjustment instruction to the encoding engine.
In step 4013, in response to the instruction type being an operation instruction, sending an operation event to the cloud application through the push engine, where the operation event is obtained by converting the instruction by the push engine, and the cloud application is configured to send an image rendering instruction based on the operation event.
In order to enable the cloud application to accurately respond to user operations, the push engine converts the instructions in the instruction stream into operation events that the cloud application can identify, and sends these operation events to the cloud application, which then responds to them.
In one illustrative example, when the cloud application is a game application supporting touch operations, the push engine converts touch coordinates in the instructions into touch events and sends the touch events to the cloud application, which controls elements in the game based on the touch event.
In other possible embodiments, in response to the instruction type being an encoding control instruction, the server sends an encoding parameter adjustment instruction to the encoding engine through the push engine, the encoding parameter adjustment instruction being for instructing the encoding engine to adjust at least one of resolution and frame rate. Accordingly, the encoding engine adjusts the encoding parameters based on parameters included in the encoding parameter adjustment instruction.
For example, when the encoding parameter adjustment instruction indicates that the resolution is adjusted from 1080p to 720p, the encoding engine reduces the resolution of the original image data during the encoding process, thereby reducing the resolution of the video stream; when the coding parameter adjustment instruction indicates that the frame rate is increased from 30 frames/second to 60 frames/second, the coding engine performs frame insertion processing in the coding process, so that the frame rate of the video stream is increased.
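The dispatch described in steps 4012 and 4013, together with the encoding-control branch above, can be sketched as follows. The instruction format and field names are assumptions; the patent does not define a wire format.

```python
# Hypothetical dispatch sketch: classify each instruction and either inject
# a touch event into the cloud application or forward an encoding-parameter
# adjustment to the encoding engine, mirroring the two branches above.

def dispatch(instruction: dict, cloud_app_events: list, encoder_params: dict):
    if instruction["type"] == "operation":
        # convert touch coordinates into an event the application understands
        cloud_app_events.append({"event": "touch",
                                 "x": instruction["x"],
                                 "y": instruction["y"]})
    elif instruction["type"] == "encode_control":
        # e.g. resolution 1080p -> 720p, or frame rate 30 -> 60 fps
        encoder_params.update(instruction["params"])
```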
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 7, a block diagram of an image transmission device of a cloud application according to an embodiment of the present application is shown. The apparatus may include:
the rendering unit 701 is configured to respond to an image rendering instruction of the cloud application, and call the virtual hardware abstraction layer through the runtime library layer to perform image rendering, so as to obtain original image data, where the original image data is image data of a cloud application picture;
an encoding unit 702, configured to invoke, by the runtime library layer, the virtual hardware abstraction layer to perform image encoding on the original image data, so as to obtain a video stream;
And the pushing unit 703 is configured to push the video stream to a terminal through the runtime library layer.
Optionally, the runtime library layer is provided with a push engine, a rendering engine and an encoding engine;
the rendering unit 701 is configured to:
invoking the virtual hardware abstraction layer to conduct image rendering through the rendering engine to obtain the original image data;
the encoding unit 702 is configured to:
invoking the virtual hardware abstraction layer to carry out image coding on the original image data through the coding engine to obtain the video stream;
the pushing unit 703 is configured to:
and pushing the video stream to the terminal through the push engine.
Optionally, the virtual hardware abstraction layer is provided with a gralloc module and an hw module;
the rendering unit 701 is specifically configured to:
applying for a buffer area to the gralloc module by the rendering engine;
performing layer rendering on the buffer area through the rendering engine;
and requesting the hw module to perform layer synthesis through the rendering engine, and obtaining the original image data after the hw module completes layer synthesis.
Optionally, the apparatus further includes a data sending unit, configured to:
And sending the original image data to the coding engine through the hw module.
Optionally, the apparatus further includes:
the release module is used for sending a push completion response to the coding engine through the push engine, the coding engine is used for sending a coding completion response to the hw module based on the push completion response, the hw module is used for sending a release instruction to the rendering engine based on the coding completion response, and the rendering engine is used for indicating the gralloc module to release the buffer zone based on the release instruction.
Optionally, the virtual hardware abstraction layer is provided with an encoding module;
the encoding unit 702 is configured to:
and calling the encoding module to carry out image encoding on the original image data through the encoding engine to obtain the video stream, wherein the encoding module is used for calling hardware to carry out software encoding or hardware encoding.
Optionally, the apparatus further includes:
the registration unit is used for registering the coding service through the coding engine and registering the push service through the push engine;
the encoding unit 702 is configured to:
acquiring the coding service through a binder mechanism in response to completion of image rendering;
Based on the obtained coding service, invoking the virtual hardware abstraction layer to carry out image coding on the original image data through the coding engine to obtain the video stream;
the pushing unit 703 is configured to:
acquiring the push service through a binder mechanism in response to completion of image coding;
and pushing the video stream to the terminal through the push engine based on the obtained push service.
Optionally, the apparatus further includes:
the receiving unit is used for receiving the instruction stream sent by the terminal;
the identification unit is used for identifying the instruction type of the instruction in the instruction stream through the push engine;
the first sending unit is configured to send, in response to the instruction type being an operation instruction, an operation event to the cloud application through the push engine, where the operation event is obtained by the push engine converting the instruction, and the cloud application is configured to send the image rendering instruction based on the operation event.
Optionally, the apparatus further includes:
and the second sending unit is used for responding to the instruction type as an encoding control instruction, sending an encoding parameter adjusting instruction to the encoding engine through the push engine, wherein the encoding parameter adjusting instruction is used for indicating the encoding engine to adjust at least one of resolution and frame rate.
In summary, in the embodiment of the present application, the rendering, encoding and pushing of the cloud application picture are integrated in the runtime library layer. When an image rendering instruction of the cloud application is received, the runtime library layer invokes the virtual hardware abstraction layer to perform image rendering, then invokes the virtual hardware abstraction layer to encode the rendered original image data, and finally pushes the encoded video stream to the terminal, thereby realizing image transmission of the cloud application. Because rendering, encoding and stream pushing are integrated in the runtime library layer, the cloud application deployment integration level is higher, which facilitates cloud application deployment; and because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path of the cloud application picture is shorter, reducing its display delay.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the functional units is used as an example, in practical application, the above functional allocation may be implemented by different functional units according to needs, that is, the internal structure of the device is divided into different functional units, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Referring to fig. 8, a schematic structural diagram of a server according to an embodiment of the present application is shown. The server is used to implement the method provided by the above embodiments. Specifically:
The server 800 includes a Central Processing Unit (CPU) 801, a system memory 804 including a Random Access Memory (RAM) 802 and a Read Only Memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 also includes a basic input/output system (I/O system) 806 for facilitating the transfer of information between various devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse or keyboard, through which a user inputs information. The display 808 and the input device 809 are both connected to the central processing unit 801 via an input/output controller 810 connected to the system bus 805. The basic input/output system 806 may also include the input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 810 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer readable medium (not shown) such as a hard disk or CD-ROM drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the one described above. The system memory 804 and mass storage device 807 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 800 may also operate through a remote computer connected to a network such as the Internet. That is, the server 800 may be connected to the network 812 through a network interface unit 811 connected to the system bus 805, or the network interface unit 811 may be used to connect to other types of networks or remote computer systems.
The memory has stored therein at least one instruction, at least one program, code set, or instruction set configured to be executed by one or more processors to implement the functions of the various steps in the embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing at least one program code that is loaded and executed by a processor to implement the image transmission method of the cloud application according to the above embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the image transmission method of the cloud application provided in various alternative implementations of the above aspects.
It should be understood that references herein to "a plurality" are to two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In addition, the step numbers described herein are merely exemplary of one possible execution sequence; in some other embodiments, the steps may be executed out of numerical order, such as two differently numbered steps being executed simultaneously, or in an order opposite to that shown, which is not limited by the embodiments of the present application.
The foregoing description is merely of exemplary embodiments of the present application and is not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present application shall fall within its protection scope.

Claims (12)

1. An image transmission method for cloud applications, the method comprising:
responding to an image rendering instruction of the cloud application, and calling a virtual hardware abstraction layer through a runtime library layer to conduct image rendering to obtain original image data, wherein the original image data is image data of a cloud application picture;
Invoking the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream;
and pushing the video stream to a terminal through the runtime library layer.
2. The method of claim 1, wherein the runtime library layer is provided with a push engine, a rendering engine, and an encoding engine;
the step of calling a virtual hardware abstraction layer to conduct image rendering through the runtime library layer to obtain original image data comprises the following steps:
invoking the virtual hardware abstraction layer to conduct image rendering through the rendering engine to obtain the original image data;
the step of calling the virtual hardware abstraction layer through the runtime library layer to carry out image coding on the original image data to obtain a video stream comprises the following steps:
invoking the virtual hardware abstraction layer to carry out image coding on the original image data through the coding engine to obtain the video stream;
the pushing the video stream to the terminal through the runtime library layer comprises the following steps:
and pushing the video stream to the terminal through the push engine.
3. The method according to claim 2, wherein the virtual hardware abstraction layer is provided with a gralloc module and an hw module;
The step of calling the virtual hardware abstraction layer to conduct image rendering through the rendering engine to obtain the original image data comprises the following steps:
applying for a buffer area to the gralloc module by the rendering engine;
performing layer rendering on the buffer area through the rendering engine;
and requesting the hw module to perform layer synthesis through the rendering engine, and obtaining the original image data after the hw module completes layer synthesis.
4. The method of claim 3, wherein after the hw module completes layer synthesis to obtain the original image data, the method further comprises:
and sending the original image data to the coding engine through the hw module.
5. A method according to claim 3, wherein after said pushing of said video stream to said terminal by said push engine, said method further comprises:
and sending a push completion response to the coding engine through the push engine, wherein the coding engine is used for sending a coding completion response to the hw module based on the push completion response, the hw module is used for sending a release instruction to the rendering engine based on the coding completion response, and the rendering engine is used for indicating the gralloc module to release the buffer zone based on the release instruction.
6. The method according to claim 2, wherein the virtual hardware abstraction layer is provided with an encode module;
the step of calling the virtual hardware abstraction layer to carry out image coding on the original image data through the coding engine to obtain the video stream comprises the following steps:
and calling the encoding module to carry out image encoding on the original image data through the encoding engine to obtain the video stream, wherein the encoding module is used for calling hardware to carry out software encoding or hardware encoding.
7. The method according to claim 2, wherein the method further comprises:
registering the coding service through the coding engine, and registering the push service through the push engine;
the step of calling the virtual hardware abstraction layer to carry out image coding on the original image data through the coding engine to obtain the video stream comprises the following steps:
acquiring the coding service through a binder mechanism in response to completion of image rendering;
based on the obtained coding service, invoking the virtual hardware abstraction layer to carry out image coding on the original image data through the coding engine to obtain the video stream;
the pushing the video stream to the terminal through the push engine comprises the following steps:
Acquiring the push service through a binder mechanism in response to completion of image coding;
and pushing the video stream to the terminal through the push engine based on the obtained push service.
8. The method according to any one of claims 2 to 7, further comprising:
receiving an instruction stream sent by the terminal;
identifying, by the push engine, an instruction type of an instruction in the instruction stream;
and responding to the instruction type being an operation instruction, sending an operation event to the cloud application through the push engine, wherein the operation event is obtained by the push engine converting the instruction, and the cloud application is used for sending the image rendering instruction based on the operation event.
9. The method of claim 8, wherein after the identifying, by the push engine, the instruction type of the instructions in the instruction stream, the method further comprises:
and responding to the instruction type as an encoding control instruction, sending an encoding parameter adjusting instruction to the encoding engine through the push engine, wherein the encoding parameter adjusting instruction is used for indicating the encoding engine to adjust at least one of resolution and frame rate.
10. An image transmission apparatus for a cloud application, the apparatus comprising:
the rendering unit is used for responding to an image rendering instruction of the cloud application, calling a virtual hardware abstraction layer through a runtime library layer to conduct image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture;
the coding unit is used for calling the virtual hardware abstraction layer through the runtime library layer to carry out image coding on the original image data so as to obtain a video stream;
and the pushing unit is used for pushing the video stream to the terminal through the runtime library layer.
11. A server, wherein the server comprises a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image transmission method of a cloud application according to any of claims 1 to 9.
12. A computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement the image transmission method of a cloud application according to any of claims 1 to 9.
CN202110819854.2A 2021-07-20 2021-07-20 Image transmission method and device for cloud application, server and storage medium Active CN113542757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110819854.2A CN113542757B (en) 2021-07-20 2021-07-20 Image transmission method and device for cloud application, server and storage medium


Publications (2)

Publication Number Publication Date
CN113542757A (en) 2021-10-22
CN113542757B (en) 2024-04-02

Family

ID=78129005




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant