CN115543535A - Android container system, android container construction method and device and electronic equipment - Google Patents


Publication number
CN115543535A
CN115543535A (application CN202211215483.8A)
Authority
CN
China
Prior art keywords
android
gpu
server
containers
container
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211215483.8A
Other languages
Chinese (zh)
Other versions
CN115543535B (en)
Inventor
杨上山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Moore Threads Technology Co Ltd
Original Assignee
Moore Threads Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Moore Threads Technology Co Ltd filed Critical Moore Threads Technology Co Ltd
Priority to CN202211215483.8A priority Critical patent/CN115543535B/en
Publication of CN115543535A publication Critical patent/CN115543535A/en
Application granted granted Critical
Publication of CN115543535B publication Critical patent/CN115543535B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/455 — Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 — Hypervisors; Virtual machine monitors
    • G06F 9/45558 — Hypervisor-specific management and integration aspects
    • G06F 2009/4557 — Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure relates to an android container system, an android container construction method and apparatus, and an electronic device. The system is applied to a server and includes: a plurality of android containers, a plurality of Graphics Processing Units (GPUs), and a GPU driver. The GPU driver comprises a user-side GPU driver and a server-side GPU driver. For any one of the android containers, the android container generates a graphics processing instruction based on the corresponding user-side GPU driver and sends the graphics processing instruction to the server; the server calls the corresponding GPU to process the graphics processing instruction based on the server-side GPU driver, and returns the instruction processing result obtained after processing to the android container. In this way, the android containers can be effectively docked to the hardware GPUs, so that the hardware GPUs are effectively utilized to accelerate the display of multiple android containers.

Description

Android container system, android container construction method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an android container system, an android container construction method and apparatus, and an electronic device.
Background
Container technology is a virtualization technology in which a user can create multiple simulated environments or dedicated resources on top of a single physical hardware system, dividing one system into multiple distinct, independent, and secure environments. Virtualization improves the hardware multiplexing rate and thereby achieves hardware reuse. In the related art, only the construction of an android container, its use, and the use of its application layer have been realized. Currently, to meet users' usage requirements, a method for effectively implementing multiple android containers is urgently needed.
Disclosure of Invention
The disclosure provides an android container system, an android container construction method and apparatus, and an electronic device.
According to an aspect of the present disclosure, there is provided an android container system, the system being applied to a server, the system comprising: a plurality of android containers, a plurality of GPUs, and a GPU driver; the GPU driver comprises: a user-side GPU driver and a server-side GPU driver; for any one of the android containers, the android container generates a graphics processing instruction based on the corresponding user-side GPU driver and sends the graphics processing instruction to the server; and the server calls the corresponding GPU to process the graphics processing instruction based on the server-side GPU driver, and returns the instruction processing result obtained after processing to the android container.
In a possible implementation manner, for any one of the GPUs, the GPU corresponds to n android containers, where n is less than or equal to m, m is the maximum number of android containers that the GPU can support, and both n and m are positive integers.
In one possible implementation, in a case that the number of the GPUs is N, the maximum number of the android containers is M = m × N, where m is the maximum number of android containers that each GPU can support, and N, m, M are positive integers.
In one possible implementation manner, for any one of the android containers, the android container includes therein: the android system and the user-side GPU driver; the user-side GPU driver comprises: a first driver interface and a second driver interface, where the first driver interface docks with an Application Programming Interface (API) of the android system, and the second driver interface docks with the server-side GPU driver.
In one possible implementation, the system further includes: a first sending module and a second sending module; the first sending module is configured to send the graphics processing instructions sent by each android container to the server in the form of an instruction queue according to the sending time sequence; the GPUs execute the graphics processing instructions in the instruction queue to obtain a result queue comprising a plurality of instruction processing results, where the order of the instruction processing results in the result queue corresponds to the order of the graphics processing instructions in the instruction queue; and the second sending module is configured to sequentially send the instruction processing results to the corresponding android containers.
In a possible implementation manner, the invoking, by the server, of the corresponding GPU to process the graphics processing instruction based on the server-side GPU driver includes: for a graphics processing instruction sent by any one of the android containers, the server sends the graphics processing instruction to the GPU corresponding to that android container based on the server-side GPU driver.
In one possible implementation, the system further includes: a scheduling module; the scheduling module is used for acquiring the current operation load of each GPU; the scheduling module is further configured to schedule, for a first GPU of the multiple GPUs, the graphics processing instruction in the unprocessed state corresponding to the first GPU to a second GPU of the multiple GPUs when a current operating load of the first GPU exceeds a preset threshold, where the current operating load of the second GPU does not exceed the preset threshold.
In one possible implementation manner, the server is a server adopting an ARM architecture.
In a possible implementation manner, the server-side GPU driver is installed in an operating system running on the server, where the operating system is a Linux system.
In one possible implementation, the graphics processing instructions include: rendering instructions and encoding instructions.
According to an aspect of the present disclosure, there is provided a method for constructing an android container, comprising: for one GPU, creating n android containers corresponding to the GPU by loading one android container image n times, where n is less than or equal to m, m is the maximum number of android containers that the GPU can support, and both n and m are positive integers.
In one possible implementation, the method further includes: in a case that there are a plurality of GPUs and the number of the GPUs is N, loading the android container image multiple times for each of the N GPUs to create a plurality of android containers corresponding to the N GPUs, where the maximum number of the android containers is M = m × N, m is the maximum number of android containers that each GPU can support, and N, m, M are positive integers.
According to an aspect of the present disclosure, there is provided an android container construction apparatus, comprising: a loading module configured to create, for one GPU, n android containers corresponding to the GPU by loading one android container image n times, where n is less than or equal to m, m is the maximum number of android containers that the GPU can support, and both n and m are positive integers.
In a possible implementation manner, the loading module is further configured to: in a case that there are a plurality of GPUs and the number of the GPUs is N, load the android container image multiple times for each of the N GPUs to create a plurality of android containers corresponding to the N GPUs, where the maximum number of the android containers is M = m × N, m is the maximum number of android containers that each GPU can support, and N, m, M are positive integers.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to operate the above-described system and/or to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, operate the above system and/or implement the above method.
In the embodiment of the disclosure, a plurality of android containers are deployed on a server with hardware GPUs, and the android containers are effectively docked to the hardware GPUs based on the user-side GPU driver and the server-side GPU driver. An android container generates a graphics processing instruction based on the corresponding user-side GPU driver and sends the graphics processing instruction to the server; the server calls the corresponding hardware GPU to process the graphics processing instruction based on the server-side GPU driver, and returns the instruction processing result obtained after processing to the android container, so that the hardware GPUs are effectively utilized to accelerate the display of multiple android containers.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a hierarchical schematic diagram of an android container system in accordance with an embodiment of the present disclosure;
FIG. 2 shows a schematic structural diagram of an android container system in accordance with an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of an android container installation package, in accordance with an embodiment of the present disclosure;
FIG. 4 shows a corresponding schematic diagram of an android container and a hardware GPU, according to an embodiment of the disclosure;
FIG. 5 shows a flow diagram for graphics processing by multiple android containers, in accordance with an embodiment of the disclosure;
FIG. 6 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
FIG. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a variety or any combination of at least two of a variety; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Due to its openness, customizability, and wide use, the android system has been applied in various embedded device products, such as mobile phones, tablets, and in-vehicle systems. Both individual users and cloud computing platform vendors have scenarios in which multiple accounts log in and in which low-end hardware resources are used for high-performance computing applications; giving each account its own hardware device, or requiring a high-performance device, would cause a huge waste of resources. Container virtualization technology allows multiple android systems to run simultaneously on one hardware device and allows high-performance computing to be moved to the cloud, which effectively solves these problems.
In the related art, only the construction of an android container, the use of the android container, and the use of the android container's application layer have been realized. For a plurality of android containers, how to dock them to a hardware GPU, and how to use the hardware GPU to realize the android display effect, are problems to be solved urgently.
The embodiment of the disclosure provides an android container system, which enables a plurality of android containers to be docked to hardware GPUs and uses the hardware GPUs to accelerate the android display effect. Fig. 1 shows a hierarchical schematic diagram of an android container system in accordance with an embodiment of the disclosure. As shown in fig. 1, the android container system includes three layers, from top to bottom: a container application layer, a driver layer, and a hardware layer.
The container application layer is the software application layer facing the user side. Multiple android containers can run in the container application layer; users operate through this layer, and the graphics processing instructions generated by user operations are issued from the container application layer down to the driver layer.
The driver layer sits in the middle and serves as a bridge between the layers above and below: it docks upward with the container application layer and downward with the hardware layer. The driver layer is also a software layer and is maintained by the GPU manufacturer.
The hardware layer includes the hardware GPUs, which interface with the operating system running on the server. The hardware layer responds, through the driver layer, to the graphics processing instructions issued by the container application layer. In addition, the hardware layer also includes a Central Processing Unit (CPU), memory, input/output controllers, a power supply, and the like, which are not limited in this disclosure.
Based on the three layers of the android container system shown in fig. 1, the structure of the android container system is described in detail below. The android container system provided by the embodiment of the disclosure can be applied to scenarios with multiple android containers, in which the android containers use the hardware GPUs for graphics rendering, and for encoding the rendered content, during desktop display.
Fig. 2 shows a schematic structural diagram of an android container system according to an embodiment of the present disclosure. As shown in fig. 2, the android container system is applied to a server having hardware GPUs, and includes a plurality of android containers, a plurality of Graphics Processing Units (GPUs), and a GPU driver, where the GPU driver includes: a user-side GPU driver and a server-side GPU driver. For any android container, the android container generates a graphics processing instruction based on the corresponding user-side GPU driver and sends the graphics processing instruction to the server; the server calls the corresponding GPU to process the graphics processing instruction based on the server-side GPU driver, and returns the instruction processing result obtained after processing to the android container.
In the embodiment of the disclosure, a plurality of android containers are deployed on a server with hardware Graphics Processing Units (GPUs), and the android containers are effectively docked to the hardware GPUs based on the user-side GPU driver and the server-side GPU driver. For any one of the android containers, the android container generates a graphics processing instruction based on the corresponding user-side GPU driver and sends the graphics processing instruction to the server; the server calls the corresponding hardware GPU to process the graphics processing instruction based on the server-side GPU driver, and returns the instruction processing result obtained after processing to the android container, so that the hardware GPUs are effectively utilized to accelerate the display of multiple android containers.
In one possible implementation, the graphics processing instructions include: rendering instructions and encoding instructions.
The graphics processing instructions issued by the android container may be rendering instructions for display and encoding instructions for encoding the rendered content.
The graphics processing instructions may be other graphics processing instructions besides rendering instructions and encoding instructions, and the disclosure is not limited thereto.
In one possible implementation, the server is a server using an ARM architecture.
The server adopts the ARM architecture. Because the android system likewise adopts the ARM architecture, a server using the ARM architecture can directly execute the graphics processing instructions issued by the android containers. This avoids the situation with servers of other architectures, which can execute the graphics processing instructions issued by the android containers only after translating and converting them, and thus effectively reduces the transmission time of the graphics processing instructions. When the graphics processing instructions are rendering and encoding instructions, this effectively improves the display efficiency of the android containers.
In one possible implementation manner, the server-side GPU driver is installed in an operating system running on the server, where the operating system is a Linux system.
The Linux distribution may be Debian, Fedora, SUSE, Gentoo, Ubuntu, or the like, or may be another distribution, which is not specifically limited in this disclosure.
In one possible implementation manner, for any one android container, the android container includes: the android system and the user-side GPU driver; the user-side GPU driver comprises: a first driver interface and a second driver interface, where the first driver interface docks with the standard Application Programming Interface (API) of the android system, and the second driver interface docks with the server-side GPU driver.
FIG. 3 shows a schematic diagram of an android container installation package, in accordance with an embodiment of the present disclosure. As shown in fig. 3, the android container includes two parts: the android system and the user-side GPU driver. init, the first process started in the android system, is loaded by the container, which passes the parameters necessary for startup to the init process. The init process then starts the services, modules, and sub-processes the android system requires in the container, and deploys the android environment variables and runtime environment, so that the android system can run normally.
Integrating the user-side GPU driver into the android container makes it convenient for users to use and to migrate deployments, and avoids the situation in which, after the container is deployed on the server side, the client-side GPU driver must be installed separately in order to dock with the hardware GPU. In addition, because the android system and the user-side GPU driver are integrated together, the GPU driver version and the android version can be updated together: after the android version is updated according to market demand, and/or the GPU driver version is updated according to actual hardware requirements, the android container image can be rebuilt quickly. As a result, the android containers are customizable and extensible, the development cycle for docking android containers with hardware GPUs is shortened, and the market competitiveness of server manufacturers using the android containers is enhanced.
Two sets of interfaces are provided inside the user-side GPU driver: the first driver interface docks upward with the standard API of the android system, and the second driver interface docks downward with the server-side GPU driver. Based on the first driver interface, graphics processing instructions are generated by running graphics processing tools such as OpenGL in the android container; the instructions are then sent to the server side through the second driver interface, so that the server side can call the hardware GPU to process them.
With this layered docking approach, the complex function processing flow can be hidden from the user side: the user only needs to care about using the standard API of the android system, while the docking between the client-side GPU driver and the server-side GPU driver only needs to be designed by the GPU manufacturer and then provided to users to install.
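The two-interface pattern described above can be sketched roughly as follows. This is a minimal illustration only: every class and method name here (the stub, `gl_draw`, `submit`) is a hypothetical stand-in, not an API from the patent or from any real GPU driver.

```python
# Hypothetical sketch of the user-side GPU driver's two interfaces.
# First interface: entered via the android system's standard graphics API.
# Second interface: forwards the generated instruction to the server-side driver.

class ServerSideDriverStub:
    """Stand-in for the server-side GPU driver reached over the second interface."""
    def __init__(self):
        self.received = []          # instructions that arrived at the server side

    def submit(self, instruction):
        self.received.append(instruction)
        return f"result-of:{instruction}"   # instruction processing result

class UserSideGPUDriver:
    def __init__(self, server_driver):
        self.server_driver = server_driver  # second interface target

    def gl_draw(self, primitive):
        # First interface: a standard-API call lands here inside the container.
        instruction = f"render:{primitive}"
        # Second interface: forward to the server side and return its result.
        return self.server_driver.submit(instruction)

server = ServerSideDriverStub()
driver = UserSideGPUDriver(server)
result = driver.gl_draw("triangle")   # -> "result-of:render:triangle"
```

The point of the sketch is only the separation of concerns: the container code touches `gl_draw` (the standard API side) and never sees how the instruction reaches the server.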
Because different users' operating system environments are inconsistent, in order to apply the android container implementations of the embodiments of the disclosure effectively, the android system can be loaded based on container tools such as docker, Source-To-Image, Jib, and Bazel, so as to build a uniform operating environment.
In a possible implementation manner, for any one GPU, the GPU corresponds to n android containers, where n is less than or equal to m, m is the maximum number of android containers that the GPU can support, and both n and m are positive integers.
for a hardware GPU, multiple android containers not exceeding the maximum number of android containers that it can support can be deployed for it according to the maximum number of android containers that it can support. Fig. 4 shows a corresponding schematic diagram of an android container and a hardware GPU, according to an embodiment of the disclosure. As shown in fig. 4, the maximum number of android containers that GPU1 can support is m, and therefore, m android containers corresponding to GPU1 may be deployed: android container 1 to android container m.
In one possible implementation, where the number of GPUs is N, the maximum number of android containers is M = m × N, where m is the maximum number of android containers that each GPU can support, and N, m, M are each positive integers.
In the case that there are multiple GPUs in the server, a corresponding plurality of android containers may be deployed for each GPU. The correspondence between the multiple GPUs and the multiple android containers is then that the maximum number of android containers is M = m × N, where m is the maximum number of android containers that each GPU can support, and N, m, M are positive integers; that is, each GPU is deployed with no more than the maximum number of android containers it can support.
In an example, for multiple GPUs, one or more corresponding android containers may be deployed for each GPU, or one or more corresponding android containers may be deployed for only some of the GPUs, which is not specifically limited by the present disclosure.
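The capacity bound M = m × N above is just a product, which a one-line helper makes explicit (the function name is illustrative):

```python
# Minimal arithmetic sketch of the capacity bound M = m * N:
# m containers per GPU, times N GPUs in the server.

def max_containers(m, n_gpus):
    """Maximum android containers the server supports in total."""
    return m * n_gpus

# e.g. 8 containers per GPU across 4 GPUs gives an upper bound of 32.
```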
In one possible implementation, the android container system further includes: a first sending module and a second sending module; the first sending module is configured to send the graphics processing instructions sent by each android container to the server in the form of an instruction queue according to the sending time sequence; the GPUs are configured to execute the plurality of graphics processing instructions in the instruction queue to obtain a result queue comprising a plurality of instruction processing results, where the order of the instruction processing results in the result queue corresponds to the order of the graphics processing instructions in the instruction queue; and the second sending module is configured to sequentially send the instruction processing results to the corresponding android containers.
The graphics processing instructions sent by each android container are sent to the server as an instruction queue, in the order they were sent. The multiple GPUs execute the graphics processing instructions to obtain a result queue containing multiple instruction processing results, and the results are returned to the corresponding android containers in the order corresponding to the graphics processing instructions in the instruction queue. Because the whole process strictly preserves order, each android container receives the correct instruction processing result; when the graphics processing instructions are rendering and encoding instructions, this enables the multiple android containers to display correctly.
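The strict order preservation described above can be sketched as a FIFO pass: instructions tagged with their container enter in send order, and the result queue comes back in exactly that order. Function and tag names are illustrative assumptions.

```python
# Sketch of the order-preserving instruction queue / result queue round trip.
from collections import deque

def process_in_order(instructions, execute):
    """instructions: list of (container_id, instruction) in send order.
    Returns the result queue, whose order matches the instruction queue."""
    instruction_queue = deque(instructions)
    result_queue = []
    while instruction_queue:
        container_id, instr = instruction_queue.popleft()  # FIFO: send order
        result_queue.append((container_id, execute(instr)))
    return result_queue

# `execute` stands in for the GPUs processing an instruction.
results = process_in_order(
    [("c1", "render:a"), ("c3", "render:b"), ("c1", "encode:a")],
    execute=lambda instr: instr.upper(),
)
```

Each result carries its container tag, so the second sending module can deliver result i of the result queue to the container that issued instruction i.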
In a possible implementation manner, the server invoking the corresponding GPU to process the graphics processing instruction based on the server-side GPU driver includes: for a graphics processing instruction sent by any one android container, the server sends the graphics processing instruction to the GPU corresponding to that android container based on the server-side GPU driver.
When each GPU is loaded normally, for a graphics processing instruction sent by any one android container, the server sends the graphics processing instruction to the GPU corresponding to that android container based on the server-side GPU driver.
Taking the above fig. 4 as an example, the server sends the graphics processing instructions sent by the android containers 1 to m to the GPU1 for processing based on the server-side GPU driver.
In one possible implementation, the android container system further includes: a scheduling module; the scheduling module is used for acquiring the current operation load of each GPU; the scheduling module is further configured to schedule, for a first GPU in the multiple GPUs, a graphics processing instruction in an unprocessed state corresponding to the first GPU to a second GPU in the multiple GPUs when a current operating load of the first GPU exceeds a preset threshold, where the current operating load of the second GPU does not exceed the preset threshold.
Because each GPU's video-memory usage, load, and the running conditions of its corresponding android containers differ, in a scenario with many android containers, if the GPU resources running each android container's instructions are not scheduled, situations arise in which one GPU is overloaded while other GPUs are idle yet not asleep; this increases energy consumption and wastes computing power and energy costs.
To balance computation and reduce energy consumption, when the computation amount of a certain android container's graphics processing instructions is large, a scheduler in the server schedules the unprocessed graphics processing instructions of the GPU whose current running load exceeds the preset threshold to other GPUs whose running load does not exceed the threshold, so that the processing efficiency of the other android containers' graphics processing instructions is not affected. When the graphics processing instructions are rendering and encoding instructions, this prevents the display frame rate of the corresponding android containers from dropping because one GPU is heavily loaded.
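One possible shape of that rescheduling step, as a hedged sketch: unprocessed instructions assigned to any GPU above the load threshold are moved to the least-loaded GPU below it. The data layout and names are assumptions for illustration, not the patent's scheduler.

```python
# Sketch of load-based rescheduling of unprocessed graphics instructions.

def reschedule(pending, loads, threshold):
    """pending: dict gpu_id -> list of unprocessed instructions.
    loads:   dict gpu_id -> current running load (0.0 to 1.0).
    Moves pending work off overloaded GPUs onto the least-loaded GPU
    whose load does not exceed the threshold."""
    under = [g for g, load in loads.items() if load <= threshold]
    if not under:
        return pending          # every GPU is overloaded; leave work in place
    target = min(under, key=lambda g: loads[g])
    for gpu, load in loads.items():
        if load > threshold and pending.get(gpu):
            pending.setdefault(target, []).extend(pending[gpu])
            pending[gpu] = []   # drained: its queued work now runs elsewhere
    return pending
```

A production scheduler would also weigh video-memory usage and instruction cost rather than a single scalar load, as the description above notes.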
Fig. 5 is a schematic flow diagram illustrating graphics processing for a plurality of android containers according to an embodiment of the present disclosure. As shown in fig. 5, the graphics processing instructions sent by the plurality of android containers are delivered to the server as an instruction queue ordered by sending time; the server dispatches the instructions to the corresponding GPUs for processing based on the scheduler, and the resulting instruction processing results are returned to the corresponding android containers in the same order.
For example, with reference to fig. 4 and 5, the graphics processing instructions sent by the android containers form an instruction queue: graphics processing instruction 1 (android container 1), graphics processing instruction 2 (android container 3), graphics processing instruction 3 (android container m), graphics processing instruction 4 (android container N), and so on. Because android container 1, android container 3, and android container m all correspond to GPU1, graphics processing instructions 1 to 3 would normally be sent to GPU1 in sequence for processing. However, when GPU1's computation amount is large, its running load exceeds the preset threshold. To avoid graphics processing instructions 2 and 3 waiting too long in the unprocessed state, the server's scheduler dispatches them to GPU2 and/or GPU 3, whose running loads do not exceed the preset threshold, so that instruction processing efficiency is not degraded by the resource shortage on GPU1.
After graphics processing instructions 1 to 4 are all processed, a result queue is obtained: instruction processing results 1 to 4, which are sent in sequence to the corresponding android containers: instruction processing result 1 is sent to android container 1, instruction processing result 2 is sent to android container 3, instruction processing result 3 is sent to android container m, and instruction processing result 4 is sent to android container N.
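The order-preserving queue flow of fig. 5 can be sketched as follows. The container names and the stand-in processing function are illustrative assumptions; the point is only that the result queue keeps the instruction queue's order, so each result returns to the container that sent the matching instruction.

```python
# Sketch of the instruction-queue -> result-queue flow: results come back
# in the same order the instructions were sent, so result i is routed to
# the android container that sent instruction i.

def process_queue(instruction_queue, process):
    """Process (instruction, container) pairs in sending order."""
    result_queue = []
    for instr, container in instruction_queue:
        result_queue.append((process(instr), container))
    return result_queue

queue = [("instr1", "container1"), ("instr2", "container3"),
         ("instr3", "containerM"), ("instr4", "containerN")]
# stand-in for GPU processing: turn each instruction into a result
results = process_queue(queue, lambda i: i.replace("instr", "result"))
# result 1 goes back to container1, result 2 to container3, and so on
```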
When the method is applied to the rendering and encoding scenario of android container desktop display, using the hardware GPU to process the rendering and encoding instructions of the plurality of android containers reduces the end-to-end link latency from the server side to the user side, so that, provided the network latency meets the requirement, the android desktop refresh-rate targets of 30 fps and 60 fps on an embedded platform can be met.
Based on the disclosed embodiment, the plurality of created android containers can be docked to the hardware GPU, and the rendering and encoding channel of the hardware GPU can be used to accelerate the display effect of the android application layer. In addition, because the user-side GPU driver is integrated into the android container, the android container version can be rebuilt in step with GPU driver version and android version updates, according to market demand and actual hardware requirements. This effectively shortens, in a multi-container scenario, the development cycle of docking multiple android containers with the hardware GPU, speeds up user deployment and facilitates user operation, improves the display effect of the multiple android containers through hardware-GPU rendering and encoding, and enhances the market competitiveness of server vendors' multi-container products.
The embodiment of the disclosure also provides an android container construction method, which includes the following steps: for one GPU, loading one android container image n times to create n android containers corresponding to the GPU, where n is less than or equal to m, m is the maximum number of android containers that the GPU can support, and both n and m are positive integers.
By setting different container names, a plurality of android containers can be created by loading one android container image multiple times, so there is no need to provide one android container image per android container, which improves the efficiency with which a user deploys multiple containers.
For a hardware GPU, a plurality of android containers can be deployed for it according to, and not exceeding, the maximum number of android containers it can support.
Taking the above fig. 4 as an example, as shown in fig. 4, the maximum number of android containers that GPU1 can support is m, so m android containers corresponding to GPU1 can be deployed by loading android container image 1 m times: android container 1 to android container m.
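The one-image, many-containers idea above can be sketched as follows. The `docker run` command line, the image name, and the container-name pattern are all assumptions for illustration; the patent does not specify a particular container runtime.

```python
# Hypothetical sketch: derive n distinct container launch commands from a
# single android container image by varying only the container name, so one
# image serves every container instance.

def container_launch_commands(image, n):
    """One image, n containers: only the --name differs per instance."""
    return [
        f"docker run -d --name android_container_{i} {image}"
        for i in range(1, n + 1)
    ]

cmds = container_launch_commands("android_image_1", 3)
# three containers are created from the same image: android_container_1..3
```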
In one possible implementation, the method further includes: in the case that a plurality of GPUs exist and the number of GPUs is N, loading each of the N android container images multiple times to create a plurality of android containers corresponding to the N GPUs, where the maximum number of the plurality of android containers is M = m × N, m is the maximum number of android containers that each GPU can support, and N, m, and M are all positive integers.
In the case that a plurality of GPUs exist, a corresponding plurality of android containers can be deployed for each GPU. The correspondence between the plurality of GPUs and the plurality of android containers is bounded by the maximum number M = m × N, where m is the maximum number of android containers that each GPU can support and N, m, and M are positive integers; that is, no GPU is assigned more android containers than the maximum number it can support.
Taking the above fig. 4 as an example, as shown in fig. 4, m android containers corresponding to GPU1 are deployed by loading android container image 1 m times; a plurality of android containers corresponding to GPU2 are deployed by loading android container image 2 multiple times; and so on.
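The multi-GPU bound M = m × N can be sketched as a simple GPU-to-containers mapping. The identifiers below are illustrative assumptions; the sketch only shows that when each of N GPUs is assigned at most m containers, the total never exceeds m × N.

```python
# Sketch of the deployment bound M = m * N: each of the N GPUs is assigned
# at most m containers loaded from that GPU's android container image.

def deploy_containers(num_gpus, max_per_gpu):
    """Map GPU id -> list of container names; total is at most m * N."""
    mapping = {}
    for gpu in range(1, num_gpus + 1):
        mapping[f"gpu{gpu}"] = [
            f"gpu{gpu}_container{c}" for c in range(1, max_per_gpu + 1)
        ]
    return mapping

deployment = deploy_containers(num_gpus=3, max_per_gpu=2)
total = sum(len(v) for v in deployment.values())  # M = m * N = 2 * 3 = 6
```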
In an example, for multiple GPUs, corresponding one or more android containers may be deployed for each GPU, or corresponding one or more android containers may be deployed for only a part of the GPUs, which is not specifically limited by the present disclosure.
After android containers are constructed for the hardware GPUs by the android container construction method, they can be applied to the android container system; for details of the process, refer to the detailed description of the android container system, which is not repeated here.
It can be understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; limited by space, details are not repeated in the present disclosure. Those skilled in the art can understand that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an android container construction apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any android container construction method/android container system provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method sections, which are not repeated here.
The embodiment of the present disclosure further provides an android container construction apparatus, the apparatus including: a loading module, configured to create, for one GPU, n android containers corresponding to the GPU by loading one android container image n times, where n is less than or equal to m, m is the maximum number of android containers that the GPU can support, and both n and m are positive integers.
In one possible implementation, the loading module is further configured to: in the case that a plurality of GPUs exist and the number of GPUs is N, load each of the N android container images multiple times to create a plurality of android containers corresponding to the N GPUs, where the maximum number of the plurality of android containers is M = m × N, m is the maximum number of android containers that each GPU can support, and N, m, and M are all positive integers.
The method has a specific technical relationship with the internal structure of the computer system and can solve the technical problem of how to improve hardware computing efficiency or execution effect (including reducing the amount of data stored, reducing the amount of data transmitted, increasing hardware processing speed, and the like), thereby obtaining a technical effect of improving the internal performance of the computer system in conformity with the laws of nature.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the computer program instructions, when executed by a processor, run the above system and/or implement the above method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to operate the above-described system and/or to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device runs the above system and/or performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure. Referring to fig. 6, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
Referring to fig. 6, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or an identifier, a marker, or a sand table, a display area, a display item, etc. associated with an object, or a venue. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. Referring to fig. 7, the electronic device 1900 may be provided as a server or a terminal device. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs the user of the personal-information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution involves sensitive personal information, the product obtains the individual's separate consent before processing it and additionally meets the "explicit consent" requirement. For example, a clear and prominent sign is set at a personal-information collection device such as a camera to inform people that they are entering a personal-information collection range and that personal information will be collected; if a person voluntarily enters the collection range, that person is deemed to consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, with the processing rules announced via a prominent sign or notice, personal authorization is obtained through a pop-up message or by asking the person to upload his or her personal information. The personal-information processing rules may include information such as the personal-information processor, the purpose of processing, the processing method, and the types of personal information processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

1. An android container system, for use in a server, comprising: a plurality of android containers, a plurality of graphics processing units (GPUs), and a GPU driver;
the GPU driver comprises: a user-side GPU driver and a server-side GPU driver;
for any one of the android containers, the android container generates a graphics processing instruction based on the corresponding user-side GPU driver, and sends the graphics processing instruction to the server;
and the server calls the corresponding GPU to process the graphics processing instruction based on the server-side GPU driver, and returns the instruction processing result obtained after processing to the android container.
2. The system according to claim 1, wherein, for any one of the GPUs, the GPU corresponds to n Android containers, where n is less than or equal to m, m is the maximum number of Android containers that the GPU can support, and n and m are both positive integers.
3. The system according to claim 1, wherein, where the number of the plurality of GPUs is N, the maximum number of the plurality of Android containers is M = m × N, where m is the maximum number of Android containers that each GPU can support, and N, m, and M are all positive integers.
4. The system according to any one of claims 1 to 3, wherein, for any one of the Android containers, the Android container comprises: an Android system and the user-side GPU driver;
the user-side GPU driver comprises: a first driver interface and a second driver interface, wherein the first driver interface interfaces with an application programming interface (API) of the Android system, and the second driver interface interfaces with the server-side GPU driver.
5. The system of any one of claims 1 to 4, further comprising: a first sending module and a second sending module;
the first sending module is configured to send the graphics processing instructions issued by the Android containers to the server in the form of an instruction queue, ordered by sending time;
the plurality of GPUs execute the graphics processing instructions in the instruction queue to obtain a result queue comprising a plurality of instruction processing results, wherein the order of the instruction processing results in the result queue corresponds to the order of the graphics processing instructions in the instruction queue;
and the second sending module is configured to send the instruction processing results to the corresponding Android containers in sequence.
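The queue ordering of claim 5 can be illustrated with a minimal sketch: instructions enter a FIFO queue in sending order, results come out in the same order, and each result is routed back to the container that issued the instruction. The container IDs and the `"-result"` suffix are illustrative placeholders.

```python
# Sketch of claim 5: a FIFO instruction queue whose result queue preserves
# the original sending order, so results can be matched back to containers.
from collections import deque

# First sending module: instructions queued in order of sending time.
instr_queue = deque()
for container_id, op in [("c1", "render"), ("c2", "encode"), ("c1", "encode")]:
    instr_queue.append((container_id, op))

# GPUs drain the instruction queue; the result queue mirrors its order.
result_queue = deque()
while instr_queue:
    container_id, op = instr_queue.popleft()
    result_queue.append((container_id, f"{op}-result"))

# Second sending module: deliver results back to containers in sequence.
delivered = {}
for container_id, res in result_queue:
    delivered.setdefault(container_id, []).append(res)
```

Because both queues are FIFO, each container receives its own results in the order it submitted the corresponding instructions, with no per-instruction correlation IDs needed in this simplified model.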
6. The system according to any one of claims 2 to 5, wherein the server, based on the server-side GPU driver, invoking the corresponding GPU to process the graphics processing instruction comprises:
for the graphics processing instruction sent by any one of the Android containers, the server sends, based on the server-side GPU driver, the graphics processing instruction to the GPU corresponding to that Android container.
7. The system of claim 6, further comprising: a scheduling module;
the scheduling module is configured to obtain the current operating load of each GPU;
the scheduling module is further configured to, for a first GPU of the plurality of GPUs, when the current operating load of the first GPU exceeds a preset threshold, schedule the unprocessed graphics processing instructions corresponding to the first GPU to a second GPU of the plurality of GPUs, wherein the current operating load of the second GPU does not exceed the preset threshold.
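The rescheduling behaviour of claim 7 can be sketched as below. The threshold value, the load representation (a 0–1 fraction), and the choice of the least-loaded eligible GPU as the target are all assumptions; the claim only requires that the target GPU's load not exceed the preset threshold.

```python
# Sketch of claim 7: move unprocessed instructions off any GPU whose current
# operating load exceeds a preset threshold, onto a GPU below the threshold.
THRESHOLD = 0.8  # assumed preset threshold (fraction of capacity)

def reschedule(pending, loads, threshold=THRESHOLD):
    """pending: gpu_id -> list of unprocessed instructions.
    loads: gpu_id -> current operating load. Returns {overloaded: target}."""
    moved = {}
    for gpu_id, instructions in list(pending.items()):
        if loads[gpu_id] > threshold and instructions:
            # Pick the least-loaded GPU under the threshold as the target.
            candidates = [g for g in loads if loads[g] <= threshold]
            if not candidates:
                continue  # nowhere to move the work
            target = min(candidates, key=lambda g: loads[g])
            pending[target].extend(instructions)
            pending[gpu_id] = []
            moved[gpu_id] = target
    return moved

loads = {0: 0.95, 1: 0.30}
pending = {0: ["draw-frame", "encode-frame"], 1: []}
moved = reschedule(pending, loads)
```

A production scheduler would also account for the cost of migrating state between GPUs and would refresh the load figures continuously rather than reading them once.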
8. The system according to any one of claims 1 to 7, wherein the server is an ARM architecture server.
9. The system according to any one of claims 1 to 8, wherein the server-side GPU driver is installed in an operating system running on the server, wherein the operating system is a Linux system.
10. The system of any of claims 1 to 9, wherein the graphics processing instructions comprise: rendering instructions and encoding instructions.
11. An Android container construction method, characterized by comprising:
for one GPU, creating n Android containers corresponding to the GPU by loading one Android container image n times, wherein n is less than or equal to m, m is the maximum number of Android containers that the GPU can support, and both n and m are positive integers.
12. The method of claim 11, further comprising:
in the case that a plurality of GPUs exist and the number of the GPUs is N, loading the Android container image multiple times for each of the N GPUs to create the plurality of Android containers corresponding to the N GPUs, wherein the maximum number of the Android containers is M = m × N, m is the maximum number of Android containers that each GPU can support, and N, m, and M are all positive integers.
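The construction of claims 11 and 12 amounts to instantiating the same container image m times per GPU across N GPUs, giving at most M = m × N containers. The sketch below builds such a launch plan; the image tag and container naming scheme are illustrative, not specified by the patent.

```python
# Sketch of claims 11-12: plan m container instances of one image per GPU,
# across N GPUs, for a maximum of M = m * N Android containers.
def build_plan(n_gpus, per_gpu_max, image="android-container:latest"):
    plan = []
    for gpu in range(n_gpus):
        for k in range(per_gpu_max):
            plan.append({
                "name": f"android-{gpu}-{k}",  # illustrative naming scheme
                "gpu": gpu,                    # GPU this container is bound to
                "image": image,                # same image loaded repeatedly
            })
    return plan

# Example: N = 2 GPUs, each supporting at most m = 3 containers -> M = 6.
plan = build_plan(n_gpus=2, per_gpu_max=3)
```

Each entry in the plan would then be passed to the container runtime (e.g. a `docker run`-style invocation with the chosen GPU exposed to the container); the exact runtime command depends on the deployment and is not specified here.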
13. An Android container construction apparatus, characterized by comprising:
a loading module, configured to create, for one GPU, n Android containers corresponding to the GPU by loading one Android container image n times, wherein n is less than or equal to m, m is the maximum number of Android containers that the GPU can support, and both n and m are positive integers.
14. The apparatus of claim 13, wherein the loading module is further configured to:
in the case that a plurality of GPUs exist and the number of the GPUs is N, load the Android container image multiple times for each of the N GPUs to create the plurality of Android containers corresponding to the N GPUs, wherein the maximum number of the Android containers is M = m × N, m is the maximum number of Android containers that each GPU can support, and N, m, and M are all positive integers.
15. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to run the system of any one of claims 1 to 10 and/or to perform the method of any one of claims 11 to 12.
16. A computer-readable storage medium, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, run the system of any one of claims 1 to 10 and/or implement the method of any one of claims 11 to 12.
CN202211215483.8A 2022-09-30 2022-09-30 Android container system, android container construction method and device and electronic equipment Active CN115543535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211215483.8A CN115543535B (en) 2022-09-30 2022-09-30 Android container system, android container construction method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211215483.8A CN115543535B (en) 2022-09-30 2022-09-30 Android container system, android container construction method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN115543535A true CN115543535A (en) 2022-12-30
CN115543535B CN115543535B (en) 2024-04-09

Family

ID=84731623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211215483.8A Active CN115543535B (en) 2022-09-30 2022-09-30 Android container system, android container construction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115543535B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116466958A (en) * 2023-03-21 2023-07-21 摩尔线程智能科技(北京)有限责任公司 Construction method and device of Android container, electronic equipment and storage medium
CN117075984A (en) * 2023-10-17 2023-11-17 海马云(天津)信息技术有限公司 Module processing method, device, communication equipment and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110750282A (en) * 2019-10-14 2020-02-04 支付宝(杭州)信息技术有限公司 Method and device for running application program and GPU node
CN112995532A (en) * 2021-02-03 2021-06-18 上海哔哩哔哩科技有限公司 Video processing method and device
CN112999653A (en) * 2021-01-21 2021-06-22 腾讯科技(深圳)有限公司 Cloud game processing method, device, equipment and medium
CN114237826A (en) * 2021-12-21 2022-03-25 北京蔚领时代科技有限公司 High-speed rendering method and device for android container
CN114253671A (en) * 2021-12-21 2022-03-29 北京蔚领时代科技有限公司 GPU pooling method of Android container
WO2022178076A1 (en) * 2021-02-22 2022-08-25 Activevideo Networks, Inc. Systems and methods of alternative networked application services
CN115018693A (en) * 2022-05-31 2022-09-06 西安万像电子科技有限公司 Docker image acceleration method and system based on software-defined graphics processor


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116466958A (en) * 2023-03-21 2023-07-21 摩尔线程智能科技(北京)有限责任公司 Construction method and device of Android container, electronic equipment and storage medium
CN116466958B (en) * 2023-03-21 2023-11-21 摩尔线程智能科技(北京)有限责任公司 Construction method and device of Android container, electronic equipment and storage medium
CN117075984A (en) * 2023-10-17 2023-11-17 海马云(天津)信息技术有限公司 Module processing method, device, communication equipment and computer readable storage medium
CN117075984B (en) * 2023-10-17 2023-12-26 海马云(天津)信息技术有限公司 Module processing method, device, communication equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN115543535B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN115543535B (en) Android container system, android container construction method and device and electronic equipment
EP3989177A1 (en) Method for controlling multiple virtual characters, device, apparatus, and storage medium
US11790612B2 (en) Information display method and device, terminal, and storage medium
CN112785672B (en) Image processing method and device, electronic equipment and storage medium
US20220206581A1 (en) Communication interface with haptic feedback response
US11934814B2 (en) Application porting method and apparatus, device, and medium
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
US11989348B2 (en) Media content items with haptic feedback augmentations
US20220206584A1 (en) Communication interface with haptic feedback response
CN111401230A (en) Attitude estimation method and apparatus, electronic device, and storage medium
EP4268060A1 (en) Recentering ar/vr content on an eyewear device
CN110968362B (en) Application running method, device and storage medium
EP4315005A1 (en) Interface with haptic and audio feedback response
WO2022212175A1 (en) Interface with haptic and audio feedback response
WO2022147449A1 (en) Electronic communication interface with haptic feedback response
CN114035902A (en) Application program development platform and method, electronic device and storage medium
CN113379896A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN116708696B (en) Video processing method and electronic equipment
CN114638949A (en) Virtual object display method and device, electronic equipment and storage medium
CN112437090B (en) Resource loading method and device, electronic equipment and storage medium
CN114356529A (en) Image processing method and device, electronic equipment and storage medium
CN115272151A (en) Image processing method, device, equipment and storage medium
CN114020264A (en) Operator processing method and device, electronic equipment and storage medium
CN111754635A (en) Texture fusion method and device, electronic equipment and storage medium
CN115589488B (en) Video transcoding system, method, GPU, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant