CN115599529B - Edge cloud function computing system and method - Google Patents

Edge cloud function computing system and method

Info

Publication number
CN115599529B
CN115599529B (Application CN202211428947.3A)
Authority
CN
China
Prior art keywords
function
edge cloud
function computing
program
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211428947.3A
Other languages
Chinese (zh)
Other versions
CN115599529A (en)
Inventor
张洪江
任庚
杨敬宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202211428947.3A
Publication of CN115599529A
Application granted
Publication of CN115599529B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Stored Programmes (AREA)

Abstract

The present disclosure relates to an edge cloud function computing system and method. The system comprises a plurality of function computing modules deployed on a plurality of working machines in a plurality of edge cloud function computing nodes, and a scheduling system configured to receive requests for function computing tasks and distribute each received function computing task to one of the plurality of function computing modules for execution according to at least the memory usage status of the plurality of working machines. By scheduling function computing tasks with the memory usage status of the working machines taken into account, migration and cold starts can be reduced.

Description

Edge cloud function computing system and method
Technical Field
The present disclosure relates to the field of edge cloud function computing, and in particular, to an architecture of an edge cloud function computing system.
Background
With the development of cloud computing, infrastructure cloud services have flourished and given rise to the Function-as-a-Service (FaaS) cloud service model. However, most function computing services are provided in a centralized-cluster mode, which is not ideal for the many lightweight function services that consume few resources but require a fast response.
The edge cloud, by contrast, is closer to users and has a large number of small, distributed node resources. It is therefore desirable to provide function computing services on edge cloud nodes by fully utilizing the proximity advantage and computing resources of these distributed edge nodes.
Disclosure of Invention
One technical problem to be solved by the present disclosure is to provide an edge cloud function computing system capable of providing a function computing service by making full use of the computing power of edge nodes.
According to a first aspect of the present disclosure, there is provided an edge cloud function computing system, including: a plurality of function computing modules deployed on a plurality of working machines in a plurality of edge cloud function computing nodes; and a scheduling system configured to receive a request for a function computing task and distribute the received function computing task to one of the plurality of function computing modules for execution according to at least the memory usage status of the plurality of working machines.
Optionally, the scheduling system includes a node scheduling device, a scheduler that schedules a plurality of work machines in the edge cloud function computing node, and a control module that is deployed on the work machines and schedules a plurality of function computing modules; the node scheduling device is configured to receive a request for a function computing task and distribute the received function computing task to one of the edge cloud function computing nodes to execute according to at least the load conditions of the edge cloud function computing nodes; the scheduler is configured to allocate the function computing task to one of a plurality of working machines in the allocated edge cloud function computing node to execute according to at least the memory usage conditions of the working machines; and the control module is configured to distribute the function calculation task to one of the plurality of function calculation modules to be executed at least according to the memory use conditions of the plurality of function calculation modules in the distributed working machine.
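The three-level dispatch chain described above (node scheduling device, then work machine scheduler, then control module) can be sketched roughly as follows; the selection heuristics and field names (load, freeMemory, retainedPrograms) are illustrative assumptions, not the patented implementation:

```javascript
// Level 1: node scheduling device picks an edge node by load.
function pickNode(nodes) {
  return nodes.reduce((best, n) => (n.load < best.load ? n : best));
}

// Level 2: work-machine scheduler prefers a machine already retaining
// the task's program in memory (warm), else the one with most free memory.
function pickWorkMachine(machines, programId) {
  const warm = machines.find((m) => m.retainedPrograms.includes(programId));
  return warm ?? machines.reduce((b, m) => (m.freeMemory > b.freeMemory ? m : b));
}

// Level 3: control module applies the same warm-first rule to modules.
function pickModule(modules, programId) {
  const warm = modules.find((m) => m.loadedPrograms.includes(programId));
  return warm ?? modules.reduce((b, m) => (m.freeMemory > b.freeMemory ? m : b));
}

// Chaining the three levels for one function computing task:
function schedule(nodes, programId) {
  const node = pickNode(nodes);
  const machine = pickWorkMachine(node.machines, programId);
  const mod = pickModule(machine.modules, programId);
  return { node, machine, mod };
}
```

A warm hit at levels 2 and 3 is what lets the system avoid migrating the program and paying a cold start.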
Optionally, the node scheduling apparatus includes a resource consumption evaluation module configured to evaluate at least an amount of memory to be consumed by the function computation task. The node scheduling device further allocates the function calculation task according to the memory amount evaluated by the resource consumption evaluation module.
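One way such a resource consumption evaluation module might estimate a task's memory footprint is from historical executions of the same program, for example an exponential moving average of observed peak memory. This model is purely an illustrative assumption; the disclosure does not specify the evaluation method:

```javascript
// Hypothetical memory-consumption estimator for the resource consumption
// evaluation module. The EMA model and defaults are assumptions.
class MemoryEstimator {
  constructor(alpha = 0.3, defaultEstimate = 128) {
    this.alpha = alpha;                     // EMA smoothing factor
    this.defaultEstimate = defaultEstimate; // MB, for never-seen programs
    this.estimates = new Map();             // programId -> estimated peak MB
  }

  // Called after each execution with the observed peak memory (MB).
  record(programId, observedPeakMb) {
    const prev = this.estimates.get(programId) ?? observedPeakMb;
    const next = this.alpha * observedPeakMb + (1 - this.alpha) * prev;
    this.estimates.set(programId, next);
  }

  // Estimate consulted by the node scheduling device before dispatching.
  estimate(programId) {
    return this.estimates.get(programId) ?? this.defaultEstimate;
  }
}
```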
Optionally, the scheduler collects memory usage conditions of the working machines from the plurality of working machines, respectively, the memory usage conditions including information of programs retained in memories of the working machines. The scheduler assigns the function calculation task to one of the work machines in a case where it is determined that a program that the function calculation task needs to use is being reserved in a memory of the work machine.
Optionally, the control module determines whether a program that the function calculation task needs to use is being reserved in a memory of one of the function calculation modules, and allocates the function calculation task to the function calculation module in a case where the program that the function calculation task needs to use is being reserved in the memory of one of the function calculation modules.
Optionally, the function computing task is implemented using a program deployed by a tenant on the edge cloud function computing system, at least one working machine runs the program of the function computing task using a program interpreter, and the function computing module is a thread of the program interpreter. The function computing module creates one or more program virtual machines, each of which loads one program. The function computing module executes the function computing task by running the corresponding program in its program virtual machine.
Optionally, the control module is configured to receive a CPU occupation time preset by a tenant for the function computation task, and terminate the function computation task when a time for the function computation task to use the CPU exceeds the CPU occupation time.
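A minimal sketch of CPU-budget enforcement, assuming Node.js's `process.cpuUsage()` for measurement. Real enforcement would need to preempt the running program; this sketch only detects an overrun after a synchronous task completes:

```javascript
// Hedged sketch: flag a task whose CPU time exceeds the tenant-preset
// budget (microseconds). In the described system the task would be
// terminated; here the overrun is merely reported.
function runWithCpuBudget(task, budgetMicros) {
  const start = process.cpuUsage();
  const result = task();
  const used = process.cpuUsage(start); // CPU-time delta since start
  const totalMicros = used.user + used.system;
  return { result, terminated: totalMicros > budgetMicros, cpuMicros: totalMicros };
}
```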
Optionally, the control module is configured to adjust the number of program virtual machines loading a program according to the number of requests of function computation tasks currently performed on the program.
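The virtual-machine-count adjustment might look like the following heuristic, where the one-VM-per-K-in-flight-requests ratio and the pool bounds are illustrative assumptions:

```javascript
// Size the pool of program virtual machines for one program from its
// current in-flight request count. Ratio and bounds are assumptions.
function targetVmCount(inFlightRequests, { requestsPerVm = 10, min = 1, max = 8 } = {}) {
  const wanted = Math.ceil(inFlightRequests / requestsPerVm);
  return Math.min(max, Math.max(min, wanted));
}
```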
Optionally, the control module is configured to set a memory sandbox for each program virtual machine, and/or the control module is configured to adaptively and dynamically adjust the memory garbage collection policy and the memory management policy according to the real-time memory status of the working machine and each program virtual machine.
Optionally, at least one of the edge cloud function computing nodes further includes a code cache, where the code cache stores code of a program used by a function computing task executed by the edge cloud function computing node.
Optionally, at least one of the edge cloud function computing nodes further comprises an edge memory. The edge memory stores an execution state of a function computing task executed by the edge cloud function computing node, and provides the execution state to a subsequent function computing task in response to a requirement of the subsequent function computing task.
Optionally, the edge cloud function computing system further comprises a programming module providing a WEB editor or CLI based programming interface and a debugging means, wherein the programming module is configured to allow the tenant to publish programs to the edge cloud function computing system through the WEB editor or CLI.
Optionally, the edge cloud function computing system further comprises a management center and a code storage device. Wherein the code storage stores code of programs issued by tenants. The management center is configured to: the method comprises the steps of storing tenant information and information of a program issued by the tenant, informing an edge cloud function computing node when code or information of the program of the tenant is updated, and providing the code and the information of the program to be run by the edge cloud function computing node to the edge cloud function computing node in response to the requirement of the edge cloud function computing node.
Optionally, at least one of the edge cloud function computing nodes further includes a CDN cache device, the node scheduling device further includes a CDN scheduling device, and the tenant deploys a program, for example a JavaScript program, on the edge cloud function computing system. The function computing task comprises performing page rendering of a webpage using the program, and the request for the function computing task is an access request for the webpage initiated by a netizen. The function computing module executing the function computing task obtains the resources required by the webpage either directly from the webpage's origin site or through the CDN cache device acting as a proxy.
According to a second aspect of the present disclosure, there is provided a method of using an edge cloud function computing system, comprising: receiving a request to perform a function computation task using the edge cloud function computing system, wherein the edge cloud function computing system comprises a plurality of function computation modules deployed on a plurality of work machines in a plurality of edge cloud function computing nodes; and distributing the received function calculation tasks to one of the function calculation modules for execution at least according to the memory use conditions of the plurality of working machines.
Optionally, the step of assigning the received function calculation task to one of the plurality of function calculation modules for execution includes: distributing the received function computing task to one of the plurality of edge cloud function computing nodes for execution at least according to the load conditions of the plurality of edge cloud function computing nodes; distributing the function computing task to one of a plurality of working machines to be executed at least according to the memory use conditions of the working machines in the distributed edge cloud function computing node; and distributing the function calculation task to one of the function calculation modules to be executed at least according to the memory use conditions of the function calculation modules in the distributed working machine.
Optionally, the step of allocating the received function computing task to one of the plurality of edge cloud function computing nodes according to at least the load condition of the plurality of edge cloud function computing nodes includes: evaluating at least an amount of memory to be consumed by the function calculation task; and distributing the function computing tasks according to at least the evaluated memory amount and the load conditions of the plurality of edge cloud function computing nodes.
Optionally, the step of allocating the function computing task to one of the plurality of working machines for execution according to at least the memory usage status of the plurality of working machines in the allocated edge cloud function computing node includes: collecting memory use conditions of the working machines from the plurality of working machines respectively, wherein the memory use conditions comprise information of programs reserved in memories of the working machines; and assigning the function calculation task to one of the work machines in a case where it is determined that the program that the function calculation task needs to use is being reserved in the memory of the work machine.
Optionally, the step of allocating the function computing task to one of the plurality of function computing modules in the allocated work machine for execution according to at least the memory usage status of the plurality of function computing modules includes: determining whether a program required to be used by the function calculation task is being reserved in a memory of one of the function calculation modules; and distributing the function calculation tasks to one of the function calculation modules in the case where a program that the function calculation tasks need to use is being reserved in a memory of the function calculation module.
According to a third aspect of the present disclosure, there is provided a computing device comprising: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of the second aspect as described above.
According to a fourth aspect of the present disclosure, there is provided a computer program product comprising executable code which, when executed by a processor of an electronic device, causes the processor to perform the method according to the second aspect as described above.
According to a fifth aspect of the present disclosure, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method of the second aspect as described above.
Therefore, the present disclosure provides an architecture for an edge cloud function computing system that can reduce migration and cold starts by scheduling function computing tasks in consideration of the memory usage status of a plurality of working machines.
In addition, in some embodiments, global and local resource allocation across edge cloud nodes is advantageously achieved through a hierarchical scheduling model.
In addition, in some embodiments, the edge cloud function computing system of the present disclosure may employ program virtual machines of a program interpreter to achieve lightweight isolation between tenants, thereby reducing the resources occupied by each tenant and shortening cold start time.
In addition, in some embodiments, the edge cloud function computing system of the present disclosure may further provide a WEB editor/CLI-based programming interface and a debugging means for tenants, and/or may provide an API interoperating with CDN devices and other edge components in the edge node, thereby improving response speed of website access and/or enhancing an ability to complete complex service logic at the edge node.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
Fig. 1 shows a schematic block diagram of an edge cloud function computing system according to an embodiment of the present disclosure, and shows one exemplary usage scenario of the edge cloud function computing system.
Fig. 2 shows a schematic flow diagram of a method of using an edge cloud function computing system according to one embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a computing device that can be used to implement the edge cloud function computing method according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As described above, the present disclosure provides an edge cloud function computing system that can fully utilize the computing power of edge nodes to provide users with lower-latency function computing services, for example front-end page processing such as page rendering.
For example, in the early days of the internet, whether a user accessed dynamic or static resources, most pages were rendered on the server side, that is, early server-side rendering. As front-end technology developed, the complexity of pages to be displayed increased, the design concept of separating front end and back end became widespread, and client-side rendering emerged. However, differences among terminals cause the client-rendered experience to vary greatly across devices, so new demand scenarios for server-side rendering have appeared. In this situation, edge rendering implemented on the edge cloud becomes a good choice: it is close to the client, yet rendering can still be completed on the server side to improve user experience. The edge cloud function computing service of the present disclosure is particularly suitable for edge rendering.
In addition, in some embodiments, the edge cloud function computing system of the present disclosure may be used with a CDN (Content Delivery Network) to further improve the response speed of user access. For example, the edge cloud function computing service of the present disclosure may be deployed on edge nodes where the CDN is deployed; that is, a CDN edge node may double as an edge cloud function computing node of the present disclosure, and the CDN scheduling device may double as the node scheduling device of the present disclosure.
The edge cloud function computing system of the present disclosure is described in detail below with reference to fig. 1 by taking a function computing platform built on the CDN edge cloud infrastructure as an example. However, it should be understood by those skilled in the art that the present invention is not limited thereto, and the edge cloud function computing system of the present disclosure may be built on other existing edge cloud infrastructures, or may be specifically built for the edge cloud function computing system of the present disclosure.
The edge cloud function computing system of the present disclosure includes a plurality of edge cloud function computing nodes and a node scheduling device, but fig. 1 shows a structural block diagram of only one edge cloud function computing node 100 as an example. Those skilled in the art should understand that the edge cloud function computing nodes of the present disclosure are only edge nodes deployed with corresponding edge cloud function computing services, and the structures of these edge nodes are not necessarily the same as the nodes 100 of fig. 1, nor are they necessarily the same as each other, but may be adjusted according to actual situations. Depending on the actual application, in some embodiments, some edge cloud function computing nodes may also include only one working machine for providing function computing services, without requiring a working machine scheduler.
Fig. 1 shows that an edge cloud function compute node 100 includes a plurality (N, N being a positive integer greater than 1) of work machines 110 and a scheduler 120, which may also be referred to as a "work machine scheduler" in the context of the present disclosure, that schedules the plurality of work machines 110. As is well known to those skilled in the art, a work machine refers to a computer that participates in processing in a system, and in the context of this disclosure, may be a computer that provides function computation services in a system.
Fig. 1 also shows that one of the working machines 110 includes a plurality of (M, M being a positive integer greater than 1) function computing modules 112 and a control module 111. The function computing module 112 is an execution unit in the working machine 110 used for executing function computing tasks; it may be a hardware execution unit, a software execution unit, or, in some embodiments, an execution unit combining software and hardware. In one specific example, the function computing module 112 may be a thread running on the working machine.
Those skilled in the art will understand that the N working machines are not required to have the same structure as the working machine 110 shown in fig. 1; they may be physical machines in the node on which the corresponding edge cloud function computing service is deployed, and in some cases other working machines may include only one function computing module and no control module. Fig. 1 also shows a gateway 130, a code cache 140, an edge memory 150, and a CDN cache device 160 connected to the working machines 110 within the node 100 according to some embodiments. Each of these devices/modules will be described in detail later with reference to practical application examples.
As shown in fig. 1, in some embodiments, the edge cloud function computing system of the present disclosure may further include a central node for managing the entire system, i.e., an edge cloud function computing management central node 300, which includes a programming module 310, a program management module 320, a management center 330, a KV (Key-Value) store 340, an edge node interface 350, and a code storage 360. The edge cloud function computing system of the present disclosure can thus integrate core capabilities such as a programming/debugging interface, program management, an Open Application Programming Interface (OpenAPI), cache management, scheduling at all levels including global resource scheduling, and the core function computing service itself, forming a complete platform service.
An example of the edge cloud function computing system of the present disclosure is explained in detail below with reference to fig. 1, using the end-to-end flow in which a tenant deploys a webpage program on the system and a netizen accesses the tenant's webpage.
1. Tenant deployment program
The programming module 310 may provide a WEB editor or CLI (Command-Line Interface) based programming interface and debugging means for the tenant. The WEB editor here may use, for example, the editor function of a WEB IDE (WEB Integrated Development Environment). Tenants can write and submit code (e.g., JavaScript code) directly online in the WEB editor, or can submit written code through the CLI command-line tool. The CLI program provided by the programming module 310 may also provide a built-in debugger that supports syntax checking and can verify the code before the tenant uploads it. The CLI can call OpenAPI to implement various functions.
In some embodiments, the entry of the program issued by the tenant is a function for listening to FETCH events, and rendering and other processing of the web page content are implemented in the function.
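A minimal tenant program in this style might look like the following service-worker-like sketch. The `addEventListener('fetch')`/`Response` API surface is an assumption modeled on similar edge FaaS platforms (the disclosure does not specify the exact interface), and a tiny local harness is included only to make the sketch self-contained:

```javascript
// ---- Tiny local harness simulating the platform (illustrative only) ----
const fetchHandlers = [];
function addEventListener(type, handler) {
  if (type === 'fetch') fetchHandlers.push(handler);
}
class Response {
  constructor(body, init = {}) {
    this.body = body;
    this.headers = init.headers || {};
  }
}

// ---- Tenant program: the entry point listens for FETCH events ----
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

// Rendering and other processing of the webpage content happen here.
async function handleRequest(request) {
  const html = `<html><body><h1>Rendered at the edge for ${request.url}</h1></body></html>`;
  return new Response(html, {
    headers: { 'content-type': 'text/html; charset=utf-8' },
  });
}

// Dispatch one simulated request through the registered listeners.
function dispatchFetch(request) {
  let responsePromise;
  const event = { request, respondWith: (p) => { responsePromise = p; } };
  fetchHandlers.forEach((h) => h(event));
  return responsePromise;
}
```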
In addition, when publishing the program, the tenant may set a corresponding resource configuration for it, for example the CPU time occupied by a single execution of the program. The edge cloud function computing system can bill the tenant according to this resource configuration, the actual number of program executions, and so on. The tenant may also specify a CDN acceleration domain name associated with the program, so that the program can work with the CDN to provide faster webpage responses for customers.
Tenant-submitted program code may undergo a security scan as necessary by the program management module 320 before being stored to the code storage 360. The program management module 320 may be a set of APIs that perform create, read, update, and delete (CRUD) operations. After the program management module 320 confirms that the submission is legitimate, the program code is stored to the code storage 360 and the program-related information (including the configuration set by the tenant) is sent to the management center 330 for management.
The management center 330 may use the KV store 340 to store the tenant's information, information (including resource configuration) about programs published by the tenant, and the configuration of domain names (including CDN acceleration domain names) related to the tenant, and notifies the edge cloud function computing nodes when a tenant's program code or configuration is updated. For example, version management of code may be supported on the programming interface provided by the programming module 310: once the tenant uploads a new version of the code, the program's information is transmitted to the management center 330, which sends a notification to each edge cloud function computing node so that the node invalidates the old program; when a subsequent request arrives, the corresponding edge node fetches and executes the new program code.
The edge node interface 350 may be a set of communication interfaces for the edge node 100 to interact with the central node 300. The edge node 100 can obtain information such as program code and configuration of the tenant through the interface 350.
In some embodiments, after the tenant successfully publishes the program, an edge node may obtain the code and related information from the central node to deploy the program on demand, when the edge node actually needs to execute a function computing request for the program; alternatively, the tenant may choose to actively deploy the published program to some or all edge nodes in advance, further reducing cold start time and improving response speed.
Cold start refers to steps such as code downloading, instance creation (for example, virtual machine creation), and initialization that occur during startup when the function computing platform deploys and runs a function computing program for the first time.
Although only one central node 300 is shown in fig. 1, those skilled in the art will appreciate that the present invention is not so limited. In some embodiments, different area management center nodes may also be deployed in a plurality of areas, respectively.
In addition, it should be understood by those skilled in the art that although fig. 1 shows programming module 310, program management module 320, and management center 330, etc. deployed in the same management center node 300, the present invention is not limited thereto. In some embodiments, programming module 310 and program management module 320 may also be deployed in another node (either a central node or an edge node). In addition, although fig. 1 shows the code storage 360 located within the management center node 300, the present invention is not limited thereto. In some embodiments, storage outside of the management center node 300 (e.g., cloud storage) may also be used to store the code.
2. Netizen accessing webpage
In the case where the tenant has published the program and associated it with an acceleration domain name in the CDN as described above, when a netizen requests access to the tenant's webpage, execution of the program is triggered; that is, the netizen's access request corresponds to a request to execute the program's function computing task.
First, the netizen's request reaches the scheduling system of the CDN, which also doubles as the node scheduling apparatus 200 of the present disclosure. A conventional CDN scheduling system may schedule nodes according to the load of the entire system, for example based on each node's traffic, CPU usage, and so on, but generally does not need to consider a node's memory usage. However, the inventors of the present application have observed that the edge cloud function computing service is sensitive to memory resource consumption; if not well controlled, memory shortages can easily occur and affect the service. Therefore, to improve the precision of global resource scheduling, the function computing task to be executed can be evaluated, and a memory resource consumption model can be abstracted and used as a reference basis for scheduling. Based on this, the node scheduling apparatus 200 of the present disclosure may further include a resource consumption evaluation module 210 configured to evaluate at least the amount of memory to be consumed by the function computing task. The node scheduling apparatus 200 may then allocate the function computing task according to the memory amount estimated by the resource consumption evaluation module 210. The node scheduling apparatus 200 performs first-level scheduling and is responsible for global load balancing.
Then, according to the scheduling result of the node scheduling device 200, the user's request is resolved (or redirected) to the designated edge cloud function computing node 100. The request first reaches the gateway 130, which forwards it to an appropriate work machine 110 according to the policy computed by the work machine scheduler 120.
Specifically, a CDN edge node is typically equipped with a set of physical machines; a physical machine on which the edge cloud function computing service of the present disclosure is deployed becomes a work machine of the present disclosure. The work machine scheduler 120 is responsible for scheduling tasks among the work machines to achieve intra-node resource balancing. For example, different function computation tasks have different CPU and memory requirements, and different types of function computation tasks differ in how hot or cold (popular) they are. The work machine scheduler 120 may schedule according to the resource demand and popularity of the function computation tasks, so as to optimally match resources and demands across multiple work machines and, with limited resources, carry as many tasks as possible without causing congestion. The work machine scheduler 120 implements the second level of scheduling in the system.
In addition, the work machine scheduler 120 may also actively collect key metrics from each work machine (such as CPU, memory, or bandwidth usage) using a metrics collector, compute a new scheduling policy from the collected data and the historical scheduling policy, and then use the new policy to instruct the gateway how to distribute tasks to the work machines, so as to minimize migration and cold starts and accommodate as many tasks as possible under given resource conditions. The historical scheduling policy may be a conventional round-robin policy or the like.
Preferably, the work machine scheduler 120 may also collect the memory usage of each of the plurality of work machines 110, including information on which programs are currently retained in each work machine's memory. Based on this information, the work machine scheduler 120 may determine which work machine's memory retains the program needed by the currently requested function computation task, and allocate the function computation task to that work machine. This works because there are usually earlier requests for function computation tasks of the same program, and after loading the program and executing those tasks, a work machine normally keeps the program in memory for a certain period of time, so as to handle possible future requests for the same program and reduce cold starts. A retention time may be set for the program in memory, and the program is released if it has not been run to perform a computation task within that time. Of course, in some cases the work machine scheduler 120 may find that no work machine's memory retains the program; it may then designate a work machine according to the scheduling policy described in the preceding paragraph, and that work machine deploys the program to perform the task.
For example, before the current request for accessing a web page arrives from the user, there are usually earlier access requests for the same web page; after processing them, the work machine keeps the container/virtual machine running the web page program for a certain period of time, so that the many access requests for that page arriving in the same period can be processed quickly.
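The warm-routing preference described above — prefer a work machine that still retains the program in memory, otherwise fall back to the ordinary policy — can be sketched as follows. The data shapes and the fallback lambda are illustrative assumptions.

```python
# Minimal sketch (assumed names) of the second-level scheduling preference:
# route a task to a work machine whose memory still retains the needed
# program, and fall back to the normal policy otherwise.

def pick_worker(program_id, workers, fallback_policy):
    for w in workers:
        if program_id in w["retained_programs"]:
            return w                      # warm worker: avoids a cold start
    return fallback_policy(workers)       # e.g. round-robin or least-loaded

workers = [
    {"name": "wm-1", "retained_programs": {"tenant-a/page"}},
    {"name": "wm-2", "retained_programs": set()},
]
least_loaded = lambda ws: ws[-1]          # stand-in fallback policy
assert pick_worker("tenant-a/page", workers, least_loaded)["name"] == "wm-1"
assert pick_worker("tenant-b/api", workers, least_loaded)["name"] == "wm-2"
```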
Then, after the request reaches the work machine 110, it may be received by the control module 111 and distributed to a function computation module 112. The control module 111 implements the third level of scheduling in the overall system, achieving load balancing among the plurality of function computation modules 112. For example, the requested function computation task may be distributed according to the configuration of the tenant program the request relates to, the current resource utilization of each function computation module 112, and the like. Preferably, similar to the work machine scheduling described above, the control module 111 may also determine which function computation module 112's memory retains the requested web page program, and assign the function computation task to that function computation module 112.
In some embodiments, the program published by the tenant on the edge cloud function computing system is JavaScript code, and the work machine 110 runs the program using a program interpreter. The program interpreter may be, for example, the V8 engine, an open-source JavaScript and WebAssembly engine. For example, the work machine 110 creates a process that provides a runtime environment for the function computing service, and a function computation module 112 may be a thread within that process that employs the program interpreter (for example, there may be 4 threads, that is, 4 function computation modules). The control module 111 manages the threads and may further include a garbage collection (GC) module to control the garbage collection process. A thread of a function computation module 112 may create and run a program virtual machine loaded with the program corresponding to a task, so as to perform the function computation task. The program virtual machine is an instance of a virtual machine provided by the program interpreter; it is responsible for creating an execution environment for the JavaScript source code and manages components such as the stack, compilation, execution, and context. Where the program interpreter is the V8 engine, the program virtual machine may be, for example, a V8 Isolate. In some embodiments, one thread may create multiple program virtual machines, each loading the program code of a different tenant, and run these virtual machines in turn according to the requests, thereby serving multiple tenants while also achieving tenant isolation. Specifically, each program virtual machine has its own independent heap and independent GC, and program code written by different tenants runs in mutually isolated program virtual machines, thereby implementing tenant isolation.
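The process/thread/program-virtual-machine hierarchy described above can be sketched conceptually as follows. `ProgramVM` stands in for an interpreter-provided isolate (e.g. a V8 Isolate); the class names and methods are illustrative assumptions, not the disclosure's actual API.

```python
# Conceptual sketch: one function computation module (a thread) hosting
# multiple per-tenant program virtual machines, each with its own heap.

class ProgramVM:
    """One per-tenant virtual machine, loading a single tenant's program."""
    def __init__(self, tenant, code):
        self.tenant, self.code, self.heap = tenant, code, {}

    def run(self, request):
        # A real VM would compile and execute the tenant's JavaScript here.
        return f"{self.tenant} handled {request}"

class FunctionComputeThread:
    """A function computation module: one thread hosting many tenant VMs."""
    def __init__(self):
        self.vms = {}  # tenant isolation: one VM (own heap + GC) per tenant

    def handle(self, tenant, code, request):
        if tenant not in self.vms:          # cold start only on first use
            self.vms[tenant] = ProgramVM(tenant, code)
        return self.vms[tenant].run(request)

t = FunctionComputeThread()
t.handle("tenant-a", "render()", "GET /")
t.handle("tenant-b", "api()", "GET /v1")
assert len(t.vms) == 2  # two tenants, two isolated VMs in one thread
```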
Compared with prior-art schemes that use containers to isolate tenants, the present disclosure uses program virtual machines for tenant isolation: each tenant consumes far fewer resources and the cold start time is much shorter. Specifically, in the prior art tenant isolation is based on containers, for example by deploying Node.js (a JavaScript runtime environment based on the V8 engine) in a container to provide the function computing service; a single container may occupy several CPU cores and a gigabyte or more of memory, resource consumption is large, and cold starts are slow. Such a scheme is only suitable for the central cloud computing scenario. In the edge cloud distributed computing scenario, the traffic of a service may be scheduled and reallocated at any time, so startup performance and real-time resource allocation become extremely important. Therefore, the present disclosure implements a lightweight tenant isolation scheme using in-process program virtual machines, with low resource consumption (for example, memory occupancy of only tens to just over one hundred megabytes) and fast cold starts, meeting the requirements of the edge computing scenario. Moreover, the processes of the present disclosure consume very little when idle, so in some cases a process may be started in advance to wait for the arrival of computing service requests, further reducing process startup and virtual machine cold start time.
In addition, in some embodiments, the present disclosure also provides some improvements to the control and isolation of program virtual machines.
For example, as described previously, a tenant may set, when publishing a program, the CPU time a single execution of the program may occupy. The control module 111 may be further configured to receive the CPU occupation time preset by the tenant and terminate a function computation task whenever its CPU usage exceeds the set CPU occupation time.
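A hedged sketch of such a CPU-time cap follows. A real interpreter would interrupt execution from outside (e.g. via a watchdog); this simplified version merely checks the budget between work steps, and all names are assumptions.

```python
# Sketch: terminate a task once its CPU usage exceeds the tenant-configured
# budget. Uses process CPU time, not wall-clock time, matching the idea of
# a "CPU occupation time" limit.

import time

class CpuBudgetExceeded(Exception):
    pass

def run_with_cpu_budget(steps, budget_seconds):
    start = time.process_time()
    for step in steps:
        if time.process_time() - start > budget_seconds:
            raise CpuBudgetExceeded("task exceeded its CPU occupation time")
        step()

def busy(n):
    return lambda: sum(i * i for i in range(n))

try:
    run_with_cpu_budget([busy(200_000)] * 1000, budget_seconds=0.05)
    terminated = False
except CpuBudgetExceeded:
    terminated = True
print("terminated:", terminated)
```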
In addition, the control module 111 may be further configured to adjust the number of program virtual machines running a program according to the current number of function computation task requests for that program. For example, if more than ten access requests to one tenant's web page currently need to be processed and one program virtual machine supports at most 10 requests, an additional program virtual machine can be adaptively created to handle the requests; similarly, when access requests to the web page decrease, the number of program virtual machines running the program can be adaptively reduced. In this way, scaling within the node is performed automatically to meet tenant demand.
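The scaling rule above reduces to a simple ceiling computation, sketched below; the per-VM capacity of 10 is the illustrative figure from the text, not a fixed system parameter.

```python
# Size the number of program virtual machines to the current request count,
# assuming each VM serves at most `per_vm` concurrent requests.

import math

def target_vm_count(pending_requests, per_vm=10, min_vms=0):
    return max(min_vms, math.ceil(pending_requests / per_vm))

assert target_vm_count(11) == 2   # >10 requests: add a second VM
assert target_vm_count(10) == 1
assert target_vm_count(0) == 0    # scale back down when traffic drops
```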
In addition, since the default memory management of the program interpreter is not always friendly, the present disclosure may also improve memory and GC management as needed. For example, to better achieve tenant isolation, the control module 111 may be configured to set a memory sandbox for each program virtual machine. The control module 111 may be further configured to adaptively and dynamically adjust the memory garbage collection policy and the memory management policy according to the real-time memory status of the work machine 110 and of each program virtual machine.
In addition, in some embodiments, the work machine 110 may further provide a set of streaming HTML (HyperText Markup Language) parsers that can receive page data in real time and complete page rendering. Compared with conventional HTML parsing, the work machine 110 of the present disclosure has a smaller memory footprint and better performance, which is particularly advantageous when the function computation of the present disclosure serves front-end application scenarios (such as page rendering and other page processing).
In addition, the work machine 110 can provide two working modes for the function computing service to cooperate with the CDN. One is the CDN bypass mode, in which the work machine 110 does not look up web content in the CDN cache device 160 but goes back to the origin (source station) on its own, pulls the web content, processes the page, and returns the result to the user's client through the gateway 130. The other is the CDN proxy mode, in which, after the request reaches the work machine 110, the work machine 110 obtains the web content from the origin with the CDN as proxy, or finds the cached web content in the CDN cache device 160; after processing by the work machine 110, the content is returned to the client through the gateway 130. Note that fig. 1 uses a separate module 160 to represent the CDN cache device only to clearly indicate that a CDN cache device also exists in the edge node; it does not mean that the CDN cache device 160 must be a component separate from the work machines 110. It should be understood by those skilled in the art that the CDN cache device 160 may be deployed on the same work machine 110 as the function computation module 112, or on other work machines or other devices.
In addition to interacting with the CDN, the work machine 110 may also provide a set of APIs for communicating directly with the code cache 140 and the edge storage 150 (e.g., KV storage), to reduce cold start time and help tenants implement complex service logic.
In some embodiments, to reduce the time spent loading program code during a cold start, the edge cloud function computing node 100 may further include a code cache 140 for storing the code of programs used by function computation tasks executed on the edge node 100.
In some embodiments, the work machine 110 of the present disclosure may also utilize the edge storage 150 already present at the edge node to implement stateful services. The work machine 110 may store the execution state of an executed function computation task in the edge storage 150, and fetch the previous execution state from the edge storage 150 when a subsequent function computation task requires it. For example, a web page access request is typically stateless: the request is processed and then done, and subsequent requests have no relationship to previous processing. In some cases, however, a tenant may need to implement stateful interactions, where subsequent requests use the processing results of previous requests. In view of this, the edge storage 150 may be used to temporarily store the processing results of previous requests, so as to handle more complex service logic.
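The stateful pattern above can be illustrated with a toy KV interface. `EdgeKV` and the session-counter logic are stand-ins for the edge storage 150 and a tenant's stateful service logic, not the disclosure's actual API.

```python
# Sketch: the work machine persists a task's execution state into edge
# storage keyed by session, and a later request on the same session picks
# the state back up.

class EdgeKV:
    """Stand-in for the edge node's KV storage (edge storage 150)."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key, default=None):
        return self._data.get(key, default)

def handle_request(kv, session_id, increment):
    # Fetch the previous execution state, update it, and store it back.
    state = kv.get(("counter", session_id), 0)
    state += increment
    kv.put(("counter", session_id), state)
    return state

kv = EdgeKV()
handle_request(kv, "sess-1", 3)
assert handle_request(kv, "sess-1", 4) == 7   # later request sees prior state
assert handle_request(kv, "sess-2", 1) == 1   # sessions are independent
```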
Then, as described above, after the request reaches the function computation module 112, a program virtual machine loaded with the program corresponding to the task may be created and run to perform the function computation task. In this way, processing of the web page accessed by the user, such as page rendering, can be realized.
If a program virtual machine loaded with the program already exists in the function computation module 112, it can be run directly without being created.
If a program virtual machine needs to be created, the code cache 140 is triggered to load the program code. If the code is available in the code cache 140, it is returned directly; if the code is not in the code cache 140, the code cache 140 pulls the configuration information and code of the program from the central node through the edge node interface 350, caches them, and returns them to the function computation module 112. The function computation module 112 then creates a corresponding program virtual machine to load and run the program code.
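The code-loading flow above is a read-through cache, sketched below. `fetch_from_center` is a hypothetical stand-in for the pull from the central node via the edge node interface 350.

```python
# Sketch: the code cache serves program code locally when present, and
# otherwise pulls code and configuration from the central node and caches it.

class CodeCache:
    def __init__(self, fetch_from_center):
        self._cache = {}
        self._fetch = fetch_from_center
        self.misses = 0

    def load(self, program_id):
        if program_id not in self._cache:      # cache miss: go to the center
            self.misses += 1
            self._cache[program_id] = self._fetch(program_id)
        return self._cache[program_id]

# Toy "central node" mapping program IDs to (configuration, code) pairs.
center = {"tenant-a/page": ("config-v1", "function render() {}")}
cache = CodeCache(center.__getitem__)
cache.load("tenant-a/page")   # first load pulls from the central node
cache.load("tenant-a/page")   # second load is served from the cache
assert cache.misses == 1
```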
As mentioned above, a function computation task for web page processing may have two working modes: the CDN bypass mode and the CDN proxy mode. If the program is set to the CDN bypass mode, it obtains web page resources directly from the origin over the public network. If the program is configured in the CDN proxy mode, the web page resources are obtained through the CDN link; that is, if the CDN cache device 160 does not have the resource, the origin is accessed and the resource is cached following the logic of a conventional CDN, so the next request can hit directly in the CDN cache device 160. Then, function computation is performed using the obtained web page resources, completing the page processing requested by the user, such as page rendering.
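The two modes can be distinguished with a minimal fetch routine. The fetch functions here are placeholders: in bypass mode the work machine goes straight to the origin over the public network; in proxy mode it tries the CDN cache first and falls back to the origin through the CDN's normal caching logic.

```python
# Sketch of the CDN bypass vs. CDN proxy working modes described above.

def fetch_resource(mode, url, cdn_cache, fetch_origin):
    if mode == "bypass":
        return fetch_origin(url)             # skip the CDN entirely
    if mode == "proxy":
        if url in cdn_cache:
            return cdn_cache[url]            # CDN cache hit
        body = fetch_origin(url)             # miss: go to origin, then cache
        cdn_cache[url] = body
        return body
    raise ValueError(f"unknown mode: {mode}")

cdn_cache = {}
origin = lambda url: f"<html>{url}</html>"
fetch_resource("proxy", "/index", cdn_cache, origin)   # miss, then cached
assert "/index" in cdn_cache                           # next request hits the cache
assert fetch_resource("bypass", "/index", {}, origin) == "<html>/index</html>"
```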
In summary, the edge cloud function computing system of the present disclosure provides an edge serverless computing environment built on a distributed edge cloud infrastructure, allowing a tenant to write code directly in languages such as JavaScript on the platform and execute it on globally distributed edge cloud nodes. Because edge cloud nodes are closer to users, computing services with shorter latency can be provided. Tenants do not need to care about the background equipment running their service programs, resource configuration, and the like, but pay the platform according to actual resource usage, which can greatly improve service resource utilization and reduce tenant cost. The edge cloud function computing system can integrate core capabilities such as programming/debugging interfaces, program management, OpenAPI (open application programming interface), cache management, scheduling at all levels including global resource scheduling, and the core function computing service itself, to form a complete platform service.
In addition, the present disclosure provides an edge cloud function computing system capable of reducing migration and cold starts by scheduling function computation tasks in consideration of the memory usage of the plurality of work machines. Moreover, global and local resource allocation across the edge cloud nodes can be achieved through the hierarchical scheduling model. In some embodiments, the edge cloud function computing system of the present disclosure may employ a program virtual machine of a program interpreter (e.g., a V8 Isolate) to achieve lightweight isolation among tenants, thereby reducing the resources occupied by each tenant and reducing cold start time. In some embodiments, the edge cloud function computing system of the present disclosure may also provide APIs for the tenant to interoperate with CDN devices and other edge components in the edge node, thereby increasing the response speed of website access and/or enhancing the ability to complete complex service logic at the edge node.
Fig. 2 shows a schematic flow diagram of a method of using an edge cloud function computing system such as described above according to one embodiment of the present disclosure.
As shown in fig. 2, at step S210, a request to perform a function computation task using an edge cloud function computing system as discussed above is received, where the edge cloud function computing system includes a plurality of function computation modules deployed on a plurality of work machines in a plurality of edge cloud function computing nodes, such as shown in fig. 1.
At step S220, the received function calculation task is allocated to one of the function calculation modules for execution according to at least the memory usage status of the plurality of working machines.
In some embodiments, step S220 may include:
distributing the received function computing task to one of the plurality of edge cloud function computing nodes for execution at least according to the load conditions of the plurality of edge cloud function computing nodes;
distributing the function computing task to one of a plurality of working machines to be executed at least according to the memory use conditions of the working machines in the distributed edge cloud function computing node; and
distributing the function computation task to one of the function computation modules for execution at least according to the memory usage of the function computation modules in the allocated work machine.
For example, the step of allocating the received function computing task to one of the plurality of edge cloud function computing nodes according to at least the load condition of the plurality of edge cloud function computing nodes may include: evaluating at least the amount of memory to be consumed by the function calculation task; and distributing the function computing tasks according to at least the evaluated memory amount and the load conditions of the plurality of edge cloud function computing nodes.
For example, the step of allocating the function computing task to one of the plurality of work machines in the allocated edge cloud function computing node for execution at least according to the memory usage of the work machines may include: collecting the memory usage of each of the plurality of work machines, including information on the programs retained in each work machine's memory; and assigning the function computation task to one of the work machines when it is determined that the program the function computation task needs to use is retained in that work machine's memory.
For example, the step of allocating the function computation task to one of the function computation modules for execution at least according to the memory usage of the function computation modules in the allocated work machine may include: determining whether the program the function computation task needs to use is retained in the memory of one of the function computation modules; and assigning the function computation task to that function computation module when the program is retained in its memory.
By the method, migration and cold start can be reduced, and the response speed of the edge cloud function computing system is improved.
Fig. 3 is a schematic structural diagram of a computing device that can be used to implement the edge cloud function computing method according to an embodiment of the present invention.
Referring to fig. 3, computing device 300 includes memory 310 and processor 320.
Processor 320 may be a multi-core processor or may include multiple processors. In some embodiments, processor 320 may include a general-purpose host processor and one or more special coprocessors such as, for example, a Graphics Processor (GPU), a Digital Signal Processor (DSP), or the like. In some embodiments, processor 320 may be implemented using custom circuitry, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
The memory 310 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions for the processor 320 or other modules of the computer. The permanent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage device is a mass storage device (e.g., magnetic or optical disk, flash memory). In other embodiments, the permanent storage device may be a removable storage device (e.g., floppy disk, optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as a dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Further, the memory 310 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash, programmable read-only memory); magnetic disks and/or optical disks may also be used. In some embodiments, the memory 310 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini-SD card, micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 310 has stored thereon executable code that, when processed by the processor 320, causes the processor 320 to perform the methods described above.
The edge cloud function computing system and method according to the present invention have been described in detail above with reference to the accompanying drawings.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. An edge cloud function computing system, comprising:
a plurality of function computation modules deployed on at least one of the plurality of work machines in at least one of the plurality of edge cloud function computation nodes; and
the scheduling system is configured to receive requests for function calculation tasks and distribute the received function calculation tasks to one of the function calculation modules to be executed at least according to the memory use conditions of the plurality of working machines;
wherein,
the scheduling system comprises node scheduling equipment, a scheduler for scheduling a plurality of working machines in the edge cloud function computing node, and a control module which is deployed on the working machines and schedules a plurality of function computing modules;
the node scheduling device is configured to receive a request for a function computing task and distribute the received function computing task to one of the edge cloud function computing nodes to execute according to at least the load conditions of the edge cloud function computing nodes;
the scheduler is configured to allocate the function computing task to one of a plurality of working machines in the allocated edge cloud function computing node to execute according to at least the memory usage conditions of the working machines; and
the control module is configured to distribute the function calculation task to one of the plurality of function calculation modules to execute at least according to the memory use conditions of the plurality of function calculation modules in the distributed working machine.
2. The edge cloud function computing system of claim 1,
the node scheduling equipment comprises a resource consumption evaluation module;
the resource consumption evaluation module is configured to evaluate at least an amount of memory to be consumed by the function computing task; and
The node scheduling device further allocates the function calculation task according to the memory amount evaluated by the resource consumption evaluation module.
3. The edge cloud function computing system of claim 1,
the scheduler collects memory usage conditions of the working machines from the plurality of working machines, respectively, the memory usage conditions including information of programs retained in the memories of the working machines; and
The scheduler assigns the function calculation task to one of the work machines in a case where it is determined that a program that the function calculation task needs to use is being reserved in a memory of the work machine.
4. The edge cloud function computing system of claim 1,
the control module determines whether or not a program that the function calculation task needs to use is being reserved in a memory of one of the function calculation modules, and allocates the function calculation task to the function calculation module in a case where the program that the function calculation task needs to use is being reserved in the memory of one of the function calculation modules.
5. The edge cloud function computing system of claim 1,
the function computing task is realized by using a program which is deployed on the edge cloud function computing system by a tenant;
the at least one working machine adopts a program interpreter to run a program of the function calculation task;
the function calculation module is a thread of a program interpreter;
the function calculation module creates one or more program virtual machines, wherein each program virtual machine loads a program; and is provided with
The function calculation task is performed by the function calculation module running a program virtual machine that loads its corresponding program.
6. The edge cloud function computing system of claim 5,
the control module is configured to receive a CPU occupation time preset by a tenant for a function computing task, and to terminate the function computing task when the CPU time used by the function computing task exceeds the CPU occupation time.
7. The edge cloud function computing system of claim 5,
the control module is configured to adjust the number of program virtual machines loading a program according to the current number of function computation task requests for that program.
8. The edge cloud function computing system of claim 5,
the control module is configured to set a memory sandbox for each program virtual machine; and/or
the control module is configured to adaptively and dynamically adjust the memory garbage collection policy and the memory management policy according to the real-time memory status of the working machine and each of the program virtual machines.
9. The edge cloud function computing system of claim 1,
at least one of the edge cloud function compute nodes further comprises a code cache,
the code buffer stores codes of programs used by the function computing tasks executed by the edge cloud function computing nodes.
10. The edge cloud function computing system of claim 1,
at least one of the edge cloud function computing nodes further comprises an edge memory,
the edge memory stores the execution state of a function computing task executed by the edge cloud function computing node, and provides that execution state to a subsequent function computing task in response to the subsequent task's request.
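The edge memory of claim 10 is essentially a node-local key-value store that bridges state between a task and its successors. A minimal sketch, in which the key scheme and dict backing are illustrative assumptions:

```python
class EdgeMemory:
    """Node-local execution-state store: one function computing task
    saves its state; a later task on the same node retrieves it."""
    def __init__(self):
        self._states = {}

    def save(self, key, state):
        self._states[key] = state

    def load(self, key, default=None):
        return self._states.get(key, default)

# Usage: a first task leaves a counter behind; a follow-up task resumes it.
mem = EdgeMemory()
mem.save("tenant1/counter", {"count": 3})          # written by task 1
state = mem.load("tenant1/counter", default={"count": 0})  # read by task 2
state["count"] += 1
```

Because the state stays on the node, a follow-up task scheduled there avoids a round trip to a central store.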
11. The edge cloud function computing system of claim 1, further comprising a programming module that provides a WEB-editor- or CLI-based programming interface and debugging facilities, wherein,
the programming module is configured to allow a tenant to publish programs to the edge cloud function computing system through the WEB editor or CLI.
12. The edge cloud function computing system of claim 1, further comprising a management center and a code store, wherein,
the code store stores the code of programs published by tenants,
and wherein the management center is configured to:
saving information of the tenants and of the programs published by the tenants;
notifying the edge cloud function computing nodes when the code or information of a tenant's program is updated; and
in response to a request from an edge cloud function computing node, providing that node with the code and information of the programs it is to run.
13. The edge cloud function computing system of claim 1,
at least one edge cloud function computing node further comprises a CDN cache device;
the node scheduling equipment further comprises a CDN scheduling device;
a tenant deploys a program on the edge cloud function computing system;
the function computing task comprises performing page rendering of a webpage using the program, and the request for the function computing task is an access request for the webpage initiated by an end user; and
the function computing module executing the function computing task obtains the resources required by the webpage either directly from the webpage's origin server or, acting as a proxy, through the CDN cache device.
14. A method of using an edge cloud function computing system, comprising:
receiving a request to perform a function computation task using the edge cloud function computing system, wherein the edge cloud function computing system comprises a plurality of function computation modules deployed on at least one of a plurality of work machines in at least one of a plurality of edge cloud function computing nodes; and
distributing the received function computing task to one of the plurality of function computing modules for execution according to at least the memory usage of the plurality of working machines;
wherein,
the step of distributing the received function computing task to one of the plurality of function computing modules for execution comprises:
distributing the received function computing task to one of the edge cloud function computing nodes for execution according to at least the load conditions of the plurality of edge cloud function computing nodes;
within the allocated edge cloud function computing node, distributing the function computing task to one of the plurality of working machines for execution according to at least the memory usage of the working machines; and
within the allocated working machine, distributing the function computing task to one of the function computing modules for execution according to at least the memory usage of the function computing modules.
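The claimed method is a three-level cascade: node by load, working machine by memory usage, compute module by memory usage. A compact sketch of that cascade follows; the dict data model and the greedy `min()` selection are assumptions of this sketch, not the patent's scheduling algorithm:

```python
def dispatch(nodes):
    """Pick (node, working machine, compute module) for one task:
    lightest-loaded node, then lowest-memory worker, then lowest-memory
    module inside that worker."""
    node = min(nodes, key=lambda n: n["load"])
    worker = min(node["workers"], key=lambda w: w["mem_used"])
    module = min(worker["modules"], key=lambda m: m["mem_used"])
    return node["id"], worker["id"], module["id"]

# Usage: the lighter node n2, its less-loaded machine w2, and the
# emptiest module m3 are chosen.
nodes = [
    {"id": "n1", "load": 0.8, "workers": []},
    {"id": "n2", "load": 0.3, "workers": [
        {"id": "w1", "mem_used": 0.7,
         "modules": [{"id": "m1", "mem_used": 0.5}]},
        {"id": "w2", "mem_used": 0.4,
         "modules": [{"id": "m2", "mem_used": 0.6},
                     {"id": "m3", "mem_used": 0.2}]},
    ]},
]
chosen = dispatch(nodes)
```

Each level narrows the search to one container before descending, so the scheduler never has to rank every module in the system globally.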
15. The method of claim 14, wherein,
the step of distributing the received function computing task to one of the plurality of edge cloud function computing nodes according to at least the load conditions of the plurality of edge cloud function computing nodes comprises:
evaluating at least the amount of memory to be consumed by the function computing task; and
distributing the function computing task according to at least the evaluated memory amount and the load conditions of the plurality of edge cloud function computing nodes.
16. The method of claim 14, wherein,
the step of distributing the function computing task to one of the plurality of working machines in the allocated edge cloud function computing node for execution according to at least the memory usage of the working machines comprises:
collecting from the plurality of working machines their respective memory usage, including information on the programs resident in each working machine's memory; and
allocating the function computing task to a working machine when it is determined that the program required by the function computing task is resident in that working machine's memory.
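The preference in claim 16 is warm routing: send the task to a machine whose memory already holds the program, avoiding a cold start. A minimal sketch; the least-memory fallback when no machine is warm is an assumption of this sketch:

```python
def pick_worker(program, workers):
    """Prefer a working machine whose memory already holds `program`;
    otherwise fall back to the machine with the lowest memory usage."""
    warm = [w for w in workers if program in w["resident_programs"]]
    pool = warm or workers            # fall back to all machines if none warm
    return min(pool, key=lambda w: w["mem_used"])["id"]

# Usage: "render" is resident only on w2, so w2 wins despite its
# higher memory use; an unknown program falls back to the emptier w1.
workers = [
    {"id": "w1", "mem_used": 0.2, "resident_programs": {"auth"}},
    {"id": "w2", "mem_used": 0.6, "resident_programs": {"render", "auth"}},
]
```

The same warm-first rule applies one level down in claim 17, where the candidates are compute modules inside the chosen machine rather than machines inside the chosen node.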
17. The method of claim 14, wherein,
the step of distributing the function computing task to one of the plurality of function computing modules in the allocated working machine for execution according to at least the memory usage of the function computing modules comprises:
determining whether the program required by the function computing task is resident in the memory of one of the function computing modules; and
allocating the function computing task to a function computing module when the program required by the function computing task is resident in that module's memory.
18. A non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 14 to 17.
CN202211428947.3A 2022-11-15 2022-11-15 Edge cloud function computing system and method Active CN115599529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211428947.3A CN115599529B (en) 2022-11-15 2022-11-15 Edge cloud function computing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211428947.3A CN115599529B (en) 2022-11-15 2022-11-15 Edge cloud function computing system and method

Publications (2)

Publication Number Publication Date
CN115599529A CN115599529A (en) 2023-01-13
CN115599529B true CN115599529B (en) 2023-03-10

Family

ID=84853463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211428947.3A Active CN115599529B (en) 2022-11-15 2022-11-15 Edge cloud function computing system and method

Country Status (1)

Country Link
CN (1) CN115599529B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111475274A (en) * 2020-04-20 2020-07-31 北京邮电大学 Cloud collaborative multi-task scheduling method and device
CN114090251A (en) * 2021-11-23 2022-02-25 浪潮云信息技术股份公司 Method and system for adaptively and dynamically scheduling edge computing resources
CN114253710A (en) * 2021-11-22 2022-03-29 北京百度网讯科技有限公司 Processing method of computing request, intelligent terminal, cloud server, equipment and medium
CN115297171A (en) * 2022-07-08 2022-11-04 南京邮电大学 Edge calculation unloading method and system for cellular Internet of vehicles hierarchical decision

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766889B (en) * 2017-10-26 2021-06-04 浪潮集团有限公司 Cloud edge computing fused deep learning computing system and method
CN109240821B (en) * 2018-07-20 2022-01-14 北京航空航天大学 Distributed cross-domain collaborative computing and service system and method based on edge computing
CN112052092B (en) * 2020-09-07 2024-04-26 中山大学 Risk-aware edge computing task allocation method
US20220232739A1 (en) * 2021-01-21 2022-07-21 Nvidia Corporation Intelligent cold plate system with active and passive features for a datacenter cooling system
CN113254146A (en) * 2021-04-25 2021-08-13 西安电子科技大学 Cloud platform service trust value calculation, task scheduling and load balancing system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111475274A (en) * 2020-04-20 2020-07-31 北京邮电大学 Cloud collaborative multi-task scheduling method and device
CN114253710A (en) * 2021-11-22 2022-03-29 北京百度网讯科技有限公司 Processing method of computing request, intelligent terminal, cloud server, equipment and medium
CN114090251A (en) * 2021-11-23 2022-02-25 浪潮云信息技术股份公司 Method and system for adaptively and dynamically scheduling edge computing resources
CN115297171A (en) * 2022-07-08 2022-11-04 南京邮电大学 Edge calculation unloading method and system for cellular Internet of vehicles hierarchical decision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Luo Yuchong et al. Task Scheduling in Mobile Edge Computing with Stochastic Requests and M/M/1 Servers. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). 2019. *
Trusted collaborative service strategy modeling for edge computing; Le Guangxue et al.; Journal of Computer Research and Development (No. 05); full text *
Edge computing technologies for real-time video stream analytics; Yang Zheng et al.; Scientia Sinica Informationis; Vol. 52 (No. 1); full text *

Also Published As

Publication number Publication date
CN115599529A (en) 2023-01-13

Similar Documents

Publication Publication Date Title
Lin et al. A threshold-based dynamic resource allocation scheme for cloud computing
US9135060B2 (en) Method and apparatus for migrating task in multicore platform
US9256448B2 (en) Process grouping for improved cache and memory affinity
US8832174B2 (en) System and method for dynamic task migration on multiprocessor system
US11231955B1 (en) Dynamically reallocating memory in an on-demand code execution system
US11526434B1 (en) Network-level garbage collection in an on-demand code execution system
US20160306680A1 (en) Thread creation method, service request processing method, and related device
TWI604378B (en) Memory management model and interface for new applications
EP2472398B1 (en) Memory-aware scheduling for NUMA architectures
JP5980916B2 (en) Computer-implemented method and computer system
US20120011254A1 (en) Network-aware virtual machine migration in datacenters
CN103067425A (en) Creation method of virtual machine, management system of virtual machine and related equipment thereof
CN108900626B (en) Data storage method, device and system in cloud environment
US10824339B1 (en) Snapshot-based garbage collection in an on-demand code execution system
US9471387B2 (en) Scheduling in job execution
Suo et al. Tackling cold start of serverless applications by efficient and adaptive container runtime reusing
US8296552B2 (en) Dynamically migrating channels
CN114546587A (en) Capacity expansion and reduction method of online image recognition service and related device
CN115599529B (en) Edge cloud function computing system and method
JP2007328413A (en) Method for distributing load
CN110447019B (en) Memory allocation manager and method for managing memory allocation performed thereby
CN117632457A (en) Method and related device for scheduling accelerator
KR20220089447A (en) Method and Apparatus for Controlling Resource Sharing in Real-time Data Transmission System
Peng Gscheduler: Reducing mobile device energy consumption
CN117785484B (en) Shared Cache resource allocation method, system, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant