CN114866612A - Electric power micro-service unloading method and device - Google Patents

Electric power micro-service unloading method and device Download PDF

Info

Publication number
CN114866612A
CN114866612A CN202210332436.5A CN202210332436A CN114866612A CN 114866612 A CN114866612 A CN 114866612A CN 202210332436 A CN202210332436 A CN 202210332436A CN 114866612 A CN114866612 A CN 114866612A
Authority
CN
China
Prior art keywords
service
power
micro
microservice
unloading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210332436.5A
Other languages
Chinese (zh)
Other versions
CN114866612B (en
Inventor
郭屾
王鹏
白帅涛
张冀川
林佳颖
张明宇
张治明
谭传玉
秦四军
孙浩洋
姚志国
张永芳
吕琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202210332436.5A priority Critical patent/CN114866612B/en
Priority claimed from CN202210332436.5A external-priority patent/CN114866612B/en
Publication of CN114866612A publication Critical patent/CN114866612A/en
Application granted granted Critical
Publication of CN114866612B publication Critical patent/CN114866612B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44594Unloading
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y10/00Economic sectors
    • G16Y10/35Utilities, e.g. electricity, gas or water
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention relates to the technical field of power Internet of things, and particularly provides a power micro-service unloading method and device, which comprise the following steps: determining an unloading decision of electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model; offloading power microservices into containers of respective edge nodes for execution based on the offloading decisions; wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading. The technical scheme provided by the invention realizes the aim of minimizing the service execution time delay while ensuring the success of service scheduling.

Description

Electric power micro-service unloading method and device
Technical Field
The invention relates to the technical field of power internet of things, in particular to a power micro-service unloading method and device.
Background
With the rapid development of the power system, the mass access terminals lead to the explosive growth of heterogeneous multi-source data information, and the demand of power services is diversified and ecological. Application systems built with traditional monolithic architectures have been unable to cope with explosive traffic growth. How to construct a flexible and easily extensible service system becomes a problem to be considered urgently. Unlike a traditional monolithic solution, the microservice is an architectural solution for building applications, and can split an application into a plurality of core functions, each of which is called a service and can be built and deployed separately. And by combining the characteristics of rapid container deployment and rapid iteration, the micro-service can be directly deployed in the container for execution. Therefore, the invention introduces micro-services and containers, provides a new way for executing the power business, and is more flexible in deployment, expansion and migration.
In order to better meet the requirement of time delay sensitive service, the service request is quickly responded, an edge computing idea is introduced, and a server is deployed in an edge or terminal equipment to provide low-delay and efficient service. However, the traditional edge computing service offloading algorithm usually only considers the problem of allocation of computing resources and communication resources, and cannot handle the continuity problem of service execution when a large-scale concurrent service traffic request and an edge node fail. Therefore, how to construct an efficient and reliable power service scheduling decision model in the edge calculation is the key point considered by the invention. The invention designs a service unloading decision model based on the dependency relationship among the electric power service subtasks. Meanwhile, the problem of service rescheduling under the condition that the edge equipment or the communication link between the equipment fails is considered.
Service rescheduling often needs to quickly sense the state of the edge device, so that the problem of service execution failure caused by overlong rescheduling time delay is avoided. The problem can be well solved by deep reinforcement learning, interaction can be carried out between the perception capability of the deep learning and the environment, the available resources, the physical state, the connection relation between the devices and the communication link state of the edge devices can be obtained, and then the service unloading decision is made by the reinforcement learning, so that the requirement of minimizing the service execution delay can be met, and the problem of continuity during service execution can be solved.
In summary, it is worth to develop a container in the edge node to carry the micro-service, and solve the problem of the micro-service unloading and rescheduling decision model in the container by using a deep reinforcement learning algorithm. In order to solve the development situation of the prior art, the existing papers and patents are searched, compared and analyzed, and the following technical information with high relevance to the invention is screened out:
prior art scheme 1: a distribution method, a distribution device, equipment and a storage medium of a power grid edge calculation task of patent No. CN113268341A provide the distribution method, the distribution device, the distribution equipment and the storage medium of the power grid edge calculation task, wherein the distribution method of the power grid edge calculation task comprises the steps that a software defined network controller receives respective task information of a plurality of intelligent terminals; establishing an optimization model with the minimum total time delay of the intelligent terminals according to the task information of the intelligent terminals and the resource information of the edge computing server; generating task allocation information of the target intelligent terminal according to the optimization model with the minimum total time delay; the target intelligent terminal is any one of a plurality of intelligent terminals; sending the allocation information to the target intelligent terminal under the condition that the allocation information is processed by the target intelligent terminal; and under the condition that the distribution information is processed by the edge computing server, simultaneously sending the distribution information to the target intelligent terminal and the edge computing server. The invention can reduce the time delay of the task processing of the intelligent terminal.
Prior art scheme 2: patent number CN112764835A discloses an edge computing-based micro-service system and method for configuring sensing equipment of the internet of things for electric power, and the micro-service system includes an edge internet of things agent architecture, a cloud service center and a terminal device, wherein the edge internet of things agent architecture processes data of the terminal device, and performs computing task cooperative processing between the cloud service center and the terminal device according to priority of the data of the terminal device. The invention utilizes the advantages of low delay of edge data processing and distributed architecture of micro-service, and reduces the complexity of the existing Internet of things agent equipment for configuring service for heterogeneous equipment.
Prior art scheme 3: patent No. CN111885137A discloses "an edge container resource allocation method based on deep reinforcement learning", which includes Actor network, Critic network, and echo state network ESN, and effectively solves the problem of container resource allocation in the edge computing environment of delay sensitive application. The resource allocation method provided by the invention obtains the end-to-end time delay of the service flow s data packet on the container n by establishing an end-to-end time delay model on the basis of an M/D/1 queuing model, adopts a deep reinforcement learning-based model to solve the problem of edge container resource allocation, and improves the traditional A3C algorithm through an echo state network ESN to obtain a resource allocation method of the edge container for time delay sensitive application, namely an EC-A3C network, allocates resource allocation strategies At to different container clusters Sz and t in a resource pool z to solve the problem of container resource allocation of an edge computing environment, and the EC-A3C network adapts to various edge computing environments by changing the reward value rt of the end-to-end time delay.
The technical scheme 1 mainly considers that the state information of a plurality of intelligent terminals is acquired through a software defined network controller, and a task allocation model with the aim of minimizing time delay is established. The invention does not consider the dependency relationship between tasks, and meanwhile, the software defined controller has insufficient interaction real-time performance with the environment, and can not deal with the dynamically changing edge computing environment state.
Technical scheme 2 focuses on researching an interaction flow among modules in a micro service system, reduces service execution delay by using edge calculation, focuses on the whole execution flow of the power service, but does not describe the resource allocation process in the edge calculation in detail, does not consider the actual situation, selects a service execution place through a preset task priority judgment mode in a single memory, and has no flexibility.
Technical scheme 3 focuses on resource allocation of the container in the edge computing environment, the dependency relationship among tasks is not considered enough, and the resource flexibility of the container is not considered, namely the matching degree between task resource requirements and container allocation resources is not considered. The emergency situation that the service is smoothly implemented, such as the failure of the edge device, is not considered, and the success rate of service execution is not concerned.
Disclosure of Invention
In order to overcome the defects, the invention provides a power micro-service unloading method and a power micro-service unloading device.
In a first aspect, a power microservice offloading method is provided, the power microservice offloading method comprising:
determining an unloading decision of electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model;
offloading power microservices into containers of respective edge nodes for execution based on the offloading decisions;
wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading.
Preferably, the determining an offloading decision of power microservice offloading based on a pre-constructed power microservice offloading model includes:
and solving the pre-constructed power micro-service unloading model by adopting an A3C algorithm to obtain an unloading decision of power micro-service unloading.
Preferably, the objective function is calculated as follows:
Figure BDA0003573538760000031
in the above formula, m is an element of [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
Further, the execution delay of the mth power service is calculated as follows:
Figure BDA0003573538760000032
in the above formula, the first and second carbon atoms are,
Figure BDA0003573538760000033
for the m-th power service including the execution time of the micro service,
Figure BDA0003573538760000034
for the start execution time of the mth power service, n m The number of microservices contained in the mth power service.
Further, the execution time of the nth microservice included in the mth power service is calculated as follows:
Figure BDA0003573538760000035
in the above formula, the first and second carbon atoms are,
Figure BDA0003573538760000036
for the mth power service including the execution time of the nth microservice,
Figure BDA0003573538760000037
for the calculated time delay of the mth power service including the nth microservice,
Figure BDA0003573538760000038
for the starting execution time of the nth microservice contained in the mth electric power business, n is the [1, n ∈ ] m ]。
Further, the calculation formula of the calculation time delay of the nth microservice included in the mth power service is as follows:
Figure BDA0003573538760000041
the m-th power service including the initial execution time of the n-th micro-service is calculated according to the following formula:
Figure BDA0003573538760000042
in the above formula, D m,n,i,j In order to make an unloading decision for unloading the nth microservice contained in the mth electric power service to the jth container of the ith edge node, if D m,n,i,j When 1, the nth power service is included in the mth power serviceUnloading the service to the jth container of the ith edge node for execution, otherwise, ending the operation, r m,n The number of CPU cycles, dr, required for the nth microservice execution included in the mth power service i,j The resource space provided for the jth container of the ith edge node,
Figure BDA0003573538760000043
the m power business comprises the execution time of the (n-1) th micro service, d m,n-1 Data volume, tr, for the mth electric power service, containing the (n-1) th microservice transmitted to the next microservice n-1,n For the data transmission rate between the (n-1) th micro service and the nth micro service contained in the mth power service,
Figure BDA0003573538760000044
for the queuing delay of the mth power service including the nth microservice,
Figure BDA0003573538760000045
and rescheduling time delay of the nth micro service is contained in the mth power business.
Further, the mathematical expression of the constraint is as follows:
t m ≤t max
RD m,n,i,j ≤RD max
Figure BDA0003573538760000046
type m,n =dt i,j ,if D m,n,i,j =1
in the above formula, t max For maximum value of execution delay, RD, of power service m,n,i,j Resource deviation ratio, RD, for offloading the mth electric power service including the nth microservice to the jth container of the ith edge node max For vertical telescopic threshold of the container, n b Is the number of edge nodes, n i For the number of containers, type, created in the ith edge node m,n Is the m < th > oneType, dt, of the n-th micro-service included in the power traffic i,j The type of the micro-service carried by the jth container of the ith edge node.
Further, the microservice type includes: generic microservices and non-generic microservices.
Further, the general microservice comprises at least one of: positioning service, pushing service, short message service, log service, file service and communication service;
the non-generic microservices include at least one of: authentication service, work order service, two services, ledger service.
Further, the calculation formula of the resource deviation rate executed in the jth container for offloading the nth micro-service to the ith edge node in the mth power service is as follows:
Figure BDA0003573538760000051
further, the A3C algorithm includes: state space, action space and reward function;
said state space s t The mathematical expression of (a) is:
Figure BDA0003573538760000052
the motion space a t The mathematical expression of (a) is:
Figure BDA0003573538760000053
the mathematical expression of the reward function r is as follows:
Figure BDA0003573538760000054
wherein, WS is a micro-service set to be scheduled and sequenced according to the time of the power service arriving at the edge network, RS is a micro-service set to be rescheduled and loaded in a container of a failed edge node, BS is a resource set of edge computing nodes, and C is a resource set among edge nodesConnected state of (D) W Offload decision set for microservices in WS, D R Set of offload decisions for microservices in RS, t max For the maximum value of the execution time delay of the power service, m belongs to [1, n ∈ ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
In a second aspect, a power microservice offload device is provided, the power microservice offload device comprising:
the determining module is used for determining an unloading decision of the electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model;
an offload module to offload power microservices to containers of respective edge nodes for execution based on the offload decision;
wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading.
Preferably, the determining module is specifically configured to:
and solving the pre-constructed power micro-service unloading model by adopting an A3C algorithm to obtain an unloading decision of power micro-service unloading.
Preferably, the objective function is calculated as follows:
Figure BDA0003573538760000055
in the above formula, m is an element of [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
Further, the A3C algorithm includes: state space, action space and reward function;
said state space s t The mathematical expression of (a) is:
Figure BDA0003573538760000056
the movementAs space a t The mathematical expression of (a) is:
Figure BDA0003573538760000057
the mathematical expression of the reward function r is as follows:
Figure BDA0003573538760000058
wherein, WS is a micro-service set to be scheduled and sequenced according to the time of the power service arriving at the edge network, RS is a micro-service set to be rescheduled and loaded in a container of a failed edge node, BS is a resource set of edge computing nodes, C is the connection state between the edge nodes, D W Offload decision set for microservices in WS, D R Set of offload decisions for microservices in RS, t max For the maximum execution time delay of the power service, m belongs to [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
In a third aspect, a computer device is provided, comprising: one or more processors;
the processor to store one or more programs;
the one or more programs, when executed by the one or more processors, implement the power microservice offload method.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed, implements the power microservice offload method.
One or more technical schemes of the invention at least have one or more of the following beneficial effects:
the invention relates to the technical field of power Internet of things, and particularly provides a power micro-service unloading method and device, which comprise the following steps: determining an unloading decision of electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model; offloading power microservices into containers of respective edge nodes for execution based on the offloading decisions; wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading. The technical scheme provided by the invention can effectively improve the association relationship of strong coupling between services in the traditional service application mode by introducing the micro-service architecture and selecting the container for carrying, and reduces the complexity by dividing and checking. Considering that sub-services which are common often exist among the power services, the reusability and the development efficiency of resources can be improved by adopting micro-services; by introducing the concept of vertical elastic expansion of the container, container resources can be better matched with the demand of service resources, resource waste is prevented, and service execution time delay is effectively reduced; the aim of minimizing the service execution delay is achieved while the success of service scheduling is guaranteed.
Drawings
FIG. 1 is a flow chart illustrating the main steps of a power microservice offloading method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an electrical microservice architecture according to an embodiment of the present invention;
FIG. 3 is a DAG relationship diagram for a power service according to an embodiment of the invention;
fig. 4 is a schematic diagram of a power outage rush-repair service offloading and rescheduling scenario according to an embodiment of the present invention;
fig. 5 is a main block diagram of the electric power micro-service uninstalling apparatus according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1, fig. 1 is a flow chart illustrating the main steps of a power microservice offloading method according to an embodiment of the invention. As shown in fig. 1, the power microservice offloading method in the embodiment of the present invention mainly includes the following steps:
step S101: determining an unloading decision of electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model;
step S102: offloading power microservices into containers of respective edge nodes for execution based on the offloading decisions;
wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading.
Specifically, the determining an offloading decision of power microservice offloading based on a pre-constructed power microservice offloading model includes:
and solving the pre-constructed power micro-service unloading model by adopting an A3C algorithm to obtain an unloading decision of power micro-service unloading.
In this embodiment, the objective function is calculated as follows:
Figure BDA0003573538760000071
in the above formula, m is an element of [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
In one embodiment, a power service bearer architecture based on micro-services and containers is first constructed, as shown in fig. 2, different power services are split into multiple micro-services, and due to similarities among some power services, the micro-services may overlap, that is, the same function is completed, and these types of micro-services are collectively referred to as general micro-services. Meanwhile, differences exist between different power services, and the micro-service for executing the differential function becomes a special micro-service. In order to efficiently meet the requirements of delay-sensitive services, servers are deployed in edge nodes such as power terminals or communication gateways to execute micro services, and meanwhile, in order to better approach the characteristic of loose coupling between the micro services, a container technology is introduced, a plurality of containers are created in each edge node to bear different micro services, and one power service is completed through communication between the containers. Each container is created in consideration of the differentiated requirements of the general micro service and the special micro service for computing and caching resources, and usually, one edge node includes both a container for carrying the general micro service and a container for carrying the special micro service.
On the basis of the architecture, the invention designs a micro-service unloading scheduling model considering the dependency relationship among micro-services, and the optimization aim is to maximize the execution success rate of the power service. The specific model is constructed as follows:
when considering the micro-service offload scheduling problem with dependencies, each power service may be represented by a DAG graph G ═ (v, epsilon), where v ═ US ═ SS, US is a set of general micro-services, and SS is a set of special micro-services, as shown in fig. 3.
Defining a set of power services as
Figure BDA0003573538760000081
Wherein each power service
Figure BDA0003573538760000082
n g 、n m Respectively representing the number of power services and the number of micro-services. Defining each microservice as a triple comprising a service type, a computational resource demand and a computational result data volume
Figure BDA0003573538760000083
Figure BDA0003573538760000084
Wherein m and n respectively represent the serial number of the power service to which the micro service belongs and the sequence to be scheduled in the power service. When type m,n When the number is us, the nth microservice representing the mth electric power service is a general microservice, and when the number is ss, the microservice represents the general microserviceThe micro-service is a special micro-service; r is m,n Representing the number of CPU cycles required for the execution of the micro-service; d m,n The data size indicating the execution result of the micro service, i.e., the amount of data transferred to the next micro service.
Defining the resources of each edge node as
Figure BDA0003573538760000085
Wherein (dt) i,j ,dr i,j ) The micro service type (with the value of us or ss) borne by the jth container representing the ith edge node and the provided resource size, n i Indicating the number of containers created in the ith edge node. So that the edge calculation nodes are aggregated
Figure BDA0003573538760000086
Figure BDA0003573538760000087
Wherein n is b Indicating the number of edge nodes. Defining the connection relation between the edge nodes as follows:
Figure BDA0003573538760000088
when C is present i,j When 1, represents BS i And BS j The communication is performed by a physical link, and when the value is 0, the communication is disabled, and when i is equal to j, the value is 1.
When micro-service scheduling offloading is performed, the following three aspects are mainly considered: (1) whether the edge computing node comprises a container for bearing the micro service to be unloaded or not; (2) because there is a dependency relationship between the micro-services, the container to be unloaded finally needs to be able to communicate with the container in which the predecessor micro-service (if any) of the micro-service exists, and the communication path requirement is shortest; (3) whether the container matches the resource demand of the microservice to be offloaded.
For micro service ms m,n In terms of, firstly, in
Figure BDA0003573538760000089
In searching canBearing type m,n As a primary candidate offload object, and then using a shortest path algorithm to select a sum ms from the primary candidates m,n-1 The container with the shortest path at the edge node is used as an alternative unloading object, and finally, the container with the resource deviation rate smaller than the container vertical expansion threshold value RD is selected from the alternative unloading objects max As an unloading object, i.e. unloading decision D m,n,i,j 1 denotes ms m,n And unloading to the jth container of the ith edge node, and otherwise, obtaining 0.
Wherein, the calculation formula of the resource deviation rate executed in the jth container for offloading the nth micro-service to the ith edge node in the mth power service is as follows:
Figure BDA0003573538760000091
the calculation formula of the execution delay of the mth power service is as follows:
Figure BDA0003573538760000092
in the above formula, the first and second carbon atoms are,
Figure BDA0003573538760000093
for the m-th power service including the execution time of the micro service,
Figure BDA0003573538760000094
for the start execution time of the mth power service, n m The number of microservices contained in the mth power service.
The computing formula of the execution time of the nth microservice included in the mth power business is as follows:
Figure BDA0003573538760000095
in the above formula, the first and second carbon atoms are,
Figure BDA0003573538760000096
for the mth power service including the execution time of the nth microservice,
Figure BDA0003573538760000097
for the calculated time delay of the mth power service including the nth microservice,
Figure BDA0003573538760000098
for the starting execution time of the nth microservice contained in the mth electric power business, n is the [1, n ∈ ] m ]。
Let us consider dependencies between micro-services ms m,n Is pms m,n Micro service ms is only completed after all micro services in the front-driving micro service set are executed m,n Can be executed, and also needs to consider queuing delay when large-scale service flow bursts
Figure BDA0003573538760000099
If the edge node fails in the process of waiting for execution of the micro-service, the rescheduling time delay needs to be considered
Figure BDA00035735387600000910
Therefore, the calculation formula of the calculation time delay of the mth power service including the nth microservice is as follows:
Figure BDA00035735387600000911
the m-th power service including the initial execution time of the n-th micro-service is calculated according to the following formula:
Figure BDA00035735387600000912
in the above formula, D m,n,i,j To offload the nth micro-service of the mth power service to the jth edge nodeUnloading decision of individual container, if D m,n,i,j If the current situation is 1, the nth micro service contained in the mth power service is unloaded to the jth container of the ith edge node for execution, otherwise, the operation is ended, and r is m,n The number of CPU cycles, dr, required for the nth microservice execution included in the mth power service i,j The resource space provided for the jth container of the ith edge node,
Figure BDA0003573538760000101
the m power business comprises the execution time of the (n-1) th micro service, d m,n-1 Data volume, tr, for the mth electric power service, containing the (n-1) th microservice transmitted to the next microservice n-1,n For the data transmission rate between the (n-1) th micro service and the nth micro service contained in the mth power service,
Figure BDA0003573538760000102
for the queuing delay of the mth power service including the nth micro-service,
Figure BDA0003573538760000103
and rescheduling time delay of the nth micro service is contained in the mth power business.
Wherein the mathematical expression of the constraint is as follows:
s.t.C1:t m ≤t max
C2:RD m,n,i,j ≤RD max
C3:
Figure BDA0003573538760000104
C4:type m,n =dt i,j ,if D m,n,i,j =1
in the above formula, t max For maximum value of execution delay, RD, of power service m,n,i,j Resource deviation ratio, RD, for offloading the nth microservice to the jth container of the ith edge node in the mth power service max For vertical telescopic threshold of the container, n b Is the number of edge nodesAmount, n i For the number of containers, type, created in the ith edge node m,n Type, dt, for the mth electric power service, containing the nth microservice i,j The type of the micro-service carried by the jth container of the ith edge node.
Wherein the constraint C1 indicates that the execution delay of the micro service cannot exceed its maximum allowable delay t _ m, otherwise, the micro service is considered as a scheduling failure; constraint C2 indicates that the vertical elastic stretch threshold of the container cannot be exceeded; constraint C3 indicates ms (m, n) can be carried only by one container; constraint C4 indicates that the service type of the container that ultimately carries the microservice is to be consistent with the microservice service type.
Further, the microservice types include: generic microservices and non-generic microservices. The general microservice includes at least one of: positioning service, pushing service, short message service, log service, file service and communication service; the non-generic microservices include at least one of: authentication service, work order service, two services, ledger service.
In one embodiment, the A3C algorithm introduced in the deep reinforcement learning algorithm in the present invention solves the problem model, where the algorithm mainly includes a state space, an action space, and a reward function, and is defined as follows:
said state space s t The time slot model is adopted, the network state is sensed by an agent in deep reinforcement learning in each time slot tau, and the mathematical expression is as follows:
Figure BDA0003573538760000105
the motion space a t The method mainly comprises unloading decision, and the mathematical expression is as follows:
Figure BDA0003573538760000106
the mathematical expression of the reward function r is as follows:
Figure BDA0003573538760000107
wherein, WS is a micro-service set to be scheduled and sequenced according to the time of the power service arriving at the edge network, the micro-services of each power service are sequenced according to the dependency relationship, RS is the micro-service set to be rescheduled and borne in the failed edge node container, BS is the edge computing node resource set, C is the connection state between the edge nodes, D is the connection state between the edge nodes W Offload decision set for microservices in WS, D R Set of offload decisions for microservices in RS, t max For the maximum value of the execution time delay of the power service, m belongs to [1, n ∈ ] g ],n g For the number of power traffic, t m The execution time delay for the mth power service.
In an optimal implementation manner, the technical scheme provided by the present invention may be used to unload and schedule a power service with a DAG structure, and select a power outage rush-repair service as an application scenario, as shown in fig. 4. The general micro-service US related to the power failure emergency repair service comprises a positioning service, a pushing service, a short message service, a log service, a file service and a communication service, the special micro-service SS comprises an authentication service, a work order service, two services and a standing book service, and all the micro-services have an association relation.
At time T, firstly, acquiring container types and resource sets created in edge computing nodes and connection matrixes among the edge computing nodes by using A3C agents deployed in edge clouds, then operating a micro-service unloading algorithm, making an optimal unloading decision through multiple iterations, issuing an action decision to an edge environment under the condition of ensuring minimum service execution delay, and then unloading micro-services in the power outage first-aid repair service into containers in sequence according to the dependency relationship among tasks for execution according to the unloading decision.
At time T +1, as shown in the figure, a failure occurs in one edge computing node, and therefore, the micro-service waiting to be executed in the edge node needs to be migrated to the adjacent and intact edge node for execution. At this time, the A3C Agent acquires the edge node resources and the connection state again, makes a rescheduling decision according to the model, issues the rescheduling decision to the environment, performs micro-service migration, and ensures that the service is implemented smoothly, thereby improving the user satisfaction.
Example 2
Based on the same inventive concept, the present invention further provides an electric power microservice unloading device, as shown in fig. 5, the electric power microservice unloading device includes:
the determining module is used for determining an unloading decision of the electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model;
an offload module to offload power microservices to containers of respective edge nodes for execution based on the offload decision;
wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading.
Preferably, the determining an offloading decision of power microservice offloading based on a pre-constructed power microservice offloading model includes:
and solving the pre-constructed power micro-service unloading model by adopting an A3C algorithm to obtain an unloading decision of power micro-service unloading.
Preferably, the objective function is calculated as follows:
Figure BDA0003573538760000121
in the above formula, m is equal to [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
Further, the execution delay of the mth power service is calculated as follows:
Figure BDA0003573538760000122
in the above formula, the first and second carbon atoms are,
Figure BDA0003573538760000123
execution of micro-services contained in for mth power serviceIn the middle of the furnace, the gas-liquid separation chamber,
Figure BDA0003573538760000124
for the start execution time of the mth power service, n m The number of microservices contained in the mth power service.
Further, the execution time of the nth microservice included in the mth power service is calculated as follows:
Figure BDA0003573538760000125
in the above formula, the first and second carbon atoms are,
Figure BDA0003573538760000126
for the mth power service including the execution time of the nth microservice,
Figure BDA0003573538760000127
for the calculated time delay of the mth power service including the nth microservice,
Figure BDA0003573538760000128
for the starting execution time of the nth microservice contained in the mth electric power business, n is the [1, n ∈ ] m ]。
Further, the calculation formula of the calculation time delay of the nth microservice included in the mth power service is as follows:
Figure BDA0003573538760000129
the m-th power service including the initial execution time of the n-th micro-service is calculated according to the following formula:
Figure BDA00035735387600001210
in the above formula, D m,n,i,j For offloading the mth micro-service to the ith edge nodeUnloading decision of jth container, if D m,n,i,j If the current situation is 1, the nth micro service contained in the mth power service is unloaded to the jth container of the ith edge node for execution, otherwise, the operation is ended, and r is m,n The number of CPU cycles, dr, required for the nth microservice execution included in the mth electric power service i,j The resource space provided for the jth container of the ith edge node,
Figure BDA00035735387600001211
the m power business comprises the execution time of the (n-1) th micro service, d m,n-1 Data volume, tr, for the mth electric power service, containing the (n-1) th microservice transmitted to the next microservice n-1,n For the data transmission rate between the (n-1) th micro service and the nth micro service contained in the mth power service,
Figure BDA00035735387600001212
for the queuing delay of the mth power service including the nth microservice,
Figure BDA00035735387600001213
and rescheduling time delay of the nth micro service is contained in the mth power business.
Further, the mathematical expression of the constraint is as follows:
t m ≤t max
RD m,n,i,j ≤RD max
Figure BDA0003573538760000131
type m,n =dt i,j ,if D m,n,i,j =1
in the above formula, t max For maximum value of execution delay, RD, of power service m,n,i,j Resource deviation ratio, RD, for offloading the mth electric power service including the nth microservice to the jth container of the ith edge node max For vertical telescopic threshold of the container, n b Is the number of edge nodes, n i For the number of containers created in the ith edge node, dt i,j Type of micro-service carried by jth container of ith edge node m,n The mth power business contains the nth micro-service type.
Further, the microservice types include: generic microservices and non-generic microservices.
Further, the general microservice comprises at least one of: positioning service, pushing service, short message service, log service, file service and communication service;
the non-generic microservices include at least one of: authentication service, work order service, two services, ledger service.
Further, the calculation formula of the resource deviation rate executed in the jth container for offloading the nth micro-service to the ith edge node in the mth power service is as follows:
Figure BDA0003573538760000132
further, the A3C algorithm includes: state space, action space and reward function;
said state space s t The mathematical expression of (a) is:
Figure BDA0003573538760000133
the motion space a t The mathematical expression of (a) is:
Figure BDA0003573538760000134
the mathematical expression of the reward function r is as follows:
Figure BDA0003573538760000135
wherein, WS is a micro-service set to be scheduled and sequenced according to the time of the power service reaching the edge network, RS is the failed oneA micro-service set to be rescheduled and loaded in an edge node container, wherein BS is an edge computing node resource set, C is a connection state between edge nodes, and D W Offload decision set for microservices in WS, D R Set of offload decisions for microservices in RS, t max For the maximum value of the execution time delay of the power service, m belongs to [1, n ∈ ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
Example 3
Based on the same inventive concept, the present invention also provides a computer apparatus comprising a processor and a memory, the memory being configured to store a computer program comprising program instructions, the processor being configured to execute the program instructions stored by the computer storage medium. The Processor may be a Central Processing Unit (CPU), or may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable gate array (FPGA) or other Programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc., which is a computing core and a control core of the terminal, and is specifically adapted to implement one or more instructions, and to load and execute one or more instructions in a computer storage medium to implement a corresponding method flow or a corresponding function, so as to implement the steps of the power micro-service unloading method in the foregoing embodiments.
Example 4
Based on the same inventive concept, the present invention further provides a storage medium, in particular, a computer-readable storage medium (Memory), which is a Memory device in a computer device and is used for storing programs and data. It is understood that the computer readable storage medium herein can include both built-in storage media in the computer device and, of course, extended storage media supported by the computer device. The computer-readable storage medium provides a storage space storing an operating system of the terminal. Also, one or more instructions, which may be one or more computer programs (including program code), are stored in the memory space and are adapted to be loaded and executed by the processor. It should be noted that the computer-readable storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory. One or more instructions stored in the computer-readable storage medium may be loaded and executed by a processor to implement the steps of a power microserver offloading method in the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (17)

1. A method of power microservice offloading, the method comprising:
determining an unloading decision of electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model;
offloading power microservices into containers of respective edge nodes for execution based on the offloading decisions;
wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading.
2. The method of claim 1, wherein determining an offload decision for power microservice offload based on a pre-built power microservice offload model comprises:
and solving the pre-constructed power micro-service unloading model by adopting an A3C algorithm to obtain an unloading decision of power micro-service unloading.
3. The method of claim 1, wherein the objective function is calculated as follows:
Figure FDA0003573538750000011
in the above formula, m is an element of [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
4. The method of claim 3, wherein the execution delay of the mth power service is calculated as follows:
Figure FDA0003573538750000012
in the above formula, the first and second carbon atoms are,
Figure FDA0003573538750000015
for the m-th power service including the execution time of the micro service,
Figure FDA0003573538750000016
for the start execution time of the mth power service, n m The number of microservices contained in the mth power service.
5. The method of claim 4, wherein the execution time of the mth power service including the nth microservice is calculated as follows:
Figure FDA0003573538750000013
in the above formula, the first and second carbon atoms are,
Figure FDA0003573538750000017
for the mth power service including the execution time of the nth microservice,
Figure FDA0003573538750000018
for the calculated time delay of the mth power service including the nth microservice,
Figure FDA0003573538750000019
for the starting execution time of the nth microservice contained in the mth electric power business, n is the [1, n ∈ ] m ]。
6. The method of claim 5, wherein the calculation of the calculated delay for the mth power traffic comprising the nth microservice is calculated as follows:
Figure FDA0003573538750000014
the m-th power service including the initial execution time of the n-th micro-service is calculated according to the following formula:
Figure FDA0003573538750000021
in the above formula, D m,n,i,j In order to make an unloading decision for unloading the nth microservice contained in the mth electric power service to the jth container of the ith edge node, if D m,n,i,j If the current situation is 1, the nth micro service contained in the mth power service is unloaded to the jth container of the ith edge node for execution, otherwise, the operation is ended, and r is m,n The number of CPU cycles, dr, required for the nth microservice execution included in the mth power service i,j The resource space provided for the jth container of the ith edge node,
Figure FDA0003573538750000023
the m power business comprises the execution time of the (n-1) th micro service, d m,n-1 Data volume, tr, for the mth electric power service, containing the (n-1) th microservice transmitted to the next microservice n-1,n For the data transmission rate between the (n-1) th micro service and the nth micro service contained in the mth power service,
Figure FDA0003573538750000024
for the queuing delay of the mth power service including the nth microservice,
Figure FDA0003573538750000025
and rescheduling time delay of the nth micro service is contained in the mth power business.
7. The method of claim 6, wherein the mathematical expression of the constraint is as follows:
t m ≤t max
RD m,n,i,j ≤RD max
Figure FDA0003573538750000022
type m,n =dt i,j ,if D m,n,i,j =1
in the above formula, t max For maximum value of execution delay, RD, of power service m,n,i,j Resource deviation ratio, RD, for offloading the mth electric power service including the nth microservice to the jth container of the ith edge node max For vertical telescopic threshold of the container, n b Is the number of edge nodes, n i For the number of containers, type, created in the ith edge node m,n Type, dt, for the mth electric power service, containing the nth microservice i,j The type of the micro-service carried by the jth container of the ith edge node.
8. The method of claim 7, wherein the microservice types comprise: generic microservices and non-generic microservices.
9. The method of claim 8, wherein the generic microservice comprises at least one of: positioning service, pushing service, short message service, log service, file service and communication service;
the non-generic microservice includes at least one of: authentication service, work order service, two services, ledger service.
10. The method according to claim 7, wherein the resource deviation ratio for offloading the nth micro-service to the jth container of the mth power service comprising the ith edge node is calculated as follows:
Figure FDA0003573538750000031
11. the method of claim 2, wherein the A3C algorithm comprises: state space, action space and reward function;
said state space s t The mathematical expression of (a) is:
Figure FDA0003573538750000032
the motion space a t The mathematical expression of (a) is:
Figure FDA0003573538750000033
the mathematical expression of the reward function r is as follows:
Figure FDA0003573538750000034
wherein WS is a micro-service set to be scheduled ordered according to the time of arrival of the power service at the edge networkRS is a micro service set to be rescheduled and loaded in a container of the edge node with a fault, BS is a resource set of the edge computing node, C is a connection state between the edge nodes, and D W Offload decision set for microservices in WS, D R Set of offload decisions for microservices in RS, t max For the maximum value of the execution time delay of the power service, m belongs to [1, n ∈ ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
12. An electrical microservice offload device, the device comprising:
the determining module is used for determining an unloading decision of the electric power micro-service unloading based on a pre-constructed electric power micro-service unloading model;
an offload module to offload power microservices to containers of respective edge nodes for execution based on the offload decision;
wherein the pre-built power microservice offload model comprises: the method comprises the following steps of configuring an objective function for power micro-service unloading, wherein the objective function aims at minimizing power service execution time delay, and configuring a constraint condition for power micro-service unloading.
13. The apparatus of claim 12, wherein the determination module is specifically configured to:
and solving the pre-constructed power micro-service unloading model by adopting an A3C algorithm to obtain an unloading decision of power micro-service unloading.
14. The apparatus of claim 12, wherein the objective function is calculated as follows:
Figure FDA0003573538750000035
in the above formula, m is an element of [1, n ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
15. The apparatus of claim 13, wherein the A3C algorithm comprises: state space, action space and reward function;
said state space s t The mathematical expression of (a) is:
Figure FDA0003573538750000036
the motion space a t The mathematical expression of (a) is:
Figure FDA0003573538750000037
the mathematical expression of the reward function r is as follows:
Figure FDA0003573538750000038
wherein, WS is a micro-service set to be scheduled and sequenced according to the time of the power service arriving at the edge network, RS is a micro-service set to be rescheduled and loaded in a container of a failed edge node, BS is a resource set of edge computing nodes, C is the connection state between the edge nodes, D W Offload decision set for microservices in WS, D R Set of offload decisions for microservices in RS, t max For the maximum value of the execution time delay of the power service, m belongs to [1, n ∈ ] g ],n g For the number of power traffic, t m The execution delay of the mth power service.
16. A computer device, comprising: one or more processors;
the processor to store one or more programs;
the one or more programs, when executed by the one or more processors, implement the power microservice offload method of any of claims 1-11.
17. A computer-readable storage medium, having stored thereon a computer program which, when executed, implements the power microservice offload method of any of claims 1-11.
CN202210332436.5A 2022-03-30 Electric power micro-service unloading method and device Active CN114866612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210332436.5A CN114866612B (en) 2022-03-30 Electric power micro-service unloading method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210332436.5A CN114866612B (en) 2022-03-30 Electric power micro-service unloading method and device

Publications (2)

Publication Number Publication Date
CN114866612A true CN114866612A (en) 2022-08-05
CN114866612B CN114866612B (en) 2024-05-31

Family

ID=

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109995713A (en) * 2017-12-30 2019-07-09 华为技术有限公司 Service processing method and relevant device in a kind of micro services frame
US20190335004A1 (en) * 2016-07-22 2019-10-31 Cisco Technology, Inc. Scaling service discovery in a micro-service environment
CN112165721A (en) * 2020-08-28 2021-01-01 山东师范大学 Multi-service task unloading and service migration method based on edge computing
WO2021139537A1 (en) * 2020-01-08 2021-07-15 上海交通大学 Power control and resource allocation based task offloading method in industrial internet of things
US20210243247A1 (en) * 2021-04-23 2021-08-05 Intel Corporation Service mesh offload to network devices
CN113709249A (en) * 2021-08-30 2021-11-26 北京邮电大学 Safe balanced unloading method and system for driving assisting service
CN113760541A (en) * 2021-07-29 2021-12-07 国网河南省电力公司信息通信公司 Method and device for distributing edge resources

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190335004A1 (en) * 2016-07-22 2019-10-31 Cisco Technology, Inc. Scaling service discovery in a micro-service environment
CN109995713A (en) * 2017-12-30 2019-07-09 华为技术有限公司 Service processing method and relevant device in a kind of micro services frame
WO2021139537A1 (en) * 2020-01-08 2021-07-15 上海交通大学 Power control and resource allocation based task offloading method in industrial internet of things
CN112165721A (en) * 2020-08-28 2021-01-01 山东师范大学 Multi-service task unloading and service migration method based on edge computing
US20210243247A1 (en) * 2021-04-23 2021-08-05 Intel Corporation Service mesh offload to network devices
CN113760541A (en) * 2021-07-29 2021-12-07 国网河南省电力公司信息通信公司 Method and device for distributing edge resources
CN113709249A (en) * 2021-08-30 2021-11-26 北京邮电大学 Safe balanced unloading method and system for driving assisting service

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIAYING LIN; PENG WANG; SHEN GUO; JICHUAN ZHANG; YINBO SHENG: "Power Distribution Network Management based on Edge Computing", 2021 CHINA INTERNATIONAL CONFERENCE ON ELECTRICITY DISTRIBUTION (CICED), 8 October 2021 (2021-10-08), pages 1 - 5 *
卢海峰;顾春华;罗飞;丁炜超;杨婷;郑帅;: "基于深度强化学习的移动边缘计算任务卸载研究", 计算机研究与发展, no. 07, 7 July 2020 (2020-07-07), pages 1 - 16 *
张治明,王鹏,秦四军: "配电物联网微应用管控*** 技术研究及应用", 供用电, 30 June 2021 (2021-06-30), pages 1 - 7 *

Similar Documents

Publication Publication Date Title
Ge et al. GA-based task scheduler for the cloud computing systems
Muthuvelu et al. A dynamic job grouping-based scheduling for deploying applications with fine-grained tasks on global grids
Zhu et al. Scheduling stochastic multi-stage jobs to elastic hybrid cloud resources
Kaur et al. A systematic review on task scheduling in Fog computing: Taxonomy, tools, challenges, and future directions
CN109669768A (en) A kind of resource allocation and method for scheduling task towards side cloud combination framework
CN103944997B (en) In conjunction with the load-balancing method of random sampling and Intel Virtualization Technology
CN114610474B (en) Multi-strategy job scheduling method and system under heterogeneous supercomputing environment
CN112114950A (en) Task scheduling method and device and cluster management system
CN111026519B (en) Distributed task priority scheduling method and system and storage medium
CN115454589A (en) Task scheduling method and device and Kubernetes scheduler
CN113132456B (en) Edge cloud cooperative task scheduling method and system based on deadline perception
CN112000388A (en) Concurrent task scheduling method and device based on multi-edge cluster cooperation
Gu et al. Maximizing workflow throughput for streaming applications in distributed environments
Qian et al. A workflow-aided Internet of things paradigm with intelligent edge computing
Han et al. EdgeTuner: Fast scheduling algorithm tuning for dynamic edge-cloud workloads and resources
AlOrbani et al. Load balancing and resource allocation in smart cities using reinforcement learning
CN112506658B (en) Dynamic resource allocation and task scheduling method in service chain
Pedarsani et al. Scheduling tasks with precedence constraints on multiple servers
CN111049900B (en) Internet of things flow calculation scheduling method and device and electronic equipment
Ma et al. Maximizing container-based network isolation in parallel computing clusters
CN110958192B (en) Virtual data center resource allocation system and method based on virtual switch
CN114866612A (en) Electric power micro-service unloading method and device
CN114866612B (en) Electric power micro-service unloading method and device
Stavrinides et al. Resource allocation and scheduling of real-time workflow applications in an iot-fog-cloud environment
Yang et al. Resource reservation for graph-structured multimedia services in computing power network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant