CN115766722A - Computing power network task scheduling method and device based on information center network - Google Patents


Info

Publication number
CN115766722A
Authority
CN
China
Prior art keywords
node
network
energy consumption
action
computing
Prior art date
Legal status
Pending
Application number
CN202211461675.7A
Other languages
Chinese (zh)
Inventor
谢人超
崔佳怡
任语铮
邹壮
韩璐
唐琴琴
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202211461675.7A
Publication of CN115766722A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a computing power network task scheduling method and device based on an information-centric network. In the method, a forwarding node receives a processing request from an end user and sends it to a control node that is provided with a pre-trained reinforcement learning model. Based on the processing task corresponding to the request, the target network in the reinforcement learning model performs the action of allocating a computing node to the task, the computing node being provided with the computing resources needed to execute it. The control node then calculates the total action delay and total action energy consumption resulting from this allocation, calculates a reward function from the two, and calculates a loss function from the reward function. Finally, the parameters of the value network in the reinforcement learning model are updated based on the loss function, and the value-network parameters are synchronized to the target network every preset number of steps.

Description

Computing power network task scheduling method and device based on information center network
Technical Field
The invention relates to the technical field of computing power networks, in particular to a computing power network task scheduling method and device based on an information center network.
Background
In recent years, many researchers have proposed the concept of Computing First Networking (CFN) and built CFN architectures that are deeply converged with "cloud-edge-end" computing networks.
A CFN can sense the resource status of the infrastructure layer and orchestrate the computing, network, and storage resources in the network at a finer granularity than traditional edge computing. After receiving computation requests from consumers, a CFN can also locate services on the data plane and perform sensible task scheduling and routing planning. However, the computing resources in a CFN are heterogeneous and distributed across different computing nodes, and therefore cannot be effectively perceived and coordinated in a unified way.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a computing power network task scheduling method based on an information-centric network, so as to obviate or mitigate one or more of the disadvantages in the related art.
One aspect of the present invention provides a computing power network task scheduling method based on an information-centric network. The method is applied to a computing power network comprising forwarding nodes, control nodes, and computing nodes, and comprises the following steps:
a forwarding node receives a processing request from an end user and sends it to a control node, where the control node is provided with a pre-trained reinforcement learning model, the target network in the reinforcement learning model performs the action of allocating a computing node to the processing task corresponding to the processing request, and the computing node is provided with the computing resources used to execute the processing task;
the control node calculates the total action delay and the total action energy consumption of the allocation action made by the reinforcement learning model, calculates a reward function based on the two, and calculates a loss function based on the reward function;
and the parameters of the value network in the reinforcement learning model are updated based on the loss function, and the value-network parameters are synchronized to the target network every preset number of steps.
With this scheme, tasks are planned holistically as they arrive, so global optimality can be pursued. Because the scheme considers total delay and total energy consumption at the same time, it consumes the least energy while guaranteeing the user's quality of service. In this setting it is therefore a better choice, closer to the global optimization target of a real-world scenario, and it can uniformly handle resources distributed across different computing nodes, perceiving and coordinating them effectively and in a unified way.
In some embodiments of the present invention, the step in which the target network in the reinforcement learning model performs the action of allocating a computing node to the processing task corresponding to the processing request comprises:
the control node obtains state parameters of the computing power network, where the state parameters comprise forwarding node state parameters and computing node state parameters, and the two are combined into a state vector;
and the state vector is input into the reinforcement learning model, the model outputs an action vector, the dimension with the largest value among the dimensions of the action vector is extracted, and the action corresponding to that dimension is taken as the output action.
In some embodiments of the present invention, the forwarding node state parameter marks whether each forwarding node is sending a processing request to the control node: if so, the forwarding node is marked with a first parameter, otherwise with a second parameter. The computing node state parameters comprise a service function parameter, a node resource parameter, and a remaining time parameter. The service function parameter marks whether each computing node is executing the processing request: if so, the computing node is marked with a first parameter, otherwise with a second parameter. The node resource parameter is the size of the computing resources of the computing node, and the remaining time parameter is the remaining time the computing node needs to execute its current processing request.
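The state encoding and greedy action selection described above can be sketched as follows. This is a minimal Python illustration: the function names, the flat array layout, and the toy Q-network passed to `select_action` are assumptions for demonstration, not details fixed by the patent.

```python
import numpy as np

def build_state_vector(forwarding_flags, service_flags, node_resources, remaining_times):
    """Concatenate the per-node state parameters into one flat state vector.

    forwarding_flags : 1/0 per forwarding node (is it relaying a request?)
    service_flags    : 1/0 per computing node (is the service function running?)
    node_resources   : computing-resource size per computing node
    remaining_times  : remaining busy time per computing node
    """
    return np.concatenate([forwarding_flags, service_flags,
                           node_resources, remaining_times]).astype(np.float32)

def select_action(q_network, state):
    """Greedy selection: the index of the largest component of the action vector."""
    action_vector = q_network(state)
    return int(np.argmax(action_vector))
```

For example, with two forwarding nodes and two computing nodes the state vector has eight components, and a toy Q-network returning `[0.1, 0.9, 0.3]` yields action 1.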
In some embodiments of the present invention, the action includes a transmission path for the processing task and a pull path for the container corresponding to the processing task; the total action delay includes the transmission delay, queuing delay, container pull delay, and computation delay; and the total action energy consumption includes the transmission energy consumption, container pull energy consumption, and computation energy consumption. The step of the control node calculating the total action delay and total action energy consumption based on the allocation action made by the reinforcement learning model comprises:
calculating the transmission delay based on the bandwidth parameters of the transmission path, calculating the container pull delay based on the bandwidth parameters of the pull path, calculating the computation delay required by the processing task based on the CPU and GPU resources of the destination computing node, and calculating the queuing delay of the processing task based on the computation delays of tasks that arrived earlier; then calculating the total action delay from the transmission delay, queuing delay, container pull delay, and computation delay;
calculating the transmission energy consumption based on the transmission power of the sending-end routers in the transmission path, calculating the container pull energy consumption based on the transmission power of the sending-end routers in the pull path, and calculating the computation energy consumption based on the CPU and GPU resources of the destination computing node; then calculating the total action energy consumption from the transmission energy consumption, container pull energy consumption, and computation energy consumption.
In some embodiments of the present invention, in the step of calculating the transmission delay based on the bandwidth parameters in the transmission path, the transmission delay is calculated based on the following formula:

T_trans = Σ_{(i,j)∈R} Z_k / B_(i,j)

where T_trans represents the transmission delay, R represents the set of routers in the transmission path, i and j represent any two adjacent routers in the transmission path, B_(i,j) represents the bandwidth between router i and router j, and Z_k represents the size of processing task k;
in the step of calculating the container pull delay based on the bandwidth parameters in the pull path, the pull delay is calculated based on the following formula:

T_pull = Σ_{(x,y)∈U} Z_docker / B_(x,y)

where T_pull represents the pull delay, U represents the set of routers in the pull path, x and y represent any two adjacent routers in the pull path, B_(x,y) represents the bandwidth between router x and router y, and Z_docker represents the size of the container corresponding to the processing task;
in the step of calculating the calculation time delay required by the processing task based on the CPU resource and the GPU resource in the target calculation node, calculating the calculation time delay according to the following formula:
Figure BDA0003955594020000034
wherein, T comp Representing the calculation of time delay, max represents taking the maximum value,
Figure BDA0003955594020000035
which represents the size of the processing task k,
Figure BDA0003955594020000036
representing destination computational nodeThe size of the GPU resources is such that,
Figure BDA0003955594020000037
the method comprises the steps of representing the CPU resource size of a target computing node, wherein lambda is a preset task parameter, epsilon is a preset GPU positive correlation coefficient parameter, and mu is a preset CPU positive correlation coefficient parameter;
in the step of calculating the queuing delay of the processing task based on the computation delays of processing tasks that arrived earlier, the sum of the computation delays of the tasks that arrived earlier and are still unfinished when the processing task arrives is calculated to obtain the queuing delay;
in the step of calculating the total action delay based on the transmission delay, the queuing delay, the container pull delay, and the computation delay, the total action delay is calculated based on the following formula:

T_total = T_trans + T_queue + T_pull + T_comp

where T_total represents the total action delay and T_queue represents the queuing delay.
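Taken together, the four delay terms can be computed as in the following Python sketch. The hop-wise summation follows one plausible reading of the patent's delay formulas, and the functional form `lam * size / max(eps * gpu, mu * cpu)` for the computation delay, like all default parameter values, is an assumption rather than a fixed detail of the patent.

```python
def path_delay(size, bandwidths):
    """Hop-by-hop transfer delay: the data size divided by each link's
    bandwidth, summed over the adjacent-router pairs of the path."""
    return sum(size / b for b in bandwidths)

def compute_delay(task_size, gpu, cpu, lam=1.0, eps=1.0, mu=1.0):
    """Computation delay under the assumed form
    T_comp = lam * Z_k / max(eps * GPU, mu * CPU)."""
    return lam * task_size / max(eps * gpu, mu * cpu)

def total_delay(task_size, container_size, trans_bw, pull_bw,
                gpu, cpu, queued_delays, lam=1.0, eps=1.0, mu=1.0):
    """T_total = T_trans + T_queue + T_pull + T_comp."""
    t_trans = path_delay(task_size, trans_bw)      # task over the transmission path
    t_pull = path_delay(container_size, pull_bw)   # container over the pull path
    t_comp = compute_delay(task_size, gpu, cpu, lam, eps, mu)
    t_queue = sum(queued_delays)                   # unfinished, earlier-arrived tasks
    return t_trans + t_queue + t_pull + t_comp
```

For instance, a task of size 10 over two links of bandwidth 5 and 2 incurs a transmission delay of 2 + 5 = 7 time units.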
In some embodiments of the present invention, in the step of calculating the transmission energy consumption based on the transmission power of the sending-end routers in the transmission path, the transmission energy consumption is calculated based on the following formula:

E_trans = Σ_{(i,j)∈R} P_(i,j) · T_trans

where E_trans represents the transmission energy consumption, R represents the set of routers in the transmission path, i and j represent any two adjacent routers in the transmission path, P_(i,j) represents the transmission power of the sending-end router of the adjacent routers i and j, and T_trans represents the transmission delay;
in the step of calculating the container pull energy consumption based on the transmission power of the sending-end routers in the pull path, the container pull energy consumption is calculated based on the following formula:

E_pull = Σ_{(x,y)∈U} P_(x,y) · T_pull

where E_pull represents the container pull energy consumption, U represents the set of routers in the pull path, x and y represent any two adjacent routers in the pull path, P_(x,y) represents the transmission power of the sending-end router of the adjacent routers x and y, and T_pull represents the pull delay;
in the step of calculating the computation energy consumption based on the CPU and GPU resources of the destination computing node, the computation energy consumption is calculated based on the following formula:

E_comp = P_com · T_comp

where E_comp represents the computation energy consumption, P_com represents the computing power parameter of the destination computing node, and T_comp represents the computation delay;
in the step of calculating the total action energy consumption based on the transmission energy consumption, the container pull energy consumption, and the computation energy consumption, the total action energy consumption is calculated according to the following formula:

E_total = E_trans + E_pull + E_comp

where E_total represents the total action energy consumption.
In some embodiments of the invention, the computing power parameter of the destination computing node is calculated according to the following formula:

P_com = η_G · F_G + η_C · F_C

where P_com represents the computing power parameter of the destination computing node, η_G represents the preset GPU power factor of the destination computing node, η_C represents the preset CPU power factor of the destination computing node, F_G represents the GPU resource size of the destination computing node, and F_C represents the CPU resource size of the destination computing node.
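The energy terms combine as in the sketch below. Both the per-hop power summation multiplied by the whole-path transfer delay and the linear node power model `eta_g * gpu + eta_c * cpu` are assumptions consistent with the symbols the patent lists, not forms the patent states outright.

```python
def path_energy(hop_powers, delay):
    """Transfer energy under the reconstructed form: the sum of the per-hop
    sender powers multiplied by the whole-path transfer delay."""
    return sum(hop_powers) * delay

def compute_power(gpu, cpu, eta_g=1.0, eta_c=1.0):
    """Assumed linear node power model: P_com = eta_g * GPU + eta_c * CPU."""
    return eta_g * gpu + eta_c * cpu

def total_energy(trans_powers, t_trans, pull_powers, t_pull,
                 gpu, cpu, t_comp, eta_g=1.0, eta_c=1.0):
    """E_total = E_trans + E_pull + E_comp."""
    e_trans = path_energy(trans_powers, t_trans)           # task transmission
    e_pull = path_energy(pull_powers, t_pull)              # container pull
    e_comp = compute_power(gpu, cpu, eta_g, eta_c) * t_comp  # E_comp = P_com * T_comp
    return e_trans + e_pull + e_comp
```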
In some embodiments of the present invention, in the step of the control node calculating the reward function based on the total action delay and the total action energy consumption, the reward function value is calculated according to the following formula:

R = ξ − ρ(α·T_total + β·E_total)

where R represents the reward function value, α, β, ξ, and ρ are all preset calculation parameters, E_total represents the total action energy consumption, and T_total represents the total action delay.
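The reward computation itself is a one-liner; the default parameter values in the signature are illustrative only.

```python
def reward(t_total, e_total, alpha=0.5, beta=0.5, xi=10.0, rho=1.0):
    """R = xi - rho * (alpha * T_total + beta * E_total): the reward grows as
    the action's total delay and total energy consumption shrink."""
    return xi - rho * (alpha * t_total + beta * e_total)
```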
In some embodiments of the invention, state parameters of the computing power network are obtained after the action is performed, and the step of calculating the loss function based on the reward function comprises:
encoding the state parameters of the computing power network before the action, the action, the reward function value, and the state parameters of the computing power network after the action, combining them into a quadruple, and adding the quadruple to a memory space;
randomly sampling a quadruple from the memory space and obtaining the post-action state parameters of the computing power network contained in it;
obtaining the corresponding state vector from the post-action state parameters in the extracted quadruple, inputting the state vector into the target network of the reinforcement learning model to obtain the corresponding action vector, and outputting the largest value among the dimensions of the action vector as the training dimension parameter;
calculating a transition function based on the reward function value and the training dimension parameter in the extracted quadruple according to the following formula:

y_δ = R_δ + γ·q̂_δ

where y_δ represents the transition function, R_δ represents the reward function value in the extracted quadruple δ, q̂_δ represents the training dimension parameter, and γ is a preset training parameter;
the loss function is calculated based on the transition function according to the following formula:

L = (y_δ − q)²

where L represents the loss function, y_δ represents the transition function, and q represents the dimension parameter value corresponding to the action in the extracted quadruple.
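The transition function and squared-error loss translate directly into code. `td_target` assumes the standard DQN form y = r + γ·max_a Q_target(s′, a), which matches the symbols listed above; the function names are illustrative.

```python
def td_target(r_delta, next_q_max, gamma=0.9):
    """Transition function y_delta = R_delta + gamma * q_hat, where q_hat is
    the largest target-network output for the post-action state."""
    return r_delta + gamma * next_q_max

def td_loss(y_delta, q):
    """Squared error L = (y_delta - q)^2 against the value network's estimate
    q for the action stored in the quadruple."""
    return (y_delta - q) ** 2
```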
In another aspect, the present invention further provides a computing power network task scheduling apparatus based on an information-centric network. The apparatus comprises a computer device including a processor and a memory, the memory stores computer instructions, and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the apparatus implements the steps of the method described above.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to what has been particularly described hereinabove, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an embodiment of the computing power network task scheduling method based on an information-centric network according to the present invention;
FIG. 2 is a schematic diagram of the control-plane/data-plane network architecture in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the scheduling workflow in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the following embodiments and the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the scheme according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
It should be emphasized that the term "comprises/comprising," when used herein, specifies the presence of the stated features, elements, steps, or components, but does not preclude the presence or addition of one or more other features, elements, steps, or components.
It is also noted that, unless otherwise specifically stated, the term "coupled" may refer herein not only to a direct connection but also to an indirect connection in which an intermediate element is present.
Introduction of the prior art:
Prior art 1 adopts the concept of Named Function Networking (NFN), in which the network acts as a resolver, and designs a general resolver for λ-expressions to solve the placement problem of information access.
Prior art 2 employs the Named Function as a Service (NFaaS) architecture, in which functions can be deployed in the network and moved between nodes according to user requirements.
Prior art 3 addresses efficient resource discovery, computational reuse, mobility management, and security in AR scenarios built on NFN and NFaaS.
Prior art 4 employs a unified approach to remote function invocation.
Prior art 5 addresses the problem of service placement in edge computing scenarios and proposes a service caching policy based on the ICN paradigm, which caches service instances in network nodes so that repeated requests for a service can be answered quickly.
The above-described prior art facilitates the deployment, discovery and invocation of function services in edge and core networks.
Prior art 6 proposes a named CFN, namely the NCFN scheme, combining ICN and CFN.
In the NCFN scheme, first, the named addressing mode enables flexible computing resource discovery at the network layer; second, the in-network computing mechanism effectively shortens service invocation delay and improves the user's quality of experience (QoE), with lightweight containers carrying computing service copies that can easily be migrated and deployed in the network; finally, the in-network caching mechanism can avoid repeated service invocation by reusing computation results. Based on the NCFN scheme, researchers have proposed a naming service access control scheme based on proxy signatures (NSACS-PS). In this scheme, the origin server provides an authorization certificate for the subscribed service to an authorized user, who may sign an Interest packet invoking the computing service, where the signature has the same validity as a signature by the origin server. On the router side, a deployed service copy only needs to hold the public key of the origin server to verify which users are authorized.
There are three participants in the system model: the origin server, the authorized users, and the service copies deployed on routers. To support in-network computing, the origin server clones and distributes service copies into the network, and NCFN routers load the service copies with high request frequency according to a Least Frequently Used (LFU) policy. Since a service copy deployed on a router tends to be closer to the user, it can provide computing services to nearby authorized users.
The prior art has the following defects:
1. The prior art focuses on analyzing the security of service invocation in the network, and ignores the overhead generated by the task scheduling problem and the imbalance of resource utilization;
2. The task scheduling schemes in the prior art cache computation results in the network, but in actual scenarios different users rarely request the same computation result, so these schemes are of limited use in practice.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar parts, or the same or similar steps.
To solve the above problems, the present invention provides a computing power network task scheduling method based on an information-centric network. The method is applied to a computing power network comprising forwarding nodes, control nodes, and computing nodes, and comprises the following steps:
in the implementation process, the architecture of the present solution is shown in fig. 2, and the architecture is divided into a control plane and a data plane. The data plane is a set of network entities that perform basic operations including forwarding, caching, and computation; the control plane centrally controls and manages these underlying physical resources based on state information of the computing resources and network resources collected from the data plane.
Under the network architecture, the control plane senses data plane resources, makes an intelligent decision on a user request, and schedules a task to a corresponding container of a computing node for computing.
Computing nodes are network devices with computing capabilities whose primary function is to provide various computing services. The service provider starts containers with different resource allocations and deploys different service functions in them. The computing resources allocated to each container differ and mainly comprise a CPU for executing general computing tasks, a GPU for neural network training and graphics rendering, and memory for fast reads and writes and for caching intermediate computation results. In addition, a computing node may pull containers from the network for a warm start, to ensure that all user service requests can be satisfied.
The forwarding node is a router with a CS (Content Store), a PIT (Pending Interest Table), and a FIB (Forwarding Information Base); its main functions are forwarding user task requests and caching the containers that provide services in the network. The CS caches containers that may be pulled by computing nodes; the PIT stores the names of all containers requested by computing nodes together with the requesting interfaces; the FIB records the container forwarding interfaces.
The end user initiates a computing task request. According to the differing requirements of different users, tasks are divided into delay-sensitive tasks, computation-intensive tasks, and I/O-intensive tasks. To satisfy the task request initiated by the user, the reinforcement learning model selects an appropriate computing node to provide the computing service.
The control node collects the computing and network resource state information of the data plane, then makes a decision on the task request initiated by the user, and issues a routing forwarding table to the forwarding node.
In particular embodiments, the present invention contemplates that combining a CFN with an ICN can effectively solve the above problems. Compared with a traditional IP network, an ICN is more concerned with whether service content can be successfully obtained than with where it comes from, so the CFN can make intelligent decisions and schedule tasks to guarantee the user's quality of service. Second, the hierarchical naming mechanism in ICN can represent user requirements and computing network resources well, which facilitates perception by the perception nodes in the CFN. In addition, a routing node in the ICN has a cache table for caching services, and the PIT (Pending Interest Table) and FIB (Forwarding Information Base) allow requests to be routed automatically to obtain service resources in the network.
ICN stands for Information-Centric Networking.
In some embodiments of the present invention, the forwarding node may be a router, the computing node may be a computer, and the control node may be a computer or a server, and each forwarding node and each computing node are connected to a control node in the computational power network, and each computing node has a forwarding node connected thereto.
As shown in fig. 1, in step S100, a forwarding node receives a processing request of an end user, and sends the processing request to a control node, where the control node is provided with a pre-trained reinforcement learning model, a target network in the reinforcement learning model performs an action of allocating a computing node to a processing task based on the processing task corresponding to the processing request, and the computing node is provided with computing resources for executing the processing task;
In some embodiments of the present invention, the action includes an instruction to transmit the processing task to the computing node, an instruction to transmit the container corresponding to the processing task to the computing node, a transmission path for transmitting the processing task to the computing node, and a pull path for transmitting the container corresponding to the processing task to the computing node; the container corresponding to the processing task is provided in any one of the forwarding nodes.
The transmission path and the pull path can both comprise a plurality of routers connected with each other, and the routers are connected with each other for forwarding.
In some embodiments of the invention, the reinforcement learning model is a DQN (Deep Q-Network) model, and the computing resources of the computing node include CPU resources and GPU resources.
Step S200, the control node calculates total action time delay and total action energy consumption based on the actions of the distributed computing nodes made by the reinforcement learning model, calculates a reward function based on the total action time delay and the total action energy consumption, and calculates a loss function based on the reward function;
in some embodiments of the present invention, the total action latency includes transmission latency, queuing latency, container pull latency, and computational latency, and the total action energy consumption includes transmission energy consumption, container pull energy consumption, and computational energy consumption.
And step S300, updating parameters of the value network in the reinforcement learning model based on the loss function, and synchronizing the parameters in the value network to the target network every preset step number.
In some embodiments of the present invention, in the step of updating the parameters of the value network in the reinforcement learning model based on the loss function, the parameters of the value network in the reinforcement learning model are updated by a gradient descent method based on the loss function.
In some embodiments of the present invention, in the step of synchronizing the parameters of the value network to the target network every preset number of steps, one step corresponds to one update of the value-network parameters; that is, after the value-network parameters have been updated the preset number of times, they are copied to the target network. The value network and the target network have the same structure.
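Steps S200 and S300 together form a standard DQN update loop: store quadruples, sample from memory, update the value network, and periodically copy its parameters to the target network. The following skeleton illustrates the bookkeeping only; the plain dicts stand in for network weights, the `update` step is a placeholder for gradient descent, and all names are illustrative assumptions.

```python
import random
from collections import deque

class DQNTrainerSketch:
    """Bookkeeping skeleton for steps S200/S300: store (s, a, r, s') quadruples,
    sample them at random, update the value network, and copy its parameters to
    the target network every `sync_every` updates."""

    def __init__(self, sync_every=100, capacity=10000):
        self.memory = deque(maxlen=capacity)   # replay memory of quadruples
        self.sync_every = sync_every
        self.steps = 0
        self.value_params = {"w": 0.0}         # stand-in for value-network weights
        self.target_params = {"w": 0.0}        # stand-in for target-network weights

    def remember(self, s, a, r, s_next):
        self.memory.append((s, a, r, s_next))

    def sample(self):
        """Draw one stored quadruple uniformly at random."""
        return random.choice(list(self.memory))

    def update(self, grad):
        """One stand-in gradient-descent step, then a periodic target sync."""
        self.value_params["w"] -= grad
        self.steps += 1
        if self.steps % self.sync_every == 0:
            self.target_params = dict(self.value_params)  # sync to target network
```

With `sync_every=2`, the target network matches the value network after every second update and lags behind it otherwise, which is exactly the preset-step synchronization described above.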
By adopting this scheme, tasks are planned as a whole when they arrive, so that a global optimum can be approached. Because the scheme considers both the total delay and the total energy consumption, it consumes the least energy while guaranteeing the user's quality of service. In such a scenario the scheme is therefore the better choice, closer to the globally optimal objective of the actual scenario; it can uniformly handle resources distributed across different computing nodes and effectively realizes unified perception and coordination.
As shown in fig. 3, the workflow comprises four phases. Resource-aware phase: the forwarding and compute nodes periodically register their computing resource, network resource, and cache resource status with the control plane; the control plane then builds a resource topology graph and constructs the network state space, so that it can sense and control the computing and network resources of the data plane. Request, decision and forwarding phase: a user initiates a task request to a forwarding node; the control plane makes an intelligent decision based on the task type and parameters, schedules the task to the container of the corresponding computing node, and issues a routing table to the forwarding node. Container pull request phase: if a computing node lacks a container that can provide the requested computing service, the corresponding container can be pulled from the network. The compute node first accesses the cache of the forwarding node; if the container is not cached, the PIT (Pending Interest Table) is searched for an identical pending request, and if one exists the requesting interface is appended to its interface list; otherwise the node creates a new PIT entry and forwards the request to other forwarding nodes according to the FIB (Forwarding Information Base). Finally, the forwarding node returns the cached container to the computing node. Task computation phase: the computing node executes the computing service, completes the user task within the specified time, and returns the computation result.
In the network model, the control node needs to make a decision on a user task, match the task with the computing node, then schedule the computing task to the computing node for computing, and finally return a computing result.
In some embodiments of the present invention, the step of the target network in the reinforcement learning model making an action of allocating a computing node to a processing task based on the processing task corresponding to the processing request comprises:
the control node acquires state parameters of the computing power network, wherein the state parameters comprise forwarding node state parameters and computing node state parameters, and the forwarding node state parameters and the computing node state parameters are combined into a state vector;
and inputting the state vector into a reinforcement learning model, wherein the reinforcement learning model outputs an action vector, the dimension corresponding to the largest parameter in the parameters of the plurality of dimensions of the action vector is extracted, and the action corresponding to the dimension is taken as the output action.
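As an illustrative sketch (the toy network, array shapes and helper name here are assumptions, not from the patent), the argmax action selection described above might look like:

```python
import numpy as np

def select_action(q_network, state_vector):
    """Feed the state vector to the network and pick the action whose
    output dimension holds the largest parameter (Q-value)."""
    action_vector = q_network(state_vector)   # one value per candidate action
    return int(np.argmax(action_vector))      # dimension with the largest parameter

# Toy stand-in for the reinforcement learning model: a fixed linear layer.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))         # 4-dim state -> 3 candidate compute nodes
q_network = lambda s: s @ weights

state = np.array([1.0, 0.0, 0.5, 0.2])        # example state vector
action = select_action(q_network, state)
assert 0 <= action < 3
```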
In some embodiments of the present invention, the forwarding node status parameter is used to mark whether each forwarding node is sending a processing request to a control node, that is, whether a forwarding node is triggered by a processing request, if yes, the forwarding node is marked as a first parameter, and if not, the forwarding node is marked as a second parameter; the node state parameters consist of service function parameters, node resource parameters and residual time parameters, the service function parameters are used for marking whether each computing node executes the processing request, if so, the computing node is marked as a first parameter, and if not, the computing node is marked as a second parameter; the node resource parameter is the size of the computing resource of the computing node; the remaining time parameter is the remaining time required by the compute node to execute the current processing request.
In a specific implementation, the first parameter may be 1 and the second parameter may be 0. For the remaining time parameter, the total delay required by a computing node to execute its current processing request is obtained from the computation delay, and the time the node has already spent executing that request is subtracted to obtain the remaining time required.
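A minimal sketch of how such a state vector could be assembled; the dictionary keys and helper name are hypothetical:

```python
def build_state_vector(forwarding_triggered, compute_nodes):
    """Concatenate forwarding-node flags with per-compute-node
    (service flag, resource size, remaining time) triples.

    forwarding_triggered: list of 1/0 flags (first/second parameter)
    compute_nodes: list of dicts with illustrative keys
                   'serving', 'resources', 'remaining_time'.
    """
    state = list(forwarding_triggered)
    for node in compute_nodes:
        state.append(node['serving'])          # service function parameter (1 or 0)
        state.append(node['resources'])        # node resource parameter
        state.append(node['remaining_time'])   # remaining time parameter
    return state

nodes = [
    {'serving': 1, 'resources': 8.0, 'remaining_time': 2.5},
    {'serving': 0, 'resources': 16.0, 'remaining_time': 0.0},
]
state = build_state_vector([1, 0, 0], nodes)
assert len(state) == 3 + 3 * len(nodes)
```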
In some embodiments of the present invention, the action includes a transmission path of a processing task and a pull path of a container corresponding to the processing task, the total action latency includes transmission latency, queuing latency, container pull latency and computation latency, the total action energy consumption includes transmission energy consumption, container pull energy consumption and computation energy consumption, and the step of the control node computing the total action latency and the total action energy consumption based on the action of the distribution computation node made by the reinforcement learning model includes:
calculating transmission delay based on the bandwidth parameters in the transmission path, calculating container pulling delay based on the bandwidth parameters in the pulling path, calculating the computation delay required by the processing task based on CPU (Central Processing Unit) resources and GPU (Graphics Processing Unit) resources in the destination computing node, and calculating the queuing delay of the processing task based on the computation delays of processing tasks that arrived earlier than the processing task; calculating the total action delay based on the transmission delay, the queuing delay, the container pulling delay and the computation delay;
calculating transmission energy consumption based on the transmission power of a transmitting end router in the transmission path, calculating container pulling energy consumption based on the transmission power of the transmitting end router in the pulling path, and calculating calculation energy consumption based on CPU resources and GPU resources in a target calculation node; and calculating the total energy consumption of the action based on the transmission energy consumption, the container pulling energy consumption and the calculation energy consumption.
In some embodiments of the present invention, in the step of calculating the transmission delay based on the bandwidth parameter in the transmission path, the transmission delay is calculated based on the following formula:

$$T_{trans} = \sum_{(i,j)\in R} \frac{Z_k^{task}}{B_{(i,j)}}$$

wherein $T_{trans}$ denotes the transmission delay, $R$ denotes the set of links between adjacent routers in the transmission path, $i$ and $j$ denote any two adjacent routers in the transmission path, $B_{(i,j)}$ denotes the bandwidth between router $i$ and router $j$, and $Z_k^{task}$ denotes the size of processing task $k$;
in some embodiments of the present invention, the router i and the router j are forwarding nodes, and the transmission path consists of at least one router and the final destination computing node.
By adopting this scheme, the delay consumed between every two adjacent routers in the transmission path is calculated separately and summed to obtain the transmission delay, which improves the calculation accuracy of the transmission delay.
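The per-hop summation above can be sketched as follows (function name and units are illustrative):

```python
def transmission_delay(task_size, path_bandwidths):
    """Sum the per-hop delays task_size / bandwidth over every pair of
    adjacent routers in the transmission path."""
    return sum(task_size / b for b in path_bandwidths)

# A task of 100 Mb over two hops of 50 Mb/s and 25 Mb/s:
delay = transmission_delay(100.0, [50.0, 25.0])
assert delay == 6.0   # 2.0 s + 4.0 s
```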
In the step of calculating the container pull delay based on the bandwidth parameter in the pull path, the pull delay is calculated based on the following formula:

$$T_{pull} = \sum_{(x,y)\in U} \frac{Z_{docker}}{B_{(x,y)}}$$

wherein $T_{pull}$ denotes the pull delay, $U$ denotes the set of links between adjacent routers in the pull path, $x$ and $y$ denote any two adjacent routers in the pull path, $B_{(x,y)}$ denotes the bandwidth between router $x$ and router $y$, and $Z_{docker}$ denotes the size of the container corresponding to the processing task;
in some embodiments of the present invention, the delays consumed between every two adjacent routers in the pull path are respectively calculated and summed to obtain the container pull delay, which improves the calculation accuracy of the pull delay.
In a specific implementation process, if the compute node caches a container corresponding to a processing task, the pull delay is 0.
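A sketch combining the pull-path summation with the cached-container special case described above (names and units are assumptions):

```python
def container_pull_delay(container_size, pull_path_bandwidths, cached):
    """Pull delay is zero when the compute node already caches the
    container; otherwise sum the per-hop delays along the pull path."""
    if cached:
        return 0.0
    return sum(container_size / b for b in pull_path_bandwidths)

assert container_pull_delay(200.0, [50.0, 100.0], cached=True) == 0.0
assert container_pull_delay(200.0, [50.0, 100.0], cached=False) == 6.0
```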
In the step of calculating the computation delay required by the processing task based on the CPU resource and the GPU resource in the destination computing node, the computation delay is calculated according to the following formula:

$$T_{comp} = \max\left(\frac{\lambda Z_k^{task}}{\varepsilon F^{G}},\ \frac{\lambda Z_k^{task}}{\mu F^{C}}\right)$$

wherein $T_{comp}$ denotes the computation delay, $\max$ denotes taking the maximum value, $Z_k^{task}$ denotes the size of processing task $k$, $F^{G}$ denotes the GPU resource size of the destination computing node, $F^{C}$ denotes the CPU resource size of the destination computing node, $\lambda$ is a preset task parameter, $\varepsilon$ is a preset GPU positive correlation coefficient parameter, and $\mu$ is a preset CPU positive correlation coefficient parameter;
in some embodiments of the present invention, the CPU resource and the GPU resource may run simultaneously in the compute node, and the time for completing the computation task finally in the CPU resource and the GPU resource is the computation delay.
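Following the description above (CPU and GPU running simultaneously, with the later finisher determining the delay), a minimal sketch, with all coefficient values assumed, is:

```python
def computation_delay(task_size, gpu, cpu, lam=1.0, eps=1.0, mu=1.0):
    """CPU and GPU run simultaneously; the computation delay is the time
    at which the slower of the two finishes (the max of both times)."""
    gpu_time = lam * task_size / (eps * gpu)
    cpu_time = lam * task_size / (mu * cpu)
    return max(gpu_time, cpu_time)

# GPU finishes in 1 s, CPU in 4 s, so the delay is 4 s:
assert computation_delay(8.0, gpu=8.0, cpu=2.0) == 4.0
```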
In the step of calculating the queuing delay of the processing task based on the computation delays of processing tasks that arrived earlier, the queuing delay is obtained by summing the computation delays of the tasks that arrived earlier than the processing task and are still unfinished when it arrives;
in the step of calculating the total action delay based on the transmission delay, the queuing delay, the container pulling delay and the computation delay, the total action delay is calculated based on the following formula:

$$T_{total} = T_{trans} + T_{queue} + T_{pull} + T_{comp}$$

wherein $T_{total}$ denotes the total action delay and $T_{queue}$ denotes the queuing delay.
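The queuing and total-delay steps can be sketched as follows (values are illustrative):

```python
def queuing_delay(earlier_unfinished_compute_delays):
    """Sum of the computation delays of tasks that arrived earlier and
    are still unfinished when this task arrives."""
    return sum(earlier_unfinished_compute_delays)

def total_action_delay(t_trans, t_queue, t_pull, t_comp):
    """T_total = T_trans + T_queue + T_pull + T_comp."""
    return t_trans + t_queue + t_pull + t_comp

assert queuing_delay([1.5, 0.5]) == 2.0
assert total_action_delay(6.0, 2.0, 0.0, 4.0) == 12.0
```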
In some embodiments of the present invention, in the step of calculating the transmission energy consumption based on the transmission power of the transmitting-end router in the transmission path, the transmission energy consumption is calculated based on the following formula:

$$E_{trans} = \sum_{(i,j)\in R} P_{(i,j)} \frac{Z_k^{task}}{B_{(i,j)}}$$

wherein $E_{trans}$ denotes the transmission energy consumption, $i$ and $j$ denote any two adjacent routers in the transmission path, $P_{(i,j)}$ denotes the transmission power of the transmitting-end router of the adjacent routers $i$ and $j$, and $Z_k^{task}/B_{(i,j)}$ is the per-hop transmission delay contributing to $T_{trans}$;
in some embodiments of the present invention, if, in the adjacent router i and router j, the router i is a transmitting end and the router j is a receiving end, the transmitting power of the router i is substituted into the calculation.
In the step of calculating the container pulling energy consumption based on the transmission power of the transmitting-end router in the pull path, the container pulling energy consumption is calculated based on the following formula:

$$E_{pull} = \sum_{(x,y)\in U} P_{(x,y)} \frac{Z_{docker}}{B_{(x,y)}}$$

wherein $E_{pull}$ denotes the container pulling energy consumption, $U$ denotes the set of links between adjacent routers in the pull path, $x$ and $y$ denote any two adjacent routers in the pull path, $P_{(x,y)}$ denotes the transmission power of the transmitting-end router of the adjacent routers $x$ and $y$, and $Z_{docker}/B_{(x,y)}$ is the per-hop pull delay contributing to $T_{pull}$;
in this manner, the energy consumed between every two adjacent routers during transmission is calculated hop by hop, so that the container pulling energy consumption and the transmission energy consumption required by the processing task are calculated accurately.
In the step of calculating the computation energy consumption based on the CPU resources and the GPU resources in the destination computing node, the computation energy consumption is calculated based on the following formula:

$$E_{comp} = P_{com} T_{comp}$$

wherein $E_{comp}$ denotes the computation energy consumption, $P_{com}$ denotes the computing power parameter of the destination computing node, and $T_{comp}$ denotes the computation delay;
in the step of calculating the total action energy consumption based on the transmission energy consumption, the container pulling energy consumption and the computation energy consumption, the total action energy consumption is calculated according to the following formula:

$$E_{total} = E_{trans} + E_{pull} + E_{comp}$$

wherein $E_{total}$ denotes the total action energy consumption.
In some embodiments of the invention, the computing power parameter of the destination computing node is calculated according to the following formula:

$$P_{com} = \eta_G F^{G} + \eta_C F^{C}$$

wherein $P_{com}$ denotes the computing power parameter of the destination computing node, $\eta_G$ denotes its preset GPU power factor, $\eta_C$ denotes its preset CPU power factor, $F^{G}$ denotes its GPU resource size, and $F^{C}$ denotes its CPU resource size.
By adopting the scheme, the calculation capability parameters of the calculation nodes are calculated through the GPU resources and the CPU resources of the calculation nodes, the calculation energy consumption is calculated based on the calculation capability parameters, and the calculation accuracy of the calculation capability is guaranteed.
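A sketch of the energy-side computation; the linear computing-power combination and all coefficient values here are assumptions for illustration:

```python
def computing_power(gpu, cpu, eta_g=0.5, eta_c=0.3):
    """Computing power parameter from GPU and CPU resources, weighted by
    the node's preset power factors (assumed linear combination)."""
    return eta_g * gpu + eta_c * cpu

def total_action_energy(e_trans, e_pull, gpu, cpu, t_comp):
    """E_total = E_trans + E_pull + E_comp, with E_comp = P_com * T_comp."""
    e_comp = computing_power(gpu, cpu) * t_comp
    return e_trans + e_pull + e_comp

energy = total_action_energy(2.0, 1.0, gpu=8.0, cpu=2.0, t_comp=4.0)
assert energy == 2.0 + 1.0 + (0.5 * 8.0 + 0.3 * 2.0) * 4.0
```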
In some embodiments of the present invention, in the step of the control node calculating the reward function based on the total action delay and the total action energy consumption, the reward function value is calculated according to the following formula:

$$R = \xi - \rho(\alpha T_{total} + \beta E_{total});$$

wherein $R$ denotes the reward function value, $\alpha$, $\beta$, $\xi$ and $\rho$ are all preset calculation parameters, $E_{total}$ denotes the total action energy consumption, and $T_{total}$ denotes the total action delay.
With the above scheme, in each training round the system selects an action from the action space according to the current state of each period and obtains a reward. As rewards accumulate over multiple rounds, the system tends to develop routing strategies that aid the objective optimization.
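A minimal sketch of the reward calculation; all coefficient values are assumed:

```python
def reward(t_total, e_total, alpha=0.5, beta=0.5, xi=10.0, rho=0.1):
    """R = xi - rho * (alpha * T_total + beta * E_total); lower combined
    delay/energy cost yields a higher reward."""
    return xi - rho * (alpha * t_total + beta * e_total)

r_fast = reward(t_total=10.0, e_total=20.0)
r_slow = reward(t_total=40.0, e_total=60.0)
assert r_fast > r_slow   # cheaper, faster actions are rewarded more
```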
In some embodiments of the invention, the state parameters of the computing power network are obtained after the action is performed, and the step of calculating the loss function based on the reward function comprises:
encoding the state parameters, the actions and the reward functions of the pre-action computing power network and the state parameters of the post-action computing power network, combining the encoded state parameters, the actions and the reward functions into a quadruple, and adding the quadruple into a memory space;
randomly extracting any quadruple from the memory space and obtaining the post-action state parameters of the computing power network recorded in that quadruple;
acquiring a corresponding state vector based on state parameters of the post-action computational force network in the extracted quadruple, inputting the state vector into a target network in a reinforcement learning model to obtain an action vector corresponding to the state vector, and outputting the maximum parameter in the parameters of multiple dimensions of the action vector as a training dimension parameter;
calculating a transition function based on the reward function value and the training dimension parameter in the extracted quadruple according to the following formula:

$$y_\delta = R_\delta + \gamma\,\hat{q}_\delta$$

wherein $y_\delta$ denotes the transition function, $R_\delta$ denotes the reward function value in the extracted quadruple $\delta$, $\hat{q}_\delta$ denotes the training dimension parameter, and $\gamma$ is a preset training parameter;
the loss function is calculated based on the transition function according to the following formula:

$$L = [y_\delta - q]^2$$

wherein $L$ denotes the loss function, $y_\delta$ denotes the transition function, and $q$ denotes the dimension parameter value corresponding to the action in the extracted quadruple.
In a specific implementation process, q is a dimension parameter value corresponding to an action when the reinforcement learning model outputs the action in the extracted quadruple.
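A toy sketch of the sampling-and-loss step; here the stored transition also carries the Q-value q recorded when the action was output, and the fixed target network is a stand-in (all names are illustrative):

```python
import random
import numpy as np

def dqn_loss(memory, target_network, gamma=0.9):
    """Sample one transition (s, a, r, s', q) from the replay memory,
    form the target y = r + gamma * max Q_target(s'), and return the
    squared error against the stored Q-value q of the chosen action."""
    state, action, r, next_state, q = random.choice(memory)
    y = r + gamma * float(np.max(target_network(next_state)))
    return (y - q) ** 2

# Toy target network returning fixed Q-values for any state:
target_network = lambda s: np.array([1.0, 3.0, 2.0])
memory = [(None, 1, 0.5, None, 2.0)]
loss = dqn_loss(memory, target_network)
assert abs(loss - (0.5 + 0.9 * 3.0 - 2.0) ** 2) < 1e-9
```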
In a specific implementation, to keep both the consumed energy and the delay small, the problem can be modeled as a multi-objective optimization problem aiming to minimize the weighted sum of delay and energy consumption. The problem can be expressed as follows:

$$\min\ (\alpha T_{total} + \beta E_{total});$$

$$\text{s.t.}\ T_{total} \le T_k^{\max}$$

wherein $T_k^{\max}$ denotes the preset upper time limit for processing task $k$.
This is a multi-objective non-convex optimization problem in which the system state in the next time slot is influenced by the system action in the previous time slot; it therefore belongs to the class of Markov decision problems, and a reinforcement learning algorithm is used to solve it.
The whole model solving process comprises the following steps: task ordering, wherein the control node performs priority ordering on the tasks within a period so that delay-sensitive tasks can be executed first; node matching, wherein the network selects an optimal computing node for each task according to the task parameters and requirements of each task and the computing power states of the computing nodes; forwarding path selection, wherein after node matching is completed, the optimal path to the computing node must be found among multiple candidate paths to ensure the shortest total delay; and container pulling, wherein after a task reaches a computing node, if the node does not provide the container of the service, the container image is pulled from the network.
For node matching and task ordering, a deep reinforcement learning method is used: network resources, computing resources and cache resources are uniformly represented as the network state, task ordering is represented as the action taken by the system, and the optimization objective is represented as the system reward function. Forwarding path selection and container pulling are simulated with a shortest-path algorithm and a greedy algorithm rather than being observed as part of the state, preventing an oversized state space from degrading the training effect. Through multiple rounds of iterative deep reinforcement learning training, the memory space is updated and the neural network is continuously improved, finally achieving a good training effect.
The training process can be represented as shown in the following table:
(The training procedure table is rendered as an image in the original publication and is not reproduced here.)
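A minimal runnable sketch of such a training loop; every class and method name here is illustrative, not from the patent:

```python
import random

class ToyEnv:
    """Minimal stand-in environment: 3 steps per episode."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, -1.0, self.t >= 3   # next_state, reward, done

class ToyNet:
    """Stand-in for the value/target networks."""
    def __init__(self):
        self.params = 0
    def best_action(self, state):
        return 0                           # placeholder greedy action
    def update(self, batch, target, gamma):
        self.params += 1                   # placeholder for a gradient-descent step
    def load_from(self, other):
        self.params = other.params         # parameter synchronization

def train(env, value_net, target_net, episodes=3, sync_every=2, gamma=0.9):
    """Act, store the transition, sample from memory, update the value
    network, and periodically copy its parameters into the target net."""
    memory = []
    step = 0
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = value_net.best_action(state)
            next_state, reward, done = env.step(action)
            memory.append((state, action, reward, next_state))
            batch = random.choice(memory)          # sample for the update
            value_net.update(batch, target_net, gamma)
            step += 1
            if step % sync_every == 0:             # every preset number of steps
                target_net.load_from(value_net)
            state = next_state
    return step

value_net, target_net = ToyNet(), ToyNet()
steps = train(ToyEnv(), value_net, target_net)
assert steps == 9
assert target_net.params == 8   # last synchronization happened at step 8
```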
the invention provides an ICN-based hierarchical CFN architecture, which aims to apply the characteristics of an ICN to a CFN, including a hierarchical naming mechanism, a cache strategy and a workflow, so as to solve the problems of resource perception and task scheduling. The CFN combines heterogeneous computational power information with network information, and improves resource utilization rate and task execution efficiency through resource perception, service positioning and task scheduling. However, it is difficult to uniformly schedule tasks because heterogeneous computing power is difficult to express and perceive and is distributed on each node of the edge network. The architecture of the invention uses a naming mechanism of an ICN to represent computing power and tasks, and uses a cache mechanism and a route forwarding mechanism to solve the problems that heterogeneous computing power is difficult to represent and sense and computing power resource distribution is dispersed in a CFN; on the other hand, the CFN can sense resources, formulate a task forwarding strategy and coordinate computing resources at a data level, solve the flooding problem of the ICN, ensure the service quality of user tasks, improve the computing efficiency in a network and improve the utilization rate of system network resources.
The beneficial effects of the invention include:
1. the CFN framework based on the ICN uses a naming mechanism of the ICN to represent computing power and tasks, and uses a cache mechanism and a route forwarding mechanism to solve the problems that heterogeneous computing power is difficult to express and sense and computing power resources are distributed dispersedly in the CFN;
2. in the scheme provided by the invention, as the number of routing hops increases, more forwarding nodes cache containers in the task forwarding path, which is beneficial to reducing the container pulling time and improving the service hit rate. The network architecture provided by the invention has advantages in large-scale networks, and can improve the processing efficiency of tasks and the utilization rate of computing resources and network resources of the whole network.
3. According to the reinforcement learning model designed by the invention, when multiple tasks arrive in a period, the tasks need to be subjected to priority sequencing, overall planning is performed on the overall tasks, the overall optimization can be ensured, and the lowest energy is consumed while the user service quality is ensured. Therefore, in such a scenario, the DQN scheme designed herein is a better choice to more closely approach the global optimal goal of the actual scenario.
The embodiment of the present invention further provides a computing power network task scheduling apparatus based on an information center network, the apparatus includes a computer device, the computer device includes a processor and a memory, the memory stores computer instructions, the processor is configured to execute the computer instructions stored in the memory, and when the computer instructions are executed by the processor, the apparatus implements the steps implemented by the foregoing method.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the foregoing computing power network task scheduling method based on an information center network. The computer-readable storage medium may be a tangible storage medium such as a Random Access Memory (RAM), a Read-Only Memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable storage disk, a CD-ROM, or any other form of storage medium known in the art.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein may be implemented as hardware, software, or combinations of both. Whether this is done in hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments can be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions, or change the order between the steps, after comprehending the spirit of the present invention.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments in the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A computing power network task scheduling method based on an information center network is characterized in that the method is applied to a computing power network, the computing power network comprises forwarding nodes, control nodes and computing nodes, and the method comprises the following steps:
the method comprises the steps that a forwarding node receives a processing request of a terminal user and sends the processing request to a control node, the control node is provided with a pre-trained reinforcement learning model, a target network in the reinforcement learning model performs an action of allocating a computing node for a processing task based on the processing task corresponding to the processing request, and the computing node is provided with computing resources for executing the processing task;
the control node calculates total action time delay and total action energy consumption based on actions of the distributed computing nodes made by the reinforcement learning model, calculates a reward function based on the total action time delay and the total action energy consumption, and calculates a loss function based on the reward function;
and updating parameters of the value network in the reinforcement learning model based on the loss function, and synchronizing the parameters in the value network to the target network every preset step number.
2. The information-centric-network-based computational power network task scheduling method according to claim 1, wherein the step of the target network in the reinforcement learning model performing an action of allocating a computational node to a processing task based on the processing task corresponding to the processing request comprises:
the control node acquires state parameters of the computing power network, wherein the state parameters comprise forwarding node state parameters and computing node state parameters, and the forwarding node state parameters and the computing node state parameters are combined into a state vector;
and inputting the state vector into a reinforcement learning model, wherein the reinforcement learning model outputs an action vector, the dimension corresponding to the largest parameter in the parameters of the plurality of dimensions of the action vector is extracted, and the action corresponding to the dimension is taken as the output action.
3. The information-centric-network-based computational-effort network task scheduling method according to claim 2, wherein the forwarding node state parameters are used to mark whether each forwarding node is sending a processing request to the control node, if so, the forwarding node is marked as a first parameter, otherwise, the forwarding node is marked as a second parameter; the node state parameters consist of service function parameters, node resource parameters and residual time parameters, the service function parameters are used for marking whether each computing node executes the processing request, if so, the computing node is marked as a first parameter, and if not, the computing node is marked as a second parameter; the node resource parameter is the size of the computing resource of the computing node; the remaining time parameter is the remaining time required by the compute node to execute the current processing request.
4. The information-centric-network-based computing power network task scheduling method according to claim 1, wherein the action includes a transmission path of a processing task and a pull path of a container corresponding to the processing task, the total action delay includes a transmission delay, a queuing delay, a container pull delay, and a computation delay, the total action energy consumption includes transmission energy consumption, container pull energy consumption, and computation energy consumption, and the step of the control node computing the total action delay and the total action energy consumption based on the action of the distribution computation node made by the reinforcement learning model includes:
calculating transmission delay based on the bandwidth parameters in the transmission path, calculating container pulling delay based on the bandwidth parameters in the pulling path, calculating the computation delay required by the processing task based on CPU (Central Processing Unit) resources and GPU (Graphics Processing Unit) resources in the destination computing node, and calculating the queuing delay of the processing task based on the computation delays of processing tasks that arrived earlier than the processing task; calculating the total action delay based on the transmission delay, the queuing delay, the container pulling delay and the computation delay;
calculating transmission energy consumption based on the transmission power of a transmitting end router in the transmission path, calculating container pulling energy consumption based on the transmission power of the transmitting end router in the pulling path, and calculating calculation energy consumption based on CPU resources and GPU resources in a target calculation node; and calculating the total energy consumption of the action based on the transmission energy consumption, the container pulling energy consumption and the calculation energy consumption.
5. The information-centric-network-based computational-effort network task scheduling method according to claim 4, wherein in the step of calculating a transmission delay based on the bandwidth parameter in the transmission path, the transmission delay is calculated based on the formula:

$$T_{trans} = \sum_{(i,j)\in R} \frac{Z_k^{task}}{B_{(i,j)}}$$

wherein $T_{trans}$ denotes the transmission delay, $R$ denotes the set of links between adjacent routers in the transmission path, $i$ and $j$ denote any two adjacent routers in the transmission path, $B_{(i,j)}$ denotes the bandwidth between router $i$ and router $j$, and $Z_k^{task}$ denotes the size of processing task $k$;
in the step of calculating the container pull delay based on the bandwidth parameter in the pull path, the pull delay is calculated based on the following formula:

$$T_{pull} = \sum_{(x,y)\in U} \frac{Z_{docker}}{B_{(x,y)}}$$

wherein $T_{pull}$ denotes the pull delay, $U$ denotes the set of links between adjacent routers in the pull path, $x$ and $y$ denote any two adjacent routers in the pull path, $B_{(x,y)}$ denotes the bandwidth between router $x$ and router $y$, and $Z_{docker}$ denotes the size of the container corresponding to the processing task;
in the step of calculating the computation delay required by the processing task based on the CPU resources and GPU resources of the destination computing node, the computation delay is calculated according to the following formula:

$$T_{comp} = \frac{\lambda Z_k}{\max\left(\varepsilon F^{GPU}, \mu F^{CPU}\right)}$$

wherein T_comp represents the computation delay, max denotes taking the maximum value, Z_k represents the size of processing task k, F^GPU represents the GPU resource size of the destination computing node, F^CPU represents the CPU resource size of the destination computing node, λ is a preset task parameter, ε is a preset GPU positive correlation coefficient, and μ is a preset CPU positive correlation coefficient;
in the step of calculating the queuing delay of the processing task based on the computation delays of processing tasks that arrived earlier than the processing task, the queuing delay is obtained as the sum of the computation delays of the earlier-arrived processing tasks that have not yet finished when the processing task arrives;
in the step of calculating the total action delay based on the transmission delay, the queuing delay, the container pull delay and the computation delay, the total action delay is calculated based on the following formula:

$$T_{total} = T_{trans} + T_{queue} + T_{pull} + T_{comp}$$

wherein T_total represents the total action delay and T_queue represents the queuing delay.
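The delay model in claim 5 can be sketched as plain functions. This is an illustrative reading, not the patent's implementation: the function names, the link-list/bandwidth-dictionary representation, and the default values of λ, ε and μ are all assumptions introduced for the example.

```python
def transmission_delay(task_size, links, bandwidth):
    # T_trans: task size over link bandwidth, summed over the adjacent
    # router pairs (i, j) along the transmission path.
    return sum(task_size / bandwidth[(i, j)] for (i, j) in links)

def pull_delay(container_size, links, bandwidth):
    # T_pull: same form as T_trans, over the container pull path.
    return sum(container_size / bandwidth[(x, y)] for (x, y) in links)

def computation_delay(task_size, gpu, cpu, lambda_=1.0, eps=1.0, mu=1.0):
    # T_comp: weighted task demand divided by the larger of the
    # weighted GPU and CPU capacities of the destination node.
    return lambda_ * task_size / max(eps * gpu, mu * cpu)

def total_delay(t_trans, t_queue, t_pull, t_comp):
    # T_total = T_trans + T_queue + T_pull + T_comp
    return t_trans + t_queue + t_pull + t_comp
```

The queuing delay T_queue would be accumulated separately, as the sum of `computation_delay` values of earlier-arrived unfinished tasks on the same node.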
6. The information-centric-network-based computing power network task scheduling method according to claim 4, wherein in the step of calculating the transmission energy consumption based on the transmit power of the sending-end routers in the transmission path, the transmission energy consumption is calculated based on the following formula:

$$E_{trans} = \sum_{(i,j) \in R} P_{(i,j)} T_{trans}$$

wherein E_trans represents the transmission energy consumption, R represents the set of adjacent router pairs in the transmission path, i and j represent any two adjacent routers in the transmission path, P_(i,j) represents the transmit power of the sending-end router of the adjacent routers i and j, and T_trans represents the transmission delay;
in the step of calculating the container pull energy consumption based on the transmit power of the sending-end routers in the pull path, the container pull energy consumption is calculated based on the following formula:

$$E_{pull} = \sum_{(x,y) \in U} P_{(x,y)} T_{pull}$$

wherein E_pull represents the container pull energy consumption, U represents the set of adjacent router pairs in the pull path, x and y represent any two adjacent routers in the pull path, P_(x,y) represents the transmit power of the sending-end router of the adjacent routers x and y, and T_pull represents the container pull delay;
in the step of calculating the computation energy consumption based on the CPU resources and GPU resources of the destination computing node, the computation energy consumption is calculated based on the following formula:

$$E_{comp} = P_{com} T_{comp}$$

wherein E_comp represents the computation energy consumption, P_com represents the computing power parameter of the destination computing node, and T_comp represents the computation delay;
in the step of calculating the total action energy consumption based on the transmission energy consumption, the container pull energy consumption and the computation energy consumption, the total action energy consumption is calculated according to the following formula:

$$E_{total} = E_{trans} + E_{pull} + E_{comp}$$

wherein E_total represents the total action energy consumption.
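The energy model of claim 6 mirrors the delay model: each energy term is a power multiplied by the corresponding delay. A minimal sketch, with function names and the per-link power dictionary being illustrative assumptions rather than the patent's actual code:

```python
def transmission_energy(t_trans, links, power):
    # E_trans: sending-end transmit power times the transmission delay,
    # summed over the adjacent router pairs on the transmission path.
    return sum(power[(i, j)] * t_trans for (i, j) in links)

def pull_energy(t_pull, links, power):
    # E_pull: same form, over the container pull path.
    return sum(power[(x, y)] * t_pull for (x, y) in links)

def computation_energy(p_com, t_comp):
    # E_comp = P_com * T_comp
    return p_com * t_comp

def total_energy(e_trans, e_pull, e_comp):
    # E_total = E_trans + E_pull + E_comp
    return e_trans + e_pull + e_comp
```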
7. The information-centric-network-based computing power network task scheduling method according to claim 6, wherein the computing power parameter of the destination computing node is calculated according to the following formula:

$$P_{com} = \eta_G F^{GPU} + \eta_C F^{CPU}$$

wherein P_com represents the computing power parameter of the destination computing node, η_G represents the preset GPU power factor of the destination computing node, η_C represents the preset CPU power factor of the destination computing node, F^GPU represents the GPU resource size of the destination computing node, and F^CPU represents the CPU resource size of the destination computing node.
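Claim 7's computing power parameter is a linear combination of the node's GPU and CPU resources. A one-line sketch (the function name and argument order are assumptions for illustration):

```python
def computing_power(eta_g, gpu, eta_c, cpu):
    # P_com = eta_G * F_GPU + eta_C * F_CPU: the node's effective
    # computing power is linear in its weighted GPU and CPU resources.
    return eta_g * gpu + eta_c * cpu
```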
8. The information-centric-network-based computing power network task scheduling method according to claim 1, wherein in the step of the control node calculating the reward function based on the total action delay and the total action energy consumption, the reward function value is calculated according to the following formula:

$$R = \xi - \rho\left(\alpha T_{total} + \beta E_{total}\right)$$

wherein R represents the reward function value, α, β, ξ and ρ are all preset calculation parameters, E_total represents the total action energy consumption, and T_total represents the total action delay.
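The reward of claim 8 decreases as the weighted delay-plus-energy cost grows, so the scheduling agent is pushed toward low-latency, low-energy actions. A minimal sketch; the default parameter values are purely illustrative:

```python
def reward(t_total, e_total, alpha=0.5, beta=0.5, xi=1.0, rho=0.1):
    # R = xi - rho * (alpha * T_total + beta * E_total):
    # a lower combined delay/energy cost yields a higher reward.
    return xi - rho * (alpha * t_total + beta * e_total)
```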
9. The information-centric-network-based computing power network task scheduling method according to claim 1, wherein the step of acquiring the state parameters of the computing power network after the action is performed and calculating the loss function based on the reward function comprises:
encoding the state parameters of the pre-action computing power network, the action, the reward function value and the state parameters of the post-action computing power network, combining them into a quadruple, and adding the quadruple to a memory space;
randomly extracting a quadruple from the memory space and obtaining the state parameters of the post-action computing power network in the extracted quadruple;
obtaining the corresponding state vector based on the state parameters of the post-action computing power network in the extracted quadruple, inputting the state vector into the target network of the reinforcement learning model to obtain the action vector corresponding to the state vector, and taking the maximum value among the parameters of the multiple dimensions of the action vector as the training dimension parameter;
calculating the transition function based on the reward function value in the extracted quadruple and the training dimension parameter according to the following formula:

$$y_{\delta} = R_{\delta} + \gamma \hat{q}_{\delta}$$

wherein y_δ represents the transition function, R_δ represents the reward function value in the extracted quadruple δ, q̂_δ represents the training dimension parameter, and γ is a preset training parameter;
calculating the loss function based on the transition function according to the following formula:

$$L = \left[y_{\delta} - q\right]^2$$

wherein L represents the loss function, y_δ represents the transition function, and q represents the dimension parameter value corresponding to the action in the extracted quadruple.
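Claim 9 describes a standard DQN-style experience replay and temporal-difference target. The steps above can be sketched as follows; the buffer size, γ default, and the use of plain floats in place of the target network's Q-values are all simplifying assumptions, and no neural network is modeled here:

```python
import random
from collections import deque

# Replay memory of (state, action, reward, next_state) quadruples.
memory = deque(maxlen=10000)

def store(state, action, r, next_state):
    # Encode the experience as a quadruple and add it to the memory space.
    memory.append((state, action, r, next_state))

def sample():
    # Randomly extract one quadruple from the memory space.
    return random.choice(memory)

def td_target(r, max_next_q, gamma=0.99):
    # y_delta = R_delta + gamma * q_hat_delta, where q_hat_delta would be
    # the maximum dimension of the target network's action vector.
    return r + gamma * max_next_q

def td_loss(y, q):
    # L = (y_delta - q)^2, the squared temporal-difference error.
    return (y - q) ** 2
```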
10. An information-centric-network-based computing power network task scheduling apparatus, comprising a computer device that comprises a processor and a memory, the memory having computer instructions stored therein, the processor being configured to execute the computer instructions stored in the memory, wherein the computer instructions, when executed by the processor, perform the steps of the method as recited in any one of claims 1-9.
CN202211461675.7A 2022-11-17 2022-11-17 Computing power network task scheduling method and device based on information center network Pending CN115766722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211461675.7A CN115766722A (en) 2022-11-17 2022-11-17 Computing power network task scheduling method and device based on information center network


Publications (1)

Publication Number Publication Date
CN115766722A true CN115766722A (en) 2023-03-07

Family

ID=85334458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211461675.7A Pending CN115766722A (en) 2022-11-17 2022-11-17 Computing power network task scheduling method and device based on information center network

Country Status (1)

Country Link
CN (1) CN115766722A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117453388A (en) * 2023-07-21 2024-01-26 广东奥飞数据科技股份有限公司 Distributed computing power intelligent scheduling system and method
CN117453388B (en) * 2023-07-21 2024-02-27 广东奥飞数据科技股份有限公司 Distributed computing power intelligent scheduling system and method
CN116846818A (en) * 2023-09-01 2023-10-03 北京邮电大学 Method, system, device and storage medium for dispatching traffic of computing power network
CN116846818B (en) * 2023-09-01 2023-12-01 北京邮电大学 Method, system, device and storage medium for dispatching traffic of computing power network

Similar Documents

Publication Publication Date Title
CN107332913B (en) Optimized deployment method of service function chain in 5G mobile network
Ndikumana et al. Joint communication, computation, caching, and control in big data multi-access edge computing
Ning et al. Joint computing and caching in 5G-envisioned Internet of vehicles: A deep reinforcement learning-based traffic control system
Kazmi et al. Infotainment enabled smart cars: A joint communication, caching, and computation approach
Pu et al. Online resource allocation, content placement and request routing for cost-efficient edge caching in cloud radio access networks
CN115766722A (en) Computing power network task scheduling method and device based on information center network
Zhang et al. Cooperative edge caching: A multi-agent deep learning based approach
Dai et al. Multi-armed bandit learning for computation-intensive services in MEC-empowered vehicular networks
CN107395506B (en) Service function chain deployment method for optimizing transmission delay
Aazam et al. Cloud of things (CoT): cloud-fog-IoT task offloading for sustainable internet of things
Dai et al. A learning-based approach for vehicle-to-vehicle computation offloading
CN110366269A (en) Session establishing method and equipment
Zhang et al. A multidomain virtual network embedding algorithm based on multiobjective optimization for Internet of Drones architecture in Industry 4.0
CN109831548A (en) Virtual content distribution network vCDN node method for building up and server
Wang et al. BC-mobile device cloud: A blockchain-based decentralized truthful framework for mobile device cloud
Sinky et al. Responsive content-centric delivery in large urban communication networks: A LinkNYC use-case
CN111885493B (en) Micro-cloud deployment method based on improved cuckoo search algorithm
Hosseini Bidi et al. A fog‐based fault‐tolerant and QoE‐aware service composition in smart cities
CN113873534A (en) Block chain assisted federal learning active content caching method in fog calculation
Meneguette et al. An efficient green-aware architecture for virtual machine migration in sustainable vehicular clouds
CN114827284A (en) Service function chain arrangement method and device in industrial Internet of things and federal learning system
Ullah et al. Optimizing task offloading and resource allocation in edge-cloud networks: a DRL approach
CN110913430B (en) Active cooperative caching method and cache management device for files in wireless network
CN109495565A (en) High concurrent service request processing method and equipment based on distributed ubiquitous computation
Zhao et al. Neighboring-aware caching in heterogeneous edge networks by actor-attention-critic learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination