CN112346854B - In-network resource scheduling method and system for hierarchical collaborative decision and storage medium


Info

Publication number: CN112346854B
Authority: CN (China)
Prior art keywords: network, network node, computing power, forwarding, interface
Legal status: Active
Application number: CN202011039260.1A
Other languages: Chinese (zh)
Other versions: CN112346854A
Inventors: 雷凯 (Lei Kai), 景晓军 (Jing Xiaojun), 李渝 (Li Yu)
Current assignee: Peking University Shenzhen Graduate School
Original assignee: Peking University Shenzhen Graduate School
Application filed by Peking University Shenzhen Graduate School
Priority to CN202011039260.1A
Publication of CN112346854A
Application granted
Publication of CN112346854B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1017Server selection for load balancing based on a round robin mechanism

Abstract

An in-network resource scheduling method, system, and storage medium for hierarchical collaborative decision-making. The in-network resource scheduling method comprises: acquiring a computing power demand interest packet requested by an upstream network node in a computing power network; and judging whether the current network node meets the computing power demand of the computing power demand interest packet; if so, providing a computing power service for the data in the computing power demand interest packet according to the deployed function, and if not, providing a forwarding service for the computing power demand interest packet. In this technical scheme, the scheduling decision mechanism of the fine-grained local scheduling layer is optimized with the help of global information, so that the utilization rate of global resources such as computing power and storage in the computing power network is improved and load balancing of in-network resources is achieved. In addition, non-end-to-end, hierarchically and cooperatively decided in-network resource scheduling is provided by combining a coarse-grained global scheduling layer with a fine-grained local scheduling layer, so that efficient and balanced in-network resource scheduling can be realized at the technical level and the overall performance of the computing power network is also improved.

Description

In-network resource scheduling method and system for hierarchical collaborative decision and storage medium
Technical Field
The invention relates to the technical field of network information processing, in particular to an in-network resource scheduling method and system for hierarchical collaborative decision and a storage medium.
Background
In the traditional data processing mode, specific tasks are executed in the core cloud. However, because data processing in the cloud computing architecture goes through a centralized orchestration layer, the round-trip delay of this mode is long and cannot meet the high real-time requirements of various new applications (such as VR and autonomous driving). By relying on computing, storage, and other network resources that are closer to the user, edge computing pushes content and services nearer to the user and effectively alleviates the high-latency problem of the cloud computing architecture. Edge computing has the advantages of low latency, high bandwidth, and proximity to users, and its development promotes the fusion of networking and computing at the basic resource level. Under the development trend of cloud computing and edge computing, large amounts of computing power of different scales will be spread at different distances from users, providing various kinds of personalized services, and the Computing First Network (CFN) has emerged accordingly.
A computing power network requires coordinated consideration of network and computing convergence and flexible computing power scheduling, which poses new challenges to existing resource scheduling mechanisms. In a computing power network, computing power scheduling faces several challenges. 1) Real-time operation: this is a common challenge for all current resource scheduling methods; most existing resource scheduling strategies are centralized, converge slowly, and scale poorly, and a solution obtained through offline training cannot respond to environmental changes in real time. 2) Global optimization of the network and computing power: single edge sites in a computing power network have limited resources and multiple edge sites lack coordination, so the scheduling strategy must schedule reasonably based on the combined network and computing power conditions to achieve global optimization and load balancing of the computing network. 3) Large-scale scheduling: current scheduling approaches have severe limitations when dealing with large numbers of resources and tasks; centralized end-to-end scheduling strategies suffer from an overly complex search space and too many local optima when solving large-scale scheduling problems, and how to solve the large-scale scheduling problem is the key to scheduling strategy design. 4) Efficient routing and addressing: static pre-configuration or DNS-based resolution is inefficient and costly, and is unsuitable for lightweight computing resources.
Disclosure of Invention
The invention mainly solves the technical problem of how to schedule resources within a computing power network efficiently and in a balanced manner. To solve this problem, the present application provides an in-network resource scheduling method and system for hierarchical collaborative decision-making, and a storage medium.
According to a first aspect, an embodiment provides an in-network resource scheduling method for hierarchical collaborative decision-making, which comprises: acquiring a computing power demand interest packet requested by an upstream network node in a computing power network, the computing power demand interest packet comprising data and the required operation configuration parameters; judging whether the current network node meets the computing power demand of the computing power demand interest packet; if so, providing a computing power service for the data in the computing power demand interest packet according to a deployed function, and if not, providing a forwarding service for the computing power demand interest packet. In the computing power service, the operation result is formed into a first computing power service response packet, and the first computing power service response packet is sent to the upstream network node along the original path. In the forwarding service, the optional network interfaces corresponding to the computing power demand interest packet are looked up according to the routing information table of the current network node, one of the network interfaces is determined as the forwarding interface according to the configured preference order of the network interfaces, and the computing power demand interest packet is forwarded through the forwarding interface to a downstream network node in the computing power network.
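As a purely illustrative aid (not part of the claimed method), a minimal Python sketch of the serve-or-forward decision described in this aspect might look as follows; all class, field, and function names here are assumptions introduced for the example:

```python
# Minimal sketch of the per-node serve-or-forward decision described above.
# Every identifier is an illustrative assumption, not terminology fixed by the claims.

class NetworkNode:
    def __init__(self, deployed_functions, routing_table, preference_values):
        self.deployed_functions = deployed_functions    # function name -> callable
        self.routing_table = routing_table              # function name -> optional interfaces
        self.preference_values = preference_values      # interface -> configured preference value
        self.forwarding_table = {}                      # interest name -> {"out": ..., "in": ...}

    def meets_demand(self, interest):
        # Simplified check: the requested function is deployed locally
        # (a real node would also check idle computing power and storage).
        return interest["function"] in self.deployed_functions

    def handle_interest(self, interest, arrival_iface):
        if self.meets_demand(interest):
            # Computing power service: run the deployed function and return the
            # first computing power service response packet along the original path.
            result = self.deployed_functions[interest["function"]](interest["data"])
            return {"type": "response", "name": interest["name"],
                    "result": result, "send_via": arrival_iface}
        # Forwarding service: pick the optional interface with the highest preference value.
        candidates = self.routing_table.get(interest["function"], [])
        out_iface = max(candidates, key=lambda i: self.preference_values.get(i, 0))
        self.forwarding_table[interest["name"]] = {"out": out_iface, "in": arrival_iface}
        return {"type": "forward", "send_via": out_iface, "interest": interest}


# Usage: a node that only deploys "resize" forwards a "classify" request downstream.
node = NetworkNode({"resize": lambda d: d},
                   {"classify": ["if0", "if1"]},
                   {"if0": 35, "if1": 70})
print(node.handle_interest({"name": "/app/classify/42", "function": "classify",
                            "data": b"pixels"}, arrival_iface="if2"))
```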
Judging whether the current network node meets the computing power demand of the computing power demand interest packet comprises: judging whether the function deployed on the current network node, the idle computing power resources, and the idle storage resources match the operation configuration parameters in the computing power demand interest packet, and determining that the judgment result is yes when the matched function is not currently executing a computing task or can currently finish executing its computing task.
Looking up the optional network interfaces corresponding to the computing power demand interest packet according to the routing information table of the current network node comprises: checking the routing information table of the current network node and finding the routing entry corresponding to the function matching the computing power demand interest packet; and selecting one or more network interfaces from the routing entry as the optional network interfaces corresponding to the computing power demand interest packet.
Determining one of the network interfaces as the forwarding interface according to the configured preference order of the network interfaces comprises: configuring a preference value for each of the optional network interfaces, the preference value being obtained by evaluating the comprehensive state evaluation information of the network node connected to the corresponding network interface and the network state information of its adjacent hop; and selecting, according to the preference order of the network interfaces, the network interface corresponding to the maximum preference value as the forwarding interface.
After the forwarding interface is determined, a forwarding information table of the current network node is configured and formed, a forwarding entry corresponding to the computing power demand interest packet is created in the forwarding information table, and the forwarding interface is recorded in the corresponding forwarding entry.
Before checking the routing information table of the current network node, the method further comprises: checking whether a forwarding entry matching the computing power demand interest packet exists in the forwarding information table of the current network node; if not, checking the routing information table of the current network node to find the routing entry corresponding to the function matching the computing power demand interest packet; if yes, adding a receiving interface to the matched forwarding entry, the receiving interface being the network interface through which the current network node receives the computing power demand interest packet requested by the upstream network node.
After the computing power demand interest packet is forwarded to the downstream network node in the computing power network through the forwarding interface, the method further comprises: acquiring a second computing power service response packet from the downstream network node, the second computing power service response packet comprising the operation result obtained by the downstream network node providing the computing power service for the computing power demand interest packet; sending the second computing power service response packet to the upstream network node through the receiving interface in the forwarding entry matching the computing power demand interest packet; and clearing the matched forwarding entry in the forwarding information table after the second computing power service response packet is sent.
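The forwarding-entry behaviour described in the preceding paragraphs can be pictured with a small sketch; the entry layout and function names below are assumptions made only for illustration:

```python
# Illustrative sketch of the forwarding-entry lifecycle: create an entry when an
# interest is forwarded, aggregate receiving interfaces for repeated interests,
# return the downstream response through the recorded receiving interface(s),
# and clear the entry afterwards. Names and structures are assumed, not claimed.

class ForwardingInformationTable:
    def __init__(self):
        self.entries = {}   # interest name -> {"out": forwarding iface, "in": receiving ifaces}

    def on_interest(self, name, receiving_iface, choose_forwarding_iface):
        entry = self.entries.get(name)
        if entry is None:
            # No matching forwarding entry: consult the routing information table
            # (represented here by choose_forwarding_iface) and create one.
            out_iface = choose_forwarding_iface(name)
            self.entries[name] = {"out": out_iface, "in": {receiving_iface}}
            return out_iface                 # forward the interest downstream
        entry["in"].add(receiving_iface)     # matching entry: just record the extra receiver
        return None                          # already forwarded, nothing new to send

    def on_response(self, name, response, send):
        # A second computing power service response packet arrives from downstream.
        entry = self.entries.pop(name, None)     # clear the matched entry after sending
        if entry is not None:
            for iface in entry["in"]:
                send(response, iface)            # back toward the upstream network node(s)


# Usage
table = ForwardingInformationTable()
out = table.on_interest("/app/classify/42", "if2", lambda name: "if1")
table.on_response("/app/classify/42", {"result": "cat"},
                  lambda pkt, iface: print("send", pkt, "via", iface))
```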
Configuring the preference value of each of the optional network interfaces comprises: reporting an information collection data packet to an upper-layer controller in response to an information collection request from the upper-layer controller, the information collection data packet comprising the comprehensive state evaluation information of the network node and the network state information of its adjacent hop; the upper-layer controller is used to construct an in-network resource distribution state diagram according to the information collection data packets reported by the network nodes in the computing power network and to calculate the comprehensive state evaluation value of each network node according to the in-network resource distribution state diagram, the in-network resource distribution state diagram comprising the computing power state of each network node, the connection state between the network nodes, and the link quality state between the network nodes; and acquiring from the upper-layer controller the comprehensive state evaluation values of the network nodes adjacent to the current network node, and configuring the comprehensive state evaluation value of each such network node as the preference value of the correspondingly connected network interface.
According to a second aspect, an embodiment provides an in-network resource scheduling system of a computing power network, comprising a fine-grained local scheduling layer comprising a plurality of interconnected network nodes and a coarse-grained global scheduling layer comprising at least one upper-layer controller; the upper-layer controller is communicatively connected with at least one network node and forms a management range over the connected network nodes; each network node is configured to provide a computing power service or a forwarding service for a computing power demand interest packet requested by an upstream network node according to the in-network resource scheduling method of the first aspect; the upper-layer controller establishes communication with each network node within its management range and is used to calculate the comprehensive state evaluation value of each network node according to the information collection data packets reported by the network nodes, the comprehensive state evaluation value being used to configure the preference order of the network interfaces connected to the corresponding network nodes.
According to a third aspect, an embodiment provides a computer-readable storage medium comprising a program executable by a processor to implement the in-network resource scheduling method described in the first aspect above.
The beneficial effects of this application are as follows:
According to the above embodiments, an in-network resource scheduling method, system, and storage medium for hierarchical collaborative decision-making are provided, where the in-network resource scheduling method comprises: acquiring a computing power demand interest packet requested by an upstream network node in a computing power network; judging whether the current network node meets the computing power demand of the computing power demand interest packet; if so, providing a computing power service for the data in the computing power demand interest packet according to a deployed function, and if not, providing a forwarding service for the computing power demand interest packet; in the computing power service, forming the operation result into a first computing power service response packet and sending the first computing power service response packet to the upstream network node along the original path; in the forwarding service, looking up the optional network interfaces corresponding to the computing power demand interest packet according to the routing information table of the current network node, determining one of the network interfaces as the forwarding interface according to the configured preference order of the network interfaces, and forwarding the computing power demand interest packet through the forwarding interface to a downstream network node in the computing power network. First, when it is judged that the current network node cannot meet the computing power demand of the computing power demand interest packet, a forwarding service is provided for the packet, so the forwarding policy of the system has the capability of independent scheduling decisions and can make fast, flexible scheduling decisions based on one-hop neighbor nodes and the network state even without the coordination of an upper-layer controller. Second, in the forwarding service, the optional network interfaces corresponding to the computing power demand interest packet are looked up according to the routing information table of the current network node, and one of them is determined as the forwarding interface according to the configured preference order of the network interfaces before the packet is forwarded; the fine-grained local scheduling layer of the system can therefore comprehensively consider the computing power condition and network communication state of each adjacent network node and select the optimal network node as the forwarding destination of the computing power demand interest packet, improving the performance of the computing power network. Third, the coarse-grained global scheduling layer periodically issues the comprehensive state evaluation value of each network node, thereby configuring the network interface preference order of each network node in the fine-grained local scheduling layer, so the system can optimize the scheduling decisions of the fine-grained local scheduling layer from a global view and reconcile the timeliness of in-network resource scheduling with global optimization. Fourth, the technical scheme of this application optimizes the scheduling decision mechanism of the fine-grained local scheduling layer with the help of global information, so that the utilization rate of global resources such as computing power and storage in the computing power network is improved and load balancing of in-network resources is achieved. Fifth, the technical scheme provides non-end-to-end, hierarchically and cooperatively decided in-network resource scheduling by combining a global coarse-grained scheduling layer with a local fine-grained scheduling layer, which helps realize efficient and balanced in-network resource scheduling at the technical level and also improves the overall performance of the computing power network.
Drawings
FIG. 1 is a schematic diagram illustrating the architecture of an in-network resource scheduling system according to an embodiment;
FIG. 2 is a schematic diagram of the communication principle of the in-network resource scheduling system;
FIG. 3 is a diagram illustrating an in-network resource scheduling system according to another embodiment;
FIG. 4 is a timing diagram of an upper-layer controller;
FIG. 5 is a diagram illustrating an in-network resource distribution state;
FIG. 6 is a flowchart illustrating an in-network resource scheduling method according to the second embodiment;
FIG. 7 is a flowchart of providing the forwarding service;
FIG. 8 is a flowchart of an in-network resource scheduling method according to the third embodiment.
Detailed Description
The present invention will be described in further detail below with reference to the specific embodiments and the accompanying drawings, wherein like elements in different embodiments are given like reference numerals. In the following description, numerous specific details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that in different instances some of these features may be omitted or replaced by other elements, materials, or methods. In some instances, certain operations related to the present application are not shown or described in the specification in order to avoid the core of the present application being obscured by excessive description; for those skilled in the art, a detailed description of these operations is not necessary, since they can be fully understood from the description in the specification and from general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or adjusted in a manner apparent to those skilled in the art. Therefore, the various orders in the specification and drawings are only for the purpose of clearly describing a certain embodiment and are not meant to be a required order unless it is otherwise stated that a certain order must be followed.
The ordinal numbering of components, e.g., "first" and "second", is used herein only to distinguish the described objects and does not have any sequential or technical meaning. Unless otherwise specified, the terms "connected" and "coupled" as used in this application include both direct and indirect connection (coupling).
The inventive concept of the present application is as follows: most existing resource scheduling strategies are centralized; they treat the scheduling problem as a whole and give an end-to-end scheduling decision through a solution obtained by offline training, and therefore cannot meet the real-time requirements of new applications. For large-scale scheduling problems, heuristic resource scheduling strategies are often adopted, including resource scheduling mechanisms based on genetic algorithms, ant colony optimization algorithms, and particle swarm optimization algorithms. Heuristic resource scheduling strategies give an end-to-end scheduling decision by centrally and iteratively searching for a globally optimal solution, and cannot satisfy the demand for ubiquitous computing in a computing power network. Aimed at these shortcomings of current computing power network resource scheduling strategies, the present invention provides an in-network resource scheduling method based on hierarchical collaborative decision-making, which uses a forwarding mechanism similar to that of a named data network in the fine-grained local scheduling layer and provides non-end-to-end, hierarchically and cooperatively decided in-network resource scheduling by combining global slow control with local fast control, thereby realizing efficient and balanced in-network resource scheduling and improving the overall performance of the computing power network.
The technical solution of the present application will be specifically described with reference to the following examples.
Embodiment 1
Referring to fig. 1, this embodiment discloses an in-network resource scheduling system of a computing power network, which includes a fine-grained local scheduling layer 2 and a coarse-grained global scheduling layer 1; the fine-grained local scheduling layer 2 comprises a plurality of interconnected network nodes, the coarse-grained global scheduling layer 1 comprises at least one upper-layer controller, and the upper-layer controller is communicatively connected with at least one network node and forms a management range over the connected network nodes.
In this embodiment, the fine-grained local scheduling layer 2 comprises at least one network node, such as the network nodes 21, 22, 23, 24, 25, and 26; the coarse-grained global scheduling layer 1 comprises at least one upper-layer controller, such as the upper-layer controller 11. The upper-layer controller is communicatively connected to at least one network node and forms a management range over the connected network nodes; since there is only one upper-layer controller 11 in fig. 1, the upper-layer controller 11 may be communicatively connected to the network nodes 21, 22, 23, 24, 25, and 26 and form a management range over them. Each network node within the fine-grained local scheduling layer 2 includes a lower-layer control unit deployed at that network node; for example, the network nodes 21, 22, 23, 24, 25, and 26 each include a lower-layer control unit deployed at the corresponding network node. Furthermore, to facilitate communication among the network nodes in the fine-grained local scheduling layer 2, communication connections may be established among the network nodes 21, 22, 23, 24, 25, and 26, which communicate using existing network protocols. For example, the network nodes 21 and 22 are both communicatively connected to the network node 23, the network nodes 23, 24, and 25 are communicatively connected to one another, and the network node 26 is communicatively connected to the network node 25.
In this embodiment, the upper-layer controller 11 is configured to establish communication with each network node within its management range and to issue, to any network node within that range, the comprehensive state evaluation values of its adjacent network nodes according to the preset in-network resource scheduling method, where the comprehensive state evaluation value is used to configure the preference order of the network interfaces connected to the corresponding network node. For example, the upper-layer controller 11 requests the information collection data packets of the network nodes 21, 22, 23, 24, 25, and 26 in the fine-grained local scheduling layer 2 and then calculates the comprehensive state evaluation value of each network node. The lower-layer control unit deployed at each network node is configured to receive the comprehensive state evaluation values of the network nodes adjacent to its own node and to adjust the preference order of the network interfaces in the routing table according to the received values, thereby influencing the selection of the forwarding path and of the task-executing node.
In this embodiment, each network node in the fine-grained local scheduling layer 2 is configured to provide a computing power service or a forwarding service for a computing power demand interest packet requested by an upstream network node according to the preset in-network resource scheduling method. For example, the upstream network node 21 sends the requested computing power demand interest packet to the current network node 23; the current network node 23 judges whether it meets the computing power demand of the packet, and if so, provides the computing power service for the data in the packet according to the deployed function, otherwise it provides the forwarding service for the packet (forwarding the computing power demand interest packet to the downstream network nodes 22, 24, and 25). It should be noted that "upstream network node", "current network node", and "downstream network node" are names defined according to the transmission direction of the computing power demand interest packet, and there is no strict limitation on which network node must be the upstream, current, or downstream node; if the network node 26 sends a requested computing power demand interest packet to the network node 25 and the packet is forwarded to the network node 24, then the network nodes 26, 25, and 24 are the upstream network node, the current network node, and the downstream network node, respectively.
It should also be noted that, in the process of providing the computing power service for the computing power demand interest packet, the current network node forms the operation result into a first computing power service response packet and sends it to the upstream network node along the original path. In the process of providing the forwarding service for the computing power demand interest packet, the current network node looks up the optional network interfaces corresponding to the packet according to its routing information table, determines one of them as the forwarding interface according to the configured preference order of the network interfaces, and forwards the packet through the forwarding interface to a downstream network node in the computing power network. For the implementation of the computing power service and the forwarding service, reference may be made to the method embodiments in Embodiment 2.
It should be noted that the upper-layer controller 11 in the coarse-grained global scheduling layer 1 may be an SDN controller; an SDN controller is an application in a software-defined network (SDN) that is mainly responsible for flow control to ensure intelligent network management and, based on protocols such as OpenFlow, allows the controller to tell switches where to send packets. In the fine-grained local scheduling layer 2, the lower-layer control unit deployed at a network node may be a switch, a router, or another device.
In order to further clearly illustrate the interaction function between the coarse-grained global scheduling layer and the fine-grained local scheduling layer, specific functions of the coarse-grained global scheduling layer and the fine-grained local scheduling layer will be described here.
Referring to fig. 2, an upper-layer controller 11 is disposed in the coarse-grained global scheduling layer and has the function of global network resource scheduling control; a plurality of network nodes, such as the network nodes 21, 22, and 23, are disposed in the fine-grained local scheduling layer, which has the function of local network resource scheduling control. The interaction process is as follows: a) the upper-layer controller sends information collection interest packet requests to each network node within its management range; b) each network node responds to the request and reports the corresponding information collection data packet, the information collection interest packet containing the request for the node's own state and for the communication state with its adjacent networks; c) the upper-layer controller 11 generates the in-network resource distribution state diagram from the reported information collection data packets, calculates the comprehensive state evaluation value of each network node, and issues to each network node within its management range the comprehensive state evaluation values of its adjacent network nodes.
Referring to fig. 2, the upper-layer controller 11 obtains the information collection data packets reported by each of the network nodes 21, 22, and 23 within its management range and forms the in-network collected information. Each information collection data packet comprises the comprehensive state evaluation information of the corresponding network node and the network state information of that node's adjacent hop; the comprehensive state evaluation information of a network node is calculated from the node's computing power resource state and computing task execution state through a preset state evaluation algorithm, where the node computing power resource state comprises the numerical states of the number of CPU cores, the CPU cycle frequency, and the idle buffer storage, and the node computing task execution state comprises the numerical state of the CPU time continuously occupied by a task; the network state information of the node's adjacent hop comprises the identification of each adjacent network node, the communication round-trip delay to the adjacent network node, the data transmission speed, and the link bandwidth. The upper-layer controller 11 then calculates the computing power state of each network node from the comprehensive state evaluation information in the in-network collected information, determines the connection state between the network nodes from the adjacent-hop network state information of each network node, and thereby constructs an in-network resource distribution state diagram from the computing power states of the network nodes and the connection states between them. Next, the upper-layer controller 11 calculates the comprehensive state evaluation value of each network node according to the in-network resource distribution state diagram; for example, the computing power state of a network node and the computing power states of its adjacent network nodes are obtained from the in-network resource distribution state diagram, and these computing power states are then processed by a preset evaluation algorithm to obtain the comprehensive state evaluation value of that network node. Finally, the upper-layer controller 11 issues to each network node the comprehensive state evaluation values of the network nodes adjacent to it, so that the network node can adjust the preference order of the network interfaces connected to those adjacent network nodes.
Referring to fig. 2, in response to an information collection request from the upper-layer controller 11, the network node 21 reports an information collection data packet to the upper-layer controller, where the information collection data packet includes the comprehensive state evaluation information of the network node 21 itself and the network state information of its adjacent hop. Further, the network node 21 receives the comprehensive state evaluation values of its adjacent network nodes and adjusts the preference order of the network interfaces connected to those adjacent network nodes.
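The information collection exchange sketched above can be written down compactly; the packet fields follow the description in the preceding paragraphs, while the dataclass names, the assumed state-evaluation rule, and the graph layout are illustrative assumptions only:

```python
# Sketch of an information collection data packet and of the controller turning the
# reported packets into an in-network resource distribution state diagram G(V, E).
# Field and function names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class InfoCollectionPacket:
    node_id: str
    cpu_cores: int                 # node computing power resource state
    cpu_cycle_freq_ghz: float
    idle_buffer_mb: int
    busy_cpu_time_ms: int          # node computing task execution state
    neighbors: Dict[str, dict] = field(default_factory=dict)
    # neighbor id -> {"rtt_ms": ..., "tx_rate_mbps": ..., "bandwidth_mbps": ...}

def build_state_diagram(packets):
    """Build G(V, E): V holds a per-node computing power state value, E holds a
       per-link quality weight (here simply the reported round-trip delay)."""
    nodes, edges = {}, {}
    for p in packets:
        # Assumed state-evaluation rule: more idle resources -> higher score.
        nodes[p.node_id] = p.cpu_cores * p.cpu_cycle_freq_ghz + p.idle_buffer_mb / 100
        for nbr, link in p.neighbors.items():
            edges[(p.node_id, nbr)] = link["rtt_ms"]
    return nodes, edges


packets = [
    InfoCollectionPacket("21", 8, 2.4, 512, 0,
                         {"23": {"rtt_ms": 2, "tx_rate_mbps": 900, "bandwidth_mbps": 1000}}),
    InfoCollectionPacket("23", 4, 2.0, 128, 40,
                         {"21": {"rtt_ms": 2, "tx_rate_mbps": 900, "bandwidth_mbps": 1000}}),
]
print(build_state_diagram(packets))
```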
In a specific embodiment, the upper-layer controller in the coarse-grained global scheduling layer may construct an in-network resource distribution state diagram represented as G(V, E), where V is the set of network nodes and E is the set of edges between network nodes. The raw resource-state score of any network node v ∈ V at time t is Score_{v,t} (i.e., its computing power state value); the edge between network node v and network node u is the network link between them, and the edge weight W_{v,u,t} represents the link quality between network nodes v and u at time t, where in general a greater weight means worse link quality. For example, in the in-network resource distribution state diagram illustrated in fig. 5, the computing power state of each network node is given by the corresponding numerical value: the computing power states of the network nodes A, B, C, D, E, F, G, H, I, J are 50, 35, 100, 20, 50, 30, 40, 10, 70, 80, respectively; the link quality between network nodes is given by the edge values in fig. 5 (e.g., the values 1, 2, 3, 1 indicate good link quality). According to the constructed state diagram G(V, E), the comprehensive state evaluation value of each network node is calculated according to the idea of adjacent radiation. Let m be a neighbor network node of the network node v, with (v, m) ∈ E, and consider the node set Q (q ∈ Q, q ≠ v) of nodes whose network delay and link quality within the n hops reachable from network node m meet the requirements, such that at least one path exists between network node m and network node q that does not pass through network node v, and (v, q) ∉ E, i.e., network node v and network node q are not directly connected. The comprehensive state evaluation value Score′_{m,t} of the network node m at time t is then expressed as
Score′_{m,t} = [formula given in the original as image BDA0002706089530000081; it combines the computing power states Score_{q,t} of the radiation nodes q ∈ Q and the link qualities between m and q]
where α_q is the scaling weight of the computing power state and β_q is the scaling weight of the link quality. The link quality between network node m and network node q is
W_{m,q,t} = W_{m,u_1,t} + W_{u_1,u_2,t} + … + W_{u_n,q,t}
where (m, u_1, u_2, …, u_n, q) is an optimal effective network path between network node m and network node q that does not pass through node v and meets the network delay and link quality requirements.
Further, the comprehensive state evaluation value of the network node m at time t+1 is expressed as
Score_{m,t+1} = min(Score′_{m,t}, max_{q∈Q} Score_{q,t}).
It can be understood that, after the upper-layer controller obtains the comprehensive state evaluation value of the network node m at time t, it may issue the comprehensive state evaluation value of the network node m, which is adjacent to the network node v, to the network node v.
In another embodiment, the upper-layer controller may instead obtain the comprehensive state evaluation value of each network node by processing the computing power states of the node and its adjacent network nodes through weighted superposition. For example, in fig. 5 the computing power state of the network node F itself is 30, and the computing power states of its adjacent network nodes B, D, E, G, H are 35, 20, 50, 40, and 10, respectively; the comprehensive state evaluation value of the network node F is then calculated by weighted superposition. In a first method, the maximum of the neighbor nodes' score values may be taken as the new score (i.e., the comprehensive state evaluation value), so that the maximum neighbor score value of the network node F is max(F, max(B, D, E, G, H)) = 50. In a second method, the values are directly averaged, (B + D + E + F + G + H)/6 = 31, to obtain the comprehensive state evaluation value of the network node F. In a third method, weights are assigned and superimposed without exceeding the maximum value of the neighbors: setting the one-hop weight to 0.5 gives (0.5B + 0.5D + 0.5E + 0.5G + 0.5H + F) = 107.5, and since this value exceeds the maximum neighbor value of 70, 70 is taken as the comprehensive state evaluation value of the network node F.
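The three weighted-superposition variants in this paragraph can be checked with a few lines of Python; the numbers are those read off fig. 5, and the cap value of 70 is taken from the text as written:

```python
# Reproducing the three weighted-superposition variants for network node F (fig. 5).
f_score = 30
neighbor_scores = {"B": 35, "D": 20, "E": 50, "G": 40, "H": 10}

# Method 1: take the maximum of the node's own score and its neighbors' scores.
method1 = max(f_score, max(neighbor_scores.values()))               # -> 50

# Method 2: directly average the node's own score with its neighbors' scores.
method2 = round((f_score + sum(neighbor_scores.values())) / 6)      # -> 31

# Method 3: weight one-hop neighbors by 0.5, add the node's own score, and cap
# the result at the maximum value stated in the text (70).
weighted = f_score + 0.5 * sum(neighbor_scores.values())            # -> 107.5
method3 = min(weighted, 70)                                         # -> 70

print(method1, method2, weighted, method3)
```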
In another embodiment, referring to fig. 3, the coarse-grained global scheduling layer 1 includes a plurality of upper-layer controllers that are communicatively connected with one another, such as the upper-layer controllers 11, 12, and 13, and the fine-grained local scheduling layer 2 includes a plurality of network nodes, such as the network nodes 21, 22, 23, 24, 25, and 26. Each upper-layer controller is communicatively connected to at least one network node within the fine-grained local scheduling layer 2 and forms a management range over the connected network nodes.
Referring to fig. 3, the upper-layer controller 11 is communicatively connected to the network nodes 21 and 22 and forms a management range over them, the upper-layer controller 12 is communicatively connected to the network nodes 23 and 24 and forms a management range over them, and the upper-layer controller 13 is communicatively connected to the network nodes 25 and 26 and forms a management range over them. Each network node within the fine-grained local scheduling layer 2 includes a lower-layer control unit deployed at that network node; for example, the network nodes 21, 22, 23, 24, 25, and 26 each include a lower-layer control unit deployed at the corresponding network node. To facilitate communication among the upper-layer controllers in the coarse-grained global scheduling layer 1, communication connections may be established among the upper-layer controllers 11, 12, and 13, which communicate using existing network protocols, so as to realize control coordination and information synchronization among the controllers. Similarly, to facilitate communication among the network nodes in the fine-grained local scheduling layer 2, communication connections may be established among the network nodes 21, 22, 23, 24, 25, and 26, which communicate using existing network protocols. For example, the network nodes 21 and 22 are both communicatively connected to the network node 23, the network nodes 23, 24, and 25 are communicatively connected to one another, and the network node 26 is communicatively connected to the network node 25.
For the in-network resource scheduling system illustrated in fig. 3, taking the upper-layer controllers other than any given upper-layer controller as the other controllers, each other controller is configured to obtain the information collection data packets reported by the network nodes within its own management range, form synchronization information from the obtained information collection data packets, and send the synchronization information to the given upper-layer controller. Referring to fig. 3, the upper-layer controller 11 is configured to establish communication with the network nodes 21 and 22 within its management range and to request the information collection data packets reported by the network nodes 21 and 22 according to the preset in-network resource scheduling method; the upper-layer controller 11 is further configured to establish communication with the other controllers and to request the synchronization information they send (the synchronization information comprises the information collection data packets reported by the network nodes within the other controllers' management ranges); the upper-layer controller 11 then calculates the comprehensive state evaluation value of the network nodes (e.g., network node 23) adjacent to any network node within its management range (e.g., network node 21) and issues the comprehensive state evaluation value of that adjacent network node (e.g., network node 23) to the network node (e.g., network node 21). In the same way, the upper-layer controller 12 establishes communication with the network nodes 23 and 24 within its management range and issues the comprehensive state evaluation values of their adjacent network nodes to them, and the upper-layer controller 13 establishes communication with the network nodes 25 and 26 within its management range and issues the comprehensive state evaluation values of their adjacent network nodes to them.
For example, as shown in fig. 3 and fig. 4, the upper-layer controller 11 sends an information collection interest packet request to the network node 21, and the network node 21 reports the corresponding information collection data packet after receiving the interest packet. The upper-layer controller 11 may also obtain synchronization information from the other upper-layer controllers (e.g., the upper-layer controllers 12 and 13): the synchronization information obtained from the upper-layer controller 12 includes the information collection data packets reported by the network nodes 23 and 24 within its management range, and the synchronization information obtained from the upper-layer controller 13 includes the information collection data packets reported by the network nodes 25 and 26 within its management range. The upper-layer controller 11 then forms the in-network collected information from the information collection data packets it acquired itself and those contained in the synchronization information, constructs the in-network resource distribution state diagram, calculates the comprehensive state evaluation value of each network node from that diagram, and issues to each network node (e.g., network node 23) the comprehensive state evaluation values of its adjacent network nodes (e.g., 21, 22, 24, 25), so that the network node can adjust the preference order of the network interfaces connected to those adjacent network nodes.
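A compact sketch of the synchronization step described above (controllers exchanging the packets collected in their own management ranges so that each one can build the whole-network view); all structures and values are assumed for illustration:

```python
# Sketch of multi-controller synchronization: each upper-layer controller collects
# information collection data packets from its own managed nodes, then merges the
# synchronization information received from the other controllers to obtain the
# whole-network collected information. The layout is an illustrative assumption.

def collect_local(managed_nodes, request):
    """Ask each managed node for its information collection data packet."""
    return {node_id: request(node_id) for node_id in managed_nodes}

def merge_with_sync(local_packets, sync_infos):
    """Combine locally collected packets with the other controllers' sync info."""
    merged = dict(local_packets)
    for sync in sync_infos:
        merged.update(sync)
    return merged


# Controller 11 manages nodes 21, 22; controllers 12 and 13 send their collections.
fake_request = lambda node_id: {"score": (10 * int(node_id)) % 97, "neighbors": {}}
local = collect_local(["21", "22"], fake_request)
sync_from_12 = {n: fake_request(n) for n in ["23", "24"]}
sync_from_13 = {n: fake_request(n) for n in ["25", "26"]}
whole_network = merge_with_sync(local, [sync_from_12, sync_from_13])
print(sorted(whole_network))    # ['21', '22', '23', '24', '25', '26']
```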
As can be understood by those skilled in the art, in this embodiment, since the coarse-grained global scheduling layer periodically issues the comprehensive state evaluation value of each network node, the network interface preference order is configured for each network node in the fine-grained local scheduling layer, so the system can optimize the scheduling decisions of the fine-grained local scheduling layer from a global view and reconcile the timeliness of in-network resource scheduling with global optimization. In addition, coarse granularity and fine granularity are concepts of time granularity and reflect the efficiency of time management: the fine-grained local scheduling layer sits at the lower layer and directly controls the information scheduling process between network nodes, so its time granularity is fine and its scheduling response is fast, giving it a relatively fast control strategy in time (i.e., a fast scheduling decision mechanism); the coarse-grained global scheduling layer sits at the upper layer and must periodically perform global optimization and parameter configuration for the network nodes distributed in the lower layer, so its time granularity is coarse and its response is slower, giving it a relatively slow control strategy in time (i.e., a slow scheduling decision mechanism) compared with the fine-grained local scheduling layer. Through the interaction and cooperation between these two scheduling layers, a control mode combining global slow control with local fast control can be realized.
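The "global slow control plus local fast control" split can be illustrated with a toy loop in which the controller refreshes preference values only periodically while each node decides per packet; the period, scores, and refresh rule below are arbitrary assumptions, not part of the disclosed scheme:

```python
# Toy illustration of the two time granularities: the coarse-grained global
# scheduling layer refreshes the nodes' interface preference values only every
# UPDATE_PERIOD packets (slow control), while the node makes a forwarding
# decision for every packet (fast control). All numbers are arbitrary.

UPDATE_PERIOD = 5                       # assumed coarse-grained refresh period (in packets)
preference = {"ifA": 40, "ifB": 30}     # preference values issued by the upper-layer controller

def controller_refresh(step):
    """Pretend global view: every period, swap which neighbor interface looks better."""
    better, worse = ("ifB", "ifA") if (step // UPDATE_PERIOD) % 2 else ("ifA", "ifB")
    preference[better], preference[worse] = 40, 30

def node_forward(step):
    """Fast local decision: always use the currently preferred interface."""
    return max(preference, key=preference.get)

for step in range(12):
    if step % UPDATE_PERIOD == 0:       # slow, periodic global update
        controller_refresh(step)
    print(step, node_forward(step))     # fast, per-packet local decision
```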
Embodiment 2
On the basis of the in-network resource scheduling system disclosed in Embodiment 1, this embodiment discloses an in-network resource scheduling method for hierarchical collaborative decision-making, which is mainly applied at any network node in the fine-grained local scheduling layer to realize the corresponding functions.
Referring to fig. 6, the intra-network resource scheduling method in the present embodiment includes steps S110 to S140, which are described below.
Step S110, obtaining a computing power demand interest packet requested by an upstream network node in a computing power network; the computing power demand interest packet includes data and the required operation configuration parameters.
For example, as shown in fig. 3, in the computing power network formed by the in-network resource scheduling system, the current network node 23 may obtain a computing power demand interest packet requested by the upstream network node 21, where the computing power demand interest packet includes data and the required operation configuration parameters, and the operation configuration parameters may include the name of the requested function, the link quality threshold of the network, the idle computing power resource and storage resource requirement parameters, the input and output requirement parameters of the data, and the like.
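The fields listed for the operation configuration parameters can be grouped into a small structure; the field names below are assumptions for illustration, not a packet format defined by the patent:

```python
# Sketch of a computing power demand interest packet with the operation
# configuration parameters mentioned above. Field names are assumptions.

from dataclasses import dataclass

@dataclass
class OperationConfig:
    function_name: str              # name of the requested function
    link_quality_threshold: float   # link quality threshold of the network
    required_cpu_cores: int         # idle computing power resource requirement
    required_storage_mb: int        # idle storage resource requirement
    input_spec: str                 # input requirement parameters of the data
    output_spec: str                # output requirement parameters of the data

@dataclass
class ComputingPowerDemandInterest:
    name: str
    data: bytes
    config: OperationConfig


interest = ComputingPowerDemandInterest(
    name="/app/classify/42",
    data=b"raw image bytes",
    config=OperationConfig("classify", 2.0, 2, 64, "jpeg", "label"),
)
print(interest.config.function_name)
```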
Step S120, determining whether the current network node meets the computation power requirement of the computation power requirement interest package, if yes, entering step S130, and if not, entering step S140.
In a specific embodiment, when the current network node determines whether it meets the computing power demand of the computing power demand interest packet, it may judge whether the function deployed on the current network node and its remaining idle computing power resources and idle storage resources match the operation configuration parameters in the computing power demand interest packet; if the matched function is not currently executing a computing task, or can currently finish executing its computing task, the determination result is yes and step S130 may be performed.
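The adaptation check in step S120 could be sketched as follows; the concrete thresholds and attribute names are assumptions for illustration:

```python
# Sketch of the step S120 check: the deployed function, the idle computing power
# and the idle storage must match the operation configuration parameters, and the
# matched function must be idle or about to finish. Attribute names are assumed.

def meets_computing_power_demand(node, config):
    fn = node["deployed_functions"].get(config["function_name"])
    if fn is None:
        return False                                        # function not deployed
    resources_ok = (node["idle_cpu_cores"] >= config["required_cpu_cores"]
                    and node["idle_storage_mb"] >= config["required_storage_mb"])
    # The matched function is not executing a task, or its current task is about to end.
    function_free = fn["busy"] is False or fn["remaining_ms"] <= node["finish_soon_ms"]
    return resources_ok and function_free


node = {"deployed_functions": {"classify": {"busy": True, "remaining_ms": 3}},
        "idle_cpu_cores": 4, "idle_storage_mb": 256, "finish_soon_ms": 5}
config = {"function_name": "classify", "required_cpu_cores": 2, "required_storage_mb": 64}
print(meets_computing_power_demand(node, config))   # True: resources fit, task ends soon
```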
And step S130, providing the computing power service for the data in the computing power demand interest package according to the deployed function. In the computing power service, the current network node forms a first computing power service response packet according to the operation result, and sends the first computing power service response packet to the upstream network node through the original path.
For example, in fig. 1, the current network node 23 provides the computing power service for the computing power demand interest package requested by the upstream network node 21, and the formed first computing power service response package is fed back to the upstream network node 21 through the original path.
And step S140, providing the forwarding service for the computing power demand interest packet. In the forwarding service, the current network node looks up the optional network interfaces corresponding to the computing power demand interest packet according to its own routing information table, determines one of the network interfaces as the forwarding interface according to the configured preference order of the network interfaces, and forwards the computing power demand interest packet through the forwarding interface to a downstream network node in the computing power network.
In one implementation, referring to fig. 7, the step S140 specifically includes steps S141-S144, which are respectively described as follows.
Step S141, the current network node checks its own routing information table and finds the routing entry corresponding to the function matching the computing power demand interest packet; one or more network interfaces are then selected from the routing entry as the optional network interfaces corresponding to the computing power demand interest packet.
For example, referring to fig. 1, for the current network node 23, if a routing entry in its routing information table matches the function requested by the computing power demand interest packet and that routing entry includes the network interfaces connected to the network nodes 21 and 24, then the network interfaces connected to the network nodes 21 and 24 may be used as the optional network interfaces corresponding to the computing power demand interest packet.
And step S142, configuring preference values of all the network interfaces in the selectable network interfaces, and selecting the network interface corresponding to the maximum preference value as a forwarding interface according to the preference sequence of the network interfaces. The preference value is obtained by evaluating the comprehensive state evaluation information of the network node connected with the corresponding network interface and the network state information of the adjacent hop.
In a specific embodiment, referring to fig. 3, when any network node acts as the current network node, the current network node reports an information collection data packet to the upper-layer controller 11 in response to the information collection request of the upper-layer controller 11, where the information collection data packet includes the comprehensive state evaluation information of that network node and the network state information of its adjacent hop. The upper-layer controller 11 can therefore construct an in-network resource distribution state diagram from the information collection data packets reported by the network nodes in the computing power network and calculate the comprehensive state evaluation value of each network node according to that diagram; the in-network resource distribution state diagram includes the computing power state of each network node, the connection state between the network nodes, and the link quality state between the network nodes. Further, the upper-layer controller 11 issues the comprehensive state evaluation values of the network nodes adjacent to the current network node. Finally, the current network node acquires from the upper-layer controller 11 the comprehensive state evaluation values of its adjacent network nodes and configures the comprehensive state evaluation value of each such network node as the preference value of the correspondingly connected network interface.
It should be noted that, for the technical description of the upper controller 11 calculating the comprehensive state evaluation value of each network node, reference may be made to relevant contents in the first embodiment, and details are not described here again.
For example, in the in-network resource distribution state diagram of fig. 5, the neighbor network nodes of the network node F are B, D, E, G, and H, and the network interfaces connected to these network nodes are set in the routing information table of the network node F as the optional network interfaces. When the network node F receives a computing power demand interest packet and cannot meet the computing power demand, it directly performs resource demand scheduling according to the forwarding strategy. The network node H is connected to the network nodes I and J, the network node D is connected to the network node C, and the operation configuration parameters require that the sum of link qualities be smaller than a threshold (for example, a value of 2); however, because the computing power state scores of the network nodes H and D are lower than those of the other neighboring nodes of the network node F, the network nodes H and D would probably not be selected if the network interface preference were set according to the method of calculating the comprehensive state evaluation value by weighted superposition. The coarse-grained global scheduling layer, however, periodically collects whole-network information, updates the in-network resource state distribution diagram, and updates the comprehensive state evaluation values of the network nodes adjacent to the network node F according to the resource state score adjustment strategy: the network nodes B, E, and G keep their score values unchanged because they have no other qualifying radiation nodes, while the qualifying radiation node of the network node D is the network node C. Assuming α_q is set to 0.8 and β_q to 0.5, the comprehensive state evaluation value of the network node D can be calculated by the following formulas
[The first formula, which defines the adjusted score Score′_{m,t} from α_q, β_q and the radiating-node score, appears only as an image in the original publication and is not reproduced here.]
Score_{m,t+1} = min(Score′_{m,t}, max_{q∈Q} Score_{q,t})
Thus, the latest comprehensive state evaluation value of network node D can be calculated to be 70; similarly, the latest comprehensive state evaluation value of network node H is 66. In this case, once the comprehensive state evaluation values of network nodes B, D, E, G and H are all issued to network node F, then, under the adjusted network interface preference order, network node D and network node H are more likely to be selected than before the adjustment, and the network interface connected to network node D may even be chosen as the forwarding interface. It can be understood that the technical means disclosed above realizes load balancing of resource scheduling to a certain extent and improves the network resource utilization of network node C and network nodes I and J.
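For clarity, a small Python sketch of the clamping step in the formula above follows; it assumes the adjusted score Score′_{m,t} has already been computed from α_q and β_q by the preceding (image) formula, and the numeric values are hypothetical rather than taken from the worked example.

def updated_score(adjusted_score, radiating_node_scores):
    # Score_{m,t+1} = min(Score'_{m,t}, max over q in Q of Score_{q,t})
    return min(adjusted_score, max(radiating_node_scores))

# Suppose node D's adjusted score is 74 and its only qualifying radiating node scores 70:
print(updated_score(74, [70]))   # 70 -- the best radiating node's score caps the adjustment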
Step S143, forwarding the computing power demand interest packet to a downstream network node in the computing power network through the forwarding interface.
For example, as shown in fig. 5, for the current network node F, if the network interface connected to network node D is used as the forwarding interface, the current network node F may forward the computing power demand interest packet to the downstream network node D.
Further, referring to fig. 7, the network resource scheduling method disclosed in this embodiment further includes step S144.
Step S144, after the forwarding interface is determined, configuring and forming a forwarding information table of the current network node, creating a forwarding entry corresponding to the computing power demand interest packet in the forwarding information table, and recording the forwarding interface through the corresponding forwarding entry.
It can be understood that the purpose of setting the forwarding information table is to record the forwarding path of the computing power demand interest packet, so that the same computing power demand interest packet can be forwarded along the recorded route, and the computing power service response packet formed by a downstream network node can also be returned along that route.
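A possible shape of such a forwarding entry is sketched below in Python; the field names and the request key are assumptions chosen for illustration, not structures prescribed by the patent.

from dataclasses import dataclass, field

@dataclass
class ForwardingEntry:
    request_key: tuple                 # e.g. (function name, requester, data identifier)
    forwarding_interface: str          # interface the interest packet was forwarded through
    receiving_interfaces: list = field(default_factory=list)   # interfaces it was received on

forwarding_information_table = {}

def record_forwarding(request_key, forwarding_interface):
    # Create the forwarding entry once the forwarding interface has been determined.
    forwarding_information_table[request_key] = ForwardingEntry(request_key, forwarding_interface)

record_forwarding(("face_detect", "user1", "img001"), "if2")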
As can be understood by those skilled in the art, in this embodiment, since it has been determined that the current network node cannot meet the computing power demand of the computing power demand interest packet, a forwarding service is provided for the packet instead. The forwarding policy therefore gives the system an independent scheduling decision capability: even without the coordination of the upper-layer controller, a fast and flexible scheduling decision can be made according to the one-hop neighbor nodes and the network state. In the forwarding service, the optional network interfaces corresponding to the computing power demand interest packet are looked up in the routing information table of the current network node, one of them is determined as the forwarding interface according to the configured network interface preference order, and the packet is forwarded through it. In this way, the fine-grained local scheduling layer of the system can comprehensively consider the computing power condition and network communication state of each adjacent network node, select the most suitable network node as the forwarding destination of the computing power demand interest packet, and thereby improve the performance of the computing power network.
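Under the same assumptions as the earlier sketches (names and values are illustrative only), the selection of the forwarding interface can be pictured as follows: look up the optional interfaces registered for the requested function in the routing information table, then take the one with the largest configured preference value.

def choose_forwarding_interface(function_name, routing_table, preferences):
    optional_interfaces = routing_table.get(function_name, [])
    if not optional_interfaces:
        return None   # no routing entry for this function
    return max(optional_interfaces, key=lambda iface: preferences.get(iface, 0))

routing_table = {"face_detect": ["if1", "if2", "if5"]}   # hypothetical routing entries
preferences = {"if1": 85, "if2": 70, "if5": 66}
print(choose_forwarding_interface("face_detect", routing_table, preferences))   # if1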
Example III,
On the basis of the in-network resource scheduling method disclosed in the second embodiment, the present application further discloses an improved in-network resource scheduling method, which is likewise applied to any network node in the fine-grained local scheduling layer to realize the corresponding functions.
Referring to fig. 8, the improved intra-network resource scheduling method includes steps S210-S290, which are described below.
Step S210, receiving a computing power demand interest packet requested by an upstream network node in the computing power network, wherein the computing power demand interest packet comprises data and required computing configuration parameters.
Step S220, determining whether the current network node meets the computing power demand of the computing power demand interest packet; if yes, entering step S230, and if not, entering step S240.
For example, when the current network node determines whether it meets the computing power demand of the computing power demand interest packet, it may check whether the function deployed on the current network node, the remaining idle computing power resources and the idle storage resources are adapted to the computing configuration parameters in the computing power demand interest packet; when the adapted function is not currently executing a computing task, or can finish executing its current computing task, the determination result is yes.
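An illustrative version of this check is sketched below; the field names and thresholds are assumptions, since the patent does not prescribe concrete data structures for node capabilities.

def meets_computing_power_demand(node, config_params):
    function_name = config_params["function"]
    if function_name not in node["deployed_functions"]:
        return False                                   # requested function is not deployed here
    if node["idle_compute"] < config_params["required_compute"]:
        return False                                   # not enough idle computing power
    if node["idle_storage"] < config_params["required_storage"]:
        return False                                   # not enough idle storage
    # The adapted function must not be busy (or must be able to finish its current task).
    return not node["function_busy"].get(function_name, False)

node = {"deployed_functions": {"face_detect"}, "idle_compute": 8, "idle_storage": 512, "function_busy": {}}
params = {"function": "face_detect", "required_compute": 4, "required_storage": 128}
print(meets_computing_power_demand(node, params))   # True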
Step S230, providing the computing power service for the data in the computing power demand interest packet according to the deployed function. In the computing power service, the current network node may form a first computing power service response packet from the operation result and send it to the upstream network node through the original path.
Step S240, checking whether a forwarding entry adapted to the computing power demand interest packet exists in the forwarding information table of the current network node; if not, going to step S250, and if so, going to step S270.
It should be noted that the existence of a forwarding entry adapted to the computing power demand interest packet in the forwarding information table means that the forwarding information recorded in the forwarding information table is consistent with the computing configuration parameters in the computing power demand interest packet, such as the requested function name, the network link quality threshold, the idle computing power and storage resource requirement parameters, and the data input and output requirement parameters.
It should be noted that, unlike the forwarding policy in a named data network, where interest packets are aggregated onto the same forwarding entry as long as they request the same named data, in this embodiment, even if computing power demand interest packets request a function with the same name, they cannot be aggregated by merely recording the receiving interface if they come from different users or request the processing of different data. This is because each distinct computing power request consumes its own computing power, storage and other resources, and if the data to be processed differ, the returned results differ as well.
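The difference can be summarised by the entry key used for aggregation; the key fields below are assumptions chosen to mirror the description above.

def ndn_entry_key(interest):
    return interest["name"]                                                     # named data network: name only

def computing_power_entry_key(interest):
    return (interest["function"], interest["requester"], interest["data_id"])   # this embodiment

a = {"name": "face_detect", "function": "face_detect", "requester": "user1", "data_id": "img001"}
b = {"name": "face_detect", "function": "face_detect", "requester": "user2", "data_id": "img002"}
print(ndn_entry_key(a) == ndn_entry_key(b))                            # True  -> would aggregate in NDN
print(computing_power_entry_key(a) == computing_power_entry_key(b))    # False -> kept as separate entries here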
Step S250, in the case that the current network node cannot provide the computing power service, checking the routing information table of the current network node to find a routing entry corresponding to the function adapted to the computing power demand interest packet. Specifically, refer to step S141 in the second embodiment, which is not described here again.
Step S260, configuring a preference value for each of the selectable network interfaces, selecting the network interface with the largest preference value as the forwarding interface according to the network interface preference order, and forwarding the computing power demand interest packet to a downstream network node in the computing power network through the forwarding interface. The preference value is obtained by evaluating the comprehensive state evaluation information of the network node connected to the corresponding network interface and the network state information of its adjacent one hop. Specifically, refer to step S142 and step S143 in the second embodiment, which are not described here again.
It should be noted that, since the current network node forwards the computing power demand interest packet to a downstream network node in the computing power network through the forwarding interface, it has not only created an adapted forwarding entry in its forwarding information table but also found a forwarding path for the packet; therefore, in order to feed the computing power service response packet back to the upstream network node in time, step S260 may proceed directly to step S270.
Step S270, when a forwarding entry adapted to the computing power demand interest packet exists in the forwarding information table of the current network node, adding a receiving interface to the adapted forwarding entry. The receiving interface is the network interface through which the current network node receives the computing power demand interest packet requested by the upstream network node.
For example, as shown in fig. 1, for the current network node 23, since it obtains the computing power demand interest packet requested by the upstream network node 21, the current network node 23 needs to add, in the adapted forwarding entry, a receiving interface, namely the network interface connected to the upstream network node 21.
Step S280, a second computing power service response packet is obtained from the downstream network node, where the second computing power service response packet includes an operation result obtained by the downstream network node providing computing power service for the computing power demand interest packet.
Referring to fig. 1, if the network node 24 is a downstream network node of the current network node 23, then after acquiring the forwarded computing power demand interest packet, the downstream network node 24 continues to determine whether it satisfies the computing power demand using the method of step S220 in this embodiment; if so, the downstream network node 24 forms the operation result into a second computing power service response packet, and if not, the downstream network node 24 continues to forward the computing power demand interest packet until some other network node can satisfy the computing power demand.
It can be understood that after the downstream network node forms the second computing power service response packet, or receives the second computing power service response packet fed back by other network nodes, the current network node may obtain the second computing power service response packet from the downstream network node.
Step S290, sending the second computing power service response packet to the upstream network node through the receiving interface in the forwarding entry adapted to the computing power demand interest packet, and clearing the adapted forwarding entry in the forwarding information table after the second computing power service response packet has been sent.
It can be understood that, since the forwarding information table of the current network node is configured with an adapted forwarding entry (in which both the receiving interface and the forwarding interface are recorded), the current network node can receive the second computing power service response packet from the downstream network node through the forwarding interface, and can also feed the second computing power service response packet back to the upstream network node through the receiving interface. It can also be understood that, after the second computing power service response packet is sent, if the computing task requested by the upstream network node is completed, the forwarding entry in the forwarding information table adapted to the computing power demand interest packet is cleared directly.
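The return path and the clean-up can be pictured with the following self-contained sketch (all structures and values are assumptions for illustration):

forwarding_information_table = {
    ("face_detect", "user1", "img001"): {
        "forwarding_interface": "if2",     # toward the downstream node that served the request
        "receiving_interfaces": ["if0"],   # toward the upstream node(s) that requested it
    }
}

def return_response(request_key, response_packet, send):
    entry = forwarding_information_table.get(request_key)
    if entry is None:
        return                                      # no adapted entry: nothing to return
    for interface in entry["receiving_interfaces"]:
        send(interface, response_packet)            # feed the response back upstream
    del forwarding_information_table[request_key]   # clear the entry once the task is done

return_response(("face_detect", "user1", "img001"),
                {"result": "2 faces"},
                lambda iface, pkt: print(f"send {pkt} via {iface}"))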
Those skilled in the art can understand that, in this embodiment, because the scheduling decision mechanism of the fine-grained local scheduling layer is optimized using global information, the utilization of global computing power, storage and other resources of the computing power network is improved and load balancing of in-network resources is achieved. In addition, the technical scheme of this embodiment provides a non-end-to-end, hierarchically cooperative in-network resource scheduling capability by combining the coarse-grained global scheduling layer with the fine-grained local scheduling layer, and the efficient and unified in-network resource scheduling function thus realized also improves the overall performance of the computing power network.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware, or may be implemented by a computer program. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a portable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (8)

1. An in-network resource scheduling method for hierarchical cooperative decision is characterized by comprising the following steps:
acquiring a computing power demand interest packet requested by an upstream network node in a computing power network; the computing power demand interest packet comprises data and required computing configuration parameters;
judging whether the current network node meets the computing power requirement of the computing power requirement interest package, if so, providing computing power service for data in the computing power requirement interest package according to a deployed function, and if not, providing forwarding service for the computing power requirement interest package;
in the computing power service, a first computing power service response packet is formed by a computing result, and the first computing power service response packet is sent to an upstream network node through an original path;
in the forwarding service, searching for an optional network interface corresponding to the computing power demand interest packet according to a routing information table of a current network node, determining one network interface as a forwarding interface according to a configured network interface preference sequence, and forwarding the computing power demand interest packet to a downstream network node in the computing power network through the forwarding interface;
the judging whether the current network node meets the computing power requirement of the computing power requirement interest packet comprises the following steps:
judging that the function deployed on the current network node, the spare computing power resource and the spare storage resource are matched with the computing configuration parameters in the computing power demand interest package, and determining that the judgment result is yes under the condition that the matched function does not execute the computing task currently or can finish executing the current computing task;
the searching for the optional network interface corresponding to the calculation demand interest packet according to the routing information table of the current network node includes:
checking a routing information table of the current network node, and finding out a routing entry corresponding to a function matched with the computing power demand interest packet;
and selecting one or more network interfaces as selectable network interfaces corresponding to the computing power demand interest package according to the routing entries.
2. The method for scheduling resources within a network according to claim 1, wherein said determining one of the network interfaces as a forwarding interface according to the configured preference order of the network interfaces comprises:
configuring preference values of all the network interfaces in the selectable network interfaces; the preference value is obtained by evaluating the comprehensive state evaluation information of the network node connected with the corresponding network interface and the network state information of the adjacent hop;
and selecting the network interface corresponding to the maximum preference value as a forwarding interface according to the preference sequence of the network interfaces.
3. The method for scheduling resources in network according to claim 2, wherein after determining the forwarding interface, a forwarding information table of a current network node is configured and formed, a forwarding entry corresponding to the computation power demand interest packet is created in the forwarding information table, and the forwarding interface is recorded by the corresponding forwarding entry.
4. The in-network resource scheduling method of claim 3, before checking the routing information table of the current network node, further comprising:
checking whether a forwarding entry matched with the computing power demand interest packet exists in a forwarding information table of the current network node;
if not, checking a routing information table of the current network node to find a routing entry corresponding to the function matched with the calculation power demand interest packet;
if yes, adding a receiving interface to the adapted forwarding entry, wherein the receiving interface is the network interface through which the current network node receives the computing power demand interest packet requested by the upstream network node.
5. The in-network resource scheduling method of claim 4, further comprising, after forwarding the computational demand interest packet to a downstream network node within the computational network via the forwarding interface:
acquiring a second computing power service response packet from a downstream network node, wherein the second computing power service response packet comprises an operation result obtained by the downstream network node providing computing power service for the computing power demand interest packet;
sending the second computing power service response packet to an upstream network node through a receiving interface in a forwarding entry matched with the computing power demand interest packet;
clearing the adapted forwarding entry in the forwarding information table after sending the second computing power service response packet.
6. The method for scheduling resources in network according to claim 2, wherein said configuring the preference value of each network interface among the selectable network interfaces comprises:
responding to an information collection request of an upper layer controller, and reporting an information collection data packet to the upper layer controller; the information collection data packet comprises comprehensive state evaluation information of any network node and network state information of one adjacent hop; the upper layer controller is used for constructing an in-network resource distribution state diagram according to information collection data packets reported by each network node in the computational power network and calculating a comprehensive state evaluation value of each network node according to the in-network resource distribution state diagram; the in-network resource distribution state diagram comprises a calculation force state of each network node, a connection state between each network node in the network and a link quality state between each network node in the network;
and acquiring the comprehensive state evaluation value of the network node adjacent to the current network node from the upper-layer controller, and configuring the comprehensive state evaluation value of each network node as the preference value of the correspondingly connected network interface.
7. An intra-network resource scheduling system of a computational power network, comprising a fine-grained local scheduling layer and a coarse-grained global scheduling layer, wherein the fine-grained local scheduling layer comprises a plurality of interconnected network nodes, and the coarse-grained global scheduling layer comprises at least one upper-layer controller; the upper layer controller is in communication connection with at least one network node and forms a management range for the connected network node;
for each network node, the network node is configured to provide a computation service or a forwarding service for a computation demand interest packet requested by an upstream network node according to the in-network resource scheduling method of any one of claims 1 to 6;
the upper controller establishes communication with each network node in the managed range and is used for calculating the comprehensive state evaluation value of each network node according to the information collection data packet reported by each network node; the comprehensive state evaluation value is used for configuring the preference sequence of the network interfaces connected with the corresponding network nodes.
8. A computer-readable storage medium comprising a program executable by a processor to implement the in-network resource scheduling method of any one of claims 1-6.
CN202011039260.1A 2020-09-28 2020-09-28 In-network resource scheduling method and system for hierarchical collaborative decision and storage medium Active CN112346854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011039260.1A CN112346854B (en) 2020-09-28 2020-09-28 In-network resource scheduling method and system for hierarchical collaborative decision and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011039260.1A CN112346854B (en) 2020-09-28 2020-09-28 In-network resource scheduling method and system for hierarchical collaborative decision and storage medium

Publications (2)

Publication Number Publication Date
CN112346854A CN112346854A (en) 2021-02-09
CN112346854B true CN112346854B (en) 2023-02-03

Family

ID=74361174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011039260.1A Active CN112346854B (en) 2020-09-28 2020-09-28 In-network resource scheduling method and system for hierarchical collaborative decision and storage medium

Country Status (1)

Country Link
CN (1) CN112346854B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225722A (en) * 2021-04-20 2022-10-21 中兴通讯股份有限公司 Computing resource notification method and device, storage medium and electronic device
CN115604106A (en) * 2021-06-28 2023-01-13 华为技术有限公司(Cn) Calculation power distribution method, calculation power updating method and device
CN114070854B (en) * 2021-11-26 2023-06-27 中国联合网络通信集团有限公司 Method, system, equipment and medium for sensing and routing calculation force in calculation force network
CN114615180A (en) * 2022-03-09 2022-06-10 阿里巴巴达摩院(杭州)科技有限公司 Calculation force network system, calculation force calling method and device
CN114978908B (en) * 2022-05-11 2023-09-26 量子科技长三角产业创新中心 Evaluation and operation method and device for computing power network node
CN114866593B (en) * 2022-05-13 2023-03-28 上海交通大学 Industrial network control architecture with cooperative end edges and resource joint allocation method
CN116382926B (en) * 2023-06-05 2023-08-22 支付宝(杭州)信息技术有限公司 Method and device for computing power optimization of recommendation system
CN116437409B (en) * 2023-06-13 2023-08-22 微网优联科技(成都)有限公司 Channel switching method and device for wireless router
CN116566992B (en) * 2023-07-10 2023-11-28 北京智芯微电子科技有限公司 Dynamic collaboration method, device, computer equipment and storage medium for edge calculation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10432552B2 (en) * 2015-10-12 2019-10-01 Fujitsu Limited Just-enough-time provisioning of service function chain resources
CN105530312B (en) * 2015-12-26 2018-08-14 中国人民解放军信息工程大学 Network service path selects system and its selection method
CN110851529A (en) * 2019-11-01 2020-02-28 腾讯科技(深圳)有限公司 Computing power scheduling method and related equipment
CN111314236A (en) * 2020-04-14 2020-06-19 杭州迪普科技股份有限公司 Message forwarding method and device
CN111475276A (en) * 2020-05-29 2020-07-31 深圳市元征科技股份有限公司 Task management method and device based on edge calculation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Energy-Aware Scheduling of Real-Time Tasks in Wireless Networked Embedded Systems; G. Sudha Anil Kumar et al.; 28th IEEE International Real-Time Systems Symposium (RTSS 2007); 2007-12-26; pp. 15-24 *

Also Published As

Publication number Publication date
CN112346854A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112346854B (en) In-network resource scheduling method and system for hierarchical collaborative decision and storage medium
CN113079218B (en) Service-oriented power network system, working method and storage medium
Liu et al. DRL-R: Deep reinforcement learning approach for intelligent routing in software-defined data-center networks
US10218623B2 (en) Learning or emulation approach to traffic engineering in information-centric networks
Qian et al. Survey on reinforcement learning applications in communication networks
CN109257287B (en) Shortest path determining method and controller
CN108259367B (en) Service-aware flow strategy customization method based on software defined network
Bi et al. Intelligent quality of service aware traffic forwarding for software-defined networking/open shortest path first hybrid industrial internet
Shang et al. Service-aware adaptive link load balancing mechanism for Software-Defined Networking
Khandaker et al. A functional taxonomy of caching schemes: Towards guided designs in information-centric networks
CN109614215A (en) Stream scheduling method, device, equipment and medium based on deeply study
Torkzadeh et al. Energy-aware routing considering load balancing for SDN: a minimum graph-based Ant Colony Optimization
CN112153153B (en) Coordinated distributed intra-network resource scheduling method and system and storage medium
CN114827284B (en) Service function chain arrangement method and device in industrial Internet of things and federal learning system
Yan et al. Flowlet-level multipath routing based on graph neural network in OpenFlow-based SDN
Du et al. A VDTN scheme with enhanced buffer management
Schneider et al. Every node for itself: Fully distributed service coordination
Zhu et al. Traffic flow optimization for UAVs in multi-layer information-centric software-defined FANET
Yu et al. A multipath routing protocol using congestion control in wireless multimedia sensor networks
Zheng et al. Application of improved ant colony algorithm in load balancing of software-defined networks
Ji et al. A three-level routing hierarchy in improved SDN-MEC-VANET architecture
Wu Deep reinforcement learning based multi-layered traffic scheduling scheme in data center networks
Li et al. Optimal service selection and placement based on popularity and server load in multi-access edge computing
Al-Hammadi et al. Independent tasks scheduling of collaborative computation offloading for SDN-powered MEC on 6G networks
Sharifian et al. LOADng-AT: a novel practical implementation of hybrid AHP-TOPSIS algorithm in reactive routing protocol for intelligent IoT-based networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant