CN115834466A - Computational power network path analysis method, device, equipment, system and storage medium - Google Patents

Computational power network path analysis method, device, equipment, system and storage medium

Info

Publication number
CN115834466A
Authority
CN
China
Prior art keywords
node
link
server node
target routing
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211526944.3A
Other languages
Chinese (zh)
Other versions
CN115834466B (en)
Inventor
张力方
胡泽妍
王玉婷
刘桂志
李一喆
李宏平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd
Priority to CN202211526944.3A
Publication of CN115834466A
Application granted
Publication of CN115834466B
Legal status: Active
Anticipated expiration


Abstract

The application provides a computational power network path analysis method, device, equipment, system and storage medium. The method comprises the following steps: for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and the threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource occupancy rate of the link at the current moment, the resource occupancy rate and the weight of the link at the previous moment, and the threshold value. The method provided by the application can meet ultra-low time delay requirements while ensuring that computing power resources are fully utilized.

Description

Computational power network path analysis method, device, equipment, system and storage medium
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a computational power network path analysis method, device, equipment, system and storage medium.
Background
With the continuous development of artificial intelligence and mobile internet technology, a large number of novel business applications have emerged. These applications often consume huge computing resources, storage resources and energy, while the computing power of current intelligent terminal equipment is still limited and its battery capacity is low, so the processing requirements of such applications cannot be met. Cloud computing was therefore proposed: it uses virtualization technology to build an ultra-large-capacity computing resource pool so that various applications can obtain the required computing resources, storage resources, and software and platform services. Although cloud computing meets the requirements of computing-intensive business processing, some applications are delay-sensitive, and in many cases the transmission delay from the terminal to the cloud cannot meet their ultra-low delay requirements, so edge computing technology can be utilized.
However, while the large-scale deployment of edge computing devices and intelligent terminal equipment solves the long delay caused by uploading large amounts of data to a cloud computing center over the network, it also causes computing resources to be deployed in a ubiquitous manner. On the one hand, edge computing nodes do not cooperate effectively on processing tasks, the computing resources of a single node cannot meet the requirements of very large computing-intensive tasks such as image rendering, and the ultra-low delay requirements of novel services that are both computing-intensive and delay-sensitive cannot be met; on the other hand, because of unbalanced network loads, some edge computing nodes are overloaded and cannot effectively process computing tasks while other computing nodes remain idle, so the computing resources of the edge network cannot be fully utilized.
Therefore, the prior art cannot effectively analyze computational power network paths, and thus can neither meet ultra-low time delay requirements nor ensure that computing power resources are fully utilized.
Disclosure of Invention
The application provides a computational power network path analysis method, device, equipment, system and storage medium, which can meet ultra-low time delay requirements while ensuring that computing power resources are fully utilized.
In a first aspect, the present application provides a computational power network path analysis method, applied to a computational power network system including a plurality of server nodes and a plurality of routing nodes, the method including:
acquiring resource occupancy rates of links between each server node and each target routing node at the current moment, resource occupancy rates of links between each server node and each target routing node at the previous moment and weights of links between each server node and each target routing node at the previous moment, wherein the target routing node is one of the plurality of routing nodes directly connected with each server node; wherein, one server node corresponds to one target route node;
aiming at the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining a resource load scene type corresponding to a link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the last moment, the weight of the link between each server node and each target routing node at the last moment and a threshold value;
and the weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network.
In one possible design, the threshold values include a first threshold value and a second threshold value, the first threshold value being less than the second threshold value; the determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current time according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current time includes:
if the resource occupancy rate of the link between the server node and the corresponding target routing node is smaller than the first threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource underload;
if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the first threshold value and less than the second threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource medium load;
and if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the second threshold value and less than 1, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource overload.
In a possible design, the determining, according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current time, the resource occupancy rate of the link between each server node and each target routing node at the previous time, the weight of the link between each server node and each target routing node at the previous time, and a threshold, the weight of the link between each server node and each target routing node at the current time includes:
for each server, determining a weight calculation model according to the resource load scene type, the resource occupancy rate of a link between the server node and a corresponding target routing node at the current moment, the resource occupancy rate of a link between the server node and a corresponding target routing node at the last moment and a threshold value;
and obtaining the weight of the link between each server node and each target routing node at the current time through the weight calculation model according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time, the resource occupancy rate of the link between the server node and the corresponding target routing node at the last time and the weight of the link between each server node and each target routing node at the last time.
In one possible design, the determining a weight calculation model according to the resource load scenario type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time, and a threshold value includes:
and according to the resource load scene type, determining a weight calculation model and a multiple of a weight coefficient used for calculating a link between the server node and the corresponding target routing node at the current time in the weight calculation model by comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time with the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time, and comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time with a threshold value.
In one possible design, the method further includes:
and updating the weight of the link between the server node and the corresponding target routing node at the current moment into a network topology structure of the computational power network system.
In one possible design, the method further includes:
acquiring the weight of a link between two routing nodes on the same link in the plurality of routing nodes;
and determining the shortest path of the computational power network according to the weight of the link between the two routing nodes on each same link and the weight of the link between each server node and the corresponding target router.
In a second aspect, the present application provides a computational power network path analysis apparatus applied to a computational power network system, the computational power network system including a plurality of server nodes and a plurality of routing nodes, the apparatus including:
the system comprises an acquisition module, a routing module and a processing module, wherein the acquisition module is used for acquiring the resource occupancy rate of a link between each server node and each target routing node at the current moment, the resource occupancy rate of a link between each server node and each target routing node at the previous moment and the weight of a link between each server node and each target routing node at the previous moment, and the target routing node is a routing node directly connected with each server node in the plurality of routing nodes; wherein, one server node corresponds to one target route node;
a determining module, configured to execute the following steps for the resource occupancy rates of the links between each server node and each target routing node at the current time: determining a resource load scene type corresponding to a link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
the path analysis module is used for determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment, the weight of the link between each server node and each target routing node at the previous moment and a threshold value;
and the weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network.
In a third aspect, the present application provides an electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the computational network path analysis method as described above in the first aspect and possible designs of the first aspect.
In a fourth aspect, the present application provides a computing power network system, comprising: an electronic device as claimed in the third aspect, a plurality of server nodes and a plurality of routing nodes.
In a fifth aspect, the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the computational power network path analysis method according to the first aspect and possible designs of the first aspect is implemented.
The computational power network path analysis method, device, equipment, system and storage medium provided by this embodiment are applied to a computational power network system, where the computational power network system includes a plurality of server nodes and a plurality of routing nodes. Firstly, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment and the weight of the link between each server node and each target routing node at the previous moment are acquired, where the target routing node is a routing node, among the plurality of routing nodes, that is directly connected to the server node; one server node corresponds to one target routing node. Then, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and the threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment and the threshold value. The weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network. In this way, the application acquires the resource occupancy rates at the previous moment and the current moment, determines the weight of the link between the server node and the target routing node at the current moment by combining the weight at the previous moment with the threshold value, and thereby analyzes that weight comprehensively, realizing dynamic updating of the weights based on resource occupancy; the shortest path selected for the service is then determined based on the updated weights, which makes reasonable use of resources and, based on the shortest path, meets ultra-low delay requirements.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a scene schematic diagram of a computational power network path analysis method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a computational power network path analysis method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a computational network path analysis method according to yet another embodiment of the present application;
fig. 4 is a schematic structural diagram of a computational power network path analysis apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
At present, although the large-scale deployment of edge computing equipment and intelligent terminal equipment solves the long delay caused by uploading massive data to a cloud computing center over the network, it also causes computing resources to be deployed in a ubiquitous manner. On the one hand, edge computing nodes do not cooperate effectively on processing tasks, the computing resources of a single node cannot meet the requirements of very large computing-intensive tasks such as image rendering, and the ultra-low delay requirements of novel services that are both computing-intensive and delay-sensitive cannot be met; on the other hand, because of unbalanced network loads, some edge computing nodes are overloaded and cannot effectively process computing tasks while other computing nodes remain idle, so the computing resources of the edge network cannot be fully utilized. Therefore, the prior art cannot effectively analyze computational power network paths, and thus can neither meet ultra-low time delay requirements nor ensure that computing power resources are fully utilized.
In order to solve the above problems, the technical idea of the present application is: the method comprises the steps of obtaining resource occupancy rates corresponding to the previous moment and the current moment respectively, determining weights of a server node and a target routing node at the current moment by combining the weights of the server node and the target routing node at the previous moment and a threshold value, comprehensively analyzing the weights of the server node and the target routing node at the current moment, dynamically updating the weights based on the resource occupancy conditions, determining a shortest path resource selected for a service based on the updated weights, reasonably utilizing the resources, and solving the problem of ultra-low delay requirement based on the shortest path.
Interpretation of terms:
W_RR_ij: the weight of the link between routing node i and routing node j;
W_RN_ij_t: the weight of the link between routing node i and computing server node (i.e., server node) j at time t (taken as the current moment);
W_RN_ij_t-1: the weight of the link between routing node i and computing server node j at time t-1 (taken as the previous moment);
R_RN_ij_t: the resource occupancy rate between routing node i and computing server node j at time t;
R_RN_ij_t-1: the resource occupancy rate between routing node i and computing server node j at time t-1;
Th1: the first threshold value;
Th2: the second threshold value;
α: the weight coefficient.
Referring to fig. 1, fig. 1 is a scene schematic diagram of a computational power network path analysis method provided in an embodiment of the present application. Fig. 1 shows a computational power network system, which includes a plurality of server nodes (e.g., server node 1, i.e., N1, server node 2, i.e., N2) and a plurality of routing nodes (e.g., routing node 1, i.e., R1, routing node 2, i.e., R2, routing node 3, i.e., R3, routing node 4, i.e., R4, routing node 5, i.e., R5, routing node 6, i.e., R6). The routing nodes are used for network signal transmission, namely, service information initiated by the terminal equipment is transmitted through links among the routing nodes; the server node is used for providing corresponding service according to the received service information initiated by the terminal equipment.
By considering server resource usage, the routing weights between the servers (here, the server nodes, such as N1 and N2) and the routers (here, the routing nodes, such as R3 and R5) are dynamically sensed and configured (taking time t as an example, these weights are W_RN_31_t and W_RN_52_t), and network path calculation is performed on that basis, so that the utilization rate of network resources is greatly improved. The weights of the links between routing nodes (e.g., W_RR_12, W_RR_13, W_RR_23, W_RR_24, W_RR_25, W_RR_35, W_RR_45, W_RR_46, W_RR_56) are determined based on characteristics such as the length of the network signal line, for example an optical fiber. In this way, the application acquires the resource occupancy rates at the previous moment and the current moment, determines the weight of the link between the server node and the target routing node at the current moment by combining the weight at the previous moment with the threshold value, and thereby analyzes that weight comprehensively, realizing dynamic updating of the weights based on resource occupancy; the shortest path selected for the service is then determined based on the updated weights, which makes reasonable use of resources and, based on the shortest path, meets ultra-low delay requirements.
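As an illustration, the topology of fig. 1 can be represented as a weighted graph. The following Python sketch uses placeholder weight values (the patent gives no concrete numbers, so every value here is an assumption):

```python
# Minimal sketch of the fig. 1 topology, with assumed placeholder weights.
# Router-router link weights (W_RR_ij) are fixed, e.g. derived from fiber length.
router_links = {
    ("R1", "R2"): 1.0, ("R1", "R3"): 1.0, ("R2", "R3"): 1.0,
    ("R2", "R4"): 1.0, ("R2", "R5"): 1.0, ("R3", "R5"): 1.0,
    ("R4", "R5"): 1.0, ("R4", "R6"): 1.0, ("R5", "R6"): 1.0,
}

# Router-server link weights (W_RN_ij_t) are re-computed at every time step
# from the resource occupancy of the corresponding server node.
server_links = {
    ("R3", "N1"): 1.0,   # W_RN_31_t
    ("R5", "N2"): 1.0,   # W_RN_52_t
}
```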
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a computational power network path analysis method according to an embodiment of the present disclosure.
Referring to fig. 2, the computational power network path analysis method is applied to a computational power network system, where the computational power network system includes a plurality of server nodes and a plurality of routing nodes; the method comprises the following steps:
s201, acquiring resource occupancy rates of links between each server node and each target routing node at the current moment, resource occupancy rates of links between each server node and each target routing node at the previous moment and weights of links between each server node and each target routing node at the previous moment, wherein the target routing node is a routing node directly connected with each server node in the plurality of routing nodes.
Wherein, one server node corresponds to one target routing node. I.e. a server node is directly connected to a target routing node.
In this embodiment, in order to dynamically update the weight of the link between the server and the router, the weight and the resource occupancy corresponding to the previous moment, together with the resource occupancy corresponding to the current moment, may be acquired, and the weight at the current moment may be recalculated.
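As a minimal sketch (in Python, with hypothetical field names not taken from the patent), the quantities acquired in S201 for each server-node/target-routing-node link can be kept in a small per-link record:

```python
from dataclasses import dataclass

@dataclass
class ServerLinkState:
    """State kept per (target routing node, server node) link across time steps."""
    occupancy_t: float      # R_RN_ij_t, resource occupancy at the current moment
    occupancy_prev: float   # R_RN_ij_t-1, resource occupancy at the previous moment
    weight_prev: float      # W_RN_ij_t-1, link weight computed at the previous moment
```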
S202, aiming at the resource occupancy rate of the link between each server node and each target routing node at the current time, the following steps are executed: and determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment.
In this embodiment, for each server, due to the difference in resource occupancy rates on the links between the server node and the corresponding target routing node, the types of resource load scenarios corresponding to the links between the server node and the corresponding target routing node at the current time are different. Specifically, the resource occupancy rate of the link between the server node at the current time and the corresponding target routing node is compared with a threshold value, and the resource load scene type on the link at the current time is determined according to the comparison result.
S203, determining the weight of the link between each server node and each target routing node at the current time according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current time, the resource occupancy rate of the link between each server node and each target routing node at the previous time, the weight of the link between each server node and each target routing node at the previous time and a threshold value.
And the weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network.
Here, the weight of the link between each server node and each target routing node at the current moment is the weight corresponding to the resource occupancy rate of that link at the current moment. The scarcer the resources on a link, the larger its weight and the more the path tends to be bypassed, so that a shortest path that makes reasonable use of resources can be calculated based on the weights.
In this embodiment, the scenario to which the weight calculation belongs is identified according to the determined resource load scene type; within that scenario, the weight calculation model is determined based on the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment, the threshold values and so on, so as to obtain the weight of the link between each server node and each target routing node at the current moment. Then, according to the weights of the links between the server nodes and the target routing nodes at the current moment, combined with the weights of the links between the routing nodes, a shortest path is selected as the resource path provided for the service initiated by the terminal equipment, and the server node on that path is the resource allocated to the service.
The computational power network path analysis method provided by this embodiment is applied to a computational power network system, where the computational power network system includes a plurality of server nodes and a plurality of routing nodes. Firstly, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment and the weight of the link between each server node and each target routing node at the previous moment are acquired, where the target routing node is a routing node, among the plurality of routing nodes, that is directly connected to the server node; one server node corresponds to one target routing node. Then, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and the threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment and the threshold value. The weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network. In this way, the application acquires the resource occupancy rates at the previous moment and the current moment, determines the weight of the link between the server node and the target routing node at the current moment by combining the weight at the previous moment with the threshold value, and thereby analyzes that weight comprehensively, realizing dynamic updating of the weights based on resource occupancy; the shortest path selected for the service is then determined based on the updated weights, which makes reasonable use of resources and, based on the shortest path, meets ultra-low delay requirements.
In a possible design, on the basis of the foregoing embodiment, this embodiment describes in detail how to determine a resource load scenario type corresponding to a link between the server node and a corresponding target routing node at the current time. The threshold value comprises a first threshold value and a second threshold value, and the first threshold value is smaller than the second threshold value; the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is determined according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment, and the method can be realized by the following steps:
Step a1, if the resource occupancy rate of the link between the server node and the corresponding target routing node is smaller than the first threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource underload;
Step a2, if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the first threshold value and less than the second threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource medium load;
Step a3, if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the second threshold value and less than 1, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource overload.
In this embodiment, when 0 ≤ R_RN_ij_t < Th1, the resources are underloaded and the server Nj should be preferentially selected; that is, the resource load scene type corresponding to the link between the server node and the corresponding target routing node at time t (taken as the current moment) is resource underload.
When Th1 ≤ R_RN_ij_t < Th2, the resources are moderately loaded and there is no particular tendency to select or avoid the server Nj; that is, the resource load scene type corresponding to the link between the server node and the corresponding target routing node at time t is resource medium load.
When Th2 ≤ R_RN_ij_t < 1, the resources are overloaded and selection of the server Nj should be avoided; that is, the resource load scene type corresponding to the link between the server node and the corresponding target routing node at time t is resource overload.
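A minimal Python sketch of this classification step (function and label names are illustrative, not taken from the patent):

```python
def classify_load_scene(r_t: float, th1: float, th2: float) -> str:
    """Map the current occupancy R_RN_ij_t of a server-router link to a load scene type."""
    if 0.0 <= r_t < th1:
        return "underload"   # prefer selecting this server node
    if th1 <= r_t < th2:
        return "medium"      # no particular preference
    if th2 <= r_t < 1.0:
        return "overload"    # avoid selecting this server node
    raise ValueError("occupancy rate is expected to lie in [0, 1)")
```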
In a possible design, the present embodiment provides a detailed description of S203 on the basis of the above embodiments. Determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment, the weight of the link between each server node and each target routing node at the previous moment and a threshold value, wherein the method can be realized by the following steps:
step b1, aiming at each server, determining a weight calculation model according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the last moment and a threshold value.
In one possible design, determining a weight calculation model according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time, and a threshold value may be implemented by:
and according to the resource load scene type, determining a weight calculation model and a multiple of a weight coefficient used for calculating a link between the server node and the corresponding target routing node at the current time in the weight calculation model by comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time with the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time, and comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time with a threshold value.
In this embodiment, if the resource load scene type is resource underload, i.e., 0 ≤ R_RN_ij_t < Th1, the situation at the current moment is identified from the following two cases and the weight calculation model is determined accordingly.
Case 11: if R_RN_ij_t − R_RN_ij_t-1 ≤ 0:
1) When Th1 ≤ R_RN_ij_t-1 < Th2, k is set to an odd number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 2: since the resource occupancy rate at the current moment is not greater than that at the previous moment, and the occupancy rate at the previous moment lies between the two threshold values, the weight of the link between the server node and the corresponding target routing node at the current moment should be lower than at the previous moment; k is therefore set to an odd number, and the multiple of α is greater than 1, for example 2.
2) When Th2 ≤ R_RN_ij_t-1 < 1, k is set to an odd number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 3α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 3: since the resource occupancy rate at the current moment is not greater than that at the previous moment, and the occupancy rate at the previous moment is greater than the second (larger) threshold value, the weight of the link at the current moment should be lower than at the previous moment; k is therefore set to an odd number, and the multiple of α is greater than that in 1) of case 11, for example 3.
Case 12: if R_RN_ij_t − R_RN_ij_t-1 > 0, k is set to an even number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 1: since the resource occupancy rate at the current moment is greater than that at the previous moment, while the occupancy rate at the current moment is still not greater than the first threshold value, the weight of the link at the current moment should be higher than at the previous moment; k is therefore set to an even number, and the multiple of α may be 1.
If the resource load scene type is resource medium load, i.e., Th1 ≤ R_RN_ij_t < Th2, the situation at the current moment is identified from the following two cases and the weight calculation model is determined accordingly.
Case 21: if R_RN_ij_t − R_RN_ij_t-1 ≤ 0:
1) When Th1 ≤ R_RN_ij_t-1 < Th2, k is set to an odd number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 1: since the resource occupancy rate at the current moment is not greater than that at the previous moment, and the occupancy rate at the previous moment lies between the two threshold values, the weight of the link at the current moment should be lower than at the previous moment; k is therefore set to an odd number, and the multiple of α may be 1.
2) When Th2 ≤ R_RN_ij_t-1 < 1, k is set to an odd number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 2: since the resource occupancy rate at the current moment is not greater than that at the previous moment, and the occupancy rate at the previous moment is greater than the second (larger) threshold value, the weight of the link at the current moment should be lower than at the previous moment; k is therefore set to an odd number, and the multiple of α is greater than that in 1) of case 21, for example 2.
Case 22: if R_RN_ij_t − R_RN_ij_t-1 > 0:
1) When 0 ≤ R_RN_ij_t-1 < Th1, k is set to an even number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 2: since the resource occupancy rate at the current moment is greater than that at the previous moment, and the occupancy rate at the previous moment is less than the first threshold value, the weight of the link at the current moment should be higher than at the previous moment; k is therefore set to an even number, and the multiple of α may be 2.
2) When Th1 ≤ R_RN_ij_t-1 < Th2, W_RN_ij_t = W_RN_ij_t-1, i.e., the weight remains unchanged.
If the resource load scene type is resource overload, i.e., Th2 ≤ R_RN_ij_t < 1, the situation at the current moment is identified from the following two cases and the weight calculation model is determined accordingly.
Case 31: if R_RN_ij_t − R_RN_ij_t-1 ≤ 0, then W_RN_ij_t = W_RN_ij_t-1, i.e., the weight remains unchanged.
Case 32: if R_RN_ij_t − R_RN_ij_t-1 > 0:
1) When 0 ≤ R_RN_ij_t-1 < Th1, k is set to an even number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 2: since the resource occupancy rate at the current moment is greater than that at the previous moment, and the occupancy rate at the previous moment is less than the first threshold value, the weight of the link at the current moment should be higher than at the previous moment; k is therefore set to an even number, and the multiple of α may be 2.
2) When Th1 ≤ R_RN_ij_t-1 < Th2, k is set to an even number and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 3α*(−1)^k * (|R_RN_ij_t − R_RN_ij_t-1| / R_RN_ij_t)];
here, the multiple of the weight coefficient is 3: since the resource occupancy rate at the current moment is greater than that at the previous moment, and the occupancy rate at the previous moment lies between the two threshold values, the weight of the link at the current moment should be higher than at the previous moment; k is therefore set to an even number, and the multiple of α is greater than that in 1) of case 32, for example 3.
And b2, obtaining the weight of the link between each server node and each target routing node at the current moment through the weight calculation model according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the last moment and the weight of the link between each server node and each target routing node at the last moment.
In this embodiment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time, and the weight of the link between each server node and each target routing node at the previous time are input to the weight calculation model, so as to obtain the weight of the link between each server node and each target routing node at the current time.
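Putting the cases above together, the following Python sketch illustrates the weight update. It is only a sketch: the sign (−1)^k is −1 when k is odd and +1 when k is even, and combinations of occupancy values that the embodiment does not list explicitly are assumed here to leave the weight unchanged.

```python
def update_link_weight(r_t: float, r_prev: float, w_prev: float,
                       th1: float, th2: float, alpha: float) -> float:
    """Compute W_RN_ij_t from the current/previous occupancy and the previous weight."""
    if r_t <= 0:
        return w_prev                      # degenerate case: keep the old weight
    delta = abs(r_t - r_prev) / r_t        # |R_t - R_t-1| / R_t
    decreased = (r_t - r_prev) <= 0        # occupancy did not grow

    def adjust(multiple: int, k_odd: bool) -> float:
        sign = -1.0 if k_odd else 1.0      # (-1)^k
        return w_prev * (1.0 + multiple * alpha * sign * delta)

    if r_t < th1:                          # resource underload
        if decreased:
            if th1 <= r_prev < th2:
                return adjust(2, k_odd=True)    # case 11, 1)
            if th2 <= r_prev < 1:
                return adjust(3, k_odd=True)    # case 11, 2)
        else:
            return adjust(1, k_odd=False)       # case 12
    elif r_t < th2:                        # resource medium load
        if decreased:
            if th1 <= r_prev < th2:
                return adjust(1, k_odd=True)    # case 21, 1)
            if th2 <= r_prev < 1:
                return adjust(2, k_odd=True)    # case 21, 2)
        else:
            if r_prev < th1:
                return adjust(2, k_odd=False)   # case 22, 1)
            if th1 <= r_prev < th2:
                return w_prev                   # case 22, 2): weight unchanged
    else:                                  # resource overload
        if decreased:
            return w_prev                       # case 31: weight unchanged
        if r_prev < th1:
            return adjust(2, k_odd=False)       # case 32, 1)
        if th1 <= r_prev < th2:
            return adjust(3, k_odd=False)       # case 32, 2)
    return w_prev                          # combinations not covered by the embodiment
```

For instance, with hypothetical values Th1 = 0.3, Th2 = 0.7 and α = 0.1, a link whose occupancy falls from 0.8 to 0.2 matches case 11, 2), and its weight is multiplied by 1 − 3α·(0.6/0.2) = 0.1, i.e. the link becomes much more attractive for path selection.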
In a possible design, on the basis of the foregoing embodiment, the method may further be implemented by:
and updating the weight of the link between the server node and the corresponding target routing node at the current moment into a network topology structure of the computational power network system.
In this embodiment, after the network topology weights have been set, the weights are updated dynamically. Resource allocation and shortest path selection are then performed based on the updated weights. The scarcer the resources on a link, the larger its weight and the more the path tends to be bypassed, so that a shortest path that makes reasonable use of resources can be calculated based on the weights.
In one possible design, the method may be further implemented by:
acquiring the weight of a link between two routing nodes on the same link in the plurality of routing nodes;
and determining the shortest path of the computational power network according to the weight of the link between the two routing nodes on each same link and the weight of the link between each server node and the corresponding target router.
In this embodiment, the server resource to be allocated is determined by combining the weight of the link between two routing nodes on the same link among the plurality of routing nodes with the weight of the link between each server node and the corresponding target router, and the shortest path is then determined among all links from the terminal equipment to the selected server node. The shortest path calculation may be performed by, for example, the Dijkstra algorithm, which is not specifically limited herein.
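As noted, any standard shortest-path routine can be used. The following minimal Dijkstra sketch in Python runs over the combined topology (fixed router-router weights plus the dynamically updated router-server weights); the adjacency-building helper and all names are assumptions for illustration, not part of the patent:

```python
import heapq

def build_adjacency(router_links: dict, server_links: dict) -> dict:
    """Merge fixed W_RR_ij weights and current W_RN_ij_t weights into one undirected graph."""
    adj: dict = {}
    for (u, v), w in {**router_links, **server_links}.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    return adj

def shortest_path(adj: dict, source: str, target: str):
    """Plain Dijkstra; returns (total weight, node list), or (inf, []) if unreachable."""
    dist, prev = {source: 0.0}, {}
    heap, visited = [(0.0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []
```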
Specifically, referring to fig. 3, fig. 3 is a schematic flow chart of a computational power network path analysis method according to still another embodiment of the present application. The resource occupancy rates corresponding to the previous moment and the current moment are acquired; the weight of the link between the server node and the target routing node at the current moment is determined by combining the weight at the previous moment with the threshold values and is analyzed comprehensively; and the shortest path of the computational power network is determined based on the weight of the link between two routing nodes on each same link and the weight of the link between each server node and the corresponding target router. The method realizes dynamic updating of the weights based on resource occupancy, determines a shortest path selected for the service based on the updated weights, makes reasonable use of resources, and meets ultra-low delay requirements based on the shortest path.
In order to implement the computational power network path analysis method, the present embodiment provides a computational power network path analysis apparatus. Referring to fig. 4, fig. 4 is a schematic structural diagram of a computational power network path analysis apparatus provided in an embodiment of the present application; the computational power network path analysis apparatus 40 is applied to a computational power network system including a plurality of server nodes and a plurality of routing nodes, and includes:
an obtaining module 401, configured to obtain resource occupancy rates of links between each server node and each target routing node at a current time, resource occupancy rates of links between each server node and each target routing node at a previous time, and weights of links between each server node and each target routing node at the previous time, where the target routing node is a routing node directly connected to each server node among the multiple routing nodes; wherein, one server node corresponds to one target route node;
a determining module 402, configured to execute the following steps for the resource occupancy rate of the link between each server node and each target routing node at the current time: determining a resource load scene type corresponding to a link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
a path analysis module 403, configured to determine, according to the resource load scene type, the resource occupancy rates of links between each server and each target routing node at the current time, the resource occupancy rates of links between each server node and each target routing node at the previous time, the weight of a link between each server node and each target routing node at the previous time, and a threshold value, the weight of a link between each server node and each target routing node at the current time;
and the weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network.
In this embodiment, the obtaining module 401, the determining module 402 and the path analysis module 403 are configured to acquire the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment and the weight of the link between each server node and each target routing node at the previous moment, where the target routing node is a routing node, among the plurality of routing nodes, that is directly connected to the server node, and one server node corresponds to one target routing node; then, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and the threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment and the threshold value. The weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network. In this way, the application acquires the resource occupancy rates at the previous moment and the current moment, determines the weight of the link between the server node and the target routing node at the current moment by combining the weight at the previous moment with the threshold value, and thereby analyzes that weight comprehensively, realizing dynamic updating of the weights based on resource occupancy; the shortest path selected for the service is then determined based on the updated weights, which makes reasonable use of resources and, based on the shortest path, meets ultra-low delay requirements.
The apparatus provided in this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
In one possible design, the threshold values include a first threshold value and a second threshold value, the first threshold value being less than the second threshold value; the determining module is specifically configured to:
when the resource occupancy rate of the link between the server node and the corresponding target routing node is smaller than the first threshold value, determine that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource underload;
when the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the first threshold value and less than the second threshold value, determine that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource mid-load;
and when the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the second threshold value and less than 1, determine that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource overload.
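Expressed compactly, the classification can look like the sketch below; the 0.3 and 0.7 values are placeholder assumptions, since the design only requires that the first threshold value be smaller than the second and that the occupancy rate stay below 1.

```python
def classify_load(occupancy: float,
                  first_threshold: float = 0.3,
                  second_threshold: float = 0.7) -> str:
    """Map a link's current resource occupancy rate to a resource load scene type."""
    if occupancy < first_threshold:
        return "underload"
    if occupancy < second_threshold:
        return "mid-load"
    if occupancy < 1:
        return "overload"
    raise ValueError("the resource occupancy rate is expected to stay below 1")
```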
In one possible design, the path analysis module includes: a weight calculation model determination unit and a weight determination unit;
a weight calculation model determination unit, configured to determine, for each server node, a weight calculation model according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of that link at the previous moment, and the threshold value;
and a weight determination unit, configured to obtain the weight of the link between each server node and each target routing node at the current moment through the weight calculation model, according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of that link at the previous moment, and the weight of that link at the previous moment.
In a possible design, the weight calculation model determination unit is specifically configured to:
and according to the resource load scene type, determine a weight calculation model, and the multiple of the weight coefficient used in the weight calculation model to calculate the weight of the link between the server node and the corresponding target routing node at the current time, by comparing the resource occupancy rate of the link at the current time with the resource occupancy rate of the link at the previous time, and by comparing the resource occupancy rate of the link at the previous time with the threshold value.
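Because the concrete weight formulas are not given here, the sketch below only illustrates the decision structure described above: the load scene type, whether the occupancy rose or fell since the previous moment, and where the previous occupancy sat relative to the thresholds jointly select the multiple of the weight coefficient. All multipliers and the additive form of the model are assumptions, not the patented formulas.

```python
def current_weight(occupancy_now: float, occupancy_prev: float, weight_prev: float,
                   scene: str,
                   first_threshold: float = 0.3, second_threshold: float = 0.7,
                   base_coefficient: float = 1.0) -> float:
    """Illustrative weight update for one server-to-router link."""
    rising = occupancy_now > occupancy_prev
    if scene == "underload":
        multiple = 1.0
    elif scene == "mid-load":
        # penalise a link whose occupancy crossed the first threshold while rising
        multiple = 2.0 if rising and occupancy_prev < first_threshold else 1.5
    else:  # "overload"
        multiple = 4.0 if rising and occupancy_prev < second_threshold else 3.0
    # one possible model: previous weight plus an occupancy-scaled increment
    return weight_prev + multiple * base_coefficient * occupancy_now
```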
In one possible design, the apparatus further includes a weight update module; the weight update module is configured to:
update the weight of the link between the server node and the corresponding target routing node at the current moment into the network topology structure of the computational power network system.
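One way to realize this update is to write the new weight onto the corresponding edge of a graph object that represents the network topology; representing the topology as a networkx graph is an assumption made here for illustration only.

```python
import networkx as nx

def update_topology(topology: nx.Graph, server_node: str,
                    target_router: str, new_weight: float) -> None:
    """Store the freshly computed weight on the server-to-router edge of the topology."""
    topology.add_edge(server_node, target_router, weight=new_weight)
```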
In one possible design, the path analysis module is further configured to:
acquire the weight of the link between every two routing nodes located on a same link among the plurality of routing nodes;
and determine the shortest path of the computational power network according to the weight of the link between the two routing nodes on each such link and the weight of the link between each server node and the corresponding target routing node.
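A hedged sketch of the resulting path selection is given below: router-to-router link weights and server-to-router link weights sit on the same graph, and Dijkstra's algorithm picks the cheapest path from a source routing node to any candidate server node. The use of networkx and of Dijkstra specifically is an assumption; the design only requires that a shortest path be computed over the combined weights.

```python
import networkx as nx

def shortest_compute_path(topology: nx.Graph, source_router: str,
                          server_nodes: list[str]) -> list[str] | None:
    """Return the lowest-cost path from a routing node to any candidate server node."""
    best_path, best_cost = None, float("inf")
    for server in server_nodes:
        try:
            # weights on every edge already reflect the latest resource occupancy
            cost, path = nx.single_source_dijkstra(topology, source_router,
                                                   server, weight="weight")
        except nx.NetworkXNoPath:
            continue
        if cost < best_cost:
            best_cost, best_path = cost, path
    return best_path
```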
In order to implement the computational power network path analysis method, this embodiment provides an electronic device. Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 5, the electronic device 50 of this embodiment includes: at least one processor 501 and a memory 502. The memory 502 is configured to store computer-executable instructions; the at least one processor 501 is configured to execute the computer-executable instructions stored in the memory 502 to implement the steps performed in the above method embodiments. Reference may be made to the related description of the above method embodiments.
An embodiment of the present application further provides a computational power network system, including: the electronic device described above, a plurality of server nodes, and a plurality of routing nodes.
An embodiment of the present application further provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the computational power network path analysis method described above is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules is only a logical division of functions, and other divisions may be used in practice, for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms. Furthermore, the functional modules in the embodiments of the present application may be integrated into one processing unit, each module may exist alone physically, or two or more modules may be integrated into one unit. The unit formed by the above modules may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods described in the embodiments of the present application. It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the present application may be directly embodied as being executed by a hardware processor, or by a combination of hardware and software modules in the processor.
The memory may include a high-speed RAM, and may further include a non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash disk, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus in the figures of the present application is not limited to only one bus or one type of bus. The storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the storage medium may also reside as discrete components in an electronic device or a host device.
Those of ordinary skill in the art will understand that all or a portion of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A computational power network path analysis method is characterized by being applied to a computational power network system, wherein the computational power network system comprises a plurality of server nodes and a plurality of routing nodes; the method comprises the following steps:
acquiring resource occupancy rates of links between each server node and each target routing node at the current moment, resource occupancy rates of links between each server node and each target routing node at the previous moment, and weights of links between each server node and each target routing node at the previous moment, wherein the target routing node is one of the plurality of routing nodes directly connected with each server node; wherein one server node corresponds to one target routing node;
aiming at the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining a resource load scene type corresponding to a link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the last moment, the weight of the link between each server node and each target routing node at the last moment and a threshold value;
and the weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network.
2. The method of claim 1, wherein the threshold value comprises a first threshold value and a second threshold value, and wherein the first threshold value is less than the second threshold value; the determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current time according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current time includes:
if the resource occupancy rate of the link between the server node and the corresponding target routing node is smaller than the first threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource underload;
if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the first threshold value and less than the second threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource mid-load;
and if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the second threshold value and less than 1, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource overload.
3. The method of claim 2, wherein determining the weight of the link between each server node and each target routing node at the current time according to the resource load scenario type, the resource occupancy rate of the link between each server node and each target routing node at the current time, the resource occupancy rate of the link between each server node and each target routing node at the previous time, the weight of the link between each server node and each target routing node at the previous time, and a threshold value comprises:
for each server node, determining a weight calculation model according to the resource load scene type, the resource occupancy rate of a link between the server node and a corresponding target routing node at the current moment, the resource occupancy rate of a link between the server node and a corresponding target routing node at the last moment, and a threshold value;
and obtaining the weight of the link between each server node and each target routing node at the current time through the weight calculation model according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time, the resource occupancy rate of the link between the server node and the corresponding target routing node at the last time and the weight of the link between each server node and each target routing node at the last time.
4. The method of claim 3, wherein determining a weight calculation model according to the resource load scenario type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current time, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous time, and a threshold value comprises:
and according to the resource load scene type, determining a weight calculation model, and the multiple of the weight coefficient used in the weight calculation model to calculate the weight of the link between the server node and the corresponding target routing node at the current time, by comparing the resource occupancy rate of the link at the current time with the resource occupancy rate of the link at the previous time, and by comparing the resource occupancy rate of the link at the previous time with a threshold value.
5. The method according to any one of claims 1-4, further comprising:
and updating the weight of the link between the server node and the corresponding target routing node at the current moment into a network topology structure of the computational power network system.
6. The method according to any one of claims 1-4, further comprising:
acquiring the weight of the link between every two routing nodes located on a same link among the plurality of routing nodes;
and determining the shortest path of the computational power network according to the weight of the link between the two routing nodes on each such link and the weight of the link between each server node and the corresponding target routing node.
7. A computational power network path analysis apparatus applied to a computational power network system including a plurality of server nodes and a plurality of routing nodes, the apparatus comprising:
an obtaining module, configured to acquire the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment, and the weight of the link between each server node and each target routing node at the previous moment, wherein the target routing node is a routing node directly connected with each server node among the plurality of routing nodes; wherein one server node corresponds to one target routing node;
a determining module, configured to execute the following steps for resource occupancy rates of links between each server node and each target routing node at the current time: determining a resource load scene type corresponding to a link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
a path analysis module, configured to determine the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment, the weight of the link between each server node and each target routing node at the previous moment, and a threshold value;
and the weight of the link between each server node and each target routing node at the current moment is used for determining the shortest path of the computational power network.
8. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the computational network path analysis method of any of claims 1-6.
9. A computational power network system, comprising: the electronic device of claim 8, a plurality of server nodes, and a plurality of routing nodes.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the computational power network path analysis method according to any one of claims 1-6.
CN202211526944.3A 2022-12-01 2022-12-01 Method, device, equipment, system and storage medium for analyzing path of computing power network Active CN115834466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211526944.3A CN115834466B (en) 2022-12-01 2022-12-01 Method, device, equipment, system and storage medium for analyzing path of computing power network

Publications (2)

Publication Number Publication Date
CN115834466A true CN115834466A (en) 2023-03-21
CN115834466B CN115834466B (en) 2024-04-16

Family

ID=85533396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211526944.3A Active CN115834466B (en) 2022-12-01 2022-12-01 Method, device, equipment, system and storage medium for analyzing path of computing power network

Country Status (1)

Country Link
CN (1) CN115834466B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8040808B1 (en) * 2008-10-20 2011-10-18 Juniper Networks, Inc. Service aware path selection with a network acceleration device
WO2013042349A1 (en) * 2011-09-22 2013-03-28 日本電気株式会社 Device and method for determining allocation resources and resource provision system
WO2014185768A1 (en) * 2013-05-13 2014-11-20 Mimos Berhad A method of spectrum aware routing in a mesh network and a system derived thereof
US20200351900A1 (en) * 2019-04-30 2020-11-05 Fujitsu Limited Monitoring-based edge computing service with delay assurance
CN113766544A (en) * 2021-09-18 2021-12-07 国网河南省电力公司信息通信公司 Multi-edge cooperation-based power Internet of things slice optimization method
CN114040479A (en) * 2021-10-29 2022-02-11 中国联合网络通信集团有限公司 Calculation force node selection method and device and computer readable storage medium
WO2022116957A1 (en) * 2020-12-02 2022-06-09 中兴通讯股份有限公司 Algorithm model determining method, path determining method, electronic device, sdn controller, and medium
CN114745317A (en) * 2022-02-09 2022-07-12 北京邮电大学 Computing task scheduling method facing computing power network and related equipment
CN114867065A (en) * 2022-05-18 2022-08-05 中国联合网络通信集团有限公司 Base station computing force load balancing method, equipment and storage medium
CN115396358A (en) * 2022-08-23 2022-11-25 中国联合网络通信集团有限公司 Route setting method, device and storage medium for computing power perception network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HASAN ANIL AKYILDIZ: "Joint server and route selection in SDN networks", 2017 IEEE International Black Sea Conference on Communications and Networking, 1 February 2018 (2018-02-01) *
DAI Xin: "Research and Implementation of a Microservice Scheduling Strategy for Computing Power Networks" (in Chinese), China Master's Theses Full-text Database, 15 June 2022 (2022-06-15) *

Also Published As

Publication number Publication date
CN115834466B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN111614746B (en) Load balancing method and device of cloud host cluster and server
CN111176792A (en) Resource scheduling method, device and related equipment
CN110851235B (en) Virtual network function deployment method suitable for multidimensional resource optimization configuration
CN112491741B (en) Virtual network resource allocation method and device and electronic equipment
CN108347377B (en) Data forwarding method and device
CN113687949B (en) Server deployment method, device, deployment equipment and storage medium
CN109788006B (en) Data equalization method and device and computer equipment
CN113746763B (en) Data processing method, device and equipment
CN112118314B (en) Load balancing method and device
CN110995856B (en) Method, device and equipment for server expansion and storage medium
CN115840649B (en) Method and device for partitioning capacity block type virtual resource allocation, storage medium and terminal
CN115834466A (en) Calculation force network path analysis method, device, equipment, system and storage medium
CN113472591B (en) Method and device for determining service performance
CN114881221A (en) Mapping scheme optimization method and device, electronic equipment and readable storage medium
CN108520025B (en) Service node determination method, device, equipment and medium
CN113656046A (en) Application deployment method and device
CN113468442A (en) Resource bit flow distribution method, computing device and computer storage medium
CN102696257B (en) Method and device for implementing temperature balance among multiple physical servers
CN110960858A (en) Game resource processing method, device, equipment and storage medium
CN116566992B (en) Dynamic collaboration method, device, computer equipment and storage medium for edge calculation
CN112019368B (en) VNF migration method, VNF migration device and VNF migration storage medium
CN113535388B (en) Task-oriented service function aggregation method
CN115361284B (en) Deployment adjustment method of virtual network function based on SDN
CN116248577A (en) Method and device for determining calculation force node
CN110955612A (en) Data caching method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant