CN116032935A - Task unloading method and device, electronic equipment and storage medium - Google Patents

Task unloading method and device, electronic equipment and storage medium

Info

Publication number
CN116032935A
CN116032935A (Application No. CN202211394511.7A)
Authority
CN
China
Prior art keywords
task
candidate
edge server
load
calculating
Prior art date
Legal status
Pending
Application number
CN202211394511.7A
Other languages
Chinese (zh)
Inventor
李新
黄善国
辛静杰
张路
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202211394511.7A
Publication of CN116032935A
Legal status: Pending


Abstract

The disclosure provides a task offloading method, an apparatus, an electronic device and a storage medium, including: acquiring a task to be offloaded and determining a transmission network for the task; calculating the loads of a plurality of transmission links in the transmission network and the loads of the edge servers in the transmission network; determining an offloading decision for the task to be offloaded based on the loads of the transmission links and the loads of the edge servers; and offloading the task according to the offloading decision. By determining the offloading decision from the calculated loads of the transmission links and edge servers and then offloading the task through that decision, the task to be offloaded can be offloaded directly from the terminal device to a suitable edge server.

Description

Task unloading method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of edge computing technologies, and in particular, to a task offloading method, a task offloading device, an electronic device, and a storage medium.
Background
The distance between the terminal device and the remote cloud server largely limits the transmission of tasks to be offloaded to the remote cloud server. Tasks from remote areas may suffer long routing delays that lower the user's Quality of Experience (QoE), especially for delay-sensitive tasks.
Currently, in order to meet the increasing demands of computationally intensive and delay sensitive tasks, mobile edge computing is proposed that pushes computation, control and storage towards the network edge (e.g., base station and access point), and the terminal device can offload the task to a physically close edge server for processing. Based on this, how to select an appropriate offload edge server for a task becomes an urgent problem to be solved.
In the prior art, a task is first offloaded to the local edge server closest to the terminal device; if that local edge server is overloaded and cannot process the task within the task's delay threshold, the task is migrated through the optical network to another lightly loaded edge server for processing, thereby realizing the offloading of the task from the terminal device. Moreover, the computation load of the edge servers and the traffic load of the transmission links are not considered jointly during offloading, so the load during offloading is unbalanced, which in turn affects the stability of the offloading process.
Disclosure of Invention
In view of the foregoing, an object of the present disclosure is to provide a task offloading method, apparatus, electronic device, and storage medium.
As one aspect of the present disclosure, there is provided a task offloading method including:
acquiring a task to be offloaded and determining a transmission network for the task to be offloaded;
calculating the loads of a plurality of transmission links in the transmission network and the loads of the edge servers in the transmission network;
determining an offloading decision for the task to be offloaded based on the loads of the transmission links and the loads of the edge servers;
and offloading the task to be offloaded according to the offloading decision.
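The four steps above can be sketched as a minimal selection rule: pick the edge server whose combined path-link load and server load is smallest. This is an illustrative simplification, not the disclosure's bias/matching-degree procedure; all names (`choose_offload`, `link_load`, `server_load`) are hypothetical.

```python
# Hypothetical sketch of the four-step offloading flow described above.
def choose_offload(network, link_load, server_load):
    """Pick the candidate edge server whose total load is smallest.

    `network` maps each candidate edge server to the list of links on the
    path toward it; `link_load` and `server_load` hold precomputed loads.
    This minimum-sum rule stands in for the bias/matching-degree
    calculation detailed later in the disclosure.
    """
    best, best_cost = None, float("inf")
    for server, path_links in network.items():
        cost = server_load[server] + sum(link_load[l] for l in path_links)
        if cost < best_cost:
            best, best_cost = server, cost
    return best, best_cost

# Example: two candidate servers reachable over different links
# (loads loosely modeled on the FIG. 1 example values).
network = {"es1": ["A-B"], "es2": ["A-F", "F-E"]}
link_load = {"A-B": 0.7, "A-F": 0.2, "F-E": 0.1}
server_load = {"es1": 0.96, "es2": 0.48}
```

Here `es2` wins because its lightly loaded path outweighs the extra hop.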
Optionally, the calculating of the loads of the plurality of transmission links in the transmission network includes:
acquiring the free spectrum blocks in the plurality of transmission links, and calculating the weights of the free spectrum blocks;
calculating the weights of the plurality of transmission links according to the number of frequency slots in each free spectrum block and the weights of the free spectrum blocks;
calculating the load variance of the plurality of transmission links according to the weights of the plurality of transmission links;
wherein the calculating of the load variance of the plurality of transmission links according to the weights of the plurality of transmission links is expressed as:

$$\sigma_{E}^{2}(r,j)=\frac{1}{|E|}\sum_{(a,b)\in E}\left(U_{(a,b)}^{AT_{r,j}}-\bar{U}^{AT_{r,j}}\right)^{2}$$

where $\sigma_{E}^{2}(r,j)$ represents the load variance of the plurality of transmission links; $U_{(a,b)}^{AT_{r,j}}$ represents the spectrum occupancy of link $(a,b)$ at time $AT_{r,j}$, obtained from the frequency-slot occupancy of the link at that time; and $\bar{U}^{AT_{r,j}}$ represents the average spectrum occupancy of the links at time $AT_{r,j}$, obtained as the arithmetic mean of the per-link spectrum occupancies.
Optionally, the acquiring of the free spectrum blocks in the plurality of transmission links and the calculating of their weights is expressed as:

$$\delta(n)=\sum_{u=1}^{\min(n,F_{\max})}(n-u+1)$$

where $\delta(n)$ represents the weight of a free spectrum block containing $n$ consecutive available frequency slots, and $n-u+1$ represents the number of ways $u$ consecutive frequency slots can be placed within an available spectrum block containing $n$ frequency slots ($F_{\max}$ being the maximum number of frequency slots allocatable to one task).
Optionally, the calculating of the weights of the plurality of transmission links according to the number of frequency slots in each free spectrum block and the weights of the free spectrum blocks is expressed as:

$$W_{(a,b)}^{t}=\sum_{h\in\Omega_{(a,b)}^{t}}\delta(z_{h})$$

where $W_{(a,b)}^{t}$ represents the weight of transmission link $(a,b)$ at time $t$; $H$ and $z_{h}$ respectively represent the number of available spectrum blocks contained in link $(a,b)$ at time $t$ and the number of frequency slots contained in the $h$-th available spectrum block; and $\Omega_{(a,b)}^{t}=\{1,2,\dots,H\}$ represents the set of available spectrum blocks of link $(a,b)$ at time $t$.
Optionally, the calculating of the load of the edge servers in the transport network includes:
obtaining the resource throughput of a candidate edge server based on the offloading capacity of the candidate edge server and the tasks to be offloaded it has received;
calculating the arithmetic mean of the resource throughputs to obtain the average resource throughput of the candidate edge servers;
calculating the load variance of the candidate edge servers based on the resource throughputs and the average resource throughput;
wherein the load variance of the candidate edge servers is calculated as:

$$\sigma_{J}^{2}(r,j)=\frac{1}{|J|}\sum_{j'\in J}\left(L_{j'}^{AT_{r,j}}-\bar{L}^{AT_{r,j}}\right)^{2}$$

where $\sigma_{J}^{2}(r,j)$ represents the load variance of the candidate edge servers; $c_{r,j}$ represents the computing resources allocated by edge server $j$ to task $r$ to be offloaded; $\bar{L}^{AT_{r,j}}$ represents the average resource throughput of the candidate edge servers; $AT_{r,j}$ is the time at which task $r$ reaches candidate edge server $j$; and $L_{j}^{t}$ represents the load of candidate edge server $j$ at time $t$.
Optionally, the determining, based on the loads of the plurality of transmission links and the loads of the edge servers, of an offloading decision for the task to be offloaded includes:
calculating the weight of each path by which the task to be offloaded may reach each edge server in the transmission network through the plurality of transmission links, and setting the path with the minimum weight as a candidate path;
screening the candidate paths based on a preset spectrum rule to obtain the candidate paths conforming to the spectrum rule;
performing modulation processing on the candidate paths conforming to the spectrum rule to obtain modulated candidate paths;
setting the edge servers located on the modulated candidate paths as candidate edge servers;
generating candidate offloading decisions based on the modulated candidate paths and the candidate edge servers;
calculating the bias degree of the plurality of transmission links based on their loads, and calculating the matching degree of each candidate edge server based on its load;
calculating the expected intensity of each candidate offloading decision based on the bias degree and the matching degree;
calculating the execution time of the candidate offloading decision based on the expected intensity;
and in response to determining that the execution time of a candidate offloading decision is less than or equal to a second threshold, determining that candidate offloading decision to be the offloading decision.
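The final threshold check above can be sketched as a small filter over candidate decisions. The ordering of candidates by decreasing expected intensity is an assumption of this sketch, and all names (`select_decision`, `candidate_times`) are hypothetical.

```python
def select_decision(candidates, exec_time, threshold):
    """Return the first candidate offloading decision whose estimated
    execution time does not exceed `threshold` (the 'second threshold'
    in the text); return None if no candidate qualifies.

    Candidates are assumed to be ordered by decreasing expected
    intensity so the most promising decision is checked first.
    """
    for decision in candidates:
        if exec_time(decision) <= threshold:
            return decision
    return None

# Illustrative per-decision execution-time estimates (seconds).
candidate_times = {"d1": 1.2, "d2": 0.6, "d3": 0.3}
```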
Optionally, the calculating of the expected intensity of the candidate offloading decision based on the bias degree and the matching degree is expressed as:

$$P_{r,j}=\frac{\left(\eta_{r,j}\right)^{\alpha}\left(\tau_{r,j}\right)^{\beta}}{\sum_{j'\in J}\left(\eta_{r,j'}\right)^{\alpha}\left(\tau_{r,j'}\right)^{\beta}}$$

where $P_{r,j}$ represents the expected intensity; $\eta_{r,j}$ represents the bias degree of the routing and resource allocation scheme toward edge server $j$; $\tau_{r,j}$ represents the matching degree of edge server $j$; and $\alpha$ and $\beta$, both greater than or equal to 0, respectively represent the preference factor of the routing and resource allocation scheme and the matching degree factor of the edge server.
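A minimal numeric sketch of the expected-intensity rule follows. Normalizing the weighted product over all candidate servers is an assumption of this sketch (the original formula survives only as an image placeholder); the function name is hypothetical.

```python
def expected_intensity(eta, tau, alpha, beta):
    """Compute P_{r,j} for each candidate edge server j from the bias
    degree eta[j] and matching degree tau[j], raised to the nonnegative
    exponents alpha and beta. Normalization over all candidates is an
    assumption of this sketch."""
    scores = {j: (eta[j] ** alpha) * (tau[j] ** beta) for j in eta}
    total = sum(scores.values())
    return {j: s / total for j, s in scores.items()}
```

With equal matching degrees, a server with twice the bias degree receives twice the expected intensity.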
As a second aspect of the present disclosure, there is also provided a task offloading apparatus including:
a task acquisition module configured to acquire a task to be offloaded and determine a transmission network for the task to be offloaded;
a load calculation module configured to calculate the loads of a plurality of transmission links in the transmission network and the loads of the edge servers in the transmission network;
an offloading decision determination module configured to determine an offloading decision for the task to be offloaded based on the loads of the transmission links and the loads of the edge servers;
and a task offloading module configured to offload the task to be offloaded according to the offloading decision.
As a third aspect of the disclosure, the disclosure further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor executes the program to implement the task unloading method provided by the disclosure.
As a fourth aspect of the disclosure, the disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the above.
As described above, the present disclosure jointly considers the computation load of the edge servers and the traffic load of the transmission links in the metro network. While ensuring the successful execution of tasks, it actively balances the loads of the edge servers and the links, thereby fully utilizing the computing resources of the edge servers and the spectrum resources of the links in the network, ensuring the long-term stability of the system, and overcoming the defects of the prior art.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure or related art, the drawings required for the embodiments or related art description will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to those of ordinary skill in the art.
Fig. 1 is a schematic diagram of a transmission network structure according to an embodiment of the disclosure.
Fig. 2A is a schematic diagram of a task offloading method according to an embodiment of the disclosure.
Fig. 2B is a schematic diagram of a method for calculating a transmission link load according to an embodiment of the present disclosure.
Fig. 2C is a schematic diagram of a method for calculating an edge server load according to an embodiment of the present disclosure.
Fig. 2D is a schematic diagram of a method for determining an offloading decision according to an embodiment of the disclosure.
Fig. 3 is a schematic structural diagram of a task unloading device according to an embodiment of the disclosure.
Fig. 4 is a schematic structural diagram of an electronic device according to a task offloading method provided in an embodiment of the present disclosure.
Detailed Description
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which the present disclosure pertains. The terms "first," "second," and the like, as used in embodiments of the present disclosure, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
In the prior art, during task offloading, a task must first be sent to the local edge server closest to the terminal device and then forwarded by that local edge server to a suitable target edge server. While the task is being transmitted to the target edge server, the uneven traffic distribution in the network may leave the transmission path without available spectrum resources, because some links are overloaded and their spectrum occupancy is high. Whether the completion time of a task exceeds its execution delay threshold because an edge server is overloaded, or no spectrum resources are available because the traffic load on a link of the transmission path is too heavy, the task fails.
In order to solve the problems, the disclosure provides a task unloading method, a task unloading device, electronic equipment and a storage medium. Through the method, firstly, the load capacity of the transmission link and the load capacity of the edge server in the transmission network are calculated, then an unloading decision is generated based on the load capacity of the transmission link and the load capacity of the edge server, and finally, the task is unloaded through the unloading decision.
Having described the basic principles of the present disclosure, various non-limiting embodiments of the present disclosure are specifically described below.
Fig. 1 is a schematic diagram of a transmission network structure according to an embodiment of the disclosure.
In some embodiments, as shown in FIG. 1, the transport network may include edge servers 1-4, transmission links A-B, B-C, C-D, D-E, E-F and F-A, and optical nodes A-F. The workload of edge server 1 may be 0.96f, that of edge server 2 may be 0.6f, that of edge server 3 may be 0.48f, and that of edge server 4 may be 0.92f. Each transmission link has the same length of 20 km, and each edge server has the same computing power, denoted f = 3×10^10 cycles/s. The number of frequency slots on each transmission link is I = 12, and the transmission rate of one frequency slot using BPSK is B = 6.25 Gbit/s. The maximum number of frequency slots allowed to be allocated to one task to be offloaded is F_max = 4, and the maximum and minimum computing resources allowed to be allocated to a task to be offloaded are C_max = 0.5f and C_min = 0.1f, respectively.
Fig. 2A is a schematic diagram of a task offloading method according to an embodiment of the disclosure.
The task offloading method shown in fig. 2A further includes the steps of:
step S10: and acquiring the task to be offloaded and a transmission network of the task to be offloaded.
In some embodiments, a task to be offloaded on a terminal device may be acquired first, and a transmission network related to the task is formed through the acquired task to be offloaded. The task to be offloaded can then be transmitted to the edge server over the transmission network for offloading.
In some embodiments, after the task to be offloaded is acquired, it can be modeled as R(s, c, d, t_arr, T_max), where s represents the access optical node of the task to be offloaded, s ∈ V_A; c is the computing resource required to execute the task; d represents the size of the task's input data; t_arr represents the moment the task arrives at the metropolitan optical network; and T_max represents the execution delay threshold of the task.
In some embodiments, the transmission network formed for the task to be offloaded can be expressed as G(V, E, J), where G represents the network; V represents the set of optical nodes in the transport network, {v | v ∈ V}; E represents the set of transmission links, {(a, b) | (a, b) ∈ E, a ≠ b}, with |E| the number of transmission links; J represents the set of edge servers, {j | j ∈ J}, with |J| the number of edge servers; y_j represents the optical node where edge server j is located, y_j ∈ V; and V_A represents the set of task-reachable optical nodes.
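The network model G(V, E, J) above can be represented by a small data structure. The sketch below mirrors the six-node ring of FIG. 1; the exact placement of the four edge servers on nodes is an assumption, as are the container names.

```python
from dataclasses import dataclass, field

@dataclass
class TransportNetwork:
    """G(V, E, J): optical nodes V, undirected transmission links E
    (pairs (a, b) with a != b), and the mapping y_j from each edge
    server j to its host optical node."""
    nodes: set = field(default_factory=set)          # V
    links: set = field(default_factory=set)          # E
    server_node: dict = field(default_factory=dict)  # y_j for each j in J

# Six-node ring loosely modeled on FIG. 1; server placement is illustrative.
net = TransportNetwork(
    nodes=set("ABCDEF"),
    links={("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("F", "A")},
    server_node={1: "A", 2: "C", 3: "D", 4: "F"},
)
```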
In some embodiments, the tasks to be offloaded from the user terminal are first aggregated to the optical nodes and then transferred to the corresponding target edge server (i.e., the edge server most suitable for the task) based on the offloading decision. Edge computing servers are typically deployed on these nodes, i.e., y_j ∈ V_A. However, the deployment cost of an edge server is large, so edge servers cannot be deployed on all task-reachable nodes in an actual network. Based on this, the invention further improves the transport network such that communication within the transport network is carried by an elastic optical network. In an elastic optical network, the spectrum resources on a link (a, b) are divided into I frequency slots, and the spectrum resources of link (a, b) may be denoted Γ = {1, 2, ..., I}.
In some alternative embodiments, the task-reachable nodes are node A, node C and node F, and an edge server in the transport network is assumed to be overloaded when its load exceeds its computing power. The computing power of each edge server is the same, f = 3×10^10 cycles/s; the number of frequency slots on each link is I = 12, and the transmission rate of one frequency slot using BPSK is B = 6.25 Gbit/s; the maximum number of frequency slots allowed to be allocated to one task is F_max = 4; and the maximum and minimum computing resources allowed to be allocated to a task are C_max = 0.5f and C_min = 0.1f, respectively. The modulation formats available for transmission include BPSK, QPSK and 8QAM, whose supported transmission rates and maximum transmission distances are shown in Table 3 of the present disclosure. The task to be offloaded [A, 0.6×10^10 cycles, 2.25 Gbits, 15, 0.8 s] accesses the network through optical node A at time t = 15, at which time the task queues of edge servers 1, 2, 3 and 4 are as shown in Table 1.
Table 1. Task queues and task execution end times of the four edge servers at time t = 15
In some embodiments, when the task queue is as shown in table 1 above, the remaining occupancy time for each slot on each link in the network at time t=15 may be as shown in table 2 below.
Table 2. Remaining occupation time (unit: s) of each frequency slot on each link at time t = 15
Step S20: and calculating the load capacity of a plurality of transmission links in the transmission network and the load capacity of an edge server in the transmission network.
In some embodiments, after we acquire the tasks to be offloaded from the user terminal, these tasks to be offloaded may be sent onto the transport network. Furthermore, according to the specific requirements of the task, the unloading decision can be determined for the task by comprehensively considering the load capacity of the edge server in the transmission network and the load capacity of the transmission link. It will be appreciated that during actual operation there may be a large number of edge servers on the transport network, while there may be several transport links per task to be offloaded to the target edge server (the edge server that is the most suitable for this task).
Fig. 2B is a schematic diagram of a method for calculating a transmission link load according to an embodiment of the present disclosure.
In some embodiments, as shown in fig. 2B, a further development of calculating the load amounts of several transmission links in the transmission network in step S20 specifically includes the following steps:
s201: and acquiring idle frequency spectrum blocks in the transmission links, and calculating the weight of the idle frequency spectrum blocks.
In some embodiments, the occupancy state of frequency slot i on transmission link (a, b) at time t is denoted $s_{(a,b)}^{t,i}$ (i ∈ Γ), where $s_{(a,b)}^{t,i}=1$ indicates that frequency slot i is occupied at time t and $s_{(a,b)}^{t,i}=0$ indicates that frequency slot i is available at time t. The weight δ(n) of a free spectrum block containing n consecutive available frequency slots can then be expressed as:

$$\delta(n)=\sum_{u=1}^{\min(n,F_{\max})}(n-u+1)$$

where δ(n) represents the weight of a free spectrum block containing n consecutive available frequency slots, and n−u+1 represents the number of ways u consecutive frequency slots can be placed within an available spectrum block containing n frequency slots. The more available frequency slots a free spectrum block contains, the higher the importance of the free spectrum block.
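The block-weight rule can be sketched numerically. The closed form below, a sum of the n−u+1 placement counts with u capped at F_max, is a reconstruction from the verbal definition (the original formula survives only as an image placeholder), so treat it as an assumption.

```python
F_MAX = 4  # maximum frequency slots per task, from the example settings

def block_weight(n, f_max=F_MAX):
    """Weight delta(n) of a free spectrum block of n consecutive slots:
    sum over request sizes u of the n-u+1 possible placements, with u
    capped at f_max (the cap is an assumption of this sketch)."""
    return sum(n - u + 1 for u in range(1, min(n, f_max) + 1))
```

Under this reconstruction, larger blocks always weigh more, matching the text's remark that blocks with more available slots are more important.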
In some alternative embodiments, when a task to be offloaded arrives at the transport network, the importance of the available spectrum blocks containing different numbers of frequency slots is first calculated according to the above formula; the results are shown in Table 3.
Table 3. Weights of available spectrum blocks containing different numbers of frequency slots
S202: and calculating the weights of the transmission links according to the number of the frequency slots of the idle frequency spectrum block and the weights of the idle frequency spectrum block.
In some embodiments, after the weights of the free spectrum blocks are obtained, the weight of each transmission link can be updated from the weights of the free spectrum blocks and the number of available frequency slots in each block. The weight of a transmission link can be expressed as:

$$W_{(a,b)}^{t}=\sum_{h\in\Omega_{(a,b)}^{t}}\delta(z_{h})$$

where $W_{(a,b)}^{t}$ represents the weight of transmission link $(a,b)$ at time $t$; $H$ and $z_{h}$ respectively represent the number of available spectrum blocks contained in link $(a,b)$ at time $t$ and the number of frequency slots contained in the $h$-th available spectrum block; and $\Omega_{(a,b)}^{t}=\{1,2,\dots,H\}$ represents the set of available spectrum blocks of link $(a,b)$ at time $t$.
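The link-weight computation can be sketched from a slot-occupancy bitmap: find the maximal runs of free slots, then sum a per-block weight over them. Summing δ over blocks is an assumption reconstructed from the description; the function names are hypothetical.

```python
def free_blocks(slots):
    """Lengths z_h of the maximal runs of free frequency slots in a
    link's occupancy bitmap (0 = free, 1 = occupied)."""
    blocks, run = [], 0
    for s in slots:
        if s == 0:
            run += 1
        elif run:
            blocks.append(run)
            run = 0
    if run:
        blocks.append(run)
    return blocks

def link_weight(slots, weight_fn):
    """W_(a,b)^t: sum of block weights delta(z_h) over the free blocks
    of the link (the sum-of-delta form is an assumption)."""
    return sum(weight_fn(z) for z in free_blocks(slots))
```

For example, the bitmap `[1,0,0,1,0,0,0,1]` has two free blocks of sizes 2 and 3.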
S203: and calculating the load variance of the transmission links according to the weights of the transmission links.
In some embodiments, after the weights of the transmission links are obtained, the load of each transmission link can be calculated from its weight. Equivalently, the spectrum occupancy of each link at a given moment is calculated, and the imbalance of the loads of the transmission links across the whole network is then derived from these occupancies.
In some embodiments, minimizing the weighted sum of the load imbalance of the transmission links and the load imbalance of the edge servers is the objective of the present invention. However, since each task has many candidate offloading decisions and each candidate decision yields a different weighted sum, traversing all candidate decisions to select the smallest weighted sum would be highly complex. The present disclosure therefore determines the final offloading decision by updating the weights of the transmission links to obtain candidate offloading decisions and then calculating the bias degree and matching degree. Designed in this way, the method of the present disclosure does not need to traverse all candidate offloading decisions. Furthermore, instead of computing the weighted sum of the two imbalances, the problem is converted into calculating the bias degree and matching degree, which is easier to compute.
Specifically, calculating the load variance of the transmission links can be expressed as:

$$\sigma_{E}^{2}(r,j)=\frac{1}{|E|}\sum_{(a,b)\in E}\left(U_{(a,b)}^{AT_{r,j}}-\bar{U}^{AT_{r,j}}\right)^{2}$$

where $\sigma_{E}^{2}(r,j)$ represents the load variance of the plurality of transmission links after server $j$ is selected; $U_{(a,b)}^{AT_{r,j}}$ represents the spectrum occupancy of link $(a,b)$ at time $AT_{r,j}$, obtained from the frequency-slot occupancy of the link at that time; and $\bar{U}^{AT_{r,j}}$ represents the average spectrum occupancy of the links at time $AT_{r,j}$, obtained as the arithmetic mean of the per-link spectrum occupancies.
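The variance of the per-link occupancies is a plain population variance, as a short sketch shows; a smaller value means more balanced spectrum usage across links.

```python
def load_variance(occupancy):
    """Population variance of per-link spectrum occupancies: the mean
    occupancy is subtracted from each link's occupancy, the squared
    deviations are averaged over all links."""
    mean = sum(occupancy) / len(occupancy)
    return sum((u - mean) ** 2 for u in occupancy) / len(occupancy)
```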
In some embodiments, the smaller the load variance of the transmission link, the more balanced the spectrum usage situation among the transmission links, and thus the more balanced the load capacity of the transmission links.
In some embodiments, the load of each transmission link can be calculated by the method described above. The load of the edge servers can then be calculated, the final offloading decision determined from the loads of the transmission links and the loads of the edge servers, and the task to be offloaded offloaded according to that decision.
Fig. 2C is a schematic diagram of a method for calculating an edge server load according to an embodiment of the present disclosure.
In some embodiments, as shown in fig. 2C, a further development of calculating the load of the edge server in the transport network in step S20 specifically includes the following steps:
S204: and obtaining the resource processing capacity of the candidate edge server based on the unloading capacity of the candidate edge server and the task to be unloaded received by the candidate edge server.
In some embodiments, after we calculate the load of the transmission link, we can also calculate the load of the edge server, and then determine the final offloading decision by the load of the transmission link and the load of the edge server.
In some embodiments, when calculating the load of an edge server, the resource throughput allocated to the task must be calculated first, which can be expressed as:

$$c_{r,j}=\min\left\{C_{\max},\,\max\left\{C_{\min},\,C_{j}-L_{j}^{t}\right\}\right\}$$

where $c_{r,j}$ represents the resource throughput allocated to task $r$ by edge server $j$; $C_{\max}$ represents the maximum resource throughput an edge server can allocate to one task; $C_{\min}$ represents the minimum resource throughput an edge server can allocate to one task; $L_{j}^{t}$ represents the load of edge server $j$ at time $t$; and $C_{j}$ represents the computing power of edge server $j$.

In the above formula, $L_{j}^{t}$ can be expressed as:

$$L_{j}^{t}=\sum_{r'\in Q_{j}^{t}}c_{r',j}$$

where $Q_{j}^{t}$ represents the task queue of edge server $j$ at time $t$.
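The per-task allocation bounded by C_min and C_max can be sketched as a clamp on the server's remaining capacity. The clamping form is an assumption reconstructed from the listed bounds, not the verbatim formula; all names are hypothetical.

```python
C_MAX_FRAC, C_MIN_FRAC = 0.5, 0.1  # per-task bounds from the example (0.5f, 0.1f)

def server_load(queue):
    """L_j^t: sum of the resources currently allocated to queued tasks."""
    return sum(queue)

def allocate(capacity, queue):
    """Resources c_{r,j} granted to a new task: the server's remaining
    capacity, clamped to [C_min, C_max]. The clamping rule is an
    assumption of this sketch."""
    c_max, c_min = C_MAX_FRAC * capacity, C_MIN_FRAC * capacity
    remaining = capacity - server_load(queue)
    return min(c_max, max(c_min, remaining))
```

A nearly idle server grants the C_max cap; a nearly full one grants the C_min floor.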
S205: and calculating an arithmetic average value of the resource processing amount to obtain the average resource processing amount of the candidate edge server.
In some embodiments, once the resource throughput of each server is obtained, the average resource throughput of the edge servers can be calculated from it. Specifically, an edge server may process multiple tasks simultaneously, and when a task is completed the computing resources allocated to it are released. $C_{j}-L_{j}^{t}$ represents the remaining computing resources of edge server $j$ at time $t$. The average resource throughput of the edge servers at time $AT_{r,j}$ can be expressed as:

$$\bar{L}^{AT_{r,j}}=\frac{1}{|J|}\sum_{j'\in J}L_{j'}^{AT_{r,j}}$$

The smaller the deviation of each server's load from this average, the more balanced the load distribution among the edge servers. It will be appreciated that the average resource throughput of the edge servers is calculated by taking the arithmetic mean of their resource throughputs.
S206: and calculating the load variance of the candidate edge server based on the resource processing amount and the average resource processing amount.
In some embodiments, after the resource throughput and the average resource throughput of the edge servers are obtained by the above calculation methods, the load variance of the candidate edge servers can be calculated from them, thereby obtaining the load of the edge servers. The load of an edge server may also be understood as the amount of its computing resources that is occupied.

In some embodiments, the load variance of the edge servers may be expressed as:

$$\sigma_{J}^{2}(r,j)=\frac{1}{|J|}\sum_{j'\in J}\left(L_{j'}^{AT_{r,j}}-\bar{L}^{AT_{r,j}}\right)^{2}$$

where $\sigma_{J}^{2}(r,j)$ represents the load variance of the candidate edge servers after server $j$ is selected; $c_{r,j}$ represents the computing resources allocated by candidate edge server $j$ to task $r$ to be offloaded; $\bar{L}^{AT_{r,j}}$ represents the average resource throughput of the candidate edge servers; $AT_{r,j}$ is the time at which task $r$ reaches candidate edge server $j$; and $L_{j}^{t}$ represents the load of candidate edge server $j$ at time $t$.
In some embodiments, after the weights of the transmission links are derived, the transmission paths are computed and the candidate edge servers are determined; the variance of the spectrum occupancy of the links and the load variance of the edge servers are calculated only to describe the problem. It will be appreciated that calculating these two variances is not the focus of the disclosure: the focus lies in updating the weights of the transmission links and in calculating the matching degree, the bias degree and the expected intensity. In a specific technical implementation, the two variances above typically need not be calculated.
As described above, in the present disclosure we have obtained the load amount of the transmission links and the load amount of the edge servers, respectively. Next, we will further determine a final offloading decision from the load amounts of the transmission links and of the edge servers, and offload the task to be offloaded according to the determined offloading decision.
Step S30: and determining the unloading decision of the task to be unloaded based on the load amounts of the transmission links and the load amounts of the edge servers.
In some embodiments, after we have determined the load amounts of the transmission link and the edge server through the above calculation, we can further determine the offloading decision of the task to be offloaded according to the determined load amounts and the processing resources required by the task to be offloaded.
In some embodiments, we can first determine the candidate offloading decision of the task to be offloaded by the load amount of the transmission link, and further select the candidate offloading decision by the load amount of the edge server to determine the final offloading decision.
In some embodiments, we can calculate the expected value of the task to be offloaded by the load of the transmission link and the load of the edge server, and determine the final offloading decision by the expected value of the task to be offloaded.
Fig. 2D is a schematic diagram of a method for determining an offloading decision according to an embodiment of the disclosure.
In some embodiments, as shown in fig. 2D, step S30 specifically includes the following steps:
S301: and calculating the weight of the path from the task to be offloaded, through the several transmission links, to each edge server in the transmission network, and setting the path with the minimum weight as a candidate path.
In some embodiments, after we obtain the load amounts of the transmission links, we can calculate the weight of the path from the task to be offloaded, through the several transmission links, to each edge server in the transmission network, and set the path with the minimum weight as a candidate path. It can be understood that a link whose load amount is lower than a first threshold may be set as a candidate link, where the first threshold may be set manually based on measurement of the task to be offloaded and on experience. A load amount lower than the first threshold indicates that the remaining capacity of the transmission link meets the load the task to be offloaded would place on it, so the task to be offloaded can be transmitted over that link.
In some embodiments, we can also use Dijkstra's algorithm to calculate, for the task r, the lowest-weight path p_{r,j} from its access optical node s to each edge server j (s ≠ y_j). The weight PW_{r,j} of path p_{r,j} is calculated as:

$$PW_{r,j}=\sum_{(a,b)\in p_{r,j}} W_{a,b}(t)$$

wherein W_{a,b}(t) is the weight of transmission link (a, b) at time t.
In some embodiments, if the access optical node s of the task r to be offloaded is the optical node where edge server j is deployed, no routing and resource allocation is required, and the transmission path weight is taken to be a small value Δ. The lower the weight of a transmission path, the higher the probability that the transmission path contains available spectrum, and the higher the probability that the task to be offloaded is transmitted successfully over that path.
In some embodiments, Table 4 shows the lowest-weight path from optical node A to each edge server. The lowest-weight transmission path from optical node A to edge server 2 is: links A-F, F-B, B-C, with a transmission path weight of 0.033 and a transmission path length of 60 km. The lowest-weight transmission path from optical node A to edge server 3 is: links A-F, F-B, B-C, C-E, with a transmission path weight of 0.065 and a transmission path length of 80 km. The lowest-weight transmission path from optical node A to edge server 4 is: link A-F, with a transmission path weight of 0.011 and a transmission path length of 20 km.
Table 4 Lowest-weight path from optical node A to each edge server

| Target edge server | Transmission path | Path weight | Path length |
| --- | --- | --- | --- |
| Edge server 2 | A-F, F-B, B-C | 0.033 | 60 km |
| Edge server 3 | A-F, F-B, B-C, C-E | 0.065 | 80 km |
| Edge server 4 | A-F | 0.011 | 20 km |
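Step S301's lowest-weight path search can be sketched with a standard Dijkstra over the link weights (a minimal Python sketch; the topology and weight values below are illustrative stand-ins chosen to resemble Table 4, not the patent's data):

```python
import heapq

def lowest_weight_path(links, src, dst):
    """Dijkstra over weighted undirected links; returns (total weight, path)."""
    graph = {}
    for (a, b), w in links.items():
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return float("inf"), []

# Illustrative link weights (not the patent's actual values)
links = {("A", "F"): 0.011, ("F", "B"): 0.012, ("B", "C"): 0.010, ("C", "E"): 0.032}
w, p = lowest_weight_path(links, "A", "C")
```

With these assumed weights the path from A toward the server at node C is A-F-B-C with total weight 0.033, matching the shape of the Table 4 entries.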
In some embodiments, when we find a transmission path that meets the requirements of the task to be offloaded, we can set this transmission path as a candidate path. It will be appreciated that the candidate path need not be unique, and any path meeting the above requirements may be counted as a final candidate path.
S302: and selecting the candidate paths based on a preset spectrum rule to obtain candidate paths conforming to the spectrum rule.
In some embodiments, certain spectrum rules need to be satisfied when the task to be offloaded is transmitted to the edge server.
In some embodiments, the spectrum rules may be expressed as: 1. the spectrum contiguity constraint, i.e., the frequency slots allocated to a task on one link should be contiguous; 2. the spectrum consistency constraint, i.e., on each link of path p_{r,j}, the frequency slots allocated to the task should be the same; 3. the spectrum non-overlap constraint, i.e., the spectrum blocks allocated to different tasks cannot overlap, and a frequency slot can be occupied by only one task at a time.
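The three constraints can be checked mechanically. A minimal Python sketch (function and argument names are assumptions for illustration, not from the patent):

```python
def satisfies_spectrum_rules(path_links, assignment, occupied):
    """path_links: links of the candidate path; assignment: link -> slots allocated
    to the task; occupied: link -> set of slots already taken by other tasks."""
    slot_sets = [tuple(sorted(assignment[link])) for link in path_links]
    # 1. contiguity: the slots on each link are consecutive
    contiguous = all(s[-1] - s[0] + 1 == len(s) for s in slot_sets)
    # 2. consistency: the same slots on every link of the path
    consistent = len(set(slot_sets)) == 1
    # 3. non-overlap: no chosen slot is already occupied on any link
    free = all(not (set(assignment[link]) & occupied.get(link, set()))
               for link in path_links)
    return contiguous and consistent and free

path = [("A", "F"), ("F", "B")]
ok = satisfies_spectrum_rules(path, {("A", "F"): [4, 5, 6], ("F", "B"): [4, 5, 6]},
                              {("F", "B"): {0, 1}})
bad = satisfies_spectrum_rules(path, {("A", "F"): [4, 6], ("F", "B"): [4, 6]}, {})
```

The second call fails rule 1 because slots 4 and 6 are not contiguous.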
In some embodiments, when we filter the candidate paths according to the above rule, we can obtain candidate paths meeting the spectrum rule.
S303: and carrying out modulation processing on the candidate paths conforming to the spectrum rules to obtain the candidate paths subjected to modulation processing.
In some embodiments, when we get candidate paths that meet the spectrum rule, we can modulate these candidate paths.
In some embodiments, the length of candidate path p_{r,j} cannot exceed the maximum transmission distance of the selected modulation format. When a candidate path satisfies the maximum transmission distance limit of a modulation format, the modulation format with the highest modulation level is selected preferentially.
In some embodiments, the present invention considers three modulation formats altogether: binary phase shift keying (Binary Phase Shift Keying, BPSK) (m=1), quadrature phase shift keying (Quadrature Phase Shift Keying, QPSK) (m=2), and 8-ary quadrature amplitude modulation (8-Quadrature Amplitude Modulation, 8QAM) (m=3). The higher the modulation level, the higher the transmission rate supported by one frequency slot, and the shorter the supported transmission distance. It is understood that only the three modulation formats above are shown in this disclosure, but this does not mean that the modulation processing in this disclosure can be performed only with these three formats.
In some embodiments, the following table 5 shows a specific description of the three modulation methods:
table 5 capacity and transmission distance parameters of modulation format
[Table 5: per-slot capacity and maximum transmission distance for BPSK, QPSK, and 8QAM; table image not reproduced]
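Selecting the highest modulation level whose reach still covers the candidate path, as described above, can be sketched as follows (the reach values are placeholder assumptions, since Table 5's numbers are not reproduced here):

```python
# (format name, modulation level m, max transmission distance in km) —
# the distances are illustrative assumptions, not the patent's Table 5 values
FORMATS = [("8QAM", 3, 1000), ("QPSK", 2, 2000), ("BPSK", 1, 4000)]

def select_modulation(path_length_km):
    """Prefer the highest modulation level whose reach covers the path length."""
    for name, m, reach in FORMATS:  # listed from highest to lowest level
        if path_length_km <= reach:
            return name, m
    return None  # path exceeds even the lowest format's reach

fmt = select_modulation(60)
```

For a short 60 km path the highest-level format is feasible, while a very long path falls back to BPSK or fails.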
In some embodiments, when we modulate candidate paths that meet the spectrum rule, we can get the candidate paths for modulation processing.
S304: and setting an edge server positioned on the candidate path of the modulation processing as a candidate edge server.
In some embodiments, when we get the candidate path for the modulation process, we can select the candidate edge server based on the candidate path for the modulation process. In particular, we can consider an edge server on a candidate path of the modulation process as a candidate edge server.
S305: and generating a candidate unloading decision based on the candidate path of the modulation process and the candidate edge server.
In some embodiments, after we have determined the candidate paths and candidate edge servers for the modulation process, we can further determine candidate offloading decisions based on the candidate paths and candidate edge servers for the modulation process.
In some embodiments, we can determine candidate offloading decisions by the following method. Specifically, on a modulation-processed candidate path p_{r,j} there may be multiple spectrum blocks meeting the above spectrum resource allocation constraints, and we can first select spectrum block resources on the modulation-processed candidate path.
In some embodiments, the method of selecting a spectrum block is as follows: 1. allocate as many frequency slots as possible to the task r to be offloaded; denoting the number of allocated slots by n_{r,j}, it should satisfy 1 ≤ n_{r,j} ≤ F_max, where F_max represents the maximum number of slots a task is allowed to be allocated; 2. when several spectrum blocks containing different numbers of frequency slots exist, the block containing the largest number of slots is selected preferentially, which can reduce the spectrum fragmentation rate to a certain extent; 3. when several spectrum blocks containing the same number of frequency slots exist, First-Fit is used to determine the block. Let J_c be the candidate edge server set of task r; if the modulation-processed candidate path p_{r,j} has available spectrum resources, or s = y_j, edge server j is added to J_c, and this offloading decision is a candidate offloading decision for task r.
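The three selection rules above can be sketched as follows (a hypothetical helper; representing blocks as (start_slot, size) pairs in index order is an assumption):

```python
def choose_spectrum_block(blocks, f_max):
    """blocks: list of (start_slot, size) available blocks along the candidate
    path, in spectrum-index order. Returns (start_slot, n) for the allocation."""
    if not blocks:
        return None
    best_size = max(size for _, size in blocks)
    # rule 2: prefer the block with the most slots; rule 3: First-Fit on ties
    start, size = next(b for b in blocks if b[1] == best_size)
    # rule 1: allocate as many frequency slots as possible, capped at F_max
    return start, min(size, f_max)

alloc = choose_spectrum_block([(0, 3), (10, 6), (20, 6)], f_max=4)
```

Here the two largest blocks tie at 6 slots, First-Fit picks the one starting at slot 10, and the allocation is capped at 4 slots.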
In some embodiments, when we get the candidate offloading decisions, we can also calculate the transmission time of the task to be offloaded under each candidate offloading decision. For a candidate offloading decision whose candidate edge server is j, if s ≠ y_j, the transmission time of the task r to be offloaded from its access optical node s along path p_{r,j} to candidate edge server j is: TD_{r,j} = d / (m_{r,j} × n_{r,j} × B), where m_{r,j} represents the modulation format used on the modulation-processed candidate path p_{r,j}, n_{r,j} represents the number of frequency slots allocated on the modulation-processed candidate path p_{r,j}, and B represents the transmission rate of one frequency slot in the BPSK modulation format. If s = y_j, then TD_{r,j} = 0. The time at which task r reaches candidate edge server j is: AT_{r,j} = t_arr + TD_{r,j}.
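The transmission-time and arrival-time formulas translate directly (a minimal sketch; the bit volume and per-slot rate below are illustrative values, not the patent's parameters):

```python
def transmission_time(d_bits, m, n_slots, b_rate, colocated=False):
    """TD_r,j = d / (m * n * B); zero when the access node hosts the server."""
    return 0.0 if colocated else d_bits / (m * n_slots * b_rate)

def arrival_time(t_arr, td):
    """AT_r,j = t_arr + TD_r,j."""
    return t_arr + td

# Illustrative: 1 Mbit task, QPSK (m=2), 4 slots, assumed per-slot BPSK rate
td = transmission_time(d_bits=1_000_000, m=2, n_slots=4, b_rate=12_500_000)
at = arrival_time(10.0, td)
```

A task whose access node is the server's own optical node (`colocated=True`) arrives with zero transmission delay, as stated above.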
In some alternative embodiments, the candidate offloading decisions may also be as shown in table 6.
Table 6 candidate offloading decisions
[Table 6: candidate offloading decisions; table image not reproduced]
S306: and calculating the bias degree of the several transmission links based on the modulation-processed candidate paths, and calculating the matching degree of the candidate edge servers based on the load amounts of the candidate edge servers.
In some embodiments, we can further calculate the bias degree of the modulation-processed candidate paths and the matching degree of the candidate edge servers, and calculate the expected intensity of the task to be offloaded under each candidate offloading decision from the bias degree of the modulation-processed candidate paths and the matching degree of the candidate edge servers, so as to determine the final offloading decision.
In some embodiments, the bias degree of a modulation-processed candidate path may be expressed as:

$$\tau_{r,j}=\frac{m_{r,j}}{PW_{r,j}}$$

wherein τ_{r,j} represents the degree to which the task r to be offloaded prefers the corresponding routing and resource allocation scheme, i.e., the bias degree of the modulation-processed candidate path. The larger τ_{r,j} is, the more available frequency-slot blocks the modulation-processed candidate path p_{r,j} contains, the more frequency slots those blocks contain, the higher the modulation level used, and the greater the probability that the task r to be offloaded selects this routing and resource allocation scheme.
In some embodiments, the matching degree of a candidate edge server may be expressed as:

$$\eta_{r,j}=\frac{\vec{c}_{r,j}\cdot\vec{C}_{j}(t)}{\|\vec{c}_{r,j}\|\,\|\vec{C}_{j}(t)\|}$$

wherein η_{r,j} represents the matching degree between the task r to be offloaded and candidate edge server j. This calculation can also be regarded as the cosine similarity between the computing resources c_{r,j} required by the task r to be offloaded and the remaining computing resources of candidate edge server j. The larger η_{r,j} is, the higher the matching degree between the two, and the higher the probability that task r selects edge server j.
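Treating the required and remaining computing resources as vectors (e.g. over CPU and memory dimensions — an assumption, since the dimensions are not enumerated here), the cosine-similarity matching degree can be sketched as:

```python
import math

def matching_degree(required, remaining):
    """Cosine similarity between the resource-demand vector of task r and the
    remaining-resource vector of candidate edge server j."""
    dot = sum(a * b for a, b in zip(required, remaining))
    norm = (math.sqrt(sum(a * a for a in required))
            * math.sqrt(sum(b * b for b in remaining)))
    return dot / norm if norm else 0.0

eta_close = matching_degree([2.0, 1.0], [4.0, 2.0])  # same direction of demand
eta_far = matching_degree([2.0, 1.0], [1.0, 8.0])    # mismatched resource profile
```

A server whose remaining resources are proportional to the task's demand scores the maximal matching degree of 1.0.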
S307: and calculating the expected intensity of the candidate offloading decision based on the bias degree and the matching degree.
In some embodiments, the expected intensity of the task to be offloaded under each candidate offloading decision is calculated, and the edge servers in J_c are sorted in descending order of expected intensity.
In some embodiments, calculating the expected intensity of the task to be offloaded under each candidate offloading decision may be expressed as:

$$P_{r,j}=\frac{(\eta_{r,j})^{\alpha}\,(\tau_{r,j})^{\beta}}{\sum_{j'\in J_{c}}(\eta_{r,j'})^{\alpha}\,(\tau_{r,j'})^{\beta}}$$

wherein P_{r,j} represents the expected intensity, η_{r,j} represents the matching degree of the edge server, τ_{r,j} represents the bias degree of the routing and resource allocation scheme, and α and β respectively represent the matching degree factor of the edge server and the bias degree factor of the routing and resource allocation scheme, both ≥ 0.
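A sketch of the normalized expected-intensity computation (the normalization over J_c is an ant-colony-style reading of the formula; the server indices and η/τ values are illustrative):

```python
def expected_intensities(candidates, alpha, beta):
    """candidates: dict server j -> (eta, tau). Returns P_r,j per candidate,
    normalized so the intensities sum to 1."""
    scores = {j: (eta ** alpha) * (tau ** beta)
              for j, (eta, tau) in candidates.items()}
    total = sum(scores.values())
    return {j: s / total for j, s in scores.items()}

P = expected_intensities({2: (0.9, 0.5), 3: (0.6, 0.8), 4: (0.7, 0.2)},
                         alpha=1.0, beta=1.0)
ranked = sorted(P, key=P.get, reverse=True)  # descending order over J_c
```

The descending `ranked` list is the order in which candidate servers are tried in step S309.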
In some alternative embodiments, the expected intensities of the aforementioned candidate offloading decisions may also be as shown in Table 7.
Table 7 Expected intensity of the task to be offloaded under each candidate offloading decision
[Table 7: table image not reproduced]
S308: based on the expected intensity, an execution time of the candidate offloading decision is calculated.
In some embodiments, after we get the expected intensity of the candidate offloading decision through the above calculation, we can calculate the execution time of the candidate offloading decision through the expected intensity of the candidate offloading decision. Specifically, we can calculate the start time and the end time of the task to be offloaded respectively, and obtain the execution time of the task to be offloaded through the difference value of the two.
In some embodiments, we can also model the execution time of the task r to be offloaded. The completion time of a task to be offloaded includes the transmission time, the waiting time, and the computation time. The transmission time comprises two parts: the time to transmit the task to be offloaded from the terminal device to the edge server, and the time for the edge server to return the computation result to the terminal device. The present invention focuses on determining the offloading decision for a task according to the loads of the edge servers and links after the task reaches the network, so the transmission time from the terminal device into the network is not considered. In addition, in practical application scenarios, taking face recognition as an example, the input data includes mobile system settings, program code, and input parameters and is large in volume, while the face recognition result, i.e., the computation result, is far smaller than the input data; we therefore neglect the time to return the computation result from the edge server to the terminal device. The transmission time of a task thus only includes the time to transmit the task from its access optical node to the target edge server.
In some embodiments, for the candidate offloading decision whose candidate edge server is j, the transmission time TD_{r,j} of the task r to be offloaded to candidate edge server j is obtained by the above method; the waiting time WD_{r,j} of task r on candidate edge server j can then be calculated.
In some embodiments, we can use ST_{r,j} and ET_{r,j} to denote the time at which candidate edge server j starts processing task r and the time at which processing completes, respectively. The task r to be offloaded reaches candidate edge server j at time AT_{r,j}; its waiting time WD_{r,j} on candidate edge server j is

$$WD_{r,j}=\begin{cases}0, & C_{j}^{rem}(AT_{r,j})\ge c_{r,j}\\ ET_{r',j}-AT_{r,j}, & \text{otherwise}\end{cases}$$

wherein C_j^rem(AT_{r,j}) denotes the remaining computing resources of candidate edge server j at time AT_{r,j}. If the remaining computing resources meet the computing resource requirement to be allocated to task r, the task does not need to wait and WD_{r,j} = 0; if they do not, the task to be offloaded must wait until, after other tasks finish executing, the remaining computing resources of candidate edge server j meet the requirement of task r. Suppose that after task r' to be offloaded finishes (i.e., at time ET_{r',j}) the remaining computing resources of candidate edge server j meet the computing resource requirement of task r; the waiting time WD_{r,j} of task r is then the execution completion time of task r' minus the time at which task r reaches candidate edge server j. The time for candidate edge server j to process task r is: PD_{r,j} = c / c_{r,j}. Thus the time at which candidate edge server j starts processing task r and the time at which processing completes can be calculated as ST_{r,j} = AT_{r,j} + WD_{r,j} and ET_{r,j} = ST_{r,j} + PD_{r,j}. Through the above analysis, the execution time of the task r to be offloaded processed by edge server j can be expressed as: CD_{r,j} = TD_{r,j} + WD_{r,j} + PD_{r,j}.
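The completion-time model CD = TD + WD + PD can be sketched as follows (a hypothetical helper assuming a single blocking predecessor task r', as in the description above):

```python
def completion_time(td, at, c_task, c_alloc, remaining_at_arrival, et_prev):
    """CD_r,j = TD + WD + PD.  WD is 0 when server j's remaining resources at
    AT_r,j cover the allocation c_alloc; otherwise the task waits until the
    blocking task finishes at et_prev.  PD = c_task / c_alloc."""
    wd = 0.0 if remaining_at_arrival >= c_alloc else max(0.0, et_prev - at)
    pd = c_task / c_alloc
    st = at + wd   # time server j starts processing task r
    et = st + pd   # time processing completes
    return td + wd + pd, st, et

# Illustrative numbers: task arrives at t=1.5 but must wait for a task ending at t=3.0
cd, st, et = completion_time(td=0.5, at=1.5, c_task=8.0, c_alloc=2.0,
                             remaining_at_arrival=1.0, et_prev=3.0)
```

Here the waiting time is 1.5, the processing time 4.0, and the total completion time 6.0 time units.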
S309: and in response to determining that the execution time of the candidate offloading decision is less than or equal to a second threshold, determining that the candidate offloading decision is an offloading decision.
In some embodiments, we can also select the candidate edge servers j in the sorted J_c one by one and calculate the completion time CD_{r,j} of the task r to be offloaded under the corresponding candidate offloading decision. If the completion time meets the second threshold on the execution of task r, i.e., CD_{r,j} ≤ T_max, the loop ends, and the corresponding candidate offloading decision is the final offloading decision of the task r to be offloaded; otherwise, the next candidate edge server in J_c is selected and the calculation is repeated. If all candidate offloading decisions are traversed without finding one that meets the second threshold on task execution, the task fails to execute.
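Step S309's selection loop over the descending-intensity candidate list can be sketched as follows (illustrative data; returning `None` stands for task execution failure):

```python
def pick_offloading_decision(candidates, t_max):
    """candidates: list of (expected_intensity, server_j, completion_time),
    mirroring J_c.  Try servers in descending expected intensity; accept the
    first whose completion time meets the deadline, else report failure."""
    for _, j, cd in sorted(candidates, key=lambda c: c[0], reverse=True):
        if cd <= t_max:
            return j
    return None  # no candidate offloading decision meets the second threshold

chosen = pick_offloading_decision(
    [(0.48, 3, 9.0), (0.45, 2, 6.0), (0.14, 4, 5.0)], t_max=7.0)
```

The highest-intensity server (j=3) misses the deadline, so the loop falls through to the next candidate (j=2), whose completion time of 6.0 satisfies CD ≤ T_max.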
In summary, in the present disclosure, the load amounts of the transmission links and of the edge servers are first calculated from the weights of the transmission links and the resource throughputs of the edge servers. A final offloading decision is then determined from those load amounts. Finally, the task to be offloaded is offloaded according to the final offloading decision.
Based on the same technical concept, the disclosure also provides a task unloading device corresponding to the method of any embodiment, and the task unloading method of any embodiment can be realized by using the task unloading device provided by the disclosure.
Fig. 3 is a schematic structural diagram of a task unloading device according to an embodiment of the disclosure.
The task unloading device shown in fig. 3 includes:
the system comprises a task acquisition module 100, a load amount calculation module 200, an unloading decision determination module 300 and a task unloading module 400;
wherein the task acquisition module is configured to: and acquiring a task to be offloaded and determining a transmission network of the task to be offloaded.
The load calculation module is configured to: calculating the load capacity of a plurality of transmission links in the transmission network and the load capacity of an edge server in the transmission network; the method specifically comprises the following steps:
Acquiring idle frequency spectrum blocks in the transmission links, and calculating the weight of the idle frequency spectrum blocks;
calculating the weights of the transmission links according to the number of the frequency slots of the idle frequency spectrum block and the weights of the idle frequency spectrum block;
calculating the load variance of the transmission links according to the weights of the transmission links;
wherein the calculating of the load variance of the several transmission links according to the weights of the several transmission links is expressed as:

$$\sigma_{L,j}^{2}=\frac{1}{|E|}\sum_{(a,b)\in E}\Big(F_{a,b}(AT_{r,j})-\bar{F}(AT_{r,j})\Big)^{2}$$

wherein σ_{L,j}^2 represents the load variance of the several transmission links after selecting server j; F_{a,b}(AT_{r,j}) is the spectrum occupancy of link (a, b) at time AT_{r,j}, obtained from the spectrum occupancies of the several transmission links at time AT_{r,j}; and F̄(AT_{r,j}) is the average spectrum occupancy of the links (a, b) at time AT_{r,j}, obtained by arithmetic mean of the spectrum occupancies;
and acquiring the idle spectrum blocks in the several transmission links and calculating the weights of the idle spectrum blocks, expressed as:

$$\delta(n)=\sum_{u=1}^{\min(n,\,F_{max})}(n-u+1)$$

wherein δ(n) represents the weight of a spectrum block containing n consecutive available frequency slots, and n − u + 1 represents the number of selectable placements when u consecutive frequency slots are occupied within an available spectrum block containing n frequency slots;
the weights of the several transmission links are calculated according to the number of frequency slots of the idle spectrum blocks and the weights of the idle spectrum blocks, expressed as:

$$W_{a,b}(t)=\frac{1}{\sum_{h=1}^{H}\delta(z_{h})},\qquad z_{h}\in\Omega_{a,b}(t)$$

wherein W_{a,b}(t) represents the weight of transmission link (a, b) at time t; H and z_h respectively represent the number of available spectrum blocks contained in the transmission link (a, b) at time t and the number of frequency slots contained in the h-th available spectrum block; and Ω_{a,b}(t) represents the set of available spectrum blocks of link (a, b) at time t;
Obtaining the resource processing amount of the candidate edge server based on the unloading capacity of the candidate edge server and the task to be unloaded received by the candidate edge server;
calculating an arithmetic average value of the resource processing amount to obtain the average resource processing amount of the candidate edge server;
calculating a load variance of the candidate edge server based on the resource throughput and the average resource throughput;
wherein the load variance of the candidate edge servers is calculated as:

$$\sigma_{S,j}^{2}=\frac{1}{|J|}\sum_{j'\in J}\Big(L_{j'}(AT_{r,j})+c_{r,j}\cdot\mathbb{1}[j'=j]-\bar{L}(AT_{r,j})\Big)^{2}$$

wherein σ_{S,j}^2 represents the load variance of the candidate edge servers after selecting server j; c_{r,j} represents the amount of computing resources allocated by candidate edge server j to the task r to be offloaded; L̄(AT_{r,j}) represents the average resource throughput of the candidate edge servers; AT_{r,j} is the time at which task r reaches candidate edge server j; and L_j(t) represents the load of candidate edge server j at time t;
the offloading decision-making module is configured to: determining an unloading decision of the task to be unloaded based on the load amounts of the transmission links and the load amounts of the edge servers; the method specifically comprises the following steps:
Calculating the weight of the task to be offloaded from the plurality of transmission links to each edge server in the transmission network, and setting the path with the minimum weight as a candidate path;
selecting the candidate paths based on a preset spectrum rule to obtain candidate paths conforming to the spectrum rule;
modulating the candidate paths conforming to the spectrum rules to obtain modulated candidate paths;
setting an edge server positioned on the candidate path of the modulation processing as a candidate edge server;
generating a candidate offloading decision based on the candidate path of the modulation process and the candidate edge server;
calculating the deflection of the transmission links based on the candidate paths of the modulation processing, and calculating the matching degree of the candidate edge servers based on the load capacity of the candidate edge servers;
calculating an expected intensity of the candidate offloading decision based on the bias level and the matching level;
calculating an execution time of the candidate offloading decision based on the expected intensity;
responsive to determining that the execution time of the candidate offloading decision is less than or equal to a second threshold, determining that the candidate offloading decision is an offloading decision;
Wherein the calculating of the expected intensity of the candidate offloading decision based on the bias degree and the matching degree is expressed as:

$$P_{r,j}=\frac{(\eta_{r,j})^{\alpha}\,(\tau_{r,j})^{\beta}}{\sum_{j'\in J_{c}}(\eta_{r,j'})^{\alpha}\,(\tau_{r,j'})^{\beta}}$$

wherein P_{r,j} represents the expected intensity, η_{r,j} represents the matching degree of the edge server, τ_{r,j} represents the bias degree of the routing and resource allocation scheme, and α and β respectively represent the matching degree factor of the edge server and the bias degree factor of the routing and resource allocation scheme, both ≥ 0.
The task offloading module is configured to: and unloading the task to be unloaded according to the unloading decision.
Based on the same technical concept, the present disclosure also provides an electronic device corresponding to the method of any embodiment, which includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements the task offloading method of any embodiment.
Fig. 4 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit ), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory ), static storage device, dynamic storage device, or the like. Memory 1020 may store an operating system and other application programs, and when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in memory 1020 and executed by processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding task offloading method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same technical concept, corresponding to any of the above embodiments, the present disclosure further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the task offloading method as described in any of the above embodiments.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to perform the task offloading method of any one of the foregoing embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined under the idea of the present disclosure, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not provided in details for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present disclosure. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present disclosure, and this also accounts for the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present disclosure are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The disclosed embodiments are intended to embrace all such alternatives, modifications, and variations which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements, and the like that are within the spirit and principles of the embodiments of the disclosure are intended to be included within the scope of the disclosure.

Claims (10)

1. A task offloading method, comprising:
acquiring a task to be offloaded and determining a transmission network of the task to be offloaded;
calculating the loads of a plurality of transmission links in the transmission network and the load of an edge server in the transmission network;
determining an offloading decision for the task to be offloaded based on the loads of the plurality of transmission links and the load of the edge server;
and offloading the task to be offloaded according to the offloading decision.
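Outside the claim language, the claim-1 flow can be sketched as follows. All names, values, and the combination rule (sum of link-side and server-side load) are illustrative assumptions, not the patent's exact method:

```python
def choose_offload_target(link_load, server_load):
    """Pick the candidate edge server whose combined link load and
    server load is lowest (hypothetical decision rule)."""
    return min(link_load, key=lambda j: link_load[j] + server_load[j])

decision = choose_offload_target(
    {"s1": 0.4, "s2": 0.1},   # link-side load toward each candidate server
    {"s1": 0.2, "s2": 0.3},   # server-side load of each candidate server
)
print(decision)
```

Here `s2` wins because its combined load (0.4) is lower than `s1`'s (0.6).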
2. The method of claim 1, wherein calculating the loads of the plurality of transmission links in the transmission network comprises:
acquiring idle spectrum blocks in the plurality of transmission links, and calculating weights of the idle spectrum blocks;
calculating weights of the plurality of transmission links according to the number of frequency slots in the idle spectrum blocks and the weights of the idle spectrum blocks;
calculating the load variance of the plurality of transmission links according to the weights of the plurality of transmission links;
wherein the load variance of the plurality of transmission links is calculated as:
σ²_{link,j} = (1/|E|) Σ_{(a,b)∈E} ( F_{(a,b)}(AT_{r,j}) − F̄(AT_{r,j}) )²
where σ²_{link,j} denotes the load variance of the plurality of transmission links after selecting server j; F_{(a,b)}(AT_{r,j}) denotes the spectrum occupancy of link (a, b) at time AT_{r,j}, obtained from the spectrum occupancies of the plurality of transmission links at time AT_{r,j}; F̄(AT_{r,j}) denotes the average spectrum occupancy of the links (a, b) at time AT_{r,j}, obtained as the arithmetic mean of the spectrum occupancies; and E denotes the set of transmission links.
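The claim-2 load variance (the arithmetic mean of squared deviations of each link's spectrum occupancy from the average occupancy) can be computed directly; the function name and sample occupancies below are illustrative:

```python
def link_load_variance(occupancy):
    """Load variance of the transmission links: mean squared deviation
    of each link's spectrum occupancy from the average occupancy at the
    task arrival time."""
    mean = sum(occupancy) / len(occupancy)
    return sum((f - mean) ** 2 for f in occupancy) / len(occupancy)

# Three links with occupancies 0.2, 0.4, 0.6 -> variance = 0.08 / 3
print(link_load_variance([0.2, 0.4, 0.6]))
```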
3. The method of claim 2, wherein acquiring the idle spectrum blocks in the plurality of transmission links and calculating the weights of the idle spectrum blocks is expressed as:
[formula FDA0003932879940000015]
where δ(n) denotes the weight of a spectrum block containing n consecutive available frequency slots, and n − u + 1 denotes the number of selectable positions for occupying u consecutive frequency slots in an available spectrum block containing n frequency slots.
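The counting term defined in claim 3 is concrete even though the full weight formula is shown only as an image: a demand of u consecutive slots fits into an n-slot block in n − u + 1 positions. A minimal sketch (function name is illustrative):

```python
def placements(n, u):
    """Number of selectable positions for u consecutive frequency slots
    inside an available spectrum block of n consecutive slots:
    n - u + 1, or 0 when the demand does not fit."""
    return n - u + 1 if u <= n else 0

print(placements(8, 3))  # a 3-slot demand fits at 6 positions in an 8-slot block
```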
4. The method of claim 2, wherein calculating the weights of the plurality of transmission links according to the number of frequency slots in the idle spectrum blocks and the weights of the idle spectrum blocks is expressed as:
[formula FDA0003932879940000021]
where [formula FDA0003932879940000022] denotes the weight of transmission link (a, b) at time t; H and z_h respectively denote the number of available spectrum blocks contained in transmission link (a, b) at time t and the number of frequency slots contained in the h-th available spectrum block; and [formula FDA0003932879940000023] denotes the set of available spectrum blocks of link (a, b) at time t [formula FDA0003932879940000024].
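Since claim 4's exact formula appears only as an image, one plausible reading is that the link weight aggregates the per-block weights δ(z_h) over the link's H available spectrum blocks. The aggregation, the sample δ, and all names below are assumptions:

```python
def link_weight(block_sizes, delta):
    """Hypothetical claim-4 aggregation: sum the weight delta(z_h) of
    each of the link's available spectrum blocks, where block_sizes
    lists the slot count z_h of each block."""
    return sum(delta(z_h) for z_h in block_sizes)

# With the stand-in weight delta(n) = n (one unit per free slot), a link
# whose free spectrum splits into blocks of 3, 1, and 4 slots weighs 8.
print(link_weight([3, 1, 4], lambda n: n))
```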
5. The method of claim 1, wherein calculating the load of the edge server in the transmission network comprises:
obtaining the resource throughput of the candidate edge servers based on the offloading capability of the candidate edge servers and the tasks to be offloaded received by the candidate edge servers;
calculating the arithmetic mean of the resource throughputs to obtain the average resource throughput of the candidate edge servers;
calculating the load variance of the candidate edge servers based on the resource throughputs and the average resource throughput;
wherein the load variance of the candidate edge servers is calculated as:
σ²_{server,j} = (1/|S|) Σ_{j'∈S} ( L_{j'}(AT_{r,j}) − L̄(AT_{r,j}) )²
where σ²_{server,j} denotes the load variance of the candidate edge servers after selecting server j; c_{r,j} denotes the amount of computing resources allocated by candidate edge server j to the task to be offloaded r; L̄(AT_{r,j}) denotes the average resource throughput of the candidate edge servers; AT_{r,j} denotes the time at which task r arrives at candidate edge server j; L_{j'}(t) denotes the load of candidate edge server j' at time t; and S denotes the set of candidate edge servers.
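A hedged sketch of the claim-5 quantities: each server's load at time t is taken as the sum of computing resources c allocated to tasks that have arrived and not yet finished, and the variance is the mean squared deviation from the average. The task-tuple layout `(arrive, finish, c)` and all names are assumptions:

```python
def server_loads(allocations, t):
    """Resource throughput of each candidate server at time t: sum of
    the computing resources c of tasks active at t (hypothetical model)."""
    return {j: sum(c for (arrive, finish, c) in tasks if arrive <= t < finish)
            for j, tasks in allocations.items()}

def load_variance(loads):
    """Mean squared deviation of the per-server loads from their average."""
    avg = sum(loads.values()) / len(loads)
    return sum((v - avg) ** 2 for v in loads.values()) / len(loads)

loads = server_loads({"j1": [(0, 10, 4)], "j2": [(0, 5, 2), (2, 8, 2)]}, t=6)
print(loads, load_variance(loads))
```

At t = 6 only one of `j2`'s tasks is still active, so the loads are 4 and 2 and the variance is 1.0.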
6. The method of claim 1, wherein determining the offloading decision for the task to be offloaded based on the loads of the plurality of transmission links and the load of the edge server comprises:
calculating the weights of offloading the task over the plurality of transmission links to each edge server in the transmission network, and setting the path with the minimum weight as a candidate path;
screening the candidate paths against a preset spectrum rule to obtain candidate paths conforming to the spectrum rule;
performing modulation processing on the candidate paths conforming to the spectrum rule to obtain modulated candidate paths;
setting the edge servers located on the modulated candidate paths as candidate edge servers;
generating candidate offloading decisions based on the modulated candidate paths and the candidate edge servers;
calculating the bias of the plurality of transmission links based on the modulated candidate paths, and calculating the matching degree of the candidate edge servers based on the loads of the candidate edge servers;
calculating the expected intensity of the candidate offloading decisions based on the bias and the matching degree;
calculating the execution time of the candidate offloading decisions based on the expected intensity;
and in response to determining that the execution time of a candidate offloading decision is less than or equal to a second threshold, determining that candidate offloading decision to be the offloading decision.
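The tail of the claim-6 pipeline (threshold check plus selection by expected intensity) can be sketched as follows. The `intensity` field stands in for the claim-7 combination of bias and matching degree, and all names and values are illustrative:

```python
def pick_offload_decision(candidates, second_threshold):
    """Keep candidate offloading decisions whose execution time is within
    the second threshold and return the one with the highest expected
    intensity; None when no candidate is feasible."""
    feasible = [c for c in candidates if c["exec_time"] <= second_threshold]
    return max(feasible, key=lambda c: c["intensity"], default=None)

cands = [
    {"path": "p1", "intensity": 0.45, "exec_time": 3},
    {"path": "p2", "intensity": 0.54, "exec_time": 2},
    {"path": "p3", "intensity": 0.90, "exec_time": 10},  # exceeds the threshold
]
print(pick_offload_decision(cands, second_threshold=5)["path"])
```

With the threshold at 5, `p3` is excluded despite its high intensity, and `p2` is chosen.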
7. The method of claim 6, wherein calculating the expected intensity of the candidate offloading decisions based on the bias and the matching degree is expressed as:
[formula FDA0003932879940000031]
where P_{r,j} denotes the expected intensity; η_{r,j} denotes the bias of the edge server; τ_{r,j} denotes the matching degree of the edge server; and α and β respectively denote the matching degree factor of the edge server and the preference degree factor of the routing and resource allocation scheme, both being greater than or equal to 0.
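Claim 7's formula is shown only as an image; a common way to combine a matching degree τ and a bias η under exponents α and β is multiplicatively, normalized over the candidates. Both the multiplicative form and the normalization are assumptions here, as are all names:

```python
def expected_intensities(tau, eta, alpha, beta):
    """Hypothetical reading of the claim-7 expected intensity: combine
    each candidate server's matching degree tau and bias eta under the
    exponents alpha and beta, normalized over all candidates."""
    raw = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in tau}
    total = sum(raw.values())
    return {j: v / total for j, v in raw.items()}

print(expected_intensities({"a": 1.0, "b": 3.0}, {"a": 1.0, "b": 1.0}, 1, 1))
```

With equal bias and a 3:1 matching ratio, server `b` receives three quarters of the probability mass.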
8. A task offloading apparatus, comprising:
a task acquisition module configured to acquire a task to be offloaded and determine a transmission network of the task to be offloaded;
a load calculation module configured to calculate the loads of a plurality of transmission links in the transmission network and the load of an edge server in the transmission network;
an offloading decision determination module configured to determine an offloading decision for the task to be offloaded based on the loads of the plurality of transmission links and the load of the edge server;
and a task offloading module configured to offload the task to be offloaded according to the offloading decision.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202211394511.7A 2022-11-08 2022-11-08 Task unloading method and device, electronic equipment and storage medium Pending CN116032935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211394511.7A CN116032935A (en) 2022-11-08 2022-11-08 Task unloading method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211394511.7A CN116032935A (en) 2022-11-08 2022-11-08 Task unloading method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116032935A true CN116032935A (en) 2023-04-28

Family

ID=86071418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211394511.7A Pending CN116032935A (en) 2022-11-08 2022-11-08 Task unloading method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116032935A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116668447A (en) * 2023-08-01 2023-08-29 贵州省广播电视信息网络股份有限公司 Edge computing task unloading method based on improved self-learning weight
CN116668447B (en) * 2023-08-01 2023-10-20 贵州省广播电视信息网络股份有限公司 Edge computing task unloading method based on improved self-learning weight

Similar Documents

Publication Publication Date Title
CN108121512B (en) Edge computing service caching method, system and device and readable storage medium
US20140089510A1 (en) Joint allocation of cloud and network resources in a distributed cloud system
US9426075B2 (en) Method and system to represent the impact of load variation on service outage over multiple links
CN112491741B (en) Virtual network resource allocation method and device and electronic equipment
CN116032935A (en) Task unloading method and device, electronic equipment and storage medium
CN112738220A (en) Management method, load balancing method and load balancing device of server cluster
CN109379445A (en) A kind of sending method and device of PUSH message
CN106454948B (en) Wireless network virtualizes interior joint redistribution method
CN114268371B (en) Quantum channel resource allocation method and device and electronic equipment
CN111857942A (en) Deep learning environment building method and device and server
CN110650209A (en) Method and device for realizing load balance
CN114007231A (en) Heterogeneous unmanned aerial vehicle data unloading method and device, electronic equipment and storage medium
US9648098B2 (en) Predictive peer determination for peer-to-peer digital content download
JP2018013994A (en) Program, computer and information processing method
CN109995818A (en) A kind of method and device of server load balancing
CN110278241A (en) A kind of registration request processing method and processing device
CN113422726B (en) Service chain deployment method and device, storage medium and electronic equipment
CN110188975A (en) A kind of resource acquiring method and device
CN111427682B (en) Task allocation method, system, device and equipment
CN113656046A (en) Application deployment method and device
CN113642638A (en) Capacity adjustment method, model training method, device, equipment and storage medium
US20210185119A1 (en) A Decentralized Load-Balancing Method for Resource/Traffic Distribution
WO2013027332A1 (en) Information processing device, information processing method, and program
CN115086226B (en) Anonymous link establishment method and system in anonymous network
CN111191891A (en) Evaluation accuracy calibration method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination