CN115185660A - Unloading and buffer storage method and system for MAR task in multi-access edge calculation - Google Patents
- Publication number
- CN115185660A CN115185660A CN202210795740.3A CN202210795740A CN115185660A CN 115185660 A CN115185660 A CN 115185660A CN 202210795740 A CN202210795740 A CN 202210795740A CN 115185660 A CN115185660 A CN 115185660A
- Authority
- CN
- China
- Prior art keywords
- cache
- task
- mar
- unloading
- subtask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/484—Precedence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/485—Resource constraint
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
Abstract
The invention relates to the technical field of task offloading and caching for Mobile Augmented Reality (MAR) device tasks, and provides an offloading and caching method and system for MAR tasks in multi-access edge computing. The method comprises the following steps: dividing the MAR task into a plurality of subtasks; queuing all subtasks by priority according to their latest execution times to obtain a priority queue; placing and initializing the cache files in the cache set to obtain a cache placement strategy; initializing the offloading point and execution point of each subtask according to the priority queue to obtain a task offloading strategy; and optimizing the initialized cache placement strategy and task offloading strategy with a multi-objective artificial bee colony algorithm. The speed of task offloading and cache placement is improved.
Description
Technical Field
The invention belongs to the technical field of task offloading and caching for Mobile Augmented Reality (MAR) device tasks, and in particular relates to an offloading and caching method and system for MAR tasks in multi-access edge computing.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
A number of new video applications are emerging, such as Augmented Reality (AR) and Virtual Reality (VR). These applications can deliver a better experience thanks to their immersive nature, and are therefore applied in many fields such as the Internet of Things, education, and telemedicine. In a central cloud computing system, tasks with heavy computational loads can be offloaded to the central cloud for processing. However, central cloud systems are not sufficient to support these computation-heavy, low-latency requirements, because the central cloud is far from the user, its computational load is high, and quality of service cannot be guaranteed. As a major evolution in 5G communication systems, Mobile Edge Computing (MEC), with its powerful storage and computing capabilities, offers a promising direction for solving these problems. MEC sinks central cloud computing services to the edge of the core network, closer to where user data is generated. When users request content or offload tasks, they can access edge servers directly rather than the remote central cloud. This greatly reduces the backhaul load, the number of users served per link, and the transmission link distance, so high-bandwidth, low-latency network services can be provided to users.
Since MAR mobile devices are close to the edge servers, reduced end-to-end delay is a major advantage over cloud-based architectures. AR functionality depends less on infrastructure links because the servers are deployed at the edge, and the edge server ensures more reliable communication than a cloud-based architecture. Content caching is possible due to the localized nature of the information, and it reduces end-to-end delay and congestion in the infrastructure network beyond the edge servers. User data is not transmitted over a public network, which ensures more secure communication. Because it supports computation offloading, the edge-based architecture can also support lightweight, power-efficient MAR devices such as wearables.
However, the current offloading and caching procedures for MAR tasks in multi-access edge computing still have problems. In particular, mobile edge caching mainly uses the storage resources provided by mobile edge servers, which can reduce network data traffic and thus shorten the content access delay experienced by users. Some studies assume a single edge server operating independently, whose caching capacity is often very limited, degrading the performance of the wireless mobile network in many ways. In addition, designing a caching scheme separately for each server is not only cumbersome but also fails to make full use of the caching resources. To address these challenges, cooperative caching schemes have been proposed to improve network performance; although cooperative caching improves cache utilization over non-cooperative caching, it still has architectural disadvantages. In research on task offloading, some works consider the queuing state of the application buffer and idle processors and propose a one-dimensional search algorithm to minimize task execution delay; others apply queuing theory to model edge computing nodes with the goal of minimizing average task offloading time. These works offload all tasks during the offloading process, ignoring the execution capability of the local device. Moreover, the above research considers only single-user task offloading, while multi-user task offloading must account for competition over shared resources, and its scheduling is more complex.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides an offloading and cache placement method and system for MAR tasks in multi-access edge computing, in which a multi-objective artificial bee colony algorithm optimizes the cache placement strategy and the task offloading strategy, improving the speed of task offloading and cache placement.
In order to achieve the purpose, the invention adopts the following technical scheme:
a first aspect of the present invention provides an offload and cache placement method for MAR tasks in multiple access edge computing, comprising:
dividing the MAR task into a plurality of subtasks;
queuing all subtasks by priority according to their latest execution times to obtain a priority queue;
placing and initializing the cache files in the cache set to obtain a cache placement strategy;
initializing the offloading point and execution point of each subtask according to the priority queue to obtain a task offloading strategy;
and optimizing the cache placement strategy and task offloading strategy generated by the initialization with a multi-objective artificial bee colony algorithm.
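The five steps can be sketched as a small pipeline. The following is a minimal illustration, not the patent's implementation; all class, function and field names (`Subtask`, `build_priority_queue`, `offload_and_cache`) are hypothetical, and the placement/optimization stages are stubs standing in for the detailed procedures described later:

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    latest_exec_time: float   # used as the queueing priority (step 2)
    local_only: bool = False  # True for video acquisition and rendering

def build_priority_queue(subtasks):
    """Step 2: queue subtasks by ascending latest execution time
    (an earlier latest execution time means higher priority)."""
    return sorted(subtasks, key=lambda z: z.latest_exec_time)

def offload_and_cache(subtasks):
    """Skeleton of the five-step method."""
    queue = build_priority_queue(subtasks)           # steps 1-2
    cache_plan = {}                                  # step 3: AHP-based init (stub)
    offload_plan = {z.name: "local" if z.local_only else "edge"
                    for z in queue}                  # step 4: init offloading (stub)
    # step 5: a multi-objective ABC search would refine
    # (cache_plan, offload_plan) against hit rate and service delay
    return queue, cache_plan, offload_plan

tasks = [Subtask("render", 5.0, local_only=True),
         Subtask("tracker", 2.0), Subtask("mapper", 3.0)]
queue, cache_plan, offload_plan = offload_and_cache(tasks)
```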
Further, the subtasks are of two types: the first type can only be executed on the local device; the second type can be executed either on the local device or on an edge server, and its computation results can be cached on the edge server.
Further, whether the predecessor task of a subtask is completed is judged according to the priority queue;
if the predecessor task is completed and the subtask belongs to the first type, the subtask is executed directly on the local device;
if the predecessor task is completed and the subtask belongs to the second type, the mobile device queries the edge server for the cache file required by the subtask; if it exists, the cache file is transmitted directly, as the result, to the execution point of the next subtask; otherwise, the mobile device offloads the subtask to the nearest edge server for execution.
Further, the objectives of the multi-objective artificial bee colony algorithm are: maximizing the hit rate and minimizing the total service delay.
Further, the constraint of maximizing hit rate includes:
the space occupied by the cache files cached on each edge server cannot exceed the cache space of the edge server itself.
Further, the constraint of minimizing the total service delay includes:
each subtask completion time cannot exceed the maximum completion time;
the bandwidth resource allocated to each downlink cannot exceed the bandwidth resource of the mobile device downlink;
the bandwidth resources allocated to each uplink cannot exceed the bandwidth resources of the mobile device uplink;
the computational resources allocated to each subtask cannot exceed the total computational resources of the mobile device;
the computational resources allocated to each subtask cannot exceed the total computational resources of the edge server.
Furthermore, an analytic hierarchy process (AHP) is adopted when the cache files are placed and initialized;
the criteria layer of the analytic hierarchy process considers two factors: the ratio of the size of each cache file to the cache space of the edge server on which it is placed, and the execution time of the subtask when it is offloaded to a given edge server while the cache file is placed on that server.
A second aspect of the present invention provides an offload and cache placement system for MAR tasks in multi-access edge computing, comprising:
a task partitioning module configured to: divide the MAR task into a plurality of subtasks;
a priority queuing module configured to: queue all subtasks by priority according to their latest execution times to obtain a priority queue;
a placement initialization module configured to: place and initialize the cache files in the cache set to obtain a cache placement strategy;
a task offload initialization module configured to: initialize the offloading point and execution point of each subtask according to the priority queue to obtain a task offloading strategy;
an optimization module configured to: optimize the cache placement strategy and task offloading strategy generated by the initialization with a multi-objective artificial bee colony algorithm.
A third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the steps of the offloading and cache placement method for MAR tasks in multi-access edge computing as described above.
A fourth aspect of the present invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the offloading and cache placement method for MAR tasks in multi-access edge computing as described above are implemented.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an unloading and cache placement method of MAR tasks in multi-access edge calculation, which designs two indexes of hit rate and service delay to evaluate the task unloading and cache placement on an edge server, and proposes the problem of task unloading and cache placement by taking the maximum hit rate and the minimum delay as targets under the constraint of edge server calculation resources and cache space; aiming at the problems of task unloading and cache placement, a multi-target artificial bee colony algorithm is adopted; introducing Pareto optimal relation in the optimization process to find an optimal solution; extensive evaluation proves that the algorithm has better performance and the speed of task unloading and cache placement is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a flowchart of the offloading and cache placement of MAR tasks in multi-access edge computing according to the first embodiment of the present invention;
fig. 2 is a diagram illustrating MAR task partitioning according to the first embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Interpretation of terms:
artificial bee colony Algorithm (ABC): an intelligent optimization algorithm for simulating a honey collection process of bees. In the algorithm, the entire colony contains three different bees, namely, employment bees (employer bes), observation bees (onhook bes) and scout bees (scout bes). In implementation, each food source is encoded into a "solution" and given a fitness measure. First, a new solution is generated by each hiring bee based on its corresponding solution (food source) and its "neighbors". If the fitness of the new solution is higher than the original solution, the old solution is replaced by the new solution, otherwise the new solution is discarded, and then each observer bee selects a food source by roulette based on the food source information (i.e., fitness) provided by the hiring bee, and tries to improve the food source using a similar mechanism as the hiring bee. Finally, if a food source is not improved in consecutive "limit" iterations, its corresponding hiring bee is converted into a scout bee. Scout bees will generate a feasible random solution based on the search space. The above process is repeated until the termination condition is satisfied. In the ABC algorithm, the iteration number "limit" is a very important control parameter, which has a large influence on the performance of the algorithm.
Example one
The embodiment provides an offloading and cache placement method for MAR tasks in multi-access edge computing.
As shown in fig. 2, a MAR task can be divided, according to its working characteristics, into five subtasks with dependency relationships: video source acquisition (Video source) z_1, tracking (Tracker) z_2, mapping (Mapper) z_3, object detection (Object Detection) z_4, and rendering (Renderer) z_5. Video source acquisition and rendering can only be executed on the local device; the other three subtasks can be executed on the local device or on an edge server, and their computation results can be cached on the edge server.
The five subtasks are defined as the set Z = {z_1, z_2, z_3, z_4, z_5}. Video source acquisition and rendering must be executed on the local device; the other three subtasks, which can be executed locally or offloaded to an edge server, are defined as the set Z' = {z_2, z_3, z_4}. In addition, the cache files of the tracking, mapping and object detection subtasks are cached on the edge servers.
In the MEC system the edge servers work cooperatively. Assume that I cache files in the cache file library are to be placed and that the MAR subtasks generated in the MEC system are to be offloaded. By optimizing the cache placement strategy and the task offloading strategy, the highest hit rate and the lowest total service delay are pursued.
Therefore, following the workflow of the MAR mobile devices and the edge (MEC) system, objective functions are designed, the task offloading and cache placement problem is modeled, and the problem is formulated as maximizing the hit rate and minimizing the total service delay.
To better evaluate whether the cache placement is reasonable, the first objective function P_0 maximizes the hit rate:

P_0: max P_hit, s.t. C_1: c_{n,r} ∈ {0, 1}, C_2: Σ_{r ∈ R_n} c_{n,r} · s_r ≤ S_n for every edge server n,

where P_hit is the hit rate in the MEC system, i.e. the number of hit cache requests divided by the total number of requests U. A cache request of subtask z of mobile device d counts as a hit when it hits on the first-connected edge server n, when it hits after being migrated from edge server n to edge server k, or when the cache r corresponding to the subtask hits in the local device d (the corresponding indicator equals 1 on a hit and 0 otherwise). c_{n,r} indicates whether cache r is cached on edge server n; R_n denotes the set of caches placed on edge server n; s_r denotes the size of cache r; S_n denotes the size of the cache space of edge server n. Constraint C_1 is the caching decision, i.e. whether a cache is placed on an edge server; constraint C_2 states that the space occupied by the cache files placed on each edge server n cannot exceed that server's own cache space.
The second objective function P_1 minimizes the total service delay of the MAR tasks in the system:

P_1: min Σ_d Σ_z T_{d,z},

where T_{d,z} is the completion time (service delay) of MAR subtask z of local device d, Z is the set of all subtasks, and Z' is the set of subtasks that can be executed on an edge server. The problem is subject to five constraints. Constraint one: the completion time of each subtask cannot exceed its maximum allowed execution time T^max_{d,z}. Constraints two and three: the bandwidth resources allocated to each downlink/uplink u of a local device d cannot exceed that device's total downlink/uplink bandwidth resources. Constraints four and five: the computational resources allocated to each subtask z cannot exceed the total computational resources of the mobile device d and of the edge server n, respectively, where f_d denotes the computational resources of local device d and f_n those of edge server n.
The service delay T_{d,z} is computed from the following components. The transmit powers of mobile device d and edge server n, the channel gains of the uplink and the downlink, and the noise power γ² determine the uplink and downlink transmission rates via the allocated uplink/downlink bandwidth resources. The uplink transmission delay of subtask z is its input data size σ_{d,z} divided by the uplink rate; the downlink transmission delay is the size σ'_{d,z} of the result produced by subtask z divided by the downlink rate. σ'_{d,z-1} denotes the result size of the predecessor of subtask z, and dividing it by the inter-server transmission rate ε gives the transmission delay of the predecessor's result between edge servers n and k, i.e. the propagation delay of subtask z between those servers; the delay of transferring the cache request of subtask z and the number U of transferred mobile device requests also enter the total. The local execution delay is the subtask's workload divided by the computational resources the local device allocates to it; the execution delay on an edge server n is the workload divided by the computational resources that server allocates to subtask z (either the server with the shortest transmission delay or another edge server). The decision variables are: h_{d,z-1} = n means the execution point of the predecessor of subtask z is edge server n, and h_{d,z-1} = 0 means it is local; θ_{d,z} = 0 means subtask z is executed locally without offloading, and θ_{d,z} = n means its offloading point is edge server n; a further binary variable records the migration decision, i.e. whether the task is migrated to another edge server. T_{d,z} finally denotes the resulting execution delay of the task.
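The transmission and execution delay components described above follow the standard Shannon-capacity form. A hedged numeric illustration (symbol and parameter names chosen for readability, values invented for the example, not taken from the patent):

```python
import math

def shannon_rate(bandwidth_hz, tx_power_w, channel_gain, noise_w):
    """Achievable link rate in bit/s: B * log2(1 + P*g / noise)."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_w)

def transmission_delay(data_bits, rate_bps):
    """Time to push `data_bits` through a link of rate `rate_bps`."""
    return data_bits / rate_bps

def execution_delay(task_cycles, cpu_hz):
    """Time to execute a subtask given the allocated computational resources."""
    return task_cycles / cpu_hz

# A subtask offloaded to an edge server pays uplink + execution + downlink.
up = transmission_delay(8e6, shannon_rate(10e6, 0.2, 1e-6, 1e-10))
exe = execution_delay(2e9, 5e9)
down = transmission_delay(1e6, shannon_rate(10e6, 1.0, 1e-6, 1e-10))
total = up + exe + down   # roughly 0.48 s with these illustrative numbers
```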
As shown in fig. 1, the unloading and buffering method for a MAR task in multi-access edge computing provided in this embodiment specifically includes the following steps:
step 1, dividing the MAR task into a plurality of (five) subtasks with dependency relationships according to the working characteristics of the MAR task, wherein the subtasks comprise two types: the first type can only be executed at the local device; the second type can be executed in the local device or the edge server, and the calculation result can be cached in the edge server; the subtasks in the first class constitute set Z, and the subtasks in the second class constitute set Z'. And then, performing priority queuing on all subtasks in the MEC system according to the latest execution time to obtain a priority queue.
The latest execution time of each subtask is derived backwards from the task deadline: z_5 denotes the last subtask, and its latest execution time equals T^max_d, the maximum completion time allowed for the entire task of mobile device d. For every other subtask z, the latest execution time equals the latest completion time of its successor minus the execution time T_{d,z+1} of that successor; T^max_{d,z} denotes the maximum execution time allowed for subtask z of mobile device d. A priority queue is then generated according to the latest execution time of each MAR subtask.
Step 2: the caches in the cache set are placed and initialized. For each cache in the cache set, both the completion time of each task and the cache size are considered, and the cache files are placed using the analytic hierarchy process (AHP).

When the cache files are placed by AHP, the goal layer is the selection of a suitable server for each cache file. The criteria layer considers two factors: the ratio of the size of each cache file to the cache space of the edge server on which it is placed, and the execution time of the subtask when it is offloaded to a given server while the cache file is placed on that server. A suitable edge server is then selected for each cache file according to the probabilities computed by the AHP.
First, a judgment matrix A is defined for the criteria layer; its element a expresses the relative importance, when selecting an edge server, of the completion time of task z on the server versus the ratio of the cached task's size to the server's cache space. An importance matrix is then designed for each of the two criteria-layer factors: K_{1,z,i}(b, y) expresses, for cache file i cached on edge server b or y, the importance of the service delay of subtask z of device d on server b relative to that on server y; K_{2,z,i}(b, y) expresses the importance of the ratio of the size of cache file i to the cache space of server b relative to the same ratio for server y.

The completion time when subtask z of mobile device d is offloaded to edge server b is T(z, b) = (1 - x_{rb}) · T_{d,z}, where x_{rb} is the popularity on server b of the cache r corresponding to subtask z of mobile device d, and T_{d,z} is the execution time of subtask z.

The eigenvector corresponding to the maximum eigenvalue λ_max of the judgment matrix A gives the criteria weights, and the eigenvectors of the matrices K_{1,z,i} and K_{2,z,i} give the weights of the candidate servers under each factor. Combining them yields the weight of the n-th server, which, for task z, is the probability that the i-th cache file selects the n-th edge server. Averaging this probability over all subtasks yields the probability that cache file i is placed on edge server n, and the cache file selects its edge server according to this probability.
And 3, initializing the unloading point and the execution point of each subtask according to the priority queue. And unloading the tasks of the MAR subtasks which cannot find the cache in the edge server according to the sequence in the priority queue and the priority relation between the subtasks.
When the subtasks are to be executed according to the priority order, whether the front thread task exists or not and whether the front thread task is completed or not are firstly determined, and if the front thread task is completed, the subtasks are directly executed locally when the subtask is one of the two subtasks for acquiring and rendering the video resource.
Specifically, task offloading is performed, following the order in the priority queue and the precedence relations between subtasks, for the MAR subtasks whose cache cannot be found on an edge server. First, according to the generated priority queue, it is determined whether the predecessor task of a subtask has completed. If the subtask is not a video-source acquisition or rendering subtask, the mobile device first queries the edge server for the cache file the subtask requires; if it exists, the cache file is transmitted directly as the result to the execution point of the next subtask; otherwise, the mobile device offloads the subtask to the nearest edge server for execution. The video-source acquisition and rendering subtasks are executed locally.
When the subtask is one of the other three subtasks (tracking, mapping, and target detection), the edge server with the shortest request time is contacted first and its cache is searched; if the cache file exists, it is transmitted as the result to the next subtask. When the cache file does not exist, that server forwards the cache request to the other edge servers in the MEC system. If no cache is found anywhere in the MEC system, local resources are checked first: if local computing resources are sufficient, the subtask is executed locally; otherwise it is offloaded to the initially connected edge server. When the computing resources of the initially connected edge server are insufficient, the subtask can be migrated to another edge server for execution. After execution finishes, the subtask is removed from the queue. When both queues are empty, initialization is complete.
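The initialization flow just described (cache on the nearest server, then caches elsewhere in the MEC system, then local execution, then offload, then migration) can be sketched as a decision function. The server/task representation, field names, and resource quantities are illustrative assumptions:

```python
def init_execution_point(subtask, servers, local_cpu_free, need_cpu):
    """Decide where a tracking/mapping/detection subtask initially runs.
    `servers` is a list of dicts sorted by request time, each holding a
    `cache` set of file ids and free `cpu` (assumed representation)."""
    nearest = servers[0]
    # 1. Cache hit on the nearest server, or on a server the request
    #    is forwarded to within the MEC system.
    for srv in servers:
        if subtask["cache_id"] in srv["cache"]:
            return ("cache", srv["name"])
    # 2. No cache anywhere: execute locally if local resources suffice.
    if local_cpu_free >= need_cpu:
        return ("local", None)
    # 3. Otherwise offload to the initially connected edge server...
    if nearest["cpu"] >= need_cpu:
        return ("offload", nearest["name"])
    # 4. ...or migrate to another edge server with enough resources.
    for srv in servers[1:]:
        if srv["cpu"] >= need_cpu:
            return ("migrate", srv["name"])
    return ("wait", None)

servers = [
    {"name": "b1", "cache": set(), "cpu": 1.0},
    {"name": "b2", "cache": {"map_tiles"}, "cpu": 4.0},
]
task = {"cache_id": "map_tiles"}
print(init_execution_point(task, servers, local_cpu_free=0.5, need_cpu=2.0))
# -> ('cache', 'b2')
```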
Step 4: optimize the initially generated cache placement strategy and task offloading strategy using a multi-objective artificial bee colony optimization algorithm. Employed bees search for solutions and share solution information with the follower bees; the follower bees keep updating a solution until it no longer improves, at which point they become scout bees, return to the initialization stage, and generate a new solution.
Step 401: generate Y solutions w_{ij} (i.e., food sources) through the initialization of step 3 and step 4, where i = 1, 2, ..., Y and j = 2·D + I; D denotes the number of mobile devices in the system and I denotes the number of caches in the cache set. Each encoding is a cache placement strategy together with a task offloading strategy.
Step 402: the employed bees search for food sources according to the solution search formula:
wherein w_{ij} denotes the old solution; w_{kj} denotes a solution in the neighborhood of w_{ij}; w'_{ij} denotes the newly generated solution, whose update can be controlled by random selection.
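The search formula itself appears in the patent only as a figure; the standard artificial-bee-colony neighborhood move, w'_{ij} = w_{ij} + φ·(w_{ij} − w_{kj}) with φ drawn uniformly from [−1, 1], is a plausible reading and is sketched here under that assumption:

```python
import random

def neighborhood_search(w, w_k, j):
    """Assumed ABC search formula: perturb dimension j of solution w
    relative to a neighbor solution w_k,
        w'_ij = w_ij + phi * (w_ij - w_kj),  phi in [-1, 1]."""
    w_new = list(w)
    phi = random.uniform(-1.0, 1.0)
    w_new[j] = w[j] + phi * (w[j] - w_k[j])
    return w_new

random.seed(0)
old = [0.2, 0.8, 0.5]
neighbor = [0.6, 0.1, 0.5]
new = neighborhood_search(old, neighbor, j=1)
# Only dimension j is perturbed; the others are copied unchanged.
print(new)
```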
A new solution is generated according to the search formula. If w'_{ij} dominates the old solution w_{ij}, the new solution replaces the old one and is added to the external archive set; otherwise, the old solution is added to the external archive set. If neither solution dominates the other, both are added to the external archive set, and a fitness function (crowding function) is used to decide whether a solution remains in the external archive set. The fitness function is:
wherein T_max and T_min denote the boundary values of the two objective functions for the grid in which the solution is located in the external archive set.
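The dominance test and external-archive update described above can be sketched as follows. The objectives are written here as (−hit rate, delay) so both are minimized; the crowding-based fitness function itself is shown only as a figure in the patent and is omitted:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives
    minimized: a is no worse everywhere and strictly better at least once)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Add candidate to the external archive, keeping only
    non-dominated objective vectors."""
    if any(dominates(a, candidate) for a in archive):
        return archive                      # candidate is dominated
    pruned = [a for a in archive if not dominates(candidate, a)]
    pruned.append(candidate)
    return pruned

archive = []
for point in [(-0.5, 8.0), (-0.7, 9.0), (-0.4, 9.5), (-0.7, 7.0)]:
    archive = update_archive(archive, point)
print(archive)  # [(-0.7, 7.0)] -- the last point dominates the rest
```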
Step 403: the follower bees select the position of a solution that can be further exploited by the roulette-wheel method, whose probability formula is:
wherein p_i denotes the probability of selecting the ith solution in the external archive set, and PY_i denotes the fitness value of the ith solution.
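The probability formula is shown only as a figure; assuming the standard roulette form p_i = PY_i / Σ_k PY_k, the follower-bee selection can be sketched as:

```python
import random

def roulette_select(fitness):
    """Pick an index with probability p_i = PY_i / sum_k PY_k
    (fitness values assumed positive)."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1  # guard against floating-point round-off

random.seed(1)
PY = [5.0, 1.0, 4.0]  # illustrative fitness values of archived solutions
picks = [roulette_select(PY) for _ in range(10000)]
# Solution 0 (fitness 5) should be chosen most often.
print(picks.count(0), picks.count(1), picks.count(2))
```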
Step 404: if a solution is not updated within the limit number of attempts, the follower bee becomes a scout bee, the solution is abandoned, and the scout bee is re-initialized to generate a new solution.
Step 405: repeat steps 401-404 until the iterations are finished, then select a suitable solution from the Pareto-optimal set (the external archive set).
The invention designs two indexes, hit rate and service delay, to evaluate task offloading and cache placement on the edge servers. Under the constraints of edge-server computing resources and cache space, the task offloading and cache placement problems are solved with the objectives of maximizing the hit rate and minimizing the delay. A multi-objective artificial bee colony algorithm is adopted for this problem, and the Pareto-dominance relation is introduced in the optimization process to find optimal solutions. Extensive evaluation shows that the algorithm achieves good performance.
Example two
This embodiment provides an offloading and cache placement system for MAR tasks in multi-access edge computing, which specifically includes the following modules:
a task partitioning module configured to: dividing the MAR task into a plurality of subtasks;
a priority queuing module configured to: performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue;
a placement initialization module configured to: placing and initializing the cache files in the cache set to obtain a cache placing strategy;
a task offload initialization module configured to: initialize the offloading point and execution point of each subtask according to the order in the priority queue and the precedence relations between subtasks, to obtain a task offloading strategy;
an optimization module configured to: optimize the initially generated cache placement strategy and task offloading strategy using a multi-objective swarm optimization algorithm.
It should be noted that each module in this embodiment corresponds one-to-one to a step in the first embodiment, and the specific implementation process is the same, so it is not repeated here.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method for offloading and caching MAR tasks in multiple access edge computing as described in the above embodiment.
Example four
The present embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the steps of the method for offloading and storing MAR tasks in the multiple access edge computing as described in the above embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
- 1. An offloading and cache placement method for a MAR task in multi-access edge computing, characterized by comprising the following steps: dividing the MAR task into a plurality of subtasks; queuing all subtasks by priority according to the latest execution time to obtain a priority queue; placing and initializing the cache files in a cache set to obtain a cache placement strategy; initializing the offloading point and execution point of each subtask according to the order in the priority queue and the precedence relations between subtasks to obtain a task offloading strategy; and optimizing the initially generated cache placement strategy and task offloading strategy using a multi-objective swarm optimization algorithm.
- 2. The method for offloading and cache placement of a MAR task in multi-access edge computing according to claim 1, wherein the subtasks comprise two types: the first type can only be executed on the local device; the second type can be executed either on the local device or on an edge server, and its computation result can be cached at the edge server.
- 3. The method for offloading and cache placement of a MAR task in multi-access edge computing according to claim 1, wherein, according to the priority queue, it is determined whether the predecessor task of a subtask is completed; if the predecessor task is completed and the subtask belongs to the first type, the subtask is executed directly on the local device; if the predecessor task is completed and the subtask belongs to the second type, the mobile device queries the edge server for the cache file required by the subtask, and if it exists, the cache file is transmitted directly as the result to the execution point of the next subtask; otherwise, the mobile device offloads the subtask to the nearest edge server for execution.
- 4. The method for offloading and cache placement of a MAR task in multi-access edge computing according to claim 1, wherein the objectives of the multi-objective swarm optimization algorithm are: maximizing the hit rate and minimizing the total service delay.
- 5. The method for offloading and cache placement of a MAR task in multi-access edge computing according to claim 1, wherein the constraint on maximizing the hit rate comprises: the space occupied by the cache files cached on each edge server cannot exceed the cache space of that edge server.
- 6. The method for offloading and cache placement of a MAR task in multi-access edge computing according to claim 1, wherein the constraints on minimizing the total service delay comprise: the completion time of each subtask cannot exceed the maximum completion time; the bandwidth resource allocated to each downlink cannot exceed the downlink bandwidth resource of the mobile device; the bandwidth resource allocated to each uplink cannot exceed the uplink bandwidth resource of the mobile device; the computing resources allocated to each subtask cannot exceed the total computing resources of the mobile device; and the computing resources allocated to each subtask cannot exceed the total computing resources of the edge server.
- 7. The method for offloading and cache placement of a MAR task in multi-access edge computing according to claim 1, wherein an analytic hierarchy process is used when the cache files are placed and initialized; the criteria layer of the analytic hierarchy process considers two factors: the ratio of the size of each cache file to the cache space of the edge server on which it is placed, and the execution time of a subtask offloaded to a certain edge server when the cache file is placed on that server.
- 8. An offloading and cache placement system for MAR tasks in multi-access edge computing, comprising: a task partitioning module configured to divide the MAR task into a plurality of subtasks; a priority queuing module configured to queue all subtasks by priority according to the latest execution time to obtain a priority queue; a placement initialization module configured to place and initialize the cache files in a cache set to obtain a cache placement strategy; a task offload initialization module configured to initialize the offloading point and execution point of each subtask according to the order in the priority queue and the precedence relations between subtasks to obtain a task offloading strategy; and an optimization module configured to optimize the initially generated cache placement strategy and task offloading strategy using a multi-objective swarm optimization algorithm.
- 9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for offloading and cache placement of a MAR task in multi-access edge computing according to any one of claims 1-7.
- 10. A computer apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps of the method for offloading and cache placement of a MAR task in multi-access edge computing according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210795740.3A CN115185660A (en) | 2022-07-07 | 2022-07-07 | Unloading and buffer storage method and system for MAR task in multi-access edge calculation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115185660A true CN115185660A (en) | 2022-10-14 |
Family
ID=83517932
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117806806A * | 2024-02-28 | 2024-04-02 | 湖南科技大学 | Task part unloading scheduling method, terminal equipment and storage medium
CN117806806B * | 2024-02-28 | 2024-05-17 | 湖南科技大学 | Task part unloading scheduling method, terminal equipment and storage medium
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |