CN111585916A - LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation - Google Patents
- Publication number
- CN111585916A (application number CN201911365711.8A)
- Authority
- CN
- China
- Prior art keywords
- task
- cloud
- edge
- computing
- edge node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/80—Actions related to the user profile or the type of traffic
- H04L47/805—QOS or priority aware
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L5/00—Arrangements affording multiple use of the transmission path
- H04L5/003—Arrangements for allocating sub-channels of the transmission path
- H04L5/0044—Arrangements for allocating sub-channels of the transmission path allocation of payload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L5/00—Arrangements affording multiple use of the transmission path
- H04L5/003—Arrangements for allocating sub-channels of the transmission path
- H04L5/0058—Allocation criteria
- H04L5/006—Quality of the received signal, e.g. BER, SNR, water filling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1074—Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention belongs to the technical field of electric power and particularly relates to a task offloading and resource allocation method for an LTE electric power wireless private network based on cloud-edge cooperation. Four different kinds of time delay exist in the cloud-edge cooperative system; the overall delay of each device is calculated, and the problem of minimizing the total delay of all mobile devices is established as a mathematical model. Task offloading schemes under four different system scenarios are considered, and the optimal allocation of computing resources in the cloud-edge cooperative system is solved. The method performs best under all conditions: by executing the cloud-edge cooperation strategy and adopting the proposed task offloading and resource allocation strategy, the computing capacity of the edge nodes and the cloud server can be optimally allocated, minimizing the average system delay.
Description
Technical Field
The invention belongs to the technical field of electric power and particularly relates to a task offloading and resource allocation method for an LTE electric power wireless private network based on cloud-edge cooperation.
Background
With the rapid development of LTE wireless private networks, the number of access devices keeps increasing, which poses great challenges to traditional cloud computing networks. In traditional cloud computing, multi-hop data transmission leads to large task delays and a heavy load on the core network. Mobile edge computing (MEC) is regarded as a key technology of next-generation wireless communication: MEC centers deployed at the network edge provide low-latency computing services and increase the computing capacity of the network to meet ever-growing user demands.
How a mobile device should offload its tasks to edge servers, so as to use the services provided by the edge network and make efficient, reasonable offloading decisions, has become a main research direction in edge computing. However, with the development of the Internet of Things and related technologies, the scale of data computing services grows explosively and task data floods into the computing network, posing a great challenge to the limited computing and communication resources of the MEC center.
The prior art is as follows:
Technical scheme 1: patent publication CN109302709A, "Mobile edge computing-oriented task offloading and resource allocation strategy for the Internet of Vehicles", relates to an MEC-based task offloading mode decision and resource allocation method in a heterogeneous vehicular network, completed in two main steps: first, the requesting vehicles are clustered with an improved K-means algorithm according to their different QoS requirements, so as to determine the communication mode; second, channel and power allocation is performed with a distributed Q-learning algorithm, using LTE-U based on a contention-free period (CFP) combined with carrier aggregation (CA).
Technical scheme 2: patent publication CN109698861A, "A computation task offloading algorithm based on cost optimization", completed in four main steps: first, a new edge-cloud computing model is constructed, comprising three important costs: the execution cost of computing tasks, the communication cost between same-end computing tasks, and the asymmetric communication cost between cross-end computing tasks; second, the new edge-cloud computing model is extended; third, the computation costs are combined; fourth, the optimized offloading strategy is solved based on a greedy criterion.
Technical scheme 3: patent publication CN110489176A, "A multi-access edge computing task offloading method based on the bin-packing problem", completed in three main steps: first, the capacity of each edge server and the ratio of the input data size of each terminal task to its required computing resources are calculated; then two queues are formed, ordered from large to small by capacity and by task ratio; finally, tasks are taken from the task queue in turn, each configured on the edge server in the container queue with the largest remaining computing capacity, and the operation is repeated until the task queue is empty.
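The queue-based placement described in scheme 3 can be sketched as follows. This is an illustrative reimplementation, not the patented algorithm itself; the function name and the capacity/demand inputs are assumptions:

```python
import heapq

def assign_tasks(server_caps, task_demands):
    """Greedy bin-packing sketch: tasks are taken in descending order of
    demand, and each is placed on the server with the most remaining
    capacity. Returns {task_id: server_index or None}."""
    # max-heap of remaining capacity (negated, since heapq is a min-heap)
    heap = [(-cap, idx) for idx, cap in enumerate(server_caps)]
    heapq.heapify(heap)
    placement = {}
    for task_id, demand in sorted(task_demands.items(), key=lambda kv: -kv[1]):
        neg_cap, idx = heapq.heappop(heap)
        remaining = -neg_cap
        if demand > remaining:
            # even the largest remaining capacity cannot host this task
            placement[task_id] = None
            heapq.heappush(heap, (neg_cap, idx))
            continue
        placement[task_id] = idx
        heapq.heappush(heap, (-(remaining - demand), idx))
    return placement
```

With server capacities `[10, 5]` and demands `{"a": 6, "b": 4, "c": 5}`, task `a` lands on server 0, `c` on server 1, and `b` back on server 0.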
Therefore, how to make effective task offloading and resource allocation decisions in a limited resource context becomes a key issue.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a task offloading and resource allocation method for an LTE electric power wireless private network based on cloud-edge cooperation, with the aim of studying the different delays in the cloud-edge cooperative system and minimizing the total delay of all mobile devices through task offloading and resource allocation under limited resources.
The technical scheme adopted by the invention for solving the technical problems is as follows:
The LTE electric power wireless private network task offloading and resource allocation method based on cloud-edge cooperation is characterized in that, for the four different delays existing in the cloud-edge cooperative system, the overall delay of each device is calculated and the problem of minimizing the total delay of all mobile devices is established as a mathematical model; task offloading schemes under four different system scenarios are considered, and the optimal allocation of computing resources in the cloud-edge cooperative system is solved. The method comprises the following steps, of which step 6 provides a solution to the constructed problem:
Step (1): the mobile device sends a corresponding task request to the connected edge node; the request is used by the edge node to judge whether the result of the task already exists in its cache. If it does, the task result is returned directly and the task processing ends; otherwise, the mobile device uploads the task to the edge node, which is also responsible for determining the offloading proportion of each task between cloud and edge, and the next step follows.
Step (2): the mobile device directly uploads the entire delay-sensitive task to the connected edge node through a wireless channel;
Step (3): the MEC server at each edge node divides the received task into two parts: one part remains on the MEC server, and the other part is offloaded to the cloud server;
Step (4): the MEC server allocates its available computing resources to each computing task and simultaneously uploads the partial data to the cloud server through a backhaul link;
Step (5): the cloud server allocates its computing resources to the corresponding computing tasks, achieving parallel computing;
Step (6): each edge node collects the computation results, caches them through the task caching strategy, and then returns them to each mobile device.
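The six-step flow above can be sketched as a minimal cache-then-split pipeline. Everything here (the `Task` tuple, the callable compute stages, additive combination of partial results) is illustrative, not part of the claimed method:

```python
from collections import namedtuple

Task = namedtuple("Task", ["key", "size"])

def handle_request(task, cache, edge_compute, cloud_compute, alpha):
    """Steps 1-6 in miniature: cache lookup, upload, split by ratio
    alpha, edge/cloud execution, then cache and return the result."""
    if task.key in cache:                       # step 1: cache hit, return at once
        return cache[task.key]
    edge_bits = alpha * task.size               # step 3: split at the MEC server
    cloud_bits = task.size - edge_bits
    result = edge_compute(edge_bits) + cloud_compute(cloud_bits)  # steps 4-5
    cache[task.key] = result                    # step 6: cache and return
    return result
```

A second request for the same task key is served from the cache (step 1) without recomputation.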
The delays existing in the calculation process include the following:
a. Transmission delay of the mobile device: according to the Shannon formula, the upload rate $r_{i,j}$ of MT$_i$ is expressed as:
$$r_{i,j} = B \log_2\left(1 + \frac{p_i g_{i,j}}{\sigma^2 + I_{i,j}}\right)$$
where $B$ denotes the channel bandwidth, $p_i$ the transmission power of MT$_i$, $\sigma^2$ the noise power of the mobile device, $I_{i,j}$ the interference power within the cell, and $g_{i,j}$ the channel gain of the communication.
The SCeNB communicates with the MEC server over optical fiber; the transmission rate between the two is far greater than the upload rate $r_{i,j}$ of MT$_i$, and the distance between them is very small, so the communication delay between SCeNB and MEC is neglected. Let $t^{up}_{i,j}$ denote the delay incurred when the mobile device uploads task $q_i$ to SCeNB$_j$:
$$t^{up}_{i,j} = \frac{s_{i,j}}{r_{i,j}}$$
where $s_{i,j}$ represents the size of the computing task.
b. Computing delay of the edge node: after the edge node successfully receives the complete computing task sent by the mobile device, the MEC server immediately executes the offloading strategy and divides each computing task into two parts, one executed by the MEC server and the other by the cloud server. It is assumed that each computing task can be split arbitrarily without considering its content, corresponding to scenarios such as video compression and speech recognition. Define $\alpha_{i,j} \in [0,1]$ as the division ratio of the computing task, where $\alpha_{i,j}$ represents the proportion of the computing task data executed at the MEC server. Let $f^{mec}_{i,j}$ denote the computing resources allocated to the $i$th mobile device by the $j$th edge node; the delay generated by executing the task at the edge node is expressed as:
$$t^{mec}_{i,j} = \frac{\alpha_{i,j}\, s_{i,j}\, c_{i,j}}{f^{mec}_{i,j}}$$
where $c_{i,j}$ denotes the CPU cycles required to compute each bit of the task.
c. Transmission delay of the edge node: in each edge node the communication module (a transceiver) and the computation module (CPU/GPU) are separate, so within the edge node the computation of a task and the transmission of a task are executed in parallel. All edge nodes are connected to the cloud server through different backhaul links. An optimal cooperation strategy between edge computing and cloud computing is proposed, assuming that the resource scheduling strategy and the routing algorithm are already determined. Let $H_j$ denote the backhaul communication capacity per device associated with the $j$th edge node; $1/H_j$ is then the time required to transmit one bit of data over the backhaul link. The backhaul transmission delay is proportional to the size of the transmitted data and is expressed as:
$$t^{back}_{i,j} = \frac{(1-\alpha_{i,j})\, s_{i,j}}{H_j}$$
where $s_{i,j}$ represents the size of the computing task and $\alpha_{i,j}$ the proportion of the computing task data executed at the MEC server.
d. Computing delay of the cloud server: when the cloud server successfully receives the task data sent from the edge node, it allocates available computing resources to each task to achieve parallel processing; $(1-\alpha_{i,j})\, s_{i,j}\, c_{i,j}$ represents the number of CPU cycles, i.e., computing resources, required to execute the portion offloaded to the cloud server. Let $f^{c}_{i,j}$ denote the cloud computing resources allocated to the $i$th mobile device served by the $j$th edge node; the computing delay of the cloud server is expressed as:
$$t^{c}_{i,j} = \frac{(1-\alpha_{i,j})\, s_{i,j}\, c_{i,j}}{f^{c}_{i,j}}$$
The different delays existing in the cloud-edge cooperative system in step 1 include the transmission delay of the mobile device, the computing delay of the edge node, the transmission delay of the edge node, and the computing delay of the cloud server. The total delay generated by each device is obtained under two assumptions:
Assumption 1: the offloading of a task depends only on the specific parameters of that task;
Assumption 2: task computation depends on the specific data structure and the correlation between adjacent data, so the cloud server does not start processing a task until its transmission from the edge node to the cloud server has finished.
Based on the above assumptions, and since edge execution proceeds in parallel with the backhaul transmission, the total delay incurred by the $i$th mobile device served by the $j$th edge node can be expressed as:
$$T_{i,j} = t^{up}_{i,j} + \max\left(t^{mec}_{i,j},\; t^{back}_{i,j} + t^{c}_{i,j}\right)$$
where $t^{up}_{i,j}$ is the delay of the mobile device uploading task $q_i$ to SCeNB$_j$, $t^{mec}_{i,j}$ the delay of executing the task at the edge node, $t^{back}_{i,j}$ the delay of the edge node uploading the task to the cloud server, and $t^{c}_{i,j}$ the delay of processing the computing task at the cloud server.
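The total-delay model described above (upload delay plus the slower of edge execution and backhaul-plus-cloud execution) can be evaluated numerically. The parameter names, and the interpretation of $c$ as cycles-per-bit, are assumptions of this sketch:

```python
def total_delay(s, c, r, f_mec, f_cloud, H, alpha):
    """Total latency of one device: upload delay plus the maximum of
    (edge computing delay) and (backhaul transfer + cloud computing),
    since the edge computes in parallel with the backhaul transfer.
    s: task size in bits, c: CPU cycles per bit, r: uplink rate (bit/s),
    f_mec/f_cloud: allocated CPU cycles per second, H: backhaul capacity
    (bit/s), alpha: fraction of the task kept at the edge."""
    t_up = s / r
    t_edge = alpha * s * c / f_mec
    t_back = (1 - alpha) * s / H
    t_cloud = (1 - alpha) * s * c / f_cloud
    return t_up + max(t_edge, t_back + t_cloud)
```

With all rates equal and an even split, the backhaul-plus-cloud branch dominates the edge branch, illustrating why the optimal split generally keeps more than half of the data at the edge in that symmetric case.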
The minimization of the total delay of all mobile devices is realized through task offloading and computing resource allocation under limited resources; the objective is expressed as:
$$Q1:\ \min_{\{\alpha_{i,j}\},\,\{f^{mec}_{i,j}\},\,\{f^{c}_{i,j}\}} \sum_{j} \sum_{i} (1-y_{i,j})\, T_{i,j} \tag{7}$$
subject to $\sum_i f^{mec}_{i,j} \le F^{mec}_j,\ \forall j$ (7a); $\sum_j \sum_i f^{c}_{i,j} \le F^{c}$ (7b); $\alpha_{i,j} \in [0,1]$ (7c); $y_{i,j} \in \{0,1\}$ (7d);
where $y_{i,j}$ represents the cache state of the task. Constraints (7a) and (7b) state that neither the edge computing resources nor the cloud computing resources allocated to the mobile devices may exceed the respective maximum amounts of computing resources. The optimization variables include the offloading ratios $\{\alpha_{i,j}\}$ and the allocations of computing resources $\{f^{mec}_{i,j}\}$ and $\{f^{c}_{i,j}\}$.
Q1 is decomposed into two parts: the upload delay $t^{up}_{i,j}$ does not depend on the optimization variables, while task offloading and computing resource allocation only affect the remaining part, so the problem is divided as:
$$Q2:\ \min \sum_{j} \sum_{i} (1-y_{i,j})\, \max\left(t^{mec}_{i,j},\; t^{back}_{i,j} + t^{c}_{i,j}\right)$$
The solution to the constructed problem described in step 6 comprises:
(1) Task caching strategy:
Before the task offloading decision is made, problem Q2 is further simplified: a task caching strategy is used to solve the cache vector $Y_{i,j} = [y_{1,1}, y_{1,2}, \ldots, y_{i,j}]$; the algorithm process is shown in fig. 4.
Assume that the number of times the mobile devices request a task follows the Poisson distribution $P(C=c) = \frac{\lambda^c}{c!} e^{-\lambda}$, where $C$ represents the number of requests for the task and $\lambda$ the average occurrence rate of the task per unit time. At initialization the vector group $Y$ is set to empty. For each task, the task type is determined according to the Poisson distribution and it is judged whether the task exists in the task cache table; if so, the MEC server directly returns the execution result to the mobile device and $y_{i,j}$ is set to 1. If not, it is further judged whether the task exists in the task history table; if present there, $y_{i,j}$ is set to 1, the execution result is returned directly, the task is stored in the task cache table, and the task history table is updated. If neither table has recorded task $q_i$, then $y_{i,j}$ is set to 0. The subtask space is traversed in this way, finally yielding the vector group $Y_{i,j} = [y_{1,1}, y_{1,2}, \ldots, y_{i,j}]$.
Using the vector group $Y_{i,j} = [y_{1,1}, y_{1,2}, \ldots, y_{i,j}]$, Q2 is simplified to obtain:
$$Q3:\ \min \sum_{j} \sum_{i:\ y_{i,j}=0} \max\left(t^{mec}_{i,j},\; t^{back}_{i,j} + t^{c}_{i,j}\right)$$
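The cache-vector construction can be sketched with two plain sets standing in for the task cache table and the task history table. The promotion rule (a history hit moves the task into the cache) follows the strategy described above, while the data structures and the function name are illustrative assumptions:

```python
def build_cache_vector(requests, cache_table, history_table):
    """For each requested task: y=1 if its result is already cached, or
    if it appears in the history table (in which case it is promoted to
    the cache); otherwise y=0 and the task is recorded in the history."""
    y = []
    for task in requests:
        if task in cache_table:
            y.append(1)                 # cache hit: return result directly
        elif task in history_table:
            y.append(1)
            cache_table.add(task)       # promote a repeated task to the cache
        else:
            y.append(0)
            history_table.add(task)     # first sighting: record in history
    return y
```

On the request stream `a, b, a, c, b` with `a` pre-cached, the repeat of `b` is served via the history table and promoted.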
(2) Task offloading strategy:
The optimal task offloading ratio $\alpha^*_{i,j}$ is determined while the values of $f^{mec}_{i,j}$ and $f^{c}_{i,j}$ are held fixed during the analysis.
The optimal division ratio is found by analyzing the monotonicity of the computing delays with respect to the task allocation ratio $\alpha_{i,j}$. First, the edge computing delay $t^{mec}_{i,j} = \alpha_{i,j} s_{i,j} c_{i,j}/f^{mec}_{i,j}$ is monotonically increasing in $\alpha_{i,j}$; over $\alpha_{i,j} \in [0,1]$ it ranges over $[0,\ s_{i,j} c_{i,j}/f^{mec}_{i,j}]$. Second, the backhaul-plus-cloud delay $t^{back}_{i,j} + t^{c}_{i,j} = (1-\alpha_{i,j})\left(s_{i,j}/H_j + s_{i,j} c_{i,j}/f^{c}_{i,j}\right)$ is monotonically decreasing in $\alpha_{i,j}$, with range $[0,\ s_{i,j}/H_j + s_{i,j} c_{i,j}/f^{c}_{i,j}]$. Observing the total-delay expression, the term $\max\left(t^{mec}_{i,j},\ t^{back}_{i,j} + t^{c}_{i,j}\right)$ therefore first decreases and then increases with $\alpha_{i,j}$, and attains its minimum where the two delays are equal, i.e., at the $\alpha^*_{i,j}$ satisfying $t^{mec}_{i,j} = t^{back}_{i,j} + t^{c}_{i,j}$.
Two important parameters are defined for each mobile device:
(2.1) the normalized backhaul communication capacity, defined as the ratio of the backhaul communication capacity to the edge computing power, i.e., $\mu_{i,j} = H_j c_{i,j}/f^{mec}_{i,j}$;
(2.2) the normalized cloud computing power, defined as the ratio of the cloud computing power to the edge computing power, i.e., $\nu_{i,j} = f^{c}_{i,j}/f^{mec}_{i,j}$.
Based on these definitions, the optimal task offloading strategy is obtained. The optimal task offloading ratio in the cloud-edge cooperative system is expressed as:
$$\alpha^*_{i,j} = \frac{\mu_{i,j} + \nu_{i,j}}{\mu_{i,j} + \nu_{i,j} + \mu_{i,j}\nu_{i,j}} \tag{10}$$
where $\mu_{i,j}$ denotes the ratio of backhaul communication capacity to edge computing power and $\nu_{i,j}$ the ratio of cloud computing power to edge computing power.
The different systems to which formula (10) applies are as follows:
Case 1: communication-limited system: the edge node and the cloud server have sufficient computing resources, but the communication capacity is insufficient.
In this case $\mu_{i,j} \ll \nu_{i,j}$, which occurs when an edge node serves a large number of mobile devices and the capacity of the backhaul link is insufficient; the optimal task offloading ratio simplifies to:
$$\alpha^*_{i,j} = \frac{1}{1 + \mu_{i,j}} \tag{11}$$
Equation (11) indicates that in this case the optimal task offloading ratio is determined only by the normalized backhaul communication capacity and is not affected by the normalized cloud computing power; the communication resources become the main bottleneck of the end-to-end delay of each device. When $\mu_{i,j} \to 0$, the optimal task offloading ratio is 1 and all computing tasks are executed at the edge nodes.
Case 2: computation-limited system: the edge node and the cloud server have sufficient communication capacity but insufficient computing resources.
In this case $\mu_{i,j} \gg \nu_{i,j}$, and the optimal task offloading ratio simplifies to:
$$\alpha^*_{i,j} = \frac{1}{1 + \nu_{i,j}} \tag{12}$$
The optimal task offloading ratio is determined only by the normalized cloud computing power: since the backhaul communication capacity is relatively sufficient, the delay generated by communication becomes small, and the computing delay determines the overall delay of the mobile device. In this case, the edge node and the cloud server are considered as a whole, and the tasks are split in proportion to their computing powers according to the ratio $\nu_{i,j}$. If the computing power of the edge node is larger than that of the cloud server, $\nu_{i,j} < 1$ and more data is offloaded to the edge node; otherwise more data is offloaded to the cloud. In the special case $\nu_{i,j} = 1$, the data is distributed evenly between the edge node and the cloud server.
Case 3: edge-dominated system: the computing resources allocated to the mobile device by the edge node far exceed those allocated by the cloud server, i.e., $f^{mec}_{i,j} \gg f^{c}_{i,j}$, so $\nu_{i,j} \to 0$. This case corresponds to a large-scale small-cell network in which the cloud server serves many edge nodes. In this system the optimal task offloading ratio simplifies to:
$$\alpha^*_{i,j} = 1 \tag{13}$$
indicating that the entire task should be executed on the edge node; the whole task only needs to be offloaded to the edge node.
Case 4: cloud-dominated system: the cloud server has sufficient computing resources while the edge node's computing resources are limited, i.e., $f^{c}_{i,j} \gg f^{mec}_{i,j}$, so $\nu_{i,j} \to \infty$. Such a system corresponds to a scenario in which the cloud server's computing power is strong and the edge nodes' computing power is weak. Substituting $\nu_{i,j} \to \infty$ into formula (10) gives:
$$\alpha^*_{i,j} = \frac{1}{1 + \mu_{i,j}} \tag{14}$$
According to this formula, the optimal task offloading ratio is determined only by the normalized backhaul communication capacity: if the backhaul capacity $H_j$ grows or the computing power of the edge node shrinks, the optimal offloading ratio decreases accordingly. When $\nu_{i,j} \to \infty$, the delay of executing the task at the cloud is negligible compared with the delay of executing it at the edge node; hence when $\mu_{i,j} < 1$, the overall delay of the mobile device is dominated by the backhaul transmission delay, so a larger proportion of the task data is offloaded to the edge side for processing, i.e., $\alpha^*_{i,j} > 1/2$; conversely, when $\mu_{i,j} > 1$, more data should be offloaded to the cloud server for execution.
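The four limiting cases can be checked numerically against a general split ratio. The closed form used here, alpha* = (mu + nu) / (mu + nu + mu*nu), is a reconstruction consistent with the case-by-case behaviour described above (it tends to 1/(1+mu) when nu dominates, to 1/(1+nu) when mu dominates, to 1 as nu tends to 0, and to 1/(1+mu) as nu grows without bound); it should be read as such rather than as the literal claimed formula:

```python
def optimal_offload_ratio(mu, nu):
    """Fraction of task data kept at the edge, as a function of the
    normalized backhaul capacity mu and normalized cloud power nu.
    Obtained by balancing the edge delay against the backhaul-plus-cloud
    delay (reconstructed closed form)."""
    return (mu + nu) / (mu + nu + mu * nu)
```

Evaluating the four regimes confirms the qualitative statements in the text.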
(3) Computing resource allocation strategy: substituting the optimal task offloading ratio $\alpha^*_{i,j}$, the computing delay $\max\left(t^{mec}_{i,j},\ t^{back}_{i,j} + t^{c}_{i,j}\right)$ simplifies to:
$$t^*_{i,j} = \frac{\alpha^*_{i,j}\, s_{i,j}\, c_{i,j}}{f^{mec}_{i,j}}$$
Problem Q3 is then rewritten as:
$$Q4:\ \min_{\{f^{mec}_{i,j}\},\,\{f^{c}_{i,j}\}} \sum_{j} \sum_{i:\ y_{i,j}=0} t^*_{i,j} \quad \text{s.t. (7a), (7b)}$$
Theorem 1: Q4 is a convex optimization problem.
Proof: it suffices to prove that the objective function and its constraints are all convex. Observing the constraints (7a) and (7b), together with (7c) and (7d), they are affine, so the feasible set is convex. It remains to show that the objective function is also convex. Through mathematical calculation, all leading principal minors of the Hessian matrix $W$ of each term are obtained; according to linear algebra theory, when all leading principal minors of a matrix are positive, the matrix is positive definite. Each term is therefore a convex function of the variables $f^{mec}_{i,j}$ and $f^{c}_{i,j}$, and the objective function, being the sum of a series of convex functions, is also convex. Theorem 1 is proved.
According to theorem 1, Q4 satisfies the KKT conditions, and the optimal allocation of computing resources in the cloud-edge cooperative system is obtained using Lagrange multipliers.
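The Lagrange-multiplier step can be illustrated on a simplified, separable stand-in for Q4: minimizing a sum of terms w_i/f_i subject to a total budget on the f_i. Stationarity of the Lagrangian makes w_i/f_i^2 equal across all devices, giving the square-root allocation below; the weights and function name are assumptions of this sketch, not the patent's exact solution:

```python
from math import sqrt

def allocate(weights, total):
    """Closed-form KKT solution of min sum_i w_i/f_i s.t. sum_i f_i = total,
    f_i > 0. The Lagrangian L = sum_i w_i/f_i + lam*(sum_i f_i - total)
    is stationary when w_i/f_i**2 = lam for every i, so each f_i is
    proportional to sqrt(w_i)."""
    roots = [sqrt(w) for w in weights]
    scale = total / sum(roots)
    return [scale * r for r in roots]
```

A device whose weight (e.g., cycle demand) is four times larger thus receives twice the computing resources, not four times: the square root tempers the allocation.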
The invention has the following advantages and beneficial technical effects:
The invention provides a task offloading and resource allocation scheme based on cloud-edge cooperation (Cloud-Edge-TORD), establishes a cloud-edge cooperative system and an MEC cache model, and analyzes the delays existing in the cloud-edge cooperative system. With the aim of minimizing the total delay of all mobile devices through task offloading and resource allocation under limited resources, a problem model is established and decomposed to obtain the task caching, offloading, and resource allocation strategies.
Based on the established cloud-edge cooperative system and MEC cache model, the method analyzes the delays in the cloud-edge cooperative system with the goal of minimizing them. Simulations show that the proposed Cloud-Edge-TORD scheme performs best under all conditions; by executing the cloud-edge cooperation strategy and adopting the proposed task offloading and resource allocation strategy, the computing capacity of the edge nodes and the cloud server can be optimally allocated, minimizing the average system delay.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments of the present invention will be briefly described below. Wherein the drawings are only for purposes of illustrating some embodiments of the invention and are not to be construed as limiting the invention to all embodiments thereof.
FIG. 1 is a schematic diagram of a cloud-edge collaboration system of the present invention;
FIG. 2 is a schematic diagram illustrating a task execution flow of the cloud-edge collaboration system according to the present invention;
FIG. 3 is a schematic diagram of the MEC cache model of the present invention;
FIG. 4 is a process diagram of a cache vector algorithm of the present invention;
FIG. 5 is a delay comparison of the 4 schemes when the computing power of the cloud server increases, according to the present invention;
FIG. 6 is a delay comparison of the 4 schemes when the computing power of the cloud server decreases, according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
The invention relates to an LTE electric power wireless private network task offloading and resource allocation method based on cloud-edge cooperation. The invention considers task offloading schemes under four different system scenarios and solves the optimal allocation of computing resources in the cloud-edge cooperative system.
The LTE electric power wireless private network task offloading and resource allocation method based on cloud-edge cooperation of the invention comprises the following steps, of which step 6 provides a solution to the constructed problem.
In the invention, step 1 describes establishing the cloud-edge cooperative system; fig. 1 is a schematic diagram of the cloud-edge cooperative system of the invention. The cloud-edge cooperative system is composed of a central cloud server and M SCeNBs, represented by the set $\mathcal{M} = \{1, 2, \ldots, M\}$, where SCeNB denotes a small-cell base station. Each SCeNB deploys an MEC server that uses limited resources for data processing, caching, and storage; each combination of an SCeNB and an MEC server is called an edge node. Within the coverage area of the $j$th SCeNB there is a set $\chi_j$ of mobile devices, each with a computational task having different latency requirements. It is assumed that each user is already connected to a base station; the specific connection relationship can be determined by user connection policies. Further, each mobile device is connected to its base station through a wireless channel, and the edge nodes transmit data to the cloud server through different backhaul links. In this system it is assumed that each computing task can be processed on both the edge nodes and the cloud server, following the model in [1]: Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: The communication perspective," IEEE Commun. Surveys Tuts., vol. 19, no. 4, pp. 2322–2358, Aug. 2017.
We assume that all tasks are of the same type and arrive at the same time, so $q_{i,j} = (s_{i,j}, c_{i,j})$ represents the computational task generated by the $i$th mobile device connected to the $j$th edge node, where $s_{i,j}$ represents the size of the computational task and $c_{i,j}$ the CPU cycles required to compute each bit of the task. N represents the total number of tasks requested by the system in a certain time period, and the computing capacities of the MEC server in the $j$th edge node and of the cloud server are defined as $F^{mec}_j$ and $F^{c}$, respectively. The computing resources of each MEC server and of the cloud server may be allocated to the mobile devices through virtual machine technology. Fig. 2 is a schematic diagram of the task execution flow of the cloud-edge cooperative system of the invention.
Step 3 analyzes the delays existing in the cloud-edge cooperative system. In the cloud-edge-end task allocation model it is assumed that, owing to its limited computing capacity and battery power, each mobile device cannot process tasks directly, so there is no local computing. The mobile device first sends a task request to determine whether the result of the task is in the cache. If it exists, the task result is returned directly; if not, the mobile device uploads the task to the edge node. The corresponding edge node then decides whether the task is processed by the edge node alone or cooperatively by the edge node and the cloud server; in the latter case, the edge node is also responsible for determining the offloading proportion of each task between cloud and edge.
In step (1), the mobile device first sends a corresponding task request to the connected edge node, equivalent to sending only the header of the data at the start of the HTTPS protocol, used by the edge node to judge whether the data is wanted; the edge node judges whether the result of the task exists in the cache. If so, the task result is returned directly and the task processing ends; otherwise, the next step follows.
In step (2), the mobile device directly uploads the entire delay-sensitive task to the connected edge node through a wireless channel, without local computation.
In step (3), the MEC server located at each edge node divides the received task into two parts: one part remains on the MEC server and the other is offloaded to the cloud server.
In step (4), the MEC server allocates its available computing resources to each computing task and uploads the partial data to the cloud server through the backhaul link.
In step (5), the cloud server likewise allocates its computing resources to the corresponding computing tasks, realizing parallel computing.
In step (6), finally, each edge node collects the computation results, caches them through the task caching strategy, and then returns them to each mobile device.
It should be noted that the time needed to divide a task is very short compared with the corresponding computation and communication delays and can therefore be neglected. Furthermore, the data volumes of both the task request and the computation result are very small, so the sending and return delays can also be neglected; this corresponds to many practical computing scenarios such as face recognition, virus detection, and video analysis.
The four delays arising in the computation process are as follows:
a. Transmission delay of the mobile device: as mentioned above, each computing task is uploaded to the corresponding edge node over a wireless channel. According to the Shannon formula, MT_i's upload rate r_{i,j} can be expressed as:

r_{i,j} = B log_2(1 + p_i g_{i,j} / (σ² + I_{i,j}))

where B denotes the channel bandwidth, p_i denotes MT_i's transmission power, σ² the noise power of the mobile device, I_{i,j} the intra-cell interference power, and g_{i,j} the channel gain of the communication.
Note that each sc communicates with its MEC server over optical fiber: the transmission rate c between the two is far greater than MT_i's upload rate r_{i,j}, and the distance between them is very small, so the communication delay between the sc and the MEC server can be neglected. T^t_{i,j} denotes the time delay incurred when the mobile device uploads task q_i to sc_j:

T^t_{i,j} = s_{i,j} / r_{i,j}

where r_{i,j} is MT_i's upload rate and s_{i,j} is the size of the computing task.
b. Computing delay of the edge node: after the edge node successfully receives the complete computing task sent from the mobile device, the MEC server immediately executes the offloading strategy, dividing each computing task into two parts: one part is executed by the MEC server and the other by the cloud server. It is assumed here that each computing task can be split arbitrarily without regard to task content, which corresponds to scenarios such as video compression and speech recognition [2]. Define α_{i,j} ∈ [0,1] as the division ratio of the computing task, where α_{i,j} is the proportion of the task data executed at the MEC server. Let f^e_{i,j} denote the computing resources allocated by the jth edge node to the ith mobile device; the delay incurred by the part of the task executed at the edge node can then be expressed as:

T^e_{i,j} = α_{i,j} s_{i,j} c_{i,j} / f^e_{i,j}

where c_{i,j} denotes the CPU cycles required to compute this task.
c. Transmission delay of the edge node: in each edge node, the communication module (a transceiver) is typically separate from the computing module (CPU/GPU). Thus, within the edge node, computation of a task and transmission of a task can proceed in parallel. Furthermore, all edge nodes are connected to the cloud server via different backhaul links, which are typically provisioned with very high bandwidth. In practice the backhaul link is shared among users, so its delay is difficult to model exactly, owing to the randomness of packet arrivals, multi-user scheduling, complex routing algorithms, and other factors. Since this work focuses on the optimal cooperation strategy between edge computing and cloud computing, the resource scheduling strategy and routing algorithm are assumed to be fixed. Let H_j denote the backhaul communication capacity of each device associated with the jth edge node. Then, similarly to the average transmission delay in (2), the average backhaul transmission delay is proportional to the size of the transmitted data and can be expressed as:

T^b_{i,j} = (1 - α_{i,j}) s_{i,j} / H_j

where 1/H_j represents the time required to transmit data of 1-bit size over the backhaul link, s_{i,j} the size of the computing task, and α_{i,j} the proportion of the task data executed at the MEC server.
d. Computing delay of the cloud server: when the cloud server successfully receives the task data sent from the edge node, it allocates available computing resources to each task to achieve parallel processing. (1 - α_{i,j}) s_{i,j} c_{i,j} represents the number of CPU cycles, i.e. computing resources, required to execute the portion offloaded to the cloud server. Let f^c_{i,j} denote the cloud computing resources allocated to the ith mobile device served by the jth edge node. The computing delay of the cloud server can then be expressed as:

T^c_{i,j} = (1 - α_{i,j}) s_{i,j} c_{i,j} / f^c_{i,j}

where c_{i,j} denotes the CPU cycles required to compute this task.
Step 4: construct the problem model from the delay analysis results.
The invention has analyzed the four different delays present in the cloud-edge collaboration system: the transmission delay of the mobile device T^t_{i,j}, the computing delay of the edge node T^e_{i,j}, the transmission delay of the edge node T^b_{i,j}, and the computing delay of the cloud server T^c_{i,j}. To determine the overall delay incurred by each device, several reasonable assumptions are made as follows:
Assumption 1: since the offloading of a task depends on its specific parameters, such as its data size and the amount of computation required, the MEC server in the edge node can apply the task offloading policy only after it has received the task data.
Assumption 2: in a real system, task computation may depend on a specific data structure and on correlations between neighboring data, e.g. video analysis in a multimedia system. To guarantee the reliability of the computation result, the cloud server cannot start processing a task until the transmission between the edge node and the cloud server has finished.
Based on the above assumptions, the total delay incurred by the ith mobile device served by the jth edge node can be expressed as:

T_{i,j} = T^t_{i,j} + max{ T^e_{i,j}, T^b_{i,j} + T^c_{i,j} }

where T^t_{i,j} is the delay incurred when the mobile device uploads task q_i to sc_j, T^e_{i,j} the delay incurred by executing the task at the edge node, T^b_{i,j} the delay incurred by the edge node uploading the task to the cloud server, and T^c_{i,j} the delay incurred by processing the computing task at the cloud server.
The goal of the present invention is to minimize the total delay of all mobile devices through task offloading and computing-resource allocation under limited resources. The objective can be expressed as:

Q1:  min_{α, f^e, f^c}  Σ_j Σ_i (1 - y_{i,j}) T_{i,j}
s.t. (7a) Σ_i f^e_{i,j} ≤ F^e_j, ∀j
     (7b) Σ_j Σ_i f^c_{i,j} ≤ F^c
     (7c) α_{i,j} ∈ [0, 1], ∀i, j
     (7d) f^e_{i,j} ≥ 0, f^c_{i,j} ≥ 0, ∀i, j

where y_{i,j} denotes the cache state of the task. Constraints (7a) and (7b) state that neither the edge computing resources nor the cloud computing resources allocated to the mobile devices can exceed the respective maximum amount of computing resources. The optimization variables comprise the task offloading ratios {α_{i,j}} and the computing-resource allocations {f^e_{i,j}}, {f^c_{i,j}}.
To solve problem Q1, the structural characteristics of the objective function are analyzed first. According to equation (2), the transmission delay of the mobile device depends only on the properties of the transmitted task and involves no optimization variable. Moreover, the transmission delay from the edge node to the cloud server, the computing delay of the edge node, and the computing delay of the cloud server are all independent of the mobile device's transmission delay. Q1 can therefore be decomposed into two parts: a fixed wireless-transmission term Σ_j Σ_i (1 - y_{i,j}) T^t_{i,j}, and an optimizable term Σ_j Σ_i (1 - y_{i,j}) max{T^e_{i,j}, T^b_{i,j} + T^c_{i,j}}.
The task offloading and computing-resource allocation discussed in this invention affect only the latter part, so the problem reduces to:

Q2:  min_{α, f^e, f^c}  Σ_j Σ_i (1 - y_{i,j}) max{ T^e_{i,j}, T^b_{i,j} + T^c_{i,j} }  s.t. (7a)-(7d).
Step 6, the proposed solution to the constructed problem comprises:
(1) Task caching strategy.
Before the task offloading decision is made, problem Q2 can be further simplified: the cache vector Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}] is obtained with the task caching strategy presented in the previous section. The algorithm is shown in fig. 4.
The number of requests for a task q is assumed to follow a Poisson distribution, P(c) = λ^c e^{-λ} / c!, where c denotes the number of requests for the task and λ the average arrival rate of the task per unit time. At initialization the vector Y is set empty. For each task, the task type is determined according to the Poisson distribution and the task cache table is checked. If the task's result is already in the cache table, the MEC server returns the execution result to the mobile device directly, no further processing is needed, and y_{i,j} is set to 1. Otherwise, i.e. when neither table records task q_i, y_{i,j} is set to 0, the task is processed, the execution result is returned and stored in the task cache table, and the task history record table is updated. Traversing the task space in this way finally yields the vector Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}].
Using the vector Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}] obtained above, Q2 is simplified to Q3, in which Y is fixed and only the tasks with y_{i,j} = 0 remain in the objective.
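A minimal sketch can make the construction of Y concrete. The popularity model and table handling below are illustrative assumptions; only the hit/miss logic (y = 1 on a cache hit, y = 0 otherwise, with fresh results cached on return) follows the described strategy.

```python
import random

def build_cache_vector(request_stream):
    """Return (Y, cache_table): Y[k] = 1 iff request k was served from cache."""
    Y = []
    cache_table = set()    # task cache table: ids whose results are stored
    history_table = set()  # task history record table
    for task_id in request_stream:
        if task_id in cache_table:
            Y.append(1)                 # hit: result returned directly
        else:
            Y.append(0)                 # miss: task must actually be computed
            cache_table.add(task_id)    # step (6): store the fresh result
        history_table.add(task_id)      # update the history record table
    return Y, cache_table

random.seed(0)
# Task types requested with a skewed popularity (a stand-in for the Poisson
# request model in the text): popular types repeat, so they hit the cache.
stream = random.choices(population=[1, 2, 3, 4, 5],
                        weights=[5, 4, 3, 2, 1], k=30)
Y, cache = build_cache_vector(stream)
```

Every first request of a task type misses (y = 0) and every repeat hits (y = 1), so the number of zeros in Y equals the number of distinct cached tasks.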
(2) Task offloading strategy.
Problem Q3 is still too complex to solve directly. Therefore the optimal task offloading ratio α*_{i,j} is determined first, with the computing-resource allocations f^e_{i,j} and f^c_{i,j} held fixed during this analysis.
The optimal division ratio α*_{i,j} can be determined analytically from the monotonicity of the delay terms in α_{i,j}. First, from T^e_{i,j} = α_{i,j} s_{i,j} c_{i,j} / f^e_{i,j} it follows that T^e_{i,j} is monotonically increasing in α_{i,j}; hence, for α_{i,j} ∈ [0,1], T^e_{i,j} ∈ [0, s_{i,j} c_{i,j} / f^e_{i,j}]. Second, from T^b_{i,j} + T^c_{i,j} = (1 - α_{i,j})(s_{i,j}/H_j + s_{i,j} c_{i,j} / f^c_{i,j}) it follows that this sum is monotonically decreasing in α_{i,j}, with range [0, s_{i,j}/H_j + s_{i,j} c_{i,j} / f^c_{i,j}]. Consequently, max{T^e_{i,j}, T^b_{i,j} + T^c_{i,j}} first decreases and then increases with α_{i,j}, and attains its minimum where the two branches are equal:

α*_{i,j} s_{i,j} c_{i,j} / f^e_{i,j} = (1 - α*_{i,j})(s_{i,j}/H_j + s_{i,j} c_{i,j} / f^c_{i,j}).

Solving this equation yields the optimal division ratio α*_{i,j}.
For ease of illustration, two important parameters are defined for each mobile device.
(2.1) The normalized backhaul communication capacity is defined as the ratio of the backhaul communication capacity to the edge computing capacity, i.e. μ_{i,j} = H_j c_{i,j} / f^e_{i,j}.
(2.2) The normalized cloud computing capacity is defined as the ratio of the cloud computing capacity to the edge computing capacity, i.e. ν_{i,j} = f^c_{i,j} / f^e_{i,j}.
Based on the above definitions, the optimal task offloading strategy can be obtained. The optimal task offloading ratio in the cloud-edge collaboration system can be expressed as:

α*_{i,j} = (μ_{i,j} + ν_{i,j}) / (μ_{i,j} ν_{i,j} + μ_{i,j} + ν_{i,j})     (10)

where μ_{i,j} denotes the ratio of backhaul communication capacity to edge computing capacity, and ν_{i,j} the ratio of cloud computing capacity to edge computing capacity.
Equation (10) shows that the optimal task offloading strategy depends only on two ratios: the normalized backhaul communication capacity and the normalized cloud computing capacity. Furthermore, the optimal split is governed by the harmonic mean of these two ratios. It is easily verified that the proportion of task data processed at the edge node decreases as μ_{i,j} or ν_{i,j} increases. Thus, when a mobile device is allocated few edge computing resources but ample cloud computing resources, it should offload more task data to the cloud server. Conversely, if cloud computing resources are scarce, the edge node should process more task data. This yields a simple conclusion: offloading more of the task to the more powerful server is an effective way to reduce the overall delay of the mobile device.
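Since equation (10) appears only as an image in the source, the closed form used below is a reconstruction from the surrounding discussion (the harmonic-mean structure and the limiting regimes described next) and should be read as an assumption. The sketch checks that this form reproduces the stated behavior.

```python
# Reconstructed optimal offloading ratio, in terms of the normalized backhaul
# capacity mu and the normalized cloud computing capacity nu:
#   alpha* = (mu + nu) / (mu*nu + mu + nu)
# Note (mu*nu)/(mu + nu) is half the harmonic mean of mu and nu, so alpha*
# decreases as either ratio grows.

def optimal_alpha(mu, nu):
    """Share of task data kept on the edge node (reconstructed equation (10))."""
    return (mu + nu) / (mu * nu + mu + nu)

# alpha* shrinks when either normalized capacity grows:
samples = [optimal_alpha(0.5, 2.0), optimal_alpha(1.0, 2.0), optimal_alpha(1.0, 4.0)]
```

As a sanity check, μ → 0 gives α* → 1 (everything at the edge), ν → ∞ gives α* → 1/(1 + μ), and μ → ∞ gives α* → 1/(1 + ν), matching the four regimes discussed below.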
The 4 different system regimes that arise from equation (10) are as follows:
the 1 st: communication-limited systems: the edge node and the cloud server have sufficient computing resources, but insufficient communication capacity.
In this case, ν_{i,j} ≫ max{μ_{i,j}, 1}; this occurs when an edge node serves a large number of mobile devices and the capacity of the backhaul link is insufficient. The optimal task offloading ratio then simplifies to:

α*_{i,j} = 1 / (1 + μ_{i,j})     (11)
equation (11) indicates that in this case, the optimal task offload ratio is determined only by the normalized backhaul communication capacity, and is not affected by the normalized cloud computing capacity. In this case, the communication resources are the main bottleneck to reduce the end-to-end delay of each device, so the edge computing power and cloud computing power size are relatively unimportant. Consider the special case when mui,j→ 0, optimal task unload ratioThe number is 1, which indicates that all computing tasks are executed at the edge node, and no task data is uploaded to the cloud server for execution. This is in line with our expectation that when the communication capacity of the backhaul link is insufficientIn extreme cases, tasks are not offloaded to cloud servers because offloading tasks to cloud servers exacerbates network congestion and results in longer transmission delays.
The 2nd: computation-limited system: the edge node and the cloud server have sufficient communication capacity but insufficient computing resources.
In this case, μ_{i,j} ≫ max{ν_{i,j}, 1}, and the optimal task offloading ratio simplifies to:

α*_{i,j} = 1 / (1 + ν_{i,j})     (12)
From this formula it can be seen that the optimal task offloading ratio is determined solely by the normalized cloud computing capacity. Since the backhaul communication capacity is relatively plentiful in this case, the delay caused by communication becomes very small, and the computing delay determines the overall delay of the mobile device. Note that the edge node and the cloud server can here be regarded as a whole, so the task should be split according to the ratio of their computing capacities, ν_{i,j}. More specifically, if the computing capacity of the edge node exceeds that of the cloud server, then ν_{i,j} < 1 and more data should be kept at the edge node; otherwise more data should be offloaded to the cloud. In the special case ν_{i,j} = 1, the data should be divided evenly between the edge node and the cloud server.
The 3rd: edge-dominant system: the computing resources allocated to the mobile device by the edge node far exceed those allocated by the cloud server, i.e. f^e_{i,j} ≫ f^c_{i,j}, so ν_{i,j} → 0. This situation corresponds to a large-scale small-cell network in which the cloud server serves many edge nodes. In this system, the optimal task offloading ratio simplifies to:

α*_{i,j} = 1     (13)
this indicates that the entire task should be performed on the edge node without having to be offloaded to the cloud server. Because offloading task data to the cloud server may create additional transmission delays and result in longer computational delays when the cloud computing power is much less than that of the edge nodes. Even if the normalized backhaul communication capacity is large enough, cloud computing latency can still have a major impact on overall latency. Thus, the entire task need only be offloaded to the edge node.
The 4th: cloud-dominant system: the cloud server has abundant computing resources while the edge node's computing resources are limited, i.e. f^c_{i,j} ≫ f^e_{i,j}, so ν_{i,j} → ∞. Such a system corresponds to a scenario where the cloud server's computing capacity is strong and the edge nodes' is weak. Substituting ν_{i,j} → ∞ into the formula gives:

α*_{i,j} = 1 / (1 + μ_{i,j})     (14)
From this formula it can be seen that the optimal task offloading ratio is again determined solely by the normalized backhaul communication capacity. If the backhaul communication capacity H_j grows or the edge node's computing capacity f^e_{i,j} shrinks, the optimal share kept at the edge decreases. The reason is that when ν_{i,j} → ∞, the delay of executing the task in the cloud is negligible compared with that of executing it at the edge node. Hence, when μ_{i,j} < 1, the overall delay of the mobile device is dominated by the backhaul transmission delay, so a larger proportion of the task data should be processed on the edge side, i.e. α*_{i,j} > 1/2; conversely, when μ_{i,j} > 1, more data should be offloaded to the cloud server for execution.
(3) Computing-resource allocation strategy.
First, substituting the optimal task offloading ratio α*_{i,j} obtained above into the delay expression, the delay simplifies to:

T_{i,j} = T^t_{i,j} + a_{i,j} b_{i,j} / (a_{i,j} + b_{i,j}),  where a_{i,j} = s_{i,j} c_{i,j} / f^e_{i,j} and b_{i,j} = s_{i,j}/H_j + s_{i,j} c_{i,j} / f^c_{i,j}.
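This simplification can be checked numerically: at the optimal split the edge branch and the backhaul-plus-cloud branch finish simultaneously, so the parallel max collapses to a harmonic-mean-like term a·b/(a+b). All numbers below are illustrative assumptions.

```python
# a: delay if the whole task runs at the edge; b: delay if the whole task goes
# over the backhaul and runs in the cloud. Values are assumed for illustration.
s, c = 0.5e6, 1000.0           # task size (bits) and CPU cycles per bit
H, f_e, f_c = 20e6, 5e9, 50e9  # backhaul bit/s, edge and cloud cycles/s

a = s * c / f_e                # all-edge computing delay
b = s / H + s * c / f_c        # backhaul transfer + all-cloud computing delay
alpha_star = b / (a + b)       # split at which both branches finish together

edge_branch = alpha_star * s * c / f_e
cloud_branch = (1 - alpha_star) * (s / H + s * c / f_c)
collapsed = a * b / (a + b)    # the simplified per-task computation delay
```

The two branches are equal at α*, and their common value is exactly a·b/(a+b), which is what the simplified objective uses.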
Problem Q3 can then be rewritten as:
theorem 1: q4 is a convex optimization problem
Proof: it suffices to show that both the objective function and its constraints are convex. Inspection of constraints (7a)-(7d) shows that they are all affine, which establishes their convexity; it remains only to prove that the objective function is also convex.
All leading principal minors of the Hessian matrix W can be obtained by simple calculation:
according to linear algebra theory, when the front principal and minor components of a matrix are all positive definite, the matrix is a positive definite matrix. Thereby obtaining the result that,is based on variablesAnda convex function of (a). And the objective functionIs the sum of a series of convex functions, from which it can be concluded that the objective function is also a convex function. Theorem 1 proves the syndrome.
By Theorem 1, Q4 is convex, so the KKT conditions are both necessary and sufficient for optimality; the Lagrangian method is therefore used to find the optimal allocation of computing resources in the cloud-edge collaboration system.
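The Lagrangian step can be illustrated on a simplified stand-in for Q4: minimizing a sum of workload-over-resource terms under a total-resource budget, for which the KKT stationarity condition gives a closed-form square-root allocation. This is a sketch of the method under that simplifying assumption, not the patent's exact allocation formula (which couples edge and cloud resources).

```python
import math

# Subproblem: minimize sum_i w_i / f_i  s.t.  sum_i f_i <= F, f_i > 0.
# Stationarity of L = sum_i w_i/f_i + lam*(sum_i f_i - F) gives
# f_i = sqrt(w_i / lam); enforcing the budget yields
# f_i* = F * sqrt(w_i) / sum_k sqrt(w_k).

def allocate(workloads, F):
    """Closed-form optimal allocation from the KKT stationarity condition."""
    roots = [math.sqrt(w) for w in workloads]
    total = sum(roots)
    return [F * r / total for r in roots]

def objective(workloads, f):
    """Total delay-like cost: sum of workload over allocated resource."""
    return sum(w / fi for w, fi in zip(workloads, f))

w = [4.0, 1.0, 9.0]            # per-task workloads (assumed, e.g. s*c products)
F = 12.0                       # total resource budget (assumed)
f_opt = allocate(w, F)
f_uniform = [F / len(w)] * len(w)
```

Heavier tasks receive proportionally more resources (by the square root of their workload), and the resulting cost never exceeds that of a uniform split.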
Example 2
The invention uses the MATLAB simulation tool to verify the proposed cloud-edge cooperative task offloading and resource allocation (Cloud-Edge-TORD) scheme under the cloud-edge collaboration architecture. To demonstrate its superiority, the proposed scheme is compared with three other schemes. Scheme 1, Only Edge: all tasks are offloaded to, and processed entirely by, the MEC server. Scheme 2, Only Cloud: all tasks are offloaded to, and processed entirely by, the cloud server. Scheme 3, Simple Fixed Cloud and Edge: no task allocation strategy is applied; each task is simply split in half, one half executed by the MEC server and the other by the cloud server.
The parameters of the embodiment are set as follows:
In the simulation experiment, each MEC server is assumed to serve a fixed set of 25 mobile devices with a coverage radius of 500 meters. The task data sizes and the computation cycles needed for execution are uniformly distributed, with s_{i,j} ∈ [0.2, 1] Mbit and c_{i,j} ∈ [500, 2000] cycles/bit. The computing capacity of the MEC server connected to each sc is F^e_j, and the computing capacity of the cloud server is F^c ∈ [200, 600] GHz. The transmission bandwidth of the mobile devices is B = 20 MHz, the channel gain and noise power are g_{i,j} = 10^{-5} and σ² = 10^{-9} W, the optical-fiber transmission rate is c = 1 Gbps, and the backhaul link capacity is H_j ∈ [5, 50] Mbps.
The performance of this example was analyzed as follows:
As shown in fig. 5, which compares the delay of the 4 schemes as the number of mobile devices changes, the delay of the Only Edge scheme always remains stable within a small range: each MEC server serves a fixed set of 25 mobile devices, so as the number of mobile devices grows, the number of MEC servers grows proportionally, the computing resources allocated to each device stay roughly constant, and the delay stays within a narrow band. When the number of mobile devices is small, the Only Cloud scheme outperforms the Only Edge and Simple Fixed Cloud and Edge schemes: with few mobile devices, few edge nodes are required, so the cloud computing resources allocated to each mobile device always exceed the edge computing resources. As the number of edge nodes increases, however, the limited cloud computing resources cause the Only Edge scheme to overtake the Only Cloud scheme. In particular, when the number of mobile devices is very large, Only Edge even outperforms Simple Fixed Cloud and Edge, indicating that more computing tasks should be offloaded to the edge side rather than the cloud side. Under all conditions, the proposed Cloud-Edge-TORD scheme performs best: by executing the cloud-edge cooperation strategy, the proposed task offloading and resource allocation strategy optimally apportions the computing capacities of the edge nodes and the cloud server, minimizing the average system delay.
Average system delay comparison of the different schemes: fig. 6 is a delay comparison graph of the 4 schemes as the computing capacity of the cloud server is varied. When the computing capacity of the cloud server increases, the delay of the system under the Only Cloud scheme drops sharply; in this case, offloading tasks to the cloud server is a better choice than offloading them to the MEC server and can eliminate a large amount of delay. Conversely, as the computing capacity of the cloud server shrinks, the Only Edge scheme outperforms the Only Cloud scheme. As in fig. 5, the Cloud-Edge-TORD scheme proposed by the invention remains optimal among all schemes.
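A miniature re-creation of this comparison can be scripted. The delay model follows the expressions of step 3 and the scheme names mirror the embodiment, while the upload rate and per-device resource allocations are assumed values, not the MATLAB setup.

```python
import random

random.seed(1)

def task_delay(alpha, s, c, r, H, f_e, f_c):
    """Per-task delay: upload, then edge compute in parallel with backhaul+cloud."""
    T_t = s / r
    T_e = alpha * s * c / f_e
    T_bc = (1 - alpha) * (s / H + s * c / f_c)
    return T_t + max(T_e, T_bc)

def average_delay(policy, tasks):
    total = 0.0
    for s, c, r, H, f_e, f_c in tasks:
        a = s * c / f_e             # all-edge computing delay
        b = s / H + s * c / f_c     # backhaul + all-cloud delay
        alpha = {"edge": 1.0, "cloud": 0.0, "fixed": 0.5,
                 "optimal": b / (a + b)}[policy]
        total += task_delay(alpha, s, c, r, H, f_e, f_c)
    return total / len(tasks)

tasks = [(random.uniform(0.2e6, 1e6),     # s_ij in bits (embodiment range)
          random.uniform(500, 2000),      # c_ij in cycles/bit (embodiment range)
          50e6,                           # upload rate in bit/s (assumed)
          random.uniform(5e6, 50e6),      # backhaul H_j in bit/s (embodiment range)
          2e9,                            # edge cycles/s per device (assumed)
          8e9)                            # cloud cycles/s per device (assumed)
         for _ in range(200)]

results = {p: average_delay(p, tasks)
           for p in ("edge", "cloud", "fixed", "optimal")}
```

Because the per-task optimal split minimizes the parallel max for every task, its average delay can never exceed that of the Only Edge, Only Cloud, or fixed-half policies, mirroring the figures.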
Claims (10)
1. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation is characterized by comprising: computing all delays of the devices from the four different delays in the cloud-edge cooperative system, and establishing a mathematical model for the problem of minimizing the total delay of all mobile devices; considering the task offloading schemes in four different system scenarios, and solving the optimal allocation of computing resources in the cloud-edge cooperative system; the method comprises the following steps:
step 1, establishing a cloud edge cooperative system;
step 2, establishing an MEC cache model;
step 3, analyzing time delay existing in the cloud edge cooperative system;
step 4, aiming at the time delay analysis result, constructing a problem model;
step 5, solving the constructed problem;
and 6, providing a solution to the constructed problem.
2. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 1, which is characterized in that: step 1, establishing a cloud-edge coordination system, wherein the cloud-edge coordination system is composed of a central cloud server and M SCeNBs, represented by the set M = {1, 2, ..., M}; each SCeNB deploys an MEC server, which uses its limited resources for data processing, caching and storage; each combination of an SCeNB and an MEC server is called an edge node; within the coverage area of the jth SCeNB there is a set χ_j of mobile devices, each having a computing task with a different delay requirement; it is assumed that each user is connected to one base station, with the specific connection relation determined by the user communication strategy; each mobile device is connected to its base station through a wireless channel, and the edge nodes transmit data to the cloud server through different backhaul links; in the system, each computing task is assumed to be processed on an edge node and the cloud server; following the model in [1], all tasks are assumed to be of the same type and to arrive at the same time, with q_{i,j} = (s_{i,j}, c_{i,j}) representing the computing task generated by the ith mobile device connected to the jth edge node, where s_{i,j} represents the size of the computing task and c_{i,j} the CPU cycles required to compute it; N represents the total number of tasks requested by the system within a certain period; the computing capacities of the MEC server in the jth edge node and of the cloud server are defined as F^e_j and F^c; the computing resources of each MEC server and of the cloud server are allocated to the mobile devices by virtual machine technology.
3. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation according to claim 1, characterized in that step 2, establishing the MEC cache model, comprises: for each task, defining the cache vector of the MEC server as Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}]; y_{i,j} = 1 means the MEC server has cached the computation result of the task, and y_{i,j} = 0 means the corresponding task result is not cached; when y_{i,j} = 1, the MEC server transmits the computation result directly to the mobile device without computing; since the transmission power of the SCeNB is greater than that of the mobile device and the data volume of the computation result is smaller than that of the task itself, the transmission delay on the wireless downlink is ignored and the transmission delay required by the MEC caching scheme is taken to be 0; the MEC cache model returns the results of frequently requested tasks directly to the mobile devices, thereby reducing both delay and energy consumption.
4. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 1, characterized in that step 3, analyzing the time delay existing in the cloud edge coordination system, includes: in the cloud-edge-end task allocation model, assuming that each mobile device, owing to its limited computing capacity and battery power, does not compute locally and first sends a task request, whereupon it is judged whether the result of the task exists in the cache; if yes, the task result is returned directly; if not, the mobile device uploads the task to the edge node; and the corresponding edge node determines whether the task is processed by the edge node alone or cooperatively by the edge node and the cloud server.
5. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 4, characterized in that: if the task result does not exist in the cache, the mobile device uploads the task to the edge node, and the edge node is also responsible for determining the cloud-edge offloading proportion of each task.
6. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 1, which is characterized in that: step 3, analyzing the time delay existing in the cloud edge coordination system, specifically comprising the following steps:
step (1) the mobile device sends a corresponding task request to the connected edge node, and the task request is used for judging whether the data is wanted by the edge node, and the edge node judges whether the result of the task exists in a cache; if yes, directly returning a task result, and finishing the task processing; otherwise, continuing the next step;
the mobile equipment directly uploads the whole time delay sensitive task to a connected edge node through a wireless channel;
step (3) the MEC server positioned at each edge node divides the received task into two parts, wherein one part is left on the MEC server, and the other part is unloaded to the cloud server;
step (4), the MEC server distributes the available computing resources to each computing task, and simultaneously uploads partial data to the cloud server through a backhaul link;
step (5), the cloud server allocates the computing resources to corresponding computing tasks, so that parallel computing is achieved;
and (6) collecting the calculation result by each edge node, caching the calculation result through a task caching strategy, and then returning the calculation result to each mobile device.
7. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 6, characterized in that the delays existing in the calculation process comprise the following:
a. transmission delay of the mobile device: according to the Shannon formula, MT_i's upload rate r_{i,j} is expressed as:
r_{i,j} = B log_2(1 + p_i g_{i,j} / (σ² + I_{i,j}))
where B denotes the channel bandwidth, p_i MT_i's transmission power, σ² the noise power of the mobile device, I_{i,j} the intra-cell interference power, and g_{i,j} the channel gain of the communication;
each sc communicates with its MEC server through optical fiber, the transmission rate c between the two is far greater than MT_i's upload rate r_{i,j}, and the distance between them is very small, so the communication delay between the sc and the MEC server is neglected; T^t_{i,j} denotes the delay incurred when the mobile device uploads task q_i to sc_j:
T^t_{i,j} = s_{i,j} / r_{i,j}
where r_{i,j} is MT_i's upload rate and s_{i,j} represents the size of the computing task;
b. computing delay of the edge node: after the edge node successfully receives the complete computing task sent by the mobile device, the MEC server immediately executes the offloading strategy and divides each computing task into two parts, one executed by the MEC server and the other by the cloud server; it is assumed that each computing task can be split arbitrarily without considering task content, corresponding to scenarios such as video compression and speech recognition [2]; define α_{i,j} ∈ [0,1] as the division ratio of the computing task, with α_{i,j} representing the proportion of the task data executed at the MEC server; f^e_{i,j} denotes the computing resources allocated by the jth edge node to the ith mobile device, and the delay incurred by the task executed at the edge node is expressed as:
T^e_{i,j} = α_{i,j} s_{i,j} c_{i,j} / f^e_{i,j}
where c_{i,j} indicates the CPU cycles required to compute this task;
c. transmission delay of the edge node: for each edge node, the communication module (a transceiver) is separate from the computing module (CPU/GPU);
in the edge node, computation of the task and transmission of the task are executed in parallel; all edge nodes are connected to the cloud server through different backhaul links; since an optimal cooperative strategy of edge computing and cloud computing is proposed, the resource scheduling strategy and routing algorithm are assumed to be determined; H_j denotes the backhaul communication capacity of each device associated with the jth edge node; the average backhaul transmission delay is proportional to the size of the transmitted data and is expressed as:
T^b_{i,j} = (1 - α_{i,j}) s_{i,j} / H_j
where 1/H_j represents the time required to transmit data of 1-bit size over the backhaul link, s_{i,j} represents the size of the computing task, and α_{i,j} the proportion of the task data executed at the MEC server;
d. computing delay of the cloud server: when the cloud server successfully receives the task data sent from the edge node, it allocates available computing resources to each task to achieve parallel processing; (1 - α_{i,j}) s_{i,j} c_{i,j} represents the number of CPU cycles, i.e. computing resources, required to execute the tasks offloaded to the cloud server; f^c_{i,j} represents the cloud computing resources allocated to the ith mobile device served by the jth edge node; the computing delay of the cloud server is expressed as:
T^c_{i,j} = (1 - α_{i,j}) s_{i,j} c_{i,j} / f^c_{i,j}
where c_{i,j} indicates the CPU cycles required to compute this task.
8. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 1, characterized in that the different time delays existing in the cloud edge coordination system include: the transmission delay of the mobile device, the computing delay of the edge node, the transmission delay of the edge node and the computing delay of the cloud server; the total delay generated by each device is obtained under the following assumptions:
assumption 1: the offloading of a task depends on the specific parameters of each task, so the MEC server can offload a task only after receiving its data;
assumption 2: task computation depends on a specific data structure and on the correlation between adjacent data, and the cloud server does not start processing a task until the transmission between the edge node and the cloud server has finished;
based on the above assumptions, the total latency incurred by the ith mobile device served by the jth edge node can be expressed as:
in the above formula:representing a mobile device to task qiUpload to scjThe time delay that is generated is,representing the time delay incurred in executing the task at the edge node,representing the latency incurred by the edge node to upload the task to the cloud server,representing the time delay generated by the processing of the computing task at the cloud server;
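The total-delay expression above (upload delay plus the slower of the two parallel paths) can be sketched as follows (hypothetical names; a sketch under the stated assumptions, not part of the claims):

```python
def total_delay(t_up, alpha, s, c, f_edge, f_cloud, H):
    """Total delay of one device: upload delay plus the maximum of the edge
    execution path and the backhaul-plus-cloud path, since per assumption 2
    the cloud starts only after its share of the data has fully arrived."""
    t_edge = alpha * s * c / f_edge                           # T^e
    t_cloud_path = (1.0 - alpha) * (s / H + s * c / f_cloud)  # T^{t,c} + T^c
    return t_up + max(t_edge, t_cloud_path)
```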
the minimization of the total delay of all mobile devices is realized through task offloading and computing resource allocation under limited resources; the goal is expressed as:
$$\mathrm{Q1:}\ \min_{\{\alpha_{i,j}\},\,\{f^{e}_{i,j}\},\,\{f^{c}_{i,j}\}}\ \sum_{j}\sum_{i}\left(1-y_{i,j}\right)T_{i,j}$$
$$\text{s.t. (7a)}\ \sum_{i} f^{e}_{i,j} \le F^{e}_{j},\ \forall j;\quad \text{(7b)}\ \sum_{j}\sum_{i} f^{c}_{i,j} \le F^{c};\quad \text{(7c)}\ \alpha_{i,j}\in[0,1];\quad \text{(7d)}\ y_{i,j}\in\{0,1\}$$
where $y_{i,j}$ represents the caching state of the task;
constraints (7a) and (7b) state that neither the edge computing resources nor the cloud computing resources allocated to the mobile devices may exceed their respective maximum amounts $F^{e}_{j}$ and $F^{c}$; the optimization variables include the offloading ratios of the tasks $\{\alpha_{i,j}\}$ and the allocation of computing resources $\{f^{e}_{i,j}\}$, $\{f^{c}_{i,j}\}$.
9. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 1, characterized in that decomposing the constructed problem in step 5 comprises:
decomposing Q1 into two parts:
task offloading and computing resource allocation affect only the solution of the latter part, so the problem is divided accordingly:
10. The LTE electric power wireless private network task unloading and resource allocation method based on cloud edge coordination according to claim 1, characterized in that the solution proposed in step 6 for the constructed problem comprises:
(1) task caching strategy: before the task offloading decision is made, problem Q2 is further simplified, and the task caching strategy is used to solve for the caching vector $Y=[y_{1,1},y_{1,2},\ldots,y_{i,j}]$; the algorithm flow is shown in fig. 4:
the number of times the mobile devices request task $q_i$ is assumed to follow the Poisson distribution $P(C=c)=\frac{\lambda^{c}}{c!}e^{-\lambda}$, where $C$ represents the number of requests for the task and $\lambda$ represents the average arrival rate of the task per unit time; at initialization the vector group $Y$ is set to empty; for each task, the task type is determined according to the Poisson distribution, and it is judged whether the task exists in the task cache table; if so, the MEC server directly returns the execution result to the mobile device and $y_{i,j}$ is set to 1; if not, it is further judged whether the task exists in the task history table; if it is present in that table, $y_{i,j}$ is set to 1, the execution result is returned directly, the task is stored into the task cache table, and the task history table is updated; if neither table has recorded task $q_i$, then $y_{i,j}$ is set to 0; the subtask space is traversed in this way, finally obtaining the vector group $Y=[y_{1,1},y_{1,2},\ldots,y_{i,j}]$;
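The cache-table lookup described above can be sketched as follows (a simplified sketch; the table representations and names are hypothetical, and the Poisson request generation is omitted):

```python
def build_cache_vector(tasks, cache_table, history_table):
    """For each requested task: y = 1 if its result is already in the cache
    table, or if it appears in the history table (in which case it is
    promoted into the cache table); otherwise y = 0."""
    y = []
    for task in tasks:
        if task in cache_table:        # hit: result returned directly
            y.append(1)
        elif task in history_table:    # seen before: cache it and return
            y.append(1)
            cache_table.add(task)
        else:                          # unknown task: must be executed
            y.append(0)
    return y
```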
using the vector group $Y=[y_{1,1},y_{1,2},\ldots,y_{i,j}]$, Q2 is simplified to obtain:
(2) task offloading strategy: the optimal task offloading ratio $\alpha^{*}_{i,j}$ is determined while the values of the computing resource allocations $f^{e}_{i,j}$ and $f^{c}_{i,j}$ are kept unchanged during the analysis;
the optimal division ratio $\alpha^{*}_{i,j}$ is determined by analyzing the monotonicity between the computing delays and the task allocation ratio $\alpha_{i,j}$; first, from the formula $T^{e}_{i,j}=\frac{\alpha_{i,j}s_{i,j}c_{i,j}}{f^{e}_{i,j}}$ it is deduced that $T^{e}_{i,j}$ is monotonically increasing, growing as $\alpha_{i,j}$ increases; when $\alpha_{i,j}\in[0,1]$, its value range is $\left[0,\frac{s_{i,j}c_{i,j}}{f^{e}_{i,j}}\right]$; secondly, from the formula $T^{t,c}_{i,j}+T^{c}_{i,j}=(1-\alpha_{i,j})\left(\frac{s_{i,j}}{H_j}+\frac{s_{i,j}c_{i,j}}{f^{c}_{i,j}}\right)$ it is deduced that this term is monotonically decreasing, shrinking as $\alpha_{i,j}$ increases, with value range $\left[0,\frac{s_{i,j}}{H_j}+\frac{s_{i,j}c_{i,j}}{f^{c}_{i,j}}\right]$; observing the formula $T_{i,j}=T^{t}_{i,j}+\max\left(T^{e}_{i,j},\,T^{t,c}_{i,j}+T^{c}_{i,j}\right)$ in connection with the previous discussion, it is found that $T_{i,j}$ first decreases as $\alpha_{i,j}$ increases and then increases with it; when $T^{e}_{i,j}=T^{t,c}_{i,j}+T^{c}_{i,j}$, the minimum value of $T_{i,j}$ is obtained: $T^{\min}_{i,j}=T^{t}_{i,j}+\frac{A_{i,j}B_{i,j}}{A_{i,j}+B_{i,j}}$, where $A_{i,j}=\frac{s_{i,j}c_{i,j}}{f^{e}_{i,j}}$ and $B_{i,j}=\frac{s_{i,j}}{H_j}+\frac{s_{i,j}c_{i,j}}{f^{c}_{i,j}}$;
for each mobile device, two important parameters are defined, including:
(2.1) the normalized backhaul communication capacity, defined as the ratio of the backhaul communication capacity to the edge computing capability, i.e. $\mu_{i,j}=\frac{H_j c_{i,j}}{f^{e}_{i,j}}$;
(2.2) the normalized cloud computing capability, defined as the ratio of the cloud computing capability to the edge computing capability, i.e. $\nu_{i,j}=\frac{f^{c}_{i,j}}{f^{e}_{i,j}}$;
based on these definitions, the optimal task offloading strategy is obtained; the optimal task offloading ratio in the cloud edge cooperative system is expressed as:
$$\alpha^{*}_{i,j}=\frac{\mu_{i,j}+\nu_{i,j}}{\mu_{i,j}+\nu_{i,j}+\mu_{i,j}\nu_{i,j}} \tag{10}$$
in the above formula, $\mu_{i,j}$ denotes the ratio of the backhaul communication capacity to the edge computing capability, and $\nu_{i,j}$ denotes the ratio of the cloud computing capability to the edge computing capability;
formula (10) takes different simplified forms in the following systems:
the 1st type: the communication-limited system: the edge node and the cloud server have sufficient computing resources, but the communication capacity is insufficient;
in this case $\mu_{i,j}\ll\nu_{i,j}$, which occurs when an edge node connects a large number of mobile devices and the capacity of the backhaul link is insufficient; the optimal task offloading ratio simplifies to:
$$\alpha^{*}_{i,j}=\frac{1}{1+\mu_{i,j}} \tag{11}$$
equation (11) indicates that in this case the optimal task offloading ratio is determined only by the normalized backhaul communication capacity and is not affected by the normalized cloud computing capability; here the communication resources become the main bottleneck of the end-to-end delay of each device; when $\mu_{i,j}\to 0$, the optimal task offloading ratio $\alpha^{*}_{i,j}\to 1$ and all computing tasks are executed at the edge nodes;
the 2nd type: the computation-limited system: the edge node and the cloud server have sufficient communication capacity, but insufficient computing resources;
in this case $\mu_{i,j}\gg\nu_{i,j}$, and the optimal task offloading ratio simplifies to:
$$\alpha^{*}_{i,j}=\frac{1}{1+\nu_{i,j}} \tag{12}$$
the optimal task offloading ratio is determined only by the normalized cloud computing capability; in this case, since the backhaul communication capacity is relatively sufficient, the delay generated by communication becomes small, and the computing delay determines the overall delay of the mobile device; the edge node and the cloud server are considered as a whole, and the task is split in proportion to the ratio of their computing capabilities $\nu_{i,j}$; if the computing capability of the edge node is greater than that of the cloud server, then $\nu_{i,j}<1$ and more data is offloaded to the edge node; otherwise, more data is offloaded to the cloud; in the special case $\nu_{i,j}=1$, the data is evenly distributed between the edge node and the cloud server;
the 3rd type: the edge-dominated system: the computing resources allocated to the mobile device by the edge node far exceed those allocated by the cloud server, i.e. $f^{c}_{i,j}\ll f^{e}_{i,j}$, giving $\nu_{i,j}\to 0$; this case corresponds to a large-scale small cell network in which the cloud server serves many edge nodes; in this system, the optimal task offloading ratio simplifies to:
$$\alpha^{*}_{i,j}=1 \tag{13}$$
indicating that the entire task should be executed on the edge node, so the whole task only needs to be offloaded to the edge node;
the 4th type: the cloud-dominated system: the cloud server has sufficient computing resources while the edge node has limited computing resources, i.e. $f^{c}_{i,j}\gg f^{e}_{i,j}$, giving $\nu_{i,j}\to\infty$; such a system corresponds to a scenario where the computing capability of the cloud server is strong and that of the edge nodes is weak; substituting $\nu_{i,j}\to\infty$ into formula (10) gives:
$$\alpha^{*}_{i,j}=\frac{1}{1+\mu_{i,j}} \tag{14}$$
according to this formula, the optimal task offloading ratio is determined only by the normalized backhaul communication capacity; if the backhaul communication capacity $H_j$ is large or the computing capability $f^{e}_{i,j}$ of the edge node is small, the optimal task offloading ratio decreases accordingly, because when $\nu_{i,j}\to\infty$ the delay generated by executing the task at the cloud is negligible compared with the delay generated by executing it at the edge node; therefore, when $\mu_{i,j}<1$, the overall delay of the mobile device is dominated by the delay generated by the backhaul transmission, so a larger proportion of the task data is offloaded to the edge side for processing, i.e. $\alpha^{*}_{i,j}>\frac{1}{2}$; conversely, when $\mu_{i,j}>1$, more data should be offloaded to the cloud server for execution;
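The four limiting regimes above can be checked numerically against the reconstructed form of formula (10) (an illustrative sketch; the function name is hypothetical):

```python
def optimal_offload_ratio(mu, nu):
    # reconstructed formula (10): alpha* = (mu + nu) / (mu + nu + mu * nu)
    return (mu + nu) / (mu + nu + mu * nu)

# Types 1 and 4 (nu -> infinity): alpha* -> 1/(1 + mu).
a14 = optimal_offload_ratio(0.5, 1e9)
# Type 2 (mu -> infinity): alpha* -> 1/(1 + nu); nu = 1 splits the task evenly.
a2 = optimal_offload_ratio(1e9, 1.0)
# Type 3 (nu -> 0): alpha* -> 1, the whole task stays on the edge node.
a3 = optimal_offload_ratio(0.5, 1e-9)
print(round(a14, 4), round(a2, 4), round(a3, 4))
```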
(3) computing resource allocation strategy: substituting the optimal task offloading ratio $\alpha^{*}_{i,j}$ into the formula, the computing delay is simplified as:
$$T^{\mathrm{comp}}_{i,j}=\frac{A_{i,j}B_{i,j}}{A_{i,j}+B_{i,j}},\quad A_{i,j}=\frac{s_{i,j}c_{i,j}}{f^{e}_{i,j}},\ B_{i,j}=\frac{s_{i,j}}{H_j}+\frac{s_{i,j}c_{i,j}}{f^{c}_{i,j}}$$
problem Q3 is then rewritten as:
$$\mathrm{Q4:}\ \min_{\{f^{e}_{i,j}\},\,\{f^{c}_{i,j}\}}\ \sum_{j}\sum_{i}\left(1-y_{i,j}\right)T^{\mathrm{comp}}_{i,j}$$
s.t.(7a),(7b)
theorem 1: q4 is a convex optimization problem;
Proof: it suffices to prove that the objective function and its constraints are all convex; observing constraints (7a) and (7b), they are affine, which establishes their convexity; the following demonstrates that the objective function is also a convex function:
through mathematical calculation, all the leading principal minors of the Hessian matrix $W$ are obtained:
according to linear algebra theory, when all the leading principal minors of a matrix are positive, the matrix is positive definite; it is thus obtained that $T^{\mathrm{comp}}_{i,j}$ is a convex function of the variables $f^{e}_{i,j}$ and $f^{c}_{i,j}$; the objective function of Q4 is the sum of a series of convex functions, so the objective function is also a convex function; theorem 1 is thereby proved;
according to theorem 1, Q4 satisfies the KKT conditions, and the optimal allocation value of the computing resources in the cloud edge cooperative system is obtained by using Lagrange multipliers:
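The structure of such a Lagrangian solution can be illustrated for the standard form $\min \sum_i w_i/f_i$ subject to $\sum_i f_i \le F$, $f_i>0$ (a generic KKT result given here as a sketch; the patent's exact closed-form expression is not reproduced): stationarity gives $w_i/f_i^{2}=\lambda$ for all $i$, hence $f_i^{*}=F\sqrt{w_i}/\sum_j\sqrt{w_j}$.

```python
from math import sqrt

def kkt_allocation(weights, F):
    """Closed-form KKT allocation for min sum(w_i / f_i) s.t. sum(f_i) <= F:
    stationarity w_i / f_i**2 = lambda for every i, together with the active
    capacity constraint, yields f_i* = F * sqrt(w_i) / sum_j sqrt(w_j)."""
    roots = [sqrt(w) for w in weights]
    total = sum(roots)
    return [F * r / total for r in roots]
```

For example, weights [1, 4] and capacity F = 3 yield the allocation [1.0, 2.0].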
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911365711.8A CN111585916B (en) | 2019-12-26 | 2019-12-26 | LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111585916A true CN111585916A (en) | 2020-08-25 |
CN111585916B CN111585916B (en) | 2023-08-01 |
Family
ID=72124227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911365711.8A Active CN111585916B (en) | 2019-12-26 | 2019-12-26 | LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111585916B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108541027A (en) * | 2018-04-24 | 2018-09-14 | 南京邮电大学 | A kind of communication computing resource method of replacing based on edge cloud network |
CN109684075A (en) * | 2018-11-28 | 2019-04-26 | 深圳供电局有限公司 | A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration |
CN109814951A (en) * | 2019-01-22 | 2019-05-28 | 南京邮电大学 | The combined optimization method of task unloading and resource allocation in mobile edge calculations network |
CN110035410A (en) * | 2019-03-07 | 2019-07-19 | 中南大学 | Federated resource distribution and the method and system of unloading are calculated in a kind of vehicle-mounted edge network of software definition |
Non-Patent Citations (2)
Title |
---|
ZHANG, HAIBO et al.: "Task Offloading and Resource Optimization Based on Mobile Edge Computing in Ultra-Dense Networks", Journal of Electronics & Information Technology (《电子与信息学报》), no. 05, 14 May 2019 (2019-05-14) *
XIE, RENCHAO et al.: "Survey on Mobile Edge Computation Offloading", Journal on Communications (《通信学报》), no. 11, 25 November 2018 (2018-11-25) *
Cited By (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111988805A (en) * | 2020-08-28 | 2020-11-24 | 重庆邮电大学 | End edge cooperation method for reliable time delay guarantee |
CN111988805B (en) * | 2020-08-28 | 2022-03-29 | 重庆邮电大学 | End edge cooperation method for reliable time delay guarantee |
CN112039992A (en) * | 2020-09-01 | 2020-12-04 | 平安资产管理有限责任公司 | Model management method and system based on cloud computing architecture |
CN112118135A (en) * | 2020-09-14 | 2020-12-22 | 南昌市言诺科技有限公司 | Minimum resource configuration method and device for cloud edge cooperative architecture industrial internet platform |
CN112365658A (en) * | 2020-09-21 | 2021-02-12 | 国网江苏省电力有限公司信息通信分公司 | Charging pile resource allocation method based on edge calculation |
CN112217879A (en) * | 2020-09-24 | 2021-01-12 | 江苏方天电力技术有限公司 | Edge computing technology and cloud edge cooperation method based on power distribution Internet of things |
CN112217879B (en) * | 2020-09-24 | 2023-08-01 | 江苏方天电力技术有限公司 | Edge computing technology and cloud edge cooperation method based on power distribution Internet of things |
CN112148492A (en) * | 2020-09-28 | 2020-12-29 | 南京大学 | Service deployment and resource allocation method considering multi-user mobility |
CN112148492B (en) * | 2020-09-28 | 2023-07-28 | 南京大学 | Service deployment and resource allocation method considering multi-user mobility |
CN112256413A (en) * | 2020-10-16 | 2021-01-22 | 国网电子商务有限公司 | Scheduling method and device for edge computing task based on Internet of things |
CN114384866A (en) * | 2020-10-21 | 2022-04-22 | 沈阳中科数控技术股份有限公司 | Data partitioning method based on distributed deep neural network framework |
CN112491957A (en) * | 2020-10-27 | 2021-03-12 | 西安交通大学 | Distributed computing unloading method and system under edge network environment |
CN113114714A (en) * | 2020-11-03 | 2021-07-13 | 吉林大学 | Energy-saving method and system for unloading large-scale tasks to 5G edge server |
CN113114714B (en) * | 2020-11-03 | 2022-03-01 | 吉林大学 | Energy-saving method and system for unloading large-scale tasks to 5G edge server |
CN112512061B (en) * | 2020-11-05 | 2022-11-22 | 上海大学 | Task unloading and assigning method in multi-access edge computing system |
CN112512061A (en) * | 2020-11-05 | 2021-03-16 | 上海大学 | Task unloading and dispatching method in multi-access edge computing system |
CN112468547B (en) * | 2020-11-13 | 2023-04-07 | 广州中国科学院沈阳自动化研究所分所 | Regional-based industrial edge computing task cloud collaborative unloading method |
CN112468547A (en) * | 2020-11-13 | 2021-03-09 | 广州中国科学院沈阳自动化研究所分所 | Regional-based industrial edge computing task cloud collaborative unloading method |
CN112437156A (en) * | 2020-11-23 | 2021-03-02 | 兰州理工大学 | Distributed cooperative caching method based on MEC-D2D |
CN112506656A (en) * | 2020-12-08 | 2021-03-16 | 深圳市国电科技通信有限公司 | Distribution method based on distribution Internet of things computing task |
CN112702401A (en) * | 2020-12-15 | 2021-04-23 | 北京邮电大学 | Multi-task cooperative allocation method and device for power Internet of things |
CN112702401B (en) * | 2020-12-15 | 2022-01-04 | 北京邮电大学 | Multi-task cooperative allocation method and device for power Internet of things |
CN112689303B (en) * | 2020-12-28 | 2022-07-22 | 西安电子科技大学 | Edge cloud cooperative resource joint allocation method, system and application |
CN112689303A (en) * | 2020-12-28 | 2021-04-20 | 西安电子科技大学 | Edge cloud cooperative resource joint allocation method, system and application |
CN112887785B (en) * | 2021-01-13 | 2023-05-02 | 浙江传媒学院 | Time delay optimization method based on remote video superposition interactive calculation |
CN112887785A (en) * | 2021-01-13 | 2021-06-01 | 浙江传媒学院 | Remote video overlapping interactive computing method based on time delay optimization |
CN112749012A (en) * | 2021-01-15 | 2021-05-04 | 北京智芯微电子科技有限公司 | Data processing method, device and system of terminal equipment and storage medium |
CN112650338A (en) * | 2021-01-22 | 2021-04-13 | 褚东花 | Energy-saving and environment-friendly forestry seedling detection system and method based on Internet of things |
CN113015217A (en) * | 2021-02-07 | 2021-06-22 | 重庆邮电大学 | Edge cloud cooperation low-cost online multifunctional business computing unloading method |
CN113015217B (en) * | 2021-02-07 | 2022-05-20 | 重庆邮电大学 | Edge cloud cooperation low-cost online multifunctional business computing unloading method |
CN112996056A (en) * | 2021-03-02 | 2021-06-18 | 国网江苏省电力有限公司信息通信分公司 | Method and device for unloading time delay optimized computing task under cloud edge cooperation |
CN112861371A (en) * | 2021-03-02 | 2021-05-28 | 东南大学 | Steel industry cloud production scheduling method based on edge computing |
CN112989251A (en) * | 2021-03-19 | 2021-06-18 | 浙江传媒学院 | Mobile Web augmented reality 3D model data service method based on cooperative computing |
CN113192322A (en) * | 2021-03-19 | 2021-07-30 | 东北大学 | Expressway traffic flow counting method based on cloud edge cooperation |
CN113128681B (en) * | 2021-04-08 | 2023-05-12 | 天津大学 | Multi-edge equipment-assisted general CNN reasoning acceleration system |
CN113128681A (en) * | 2021-04-08 | 2021-07-16 | 天津大学 | Multi-edge equipment assisted general CNN reasoning acceleration system |
CN113254095A (en) * | 2021-04-25 | 2021-08-13 | 西安电子科技大学 | Task unloading, scheduling and load balancing system and method of cloud edge combined platform |
CN113301151A (en) * | 2021-05-24 | 2021-08-24 | 南京大学 | Low-delay containerized task deployment method and device based on cloud edge cooperation |
CN113301151B (en) * | 2021-05-24 | 2023-01-06 | 南京大学 | Low-delay containerized task deployment method and device based on cloud edge cooperation |
CN113395679A (en) * | 2021-05-25 | 2021-09-14 | 安徽大学 | Resource and task allocation optimization system of unmanned aerial vehicle edge server |
CN113395679B (en) * | 2021-05-25 | 2022-08-05 | 安徽大学 | Resource and task allocation optimization system of unmanned aerial vehicle edge server |
CN113037877A (en) * | 2021-05-26 | 2021-06-25 | 深圳大学 | Optimization method for time-space data and resource scheduling under cloud edge architecture |
CN113315659A (en) * | 2021-05-26 | 2021-08-27 | 江西鑫铂瑞科技有限公司 | Task collaborative planning method and system for intelligent factory |
CN113361113A (en) * | 2021-06-09 | 2021-09-07 | 南京工程学院 | Energy-consumption-adjustable twin data distribution method for high-speed rail bogie |
CN113315669A (en) * | 2021-07-28 | 2021-08-27 | 江苏电力信息技术有限公司 | Cloud edge cooperation-based throughput optimization machine learning inference task deployment method |
CN113592077B (en) * | 2021-08-05 | 2024-04-05 | 哈尔滨工业大学 | Cloud edge DNN collaborative reasoning acceleration method for edge intelligence |
CN113592077A (en) * | 2021-08-05 | 2021-11-02 | 哈尔滨工业大学 | Edge-intelligent cloud-side DNN collaborative reasoning acceleration method |
CN113778685A (en) * | 2021-09-16 | 2021-12-10 | 上海天麦能源科技有限公司 | Unloading method for urban gas pipe network edge computing system |
CN113961264B (en) * | 2021-09-30 | 2024-01-09 | 河海大学 | Intelligent unloading algorithm and system for video monitoring cloud edge cooperation |
CN113961264A (en) * | 2021-09-30 | 2022-01-21 | 河海大学 | Intelligent unloading algorithm and system for video monitoring cloud edge coordination |
CN113961266B (en) * | 2021-10-14 | 2023-08-22 | 湘潭大学 | Task unloading method based on bilateral matching under edge cloud cooperation |
CN113961266A (en) * | 2021-10-14 | 2022-01-21 | 湘潭大学 | Task unloading method based on bilateral matching under edge cloud cooperation |
CN114051266B (en) * | 2021-11-08 | 2024-01-12 | 首都师范大学 | Wireless body area network task unloading method based on mobile cloud-edge calculation |
CN114051266A (en) * | 2021-11-08 | 2022-02-15 | 首都师范大学 | Wireless body area network task unloading method based on mobile cloud-edge computing |
CN114301907A (en) * | 2021-11-18 | 2022-04-08 | 北京邮电大学 | Service processing method, system and device in cloud computing network and electronic equipment |
CN114143355A (en) * | 2021-12-08 | 2022-03-04 | 华北电力大学 | Low-delay safety cloud side end cooperation method for power internet of things |
CN114143355B (en) * | 2021-12-08 | 2022-08-30 | 华北电力大学 | Low-delay safety cloud side end cooperation method for power internet of things |
CN115102974A (en) * | 2021-12-08 | 2022-09-23 | 湘潭大学 | Cooperative content caching method based on bilateral matching game |
CN114928607B (en) * | 2022-03-18 | 2023-08-04 | 南京邮电大学 | Collaborative task unloading method for polygonal access edge calculation |
CN114928607A (en) * | 2022-03-18 | 2022-08-19 | 南京邮电大学 | Collaborative task unloading method for multilateral access edge calculation |
CN114928653B (en) * | 2022-04-19 | 2024-02-06 | 西北工业大学 | Data processing method and device for crowd sensing |
CN114928653A (en) * | 2022-04-19 | 2022-08-19 | 西北工业大学 | Data processing method and device for crowd sensing |
CN114945025A (en) * | 2022-04-25 | 2022-08-26 | 国网经济技术研究院有限公司 | Price-driven just-game unloading method and system oriented to cloud-edge cooperation in power grid |
CN114945025B (en) * | 2022-04-25 | 2023-09-15 | 国网经济技术研究院有限公司 | Price-driven positive and game unloading method and system oriented to cloud-edge coordination in power grid |
CN114841952A (en) * | 2022-04-28 | 2022-08-02 | 华南理工大学 | Cloud-edge cooperative detection system and detection method for retinopathy of prematurity |
CN114841952B (en) * | 2022-04-28 | 2024-05-03 | 华南理工大学 | Cloud-edge cooperative retinopathy of prematurity detection system and detection method |
CN114844900B (en) * | 2022-05-05 | 2022-12-13 | 中南大学 | Edge cloud resource cooperation method based on uncertain demand |
CN114844900A (en) * | 2022-05-05 | 2022-08-02 | 中南大学 | Edge cloud resource cooperation method based on uncertain demand |
CN114637608A (en) * | 2022-05-17 | 2022-06-17 | 之江实验室 | Calculation task allocation and updating method, terminal and network equipment |
CN114745389B (en) * | 2022-05-19 | 2023-02-24 | 电子科技大学 | Computing offload method for mobile edge computing system |
CN114745389A (en) * | 2022-05-19 | 2022-07-12 | 电子科技大学 | Computing offloading method for mobile edge computing system |
CN115002113A (en) * | 2022-05-26 | 2022-09-02 | 南京邮电大学 | Mobile base station edge computing power resource scheduling method, system and electronic equipment |
CN115002113B (en) * | 2022-05-26 | 2023-08-01 | 南京邮电大学 | Mobile base station edge computing power resource scheduling method, system and electronic equipment |
CN115225675A (en) * | 2022-07-18 | 2022-10-21 | 国网信息通信产业集团有限公司 | Charging station intelligent operation and maintenance system based on edge calculation |
CN115297013B (en) * | 2022-08-04 | 2023-11-28 | 重庆大学 | Task unloading and service cache joint optimization method based on edge collaboration |
CN115297013A (en) * | 2022-08-04 | 2022-11-04 | 重庆大学 | Task unloading and service cache joint optimization method based on edge cooperation |
CN116016522A (en) * | 2023-02-13 | 2023-04-25 | 广东电网有限责任公司中山供电局 | Cloud edge end collaborative new energy terminal monitoring architecture |
CN117240631A (en) * | 2023-11-15 | 2023-12-15 | 成都超算中心运营管理有限公司 | Method and system for connecting heterogeneous industrial equipment with cloud platform based on message middleware |
Also Published As
Publication number | Publication date |
---|---|
CN111585916B (en) | 2023-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111585916B (en) | LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation | |
Guo et al. | Computation offloading for multi-access mobile edge computing in ultra-dense networks | |
CN111447619B (en) | Joint task unloading and resource allocation method in mobile edge computing network | |
Lee et al. | An online secretary framework for fog network formation with minimal latency | |
CN107766135B (en) | Task allocation method based on particle swarm optimization and simulated annealing optimization in moving cloud | |
US20200162574A1 (en) | System And Method For Joint Dynamic Forwarding And Caching In Content Distribution Networks | |
Wu et al. | An efficient offloading algorithm based on support vector machine for mobile edge computing in vehicular networks | |
Chamola et al. | An optimal delay aware task assignment scheme for wireless SDN networked edge cloudlets | |
Van Le et al. | Quality of service aware computation offloading in an ad-hoc mobile cloud | |
Zhao et al. | Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems | |
WO2023039965A1 (en) | Cloud-edge computing network computational resource balancing and scheduling method for traffic grooming, and system | |
Wang et al. | Dynamic offloading scheduling scheme for MEC-enabled vehicular networks | |
Liu et al. | Multi-agent deep reinforcement learning for end—edge orchestrated resource allocation in industrial wireless networks | |
Wu et al. | A mobile edge computing-based applications execution framework for Internet of Vehicles | |
Wang et al. | Task allocation mechanism of power internet of things based on cooperative edge computing | |
Li et al. | Joint computation offloading and service caching for MEC in multi-access networks | |
Chen et al. | DDPG-based computation offloading and service caching in mobile edge computing | |
Krijestorac et al. | Hybrid vehicular and cloud distributed computing: A case for cooperative perception | |
Tayade et al. | Delay constrained energy optimization for edge cloud offloading | |
Huda et al. | Deep reinforcement learning-based computation offloading in uav swarm-enabled edge computing for surveillance applications | |
Wu et al. | A deep reinforcement learning approach for collaborative mobile edge computing | |
He et al. | An offloading scheduling strategy with minimized power overhead for internet of vehicles based on mobile edge computing | |
Galanopoulos et al. | Optimizing data analytics in energy constrained IoT networks | |
Chen et al. | Timeliness analysis of service-driven collaborative mobile edge computing in UAV swarm | |
CN114928611B (en) | Energy-saving computation offloading optimization method for Internet of Vehicles based on the IEEE 802.11p protocol |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||