CN112235387B - Multi-node cooperative computation offloading method based on energy consumption minimization - Google Patents


Info

Publication number
CN112235387B
Authority
CN
China
Prior art keywords
node
energy consumption
edge
task
cloud
Prior art date
Legal status
Active
Application number
CN202011077355.2A
Other languages
Chinese (zh)
Other versions
CN112235387A (en)
Inventor
韩东升
刘语
Current Assignee
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date
Filing date
Publication date
Application filed by North China Electric Power University
Priority to CN202011077355.2A
Publication of CN112235387A
Application granted
Publication of CN112235387B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/70: Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a multi-node cooperative computation offloading method based on energy consumption minimization. Taking the minimization of network energy consumption as the objective function and comprehensively considering network delay and QoS requirements, the optimization is formulated as an integer linear programming problem and solved with a branch-and-bound method. Simulation analysis shows that, compared with conventional computation offloading methods, the proposed multi-node cooperative method effectively reduces network energy consumption while allowing a larger data volume to be executed. The method can be applied to smart-home scenarios to realize green communication: an Internet-of-Things (IoT) local user side partitions its computing task and offloads the parts in parallel to multiple MEC nodes or to the cloud.

Description

Multi-node cooperative computation offloading method based on energy consumption minimization
Technical Field
The invention belongs to the field of cloud computing in communication networks, and in particular relates to a multi-node cooperative computation offloading method based on energy consumption minimization.
Background
With the development of Internet of Things (IoT) technology in recent years, devices in an IoT network have sensing and communication capabilities, and the network client can extend to almost any object in daily life to exchange information. IoT technology is applied in many aspects of industrial production and daily life, with application scenarios including the smart home, smart industry and smart city, all of which aim at optimizing transmission and network performance. This invention mainly considers the smart-home scenario. Because the IoT local user side in a smart home can be any object, the user data in the IoT is highly diverse, and smart appliances are expected to process task data quickly and efficiently. Therefore, for users with large data volumes or delay-sensitive tasks, a faster, more efficient and safer task processing mode is needed. The traditional single-cloud model cannot meet these requirements, and for this reason the concept of mobile edge computing (MEC) was proposed on the basis of cloud computing. Edge computing is a new computing paradigm that provides intelligent services at the network edge, close to the objects or data sources; edge nodes are widely distributed, closer to the user side, and can be installed on edge servers such as vehicles, thereby meeting the connection requirements of different users. By combining MEC, data tasks can be processed cooperatively between IoT local nodes and MEC nodes, so that IoT local user data can be offloaded to nearby MEC servers. This alleviates the limited computing capability of the IoT local user side in the smart-home scenario and shares the user's computing load. However, because the computing capability of a single MEC node is also limited, a cooperation mode among multiple MEC nodes is needed when facing computing tasks with large data volumes.
The IoT local user side can offload computing tasks to edge nodes, and the offloading methods can be divided into full offloading and partial offloading. Full offloading offloads the whole computing task to a certain edge node for execution. Document 1 (J. Liu, Y. Mao, J. Zhang, and K. B. Letaief, "Delay-optimal computation task scheduling for mobile-edge computing systems," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, Spain, Jul. 2016, pp. 1451-1455) uses a one-dimensional search algorithm that minimizes the execution delay while jointly considering the application buffer queuing state and the available processing capacity. However, with full offloading the computing task is entirely handed to an edge node, whose computing power may be insufficient, and a large transmission delay may be incurred. For this reason the partial offloading scheme was proposed, in which part of the computing task is executed locally and the rest is offloaded to an edge node; its details are described in document 2 (Z. Ning, P. Dong, X. Kong and F. Xia, "A Cooperative Partial Computation Offloading Scheme for Mobile Edge Computing Enabled Internet of Things," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4804-4814, Jun. 2019). In a partial offloading scheme, the placement of the data task must be decided. Document 3 (L. Yang, J. Cao, H. Cheng, and Y. Ji, "Multi-user computation partitioning for latency sensitive mobile cloud applications," IEEE Trans. Comput., vol. 64, no. 8, pp. 2253-2266, Aug. 2015) proposes the concept of task partitioning, whose purpose is to determine which module to offload and where to execute it, i.e., locally or remotely at edge and cloud nodes. Document 4 (Y. Zhao, S. Zhou, T. Zhao, and Z. Niu, "Energy-efficient task offloading for multiuser mobile cloud computing," in Proc. IEEE/CIC Int. Conf. Commun. China (ICCC), Shenzhen, China, Nov. 2015, pp. 1-5) converts the partial offloading problem into a nonlinearly constrained problem and solves it with linear programming to achieve the optimization objective. Owing to the diversity of network data, data of different sizes are generated, so resource limitation becomes a key issue in the task offloading process. This problem is studied intensively in document 5 (Zhao, Resource-limited mobile edge computing systems [D], Beijing University of Posts and Telecommunications, 2019), document 6 (O. Munoz, A. Pascual-Iserte, and J. Vidal, "Optimization of radio and computational resources for energy efficiency in latency-constrained application offloading," IEEE Trans. Veh. Technol., vol. 64, no. 10, pp. 4738-4755, Oct. 2015) and document 7 (C. You and K. Huang, "Multiuser resource allocation for mobile-edge computation offloading," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Washington, DC, USA, Dec. 2016, pp. 1-6). Document 5 addresses resource limitation from the perspective of network capacity and data allocation, selecting a suitable processing position for the data and thereby ensuring that more data tasks are executed smoothly. Document 6 adopts a partial offloading method that segments the data task and transmits the segments to edge and cloud nodes for execution, thus mitigating resource limitation. Document 7 proposes an optimal resource allocation strategy for handling the waiting and ordering of tasks in a TDMA system where the number of nodes and the processing capacity are limited, ensuring resource processing efficiency.
When the computing task of a user is large, a single MEC node still cannot meet the processing requirement even with partial offloading, so multiple nodes need to be selected for cooperative processing. A multi-node cooperation method is therefore proposed in document 8 (W. Fan, Y. Liu, B. Tang, F. Wu and Z. Wang, "Computation Offloading Based on Cooperations of Mobile Edge Computing-Enabled Base Stations," IEEE Access, vol. 6, pp. 22622-22633, 2018): when the user side has a large computing task and one MEC node cannot carry all of the data, neighbouring MEC nodes are selected to share the computing load of the target node, and an algorithm based on the interior point method and a logarithmic barrier function is designed to optimize the energy consumption of the multi-node cooperative system. Multi-node cooperation mainly addresses the insufficient computing capability of a single node. Document 9 (K. Habak, M. Ammar, K. A. Harras, and E. Zegura, "Femto clouds: Leveraging mobile devices to provide cloud service at the edge," in Proc. IEEE 8th Int. Conf. Cloud Comput., Jun. 2015, pp. 9-16) uses a dynamic, self-configuring multi-device mobile cloud system in which surrounding idle mobile devices act as edge servers to execute computing tasks, expanding the scope of the cloud system when the network load exceeds the computing capability of a node. With multi-node cooperation the computing task must be distributed to several nodes, which raises the problem of node distribution and deployment. Document 10 (Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, "Mobile edge computing - A key technology towards 5G," Eur. Telecommun. Standards Inst., Sophia Antipolis, France, ETSI White Paper 11, 2015, pp. 1-16) mainly introduces the deployment locations of MEC nodes, which are widely distributed and diverse, such as macro base stations and multi-Radio-Access-Technology (RAT) cell aggregation sites. With the increasing adoption of MEC technology, multi-node cooperation is applied ever more widely in practice.
In the above documents, the user offloads all or part of a computing task to one or more MEC nodes, the network structure is optimized to increase the task-processing capacity of the network, and resource optimization under a multi-node joint network structure is explored. However, MEC nodes are widely distributed around local user nodes, and in a wireless network several MEC nodes may be selected to participate in the computation; the more nodes are involved, the greater the overall energy consumption of the network. With the emergence of the green MEC concept, network energy consumption has become one of the key concerns.
In a multi-node task allocation model, the edge nodes that execute computing tasks must be selected reasonably in order to reduce network energy consumption. Document 11 (W. Zhang, Y. Wen, K. Guan, D. Kilper, H. Luo, and D. O. Wu, "Energy-optimal mobile cloud computing under stochastic wireless channel," IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4569-4581, Sep. 2013) proposes a single-user mobile-edge computation offloading (MECO) method that takes network energy consumption as the optimization target and determines an appropriate offloading policy by jointly considering variable CPU cycles and variable transmission rates. Document 12 (X. Chen, L. Jiao, W. Li, and X. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795-2808, Oct. 2016) targets a multi-user MECO distributed computation offloading model, jointly considers the energy consumption and delay of end users, and uses game theory to achieve optimized resource allocation during computation offloading. In document 13 (D. Han, S. Li, Y. Peng and Z. Chen, "Energy Sharing-Based Energy and User Joint Allocation Method in Heterogeneous Networks," IEEE Access, vol. 8, pp. 37077-37086, 2020), in order to cope with energy shortage in heterogeneous networks, shared links are established among multiple base stations (BSs) and the analysis is extended to the macro and micro domains. Document 14 (D. W. K. Ng, E. S. Lo and R. Schober, "Energy-Efficient Resource Allocation in Multi-Cell OFDMA Systems with Limited Backhaul Capacity," IEEE Transactions on Wireless Communications, vol. 11, no. 10, pp. 3618-3631, Oct. 2012) addresses the offloading priority problem: an offloading priority function is determined by jointly considering quantized fairness, transmission channels and local computation conditions, optimal network resource allocation is achieved by analysing this function, and the overall network energy consumption is used as the metric.
In summary, although documents 1 to 14 contain extensive research on multi-node cooperation and data offloading, a user usually transmits data to multiple nodes step by step. When the data volume at the user side is large, step-by-step transmission causes a large accumulation of delay, which violates the user's delay constraint and leads to large network energy consumption.
Object of the Invention
The invention aims to overcome the above shortcomings of multi-node cooperation and data offloading in the prior art. By comprehensively considering the distance between the MEC nodes and the user side, the channel characteristics, the CPU energy consumption and other conditions, it provides a multi-node cooperative computation offloading method based on energy consumption minimization.
Disclosure of Invention
The invention provides a multi-node cooperative computation offloading method based on energy consumption minimization, comprising the following steps:
S1: construct the system model, specifically a local-edge-cloud edge computing network, in which K IoT local user sides are served by N wireless base stations and each base station is equipped with an edge server, i.e. there are N edge nodes. The energy consumption of the local-edge-cloud network is: total energy consumption = computation energy consumption + transmission energy consumption, where the computation energy consumption comprises the computation energy of the IoT local users, of the cooperating edge nodes and of the cloud server, and the transmission energy consumption comprises the wireless transmission energy between the IoT user and the edge nodes and the transmission energy between the IoT local user and the cloud server. The network delay of the local-edge-cloud network comprises computation delay and transmission delay, and the network energy consumption is to be minimized on the premise of meeting the network delay requirement;
S2: construct the objective function of the multi-node cooperative computation offloading model, so as to minimize the overall network energy consumption under the condition that the delay satisfies the time constraint;
S3: optimize the objective function of step S2 based on a branch-and-bound algorithm.
Further, in step S1, in the constructed local-edge-cloud edge computing network model, the computation model of user k is defined as A_k(R_k, s_k), where R_k denotes the task size of local user side k, k denotes the k-th user side with k ∈ {1, 2, …, K}, and s_k denotes the task execution time of user k. The computation energy consumption of user k is expressed as equation (1):

E_k^comp = R_k C_k m_k    (1)

where C_k denotes the number of CPU cycles required to execute the computing task for 1 bit of data and m_k denotes the energy consumed per CPU cycle.

When the computing task cannot be completed locally at the IoT side, it needs to be offloaded to a suitable node, which introduces transmission energy consumption. The transmission energy consumption is determined by the transmission time and the transmission power of the task, as expressed in equation (2):

E_k^trans = p_k t_k    (2)

where t_k denotes the transmission time of the computing task of user k and p_k denotes the transmission power between user k and the offloading node. The overall energy consumption of user k for executing the task is the sum of the transmission energy and the computation energy, expressed as equation (3):

E_k = E_k^comp + E_k^trans = R_k C_k m_k + p_k t_k    (3)
further, let parameters
Figure BDA0002717413830000072
For the CPU revolution number required by the 1bit task of the user k when the local, edge and cloud nodes execute,
Figure BDA0002717413830000073
respectively representing the energy consumed by each turn of the CPU when the computing task of the user k is executed at the local, edge and cloud ends; setting a data unit
Figure BDA00027174138300000711
Expressing the data of the IoT local user end in the form of data units, and dividing the data of the user k into M k A data unit represented as
Figure BDA0002717413830000075
Setting a parameter ρ for each nodeWhere ρ is k→0 Represents the number of data units computed locally by user k, since there are N MEC nodes in the network, where N = {1,2.. N }, ρ k→n Representing offload from IoT local user k to MEC n Number of unit data for executing task, ρ k→N+1 The data unit number unloaded from the IoT local user side to the cloud end node to execute the task is shown, wherein N +1 represents the cloud end node; setting a parameter β for selection between IoT local user side and MEC node m,n The computing task of the mth block is unloaded to the node n for execution;
the IoT local user side offloads the computing task to a plurality of MECs and cloud nodes in blocks, and for one data unit, one data unit can be offloaded to only one node, which is represented as
Figure BDA0002717413830000076
While an edge node receives a plurality of data units, denoted as
Figure BDA0002717413830000077
When N =0, the computing task is executed locally on the IoT, and when N = N +1, the computing task is executed on the cloud end node;
data for the kth user is as shown in equation 4):
Figure BDA0002717413830000078
let F 0 ,F n ,F N+1 Respectively representing the computing capacities of local nodes, edge nodes and cloud nodes, namely the number of CPU revolutions required by executing a computing task; in formula (4)
Figure BDA0002717413830000079
Which indicates the size of the task to be performed,
Figure BDA00027174138300000710
represents the number of CPU revolutions required for the execution of a 1-bit task for user k, represents the execution of the task on the IoT local user side when N =0, and represents the execution of the task on the IoT local user side when N = {1,2.. N }The task is executed at the MEC node, and when N = N +1, the task is executed at the cloud node.
Further, the computation delay of the local-edge-cloud edge computing network is determined by the computation amount of each node, the number of CPU cycles and the node capability. When the computing task is executed at the IoT local user side, the computation delay of the data of the k-th user is expressed as equation (5):

s_{k,0} = ρ_{k→0} λ C_k^0 / F_0    (5)

The computation delay of the offloaded part is the maximum of the computation delays of the individual nodes, where the computation delay of a single edge node is given by equation (6):

s_{k,n} = ρ_{k→n} λ C_k^n / F_n,  n = 1, 2, …, N    (6)

and the computation delay of the part executed at the cloud is given by equation (7):

s_{k,N+1} = ρ_{k→N+1} λ C_k^{N+1} / F_{N+1}    (7)

The transmission time of one data unit λ to node n is denoted t_{k,n}, with t_{k,n} > 0, and is expressed as equation (8):

t_{k,n} = λ / r_{k,n}    (8)

where r_{k,n} denotes the data transmission rate from IoT user k to the selected node n; the total transmission time of the offloading process is obtained from the number of data units ρ_{k→n} that the user offloads to each node.

The transmission time from the IoT local user side to the nodes is the transmission time corresponding to the node receiving the largest offloaded task, and it must satisfy equation (9):

max_n { ρ_{k→n} t_{k,n} } ≤ T    (9)

where T denotes the delay bound that meets the QoS requirement of the user.

In the transmission model from IoT local user side k to the selected node, the transmission rate from IoT user k to node n is expressed as equation (10):

r_{k,n} = W log_2( 1 + p_{k,n} h_k / (PL_{k,n} σ^2) )    (10)

where W is the channel bandwidth, p_{k,n} is the transmission power between user k and node n, h_k is the channel characteristic between them, and σ^2 denotes the noise power. PL_k satisfies a large-scale fading characteristic related to the transmission distance and is expressed as

PL(d) = PL(d_0) + 10 n log_10( d / d_0 ) + X_σ,

where d denotes the transmission distance, d_0 denotes the reference distance, n denotes the path-loss exponent, and X_σ denotes a zero-mean Gaussian random variable. W_E and W_C denote the bandwidth between the IoT local user and the edge nodes and between the user and the cloud node respectively; when n ∈ {1, 2, …, N}, p_{k,n} and PL_{k,n} denote the transmission power and transmission loss between user k and MEC_n, and when n = N+1 they denote those between IoT local user side k and the cloud node.

The transmission power from IoT local user side k to node n is obtained by inverting equation (10), as shown in equation (11):

p_{k,n} = ( 2^{r_{k,n}/W} - 1 ) PL_{k,n} σ^2 / h_k    (11)
further, in step S2, the energy consumption when the task is executed on the IoT local user terminal is represented by equation (12):
Figure BDA0002717413830000094
wherein, a selection parameter rho is set k→n N = {1,2.. N }, which represents the selection condition of the edge node, the overall energy consumption of the edge node includes the calculation energy consumption and the transmission energy consumption, and the overall energy consumption is expressed as formula (13):
Figure BDA0002717413830000095
when the computing task is partially offloaded to the cloud server, the overall energy consumption of the cloud node is expressed as shown in equation (14):
Figure BDA0002717413830000101
the total energy consumption in the local-edge-cloud-edge computing network is the sum of the energy consumptions of the IoT local user side, the edge and the cloud, and is expressed as formula (15):
E total =E L +E E +E C (15);
the network delay of the local-edge-cloud edge computing network includes a computation delay and a transmission delay, where the computation delay of the IoT local user of the kth user is represented by equation (16):
Figure BDA0002717413830000102
the calculation time delay is the maximum value of the calculation time delay of each MEC node and the cloud node, and is shown in formula (17):
Figure BDA0002717413830000103
the overall computation delay of the local-edge-cloud edge computing network is represented by formula (18):
Figure BDA0002717413830000104
the overall time delay of the local-edge-cloud edge computing network is the sum of the computing time delay and the transmission time delay, and is represented by formula (20):
D k =s k +t k (20);
the constraint condition of the multi-node cooperative computing unloading model is shown in the formula (21, 21-1 to 21-7):
min E total (21)
D k ≤T (21-1)
t k,n >0 (21-2)
Figure BDA0002717413830000105
Figure BDA0002717413830000111
Figure BDA0002717413830000112
Figure BDA0002717413830000113
Figure BDA0002717413830000114
in the formula, the limiting conditions (21-1) and (21-2) are transmission time limiting conditions, wherein (21-1) indicates that the overall time delay of the network should be smaller than the time delay limit of the user side, (21-3) and (21-4) indicate the distribution situation after the user side performs data segmentation, wherein (21-3) indicates that one unit data can only be unloaded to one edge node, (21-4) indicates the number of unit tasks processed by a certain edge node, and (21-5) to (21-7) respectively indicate the calculation capacity limits of the MEC node, the IoT local node, the MECs node and the cloud node.
Still further, in step S3, taking (21) as the objective function, ρ_{k→0}, ρ_{k→1}, …, ρ_{k→N}, ρ_{k→N+1} are regarded as the decision variables, and the objective function is regarded as a linear programming problem in these variables, where the variables ρ satisfy the condition of equation (22):

ρ_{k→0} + ρ_{k→1} + … + ρ_{k→N} + ρ_{k→N+1} = M_k    (22)

The objective function (21) is expressed in the form of equation (23):

E_total = v_0 ρ_{k→0} + v_1 ρ_{k→1} + … + v_N ρ_{k→N} + v_{N+1} ρ_{k→N+1}    (23)

where v_0, v_1, …, v_N, v_{N+1} denote the coefficients of the variables ρ. Let f = [v_0 v_1 … v_N v_{N+1}]^T, so that f denotes the coefficient vector of the variables in the objective function.

According to (21-1), the delay constraint is converted into the form of equation (24):

u_0 ρ_{k→0} + u_1 ρ_{k→1} + … + u_N ρ_{k→N} + u_{N+1} ρ_{k→N+1} ≤ T    (24)

where u_0, u_1, …, u_{N+1} denote the coefficients of the variables ρ in constraint (21-1). According to the constraints (21-5) to (21-7) of problem (21), the capacity constraints are converted into equations (25) to (27):

a_{10} ρ_{k→0} + a_{11} ρ_{k→1} + … + a_{1N} ρ_{k→N} + a_{1N+1} ρ_{k→N+1} ≤ F_n    (25)
a_{20} ρ_{k→0} + a_{21} ρ_{k→1} + … + a_{2N} ρ_{k→N} + a_{2N+1} ρ_{k→N+1} ≤ F_0    (26)
a_{30} ρ_{k→0} + a_{31} ρ_{k→1} + … + a_{3N} ρ_{k→N} + a_{3N+1} ρ_{k→N+1} ≤ F_{N+1}    (27)

Equations (22), (24), (25), (26) and (27) convert the constraints of the objective function (21) into a standard form with respect to the variables ρ, as shown in equation (28):

min f^T ρ   subject to   A ρ ≤ b,  ρ ≥ 0,  ρ integer    (28)

where A denotes the constraint matrix formed by the above system of constraints and b = [T F_n F_0 F_{N+1} M_k]^T denotes the right-hand-side vector of the constraint system.
Drawings
FIG. 1 illustrates the local-edge-cloud edge computing network model constructed by the present invention.
FIG. 2 is a comparison of the energy consumption of the three offloading models.
FIG. 3 illustrates the task allocation of each node of the multi-node cooperative computation offloading model.
FIG. 4 is a comparison of energy consumption under different network bandwidths.
FIG. 5 is a comparison of the offloading scheme for the case of a large task volume.
Detailed Description
The following detailed description of the invention refers to the accompanying drawings.
First, a system model is constructed.
The invention constructs a local-edge-cloud edge computing network, as shown in FIG. 1. The system has K IoT local user sides served by N wireless base stations, and each base station is equipped with an edge server, i.e. there are N edge nodes. The computing task at the IoT local user side can be executed locally, partially offloaded to an edge node, or transmitted through the router of a base station to the cloud server for execution. Before offloading, the IoT local user side partitions the computing task according to a certain rule, and the partitioned task blocks are offloaded to edge nodes or the cloud server according to parameters such as delay, energy consumption and the computing capability of the edge nodes. For example, without loss of generality, assume that the computing task of local user UE_k at a certain time can be divided into N task blocks, where the subscript k denotes the k-th local user side. Task block 1 may be executed at the IoT local user side; task block 2 may be offloaded to node MEC_1 for execution, where the subscript 1 denotes the first edge node; task blocks 3, 4 and 5 may be offloaded to node MEC_2, where the subscript 2 denotes the second edge node; task blocks 6 and 7 may be offloaded to node MEC_3, where the subscript 3 denotes the third edge node; and the remaining task blocks are transmitted to the cloud for execution because the computing capability of the edge nodes cannot meet their requirements. By offloading multiple task blocks in parallel, the network delay can be reduced effectively and the overall energy consumption of the network can be lowered.
In the constructed local-edge-cloud edge computing network model, the IoT local user data is offloaded to multiple nodes in parallel for execution. To minimize the network energy consumption, both the computation and the transmission of the network should be considered, i.e. the computation energy consumed when the IoT local user side distributes the computing task to the MEC nodes and the cloud node, and the transmission energy consumed when the IoT local user side transmits the computing task to each node in parallel. The computation model of the network is defined as A_k(R_k, s_k), where R_k denotes the task size of local user side k, k denotes the k-th user side with k ∈ {1, 2, …, K}, and s_k denotes the task execution time of user k. The computation energy consumption of user k is expressed as equation (1):

E_k^comp = R_k C_k m_k    (1)

where C_k denotes the number of CPU cycles required to execute the computing task for 1 bit of data and m_k denotes the energy consumed per CPU cycle.
When the computing task cannot be completed locally at the IoT side, it needs to be offloaded to a suitable node, which introduces transmission energy consumption. The transmission energy consumption is determined by the transmission time and the transmission power of the task, as expressed in equation (2):

E_k^trans = p_k t_k    (2)

where t_k denotes the transmission time of the computing task of user k and p_k denotes the transmission power between user k and the offloading node. The overall energy consumption of user k for executing the task is the sum of the transmission energy and the computation energy, expressed as equation (3):

E_k = E_k^comp + E_k^trans = R_k C_k m_k + p_k t_k    (3)
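As an illustration of equations (1)-(3), the per-user energy model can be written directly in code. The following Python sketch is not part of the patent; the parameter names mirror the symbols above and the numerical values are placeholders chosen only for the example.

```python
def computation_energy(R_k, C_k, m_k):
    """Eq. (1): energy to compute R_k bits.

    R_k : task size in bits
    C_k : CPU cycles needed per bit
    m_k : energy consumed per CPU cycle (J/cycle)
    """
    return R_k * C_k * m_k

def transmission_energy(p_k, t_k):
    """Eq. (2): energy to transmit the offloaded part.

    p_k : transmit power towards the offloading node (W)
    t_k : transmission time of the task (s)
    """
    return p_k * t_k

def total_energy(R_k, C_k, m_k, p_k, t_k):
    """Eq. (3): total energy = computation energy + transmission energy."""
    return computation_energy(R_k, C_k, m_k) + transmission_energy(p_k, t_k)

# Placeholder values, for illustration only.
print(total_energy(R_k=1e6, C_k=1000, m_k=1e-9, p_k=0.1, t_k=0.5))
```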
setting parameters
Figure BDA0002717413830000143
For the CPU revolution number required by the 1bit task of the user k when the local, edge and cloud nodes execute,
Figure BDA0002717413830000144
the energy consumed by the CPU per revolution when the computing task of the user k is executed by the cloud at the local part and the edge is represented. The invention sets a multi-node cooperation mode, so that data is segmented at an IoT local user side, and the segmented data is transmitted to the MEC or MCC node for calculation. In order to conveniently observe the unloading condition of the divided task, a data unit is arranged
Figure BDA0002717413830000145
Expressing IoT local user data in the form of data units, and dividing the data of user k into M k A data unit represented as
Figure BDA0002717413830000146
Setting a parameter rho for each node, wherein rho k→0 Indicating that user k is localThe number of data units calculated, since there are N MEC nodes in the network, where N = {1,2.. N }, ρ k→n Representing offload from IoT local user k to MEC n Number of unit data for executing task, ρ k→N+1 The number of data units offloaded from the IoT local client to the cloud node to perform the task is shown, where N +1 represents the cloud node. For the selection problem between the IoT local user side and the MEC node, a parameter β is set m,n Indicating that the computing task of the mth block is unloaded to the node n for execution. In the model set up herein, since IoT local clients offload computing tasks to multiple MECs and cloud nodes in blocks, for one data unit, one data unit can only be offloaded to one node, denoted as
Figure BDA0002717413830000151
But at the same time one edge node can receive a plurality of data units, denoted as
Figure BDA0002717413830000152
When N =0, it means that the computation task is executed locally on the IoT, and when N = N +1, it means that the computation task is executed on the cloud end node.
Since data is split at the IoT local user side and transmitted to multiple node computers at the same time, the data allocated to each node should meet the computing capacity of the node. Analyzing the data of the kth user, and showing the formula (4):
Figure BDA0002717413830000153
let F 0 ,F n ,F N+1 Respectively representing the computing capacities of local, edge nodes and cloud nodes, namely the number of CPU revolutions required for executing computing tasks. In the above formula
Figure BDA0002717413830000154
Which indicates the size of the task to be performed,
Figure BDA0002717413830000155
the number of CPU revolutions required for the execution of a 1-bit task of user k is represented, when N =0, the task is executed on the IoT local user side, when N = {1,2.. N } is represented on the MEC node, and when N = N +1, the task is represented on the cloud node.
The computation delay is determined by the computation amount of each node, the number of CPU cycles and the node capability. When the computing task is executed at the IoT local user side, the computation delay of the data of the k-th user is expressed as equation (5):

s_{k,0} = ρ_{k→0} λ C_k^0 / F_0    (5)

Since the data is partitioned at the IoT local user side and transmitted to multiple MEC nodes for execution, the computation delay is the maximum of the computation delays of the individual nodes, where the computation delay of a single edge node is given by equation (6):

s_{k,n} = ρ_{k→n} λ C_k^n / F_n,  n = 1, 2, …, N    (6)

For the part of the computing task that can be executed neither at the IoT local user side nor at the MEC nodes, the task is transmitted to the cloud server, and the computation delay at the cloud is given by equation (7):

s_{k,N+1} = ρ_{k→N+1} λ C_k^{N+1} / F_{N+1}    (7)

The transmission links in the network comprise the wireless communication links between the edge servers and the UE, the VLAN used for transmission between edge servers, and the transmission links between the edge servers and the cloud server. During network transmission, the relationship between the computation amount of the network and the network transmission capability should be considered: if the computation amount is too large, the channel resources in the network cannot be allocated to the IoT users in an orderly manner, which causes channel congestion and increases the network delay. Let the data size to be processed by IoT local user side k be R_k (bit), where ρ_{k→0} λ denotes the size of the computing task executed locally at the IoT side, ρ_{k→n} λ denotes the size of the task executed at MEC node n, and ρ_{k→N+1} λ denotes the size of the task executed at the cloud node. Tasks executed locally at the IoT side do not need to be transmitted and cause no transmission energy consumption, whereas offloading computing tasks to the MEC and cloud (MCC) nodes results in transmission energy consumption.
The transmission time of one data unit λ to node n is denoted t_{k,n}, with t_{k,n} > 0, and is expressed as equation (8):

t_{k,n} = λ / r_{k,n}    (8)

where r_{k,n} denotes the data transmission rate from IoT user k to the selected node n; the total transmission time of the offloading process is obtained from the number of data units ρ_{k→n} that the user offloads to each node. Suppose n edge servers receive data from the user side. Since the IoT local user side partitions the data into equal data units, the transmission time per unit is the same, but a node receiving ρ_{k→n} units experiences a transmission time of ρ_{k→n} t_{k,n}. Because the basic parameters of the nodes differ and the data volumes to be offloaded differ, the transmission time from the IoT local user side to the nodes is the transmission time corresponding to the node receiving the largest offloaded task, and it must satisfy equation (9):

max_n { ρ_{k→n} t_{k,n} } ≤ T    (9)

where T denotes the delay bound that meets the QoS requirement of the user.

In the transmission model from IoT local user side k to the selected node, the transmission rate from IoT user k to node n is expressed as equation (10):

r_{k,n} = W log_2( 1 + p_{k,n} h_k / (PL_{k,n} σ^2) )    (10)

where W is the channel bandwidth, p_{k,n} is the transmission power between user k and node n, h_k is the channel characteristic between them, and σ^2 denotes the noise power; the channel characteristics differ between nodes because the distances between the nodes and the user differ. PL_k satisfies a large-scale fading characteristic related to the transmission distance and is expressed as

PL(d) = PL(d_0) + 10 n log_10( d / d_0 ) + X_σ,

where d denotes the transmission distance, d_0 denotes the reference distance, n denotes the path-loss exponent, and X_σ denotes a zero-mean Gaussian random variable. W_E and W_C denote the bandwidth between the IoT local user and the edge nodes and between the user and the cloud node respectively; when n ∈ {1, 2, …, N}, p_{k,n} and PL_{k,n} denote the transmission power and transmission loss between user k and MEC_n, and when n = N+1 they denote those between IoT local user side k and the cloud node.

From the rate expression (10), the transmission power from IoT local user side k to node n is obtained as equation (11):

p_{k,n} = ( 2^{r_{k,n}/W} - 1 ) PL_{k,n} σ^2 / h_k    (11)
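A minimal sketch of the transmission model of equations (8)-(11) follows, assuming the Shannon-capacity rate form and the log-distance path-loss model given above. The function names and every numerical parameter (bandwidth, noise power, rates, λ, T, distances) are placeholders introduced only for illustration.

```python
import numpy as np

def path_loss_db(d, d0=1.0, pl_d0_db=30.0, n_exp=3.0, sigma_db=0.0, rng=None):
    """Log-distance path loss PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma (dB)."""
    rng = rng or np.random.default_rng(0)
    x_sigma = rng.normal(0.0, sigma_db) if sigma_db > 0 else 0.0
    return pl_d0_db + 10.0 * n_exp * np.log10(d / d0) + x_sigma

def rate_bps(W, p, h, pl_db, noise_w=1e-13):
    """Eq. (10)-style Shannon rate: r = W log2(1 + p h / (PL * noise))."""
    pl_lin = 10 ** (pl_db / 10.0)
    return W * np.log2(1.0 + p * h / (pl_lin * noise_w))

def required_power(W, r, h, pl_db, noise_w=1e-13):
    """Eq. (11): invert the rate expression for the transmit power."""
    pl_lin = 10 ** (pl_db / 10.0)
    return (2 ** (r / W) - 1.0) * pl_lin * noise_w / h

# Transmission-time check of eqs. (8)-(9): every node must get its share within T.
lam, T = 10e3, 0.5                       # data-unit size (bits) and delay bound (s)
rho = np.array([0, 3, 2, 0, 2])          # units offloaded to MEC 1..3 and the cloud
r = np.array([1, 8e6, 6e6, 6e6, 2e6])    # per-link rates (index 0 unused: local, no transmission)
t_unit = lam / r                         # eq. (8): time to send one data unit
t_k = np.max(rho[1:] * t_unit[1:])       # worst-case parallel transmission time
assert t_k <= T
print("worst-case transmission time:", t_k)
print("power needed for 8 Mbit/s at 50 m:",
      required_power(W=10e6, r=8e6, h=1.0, pl_db=path_loss_db(50.0)))
```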
in summary, through the analysis of the transmission and the computation conditions in the local-edge-cloud edge computing network, the energy consumption in the network is as follows: total energy consumption = computational energy consumption + transmission energy consumption, and the computational energy consumption includes the computational energy consumption of IoT local users, edge node cooperation and cloud servers. The transmission energy consumption comprises wireless transmission energy consumption between the IoT user and the edge node, and transmission energy consumption between the IoT local user and the cloud server. Meanwhile, on the basis of considering network energy consumption, network delay is also comprehensively considered, wherein the network delay comprises calculation delay and transmission delay, and the network energy consumption is minimized on the premise of meeting the network delay requirement.
Next, the multi-node cooperative computation offloading method of the invention is described.
In the local-edge-cloud edge computing network model constructed by the invention, part of the computing task is executed at the IoT local user side and the remaining computing task is offloaded to suitable nodes: the IoT user data is partitioned according to a certain rule and offloaded to multiple nodes simultaneously. Since the edge nodes are close to the IoT user, the data transmission time is short, but the computing capability of an edge node is limited, so the offloading positions must be selected reasonably according to the user requirements.
The multi-node cooperative computation offloading method disclosed by the invention comprises the following steps:
step one, constructing an objective function.
According to equation (3), the overall network energy consumption consists of computation and transmission energy. Since the model is a local-edge-cloud edge computing network model, each level can be assigned a certain part of the task to process, i.e. the network energy consumption consists of the computation and transmission energy of the IoT local user side, the edge network and the cloud network.

When the task is executed at the IoT local user side, only computation energy is involved, and the energy consumption of the locally executed part is expressed as equation (12):

E_L = ρ_{k→0} λ C_k^0 m_k^0    (12)

When the computing task is offloaded to edge nodes for execution, several edge nodes exist around the IoT local user, so the edge nodes must be selected; the selection parameter ρ_{k→n}, n ∈ {1, 2, …, N}, describes the selection of the edge nodes. The overall energy consumption of the edge nodes comprises computation energy and transmission energy and is expressed as equation (13):

E_E = Σ_{n=1}^{N} ( ρ_{k→n} λ C_k^n m_k^n + p_{k,n} ρ_{k→n} t_{k,n} )    (13)

When part of the computing task is offloaded to the cloud server, the overall energy consumption of the cloud node is expressed as equation (14):

E_C = ρ_{k→N+1} λ C_k^{N+1} m_k^{N+1} + p_{k,N+1} ρ_{k→N+1} t_{k,N+1}    (14)

The overall energy consumption of the network is the sum of the energy consumed at the IoT local user side, the edge and the cloud, expressed as equation (15):

E_total = E_L + E_E + E_C    (15)
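The aggregation of equations (12)-(15) into the total network energy can be sketched as follows; the vectors index the nodes as [local, MEC 1..N, cloud], and all numbers are placeholders rather than values from the patent.

```python
import numpy as np

def network_energy(rho, lam, C, m, p, t_unit):
    """Total network energy E_total = E_L + E_E + E_C, eqs. (12)-(15).

    rho    : data units assigned to [local, MEC 1..N, cloud]
    lam    : data-unit size in bits
    C, m   : CPU cycles per bit and energy per cycle at each node
    p      : transmit power towards each node (0 for the local node)
    t_unit : transmission time of one data unit to each node (0 for the local node)
    """
    comp = rho * lam * C * m                     # computation energy at every node
    trans = p * rho * t_unit                     # transmission energy of the offloaded parts
    E_L = comp[0]                                # eq. (12): local execution, no transmission
    E_E = comp[1:-1].sum() + trans[1:-1].sum()   # eq. (13): all edge nodes
    E_C = comp[-1] + trans[-1]                   # eq. (14): cloud node
    return E_L + E_E + E_C                       # eq. (15)

# Placeholder numbers: local node, three MEC nodes, cloud.
E = network_energy(rho=np.array([1, 3, 2, 0, 2]), lam=10e3,
                   C=np.array([1000, 800, 800, 800, 500]),
                   m=np.array([2e-9, 1e-9, 1e-9, 1e-9, 0.5e-9]),
                   p=np.array([0.0, 0.1, 0.1, 0.1, 0.2]),
                   t_unit=np.array([0.0, 1.25e-3, 1.7e-3, 1.7e-3, 5e-3]))
print("E_total =", E, "J")
```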
for network latency, the latency includes computation latency and transmission latency because computation and transmission of IoT local user data are considered in the network model. As for the computation delay, due to the local-edge-cloud edge computation network model, computation delays of IoT local users, edges, and cloud ends need to be considered, where the computation delay of the IoT local user of the kth user is shown in equation (16):
Figure BDA0002717413830000191
different from the node-to-node computation offload model, a task is split at an IoT local user side and is transmitted to multiple nodes in parallel for simultaneous processing, so that the computation delay should be the maximum value of the computation delay of each MEC node and a cloud node, as shown in formula (17):
Figure BDA0002717413830000192
the calculated delay of the entire network is expressed by equation (18):
Figure BDA0002717413830000193
the network delay also comprises transmission delay, and the transmission delay mainly comprises a wireless transmission link from the IoT local user end to the edge node and a VLAN transmission network from the IoT local user end to the cloud end. Since data is split at the IoT local user side and the split data is transmitted by selecting an appropriate node, the overall transmission delay is shown in equation (19) because of the parallel transmission of data:
t k =max{ρ k→n t k,n ,n=0,1…N,N+1} (19)
the overall network delay is the sum of the calculated delay and the transmission delay, and is represented by formula (20):
D k =s k +t k (20)
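The following short sketch evaluates the network delay of equations (16)-(20) and, for comparison, the delay that step-by-step (sequential) transmission would accumulate, which is the effect the parallel scheme avoids; all numbers are placeholders.

```python
import numpy as np

# Parallel offloading takes the maximum of the per-node times, while step-by-step
# relaying accumulates them; placeholder numbers only.
rho = np.array([1, 3, 2, 0, 2])                           # data units per node
t_unit = np.array([0.0, 1.25e-3, 1.7e-3, 1.7e-3, 5e-3])   # per-unit transmission times
s_node = np.array([2e-3, 2.4e-3, 1.6e-3, 0.0, 0.2e-3])    # per-node computation delays

t_parallel = np.max(rho * t_unit)               # eq. (19): parallel transmission
t_sequential = np.sum(rho * t_unit)             # step-by-step transmission for comparison
s_k = np.max(s_node)                            # eq. (18): parallel computation
print("D_k (parallel)  =", s_k + t_parallel)    # eq. (20)
print("D_k (sequential)=", s_k + t_sequential)
```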
the goal of constructing the multi-node cooperative computing unloading model is to realize optimization of network energy, and realize minimum overall energy consumption of the network under the condition that time delay meets time constraint. The multi-node cooperative computing unloading optimization system is shown in the formulas (21, 21-1-21-7):
min E total (21)
D k ≤T (21-1)
t k,n >0 (21-2)
Figure BDA0002717413830000201
Figure BDA0002717413830000202
Figure BDA0002717413830000203
Figure BDA0002717413830000204
Figure BDA0002717413830000205
in the formula, the limiting conditions (21-1) and (21-2) are transmission time limiting conditions, wherein (21-1) represents that the overall time delay of the network should be smaller than the time delay limit of the user side, (21-3) (21-4) represents the distribution situation after the data division is performed on the user side, wherein (21-3) represents that one unit data can only be unloaded to one edge node, (21-4) represents the number of unit tasks processed by a certain edge node, and (21-5) - (21-7) represent the calculation capacity limits of the MEC node, the IoT local node, the MECs node and the cloud node respectively.
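A candidate allocation ρ can be checked against the constraints (21-1) to (21-7) as in the sketch below; this is an illustrative helper with placeholder data, not part of the patented method itself.

```python
import numpy as np

def is_feasible(rho, lam, C, F, t_unit, T, M_k):
    """Check a candidate allocation rho against constraints (21-1)-(21-7).

    rho    : data units per node, indices 0 (local), 1..N (MEC), N+1 (cloud)
    lam    : data-unit size in bits
    C      : CPU cycles per bit at each node
    F      : computing capability of each node (as used in eqs. (4)-(7))
    t_unit : transmission time of one data unit to each node (0 for the local node)
    T      : QoS delay bound of the user
    M_k    : total number of data units of user k
    """
    if rho.sum() != M_k or (rho < 0).any():   # (21-3)/(21-4): every unit assigned once
        return False
    if (rho * lam * C > F).any():             # (21-5)-(21-7): node capability limits
        return False
    s_k = np.max(rho * lam * C / F)           # parallel computation delay, eqs. (16)-(18)
    t_k = np.max(rho * t_unit)                # parallel transmission delay, eq. (19)
    return s_k + t_k <= T                     # (21-1): overall delay D_k <= T

# Placeholder instance: local node, three MEC nodes, cloud.
rho = np.array([1, 3, 2, 0, 2])
ok = is_feasible(rho, lam=10e3, C=np.array([1000, 800, 800, 800, 500]),
                 F=np.array([5e9, 10e9, 10e9, 10e9, 50e9]),
                 t_unit=np.array([0.0, 1.25e-3, 1.7e-3, 1.7e-3, 5e-3]),
                 T=0.5, M_k=8)
print("feasible:", ok)
```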
For the above problem, in (21) the amount of the computing task offloaded to each node is described by the parameters ρ_{k→n}; by analysing the values taken by ρ_{k→n}, the node task allocation that achieves optimal energy consumption can be determined. Since the parameters ρ_{k→n} are numbers of data units and can therefore take only integer values, the optimization problem becomes an integer programming problem.
Step two, optimizing the objective function of step one based on a branch-and-bound algorithm.
A branch-and-bound (BB) algorithm is adopted to decide the resource allocation of each MEC node. The basic idea of the algorithm is to search the whole feasible solution space of the constrained optimization problem. In execution, the algorithm partitions the feasible solution space into smaller and smaller subsets and computes a lower or upper bound on the objective value within each subset. For the integer programming problem, the branch-and-bound method first solves the linear programming relaxation with the simplex method; a decision variable that takes a non-integer value is then branched on the two nearest integers, each branch is added to the original problem as a new constraint, and the updated problem is solved again, from which the upper or lower bound of the objective value is tightened.
Table 1 lists the basic flow of the branch-and-bound (BB) algorithm.
TABLE 1 Basic flow of the branch-and-bound algorithm
[Table 1 is reproduced as an image in the original document; it summarizes the relax-branch-bound-prune iteration described above.]
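As a hedged illustration of the branch-and-bound flow summarized in Table 1, the sketch below solves a small integer linear program of the form used in this section by repeatedly relaxing it to a linear program (scipy.optimize.linprog with the HiGHS backend) and branching on fractional variables. Every coefficient value is a placeholder, and the code is a minimal teaching sketch rather than the patent's implementation.

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(f, A_ub, b_ub, A_eq, b_eq, ub):
    """Minimize f @ x s.t. A_ub x <= b_ub, A_eq x = b_eq, 0 <= x <= ub, x integer.

    Plain depth-first branch-and-bound over LP relaxations.
    """
    best_val, best_x = math.inf, None
    stack = [[(0, u) for u in ub]]                       # nodes = per-variable bounds
    while stack:
        bounds = stack.pop()
        res = linprog(f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        if not res.success or res.fun >= best_val:       # infeasible or bounded out: prune
            continue
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                                     # integral solution: update incumbent
            best_val, best_x = res.fun, np.round(res.x)
            continue
        i, v = frac[0], res.x[frac[0]]                   # branch on first fractional variable
        lo, hi = bounds[i]
        left, right = list(bounds), list(bounds)
        left[i] = (lo, math.floor(v))                    # branch x_i <= floor(v)
        right[i] = (math.ceil(v), hi)                    # branch x_i >= ceil(v)
        stack.extend([left, right])
    return best_x, best_val

# Placeholder instance: rho over [local, MEC 1..3, cloud], minimizing the energy of eq. (23).
f = np.array([4e-3, 1.5e-3, 1.2e-3, 1.8e-3, 2.5e-3])     # energy per data unit
u = np.array([2e-3, 1e-3, 1e-3, 1e-3, 1.5e-3])           # delay per data unit, eq. (24)
A_ub, b_ub = u[None, :], np.array([0.5])                 # delay bound T
A_eq, b_eq = np.ones((1, 5)), np.array([250.0])          # all M_k data units assigned, eq. (22)
cap = [40, 120, 120, 120, 1000]                          # per-node limits in data units (lb/ub)

rho, E = branch_and_bound(f, A_ub, b_ub, A_eq, b_eq, cap)
print("rho =", rho, " E_total =", E)
```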
To solve the energy consumption optimization problem with the branch-and-bound algorithm, (21) is taken as the objective function and ρ_{k→0}, ρ_{k→1}, …, ρ_{k→N}, ρ_{k→N+1} are regarded as the decision variables, so that the objective function can be viewed as a linear programming problem in these variables. The variables ρ satisfy the condition of equation (22):

ρ_{k→0} + ρ_{k→1} + … + ρ_{k→N} + ρ_{k→N+1} = M_k    (22)

The objective function (21) can be expressed in the form of equation (23):

E_total = v_0 ρ_{k→0} + v_1 ρ_{k→1} + … + v_N ρ_{k→N} + v_{N+1} ρ_{k→N+1}    (23)

where v_0, v_1, …, v_N, v_{N+1} denote the coefficients of the variables ρ; let f = [v_0 v_1 … v_N v_{N+1}]^T, so that f denotes the coefficient vector of the variables in the objective function. The delay constraint (21-1) of (21) is converted into the form of equation (24):

u_0 ρ_{k→0} + u_1 ρ_{k→1} + … + u_N ρ_{k→N} + u_{N+1} ρ_{k→N+1} ≤ T    (24)

where u_0, u_1, …, u_{N+1} denote the coefficients of the variables ρ in constraint (21-1). The capacity constraints (21-5) to (21-7) of (21) can be converted into equations (25) to (27):

a_{10} ρ_{k→0} + a_{11} ρ_{k→1} + … + a_{1N} ρ_{k→N} + a_{1N+1} ρ_{k→N+1} ≤ F_n    (25)
a_{20} ρ_{k→0} + a_{21} ρ_{k→1} + … + a_{2N} ρ_{k→N} + a_{2N+1} ρ_{k→N+1} ≤ F_0    (26)
a_{30} ρ_{k→0} + a_{31} ρ_{k→1} + … + a_{3N} ρ_{k→N} + a_{3N+1} ρ_{k→N+1} ≤ F_{N+1}    (27)

Equations (22), (24), (25), (26) and (27) convert the constraints of the objective function (21) into a standard form with respect to the variables ρ, as shown in equation (28):

min f^T ρ   subject to   A ρ ≤ b,  ρ ≥ 0,  ρ integer    (28)

where A denotes the constraint matrix formed by the above system of constraints and b = [T F_n F_0 F_{N+1} M_k]^T denotes the right-hand-side vector of the constraint system. The range of the variables ρ can be expressed through the constraints (21-5) to (21-7) of the objective function and is represented by lb and ub in the algorithm.
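Equivalently, once f, A and b of equation (28) are assembled, an off-the-shelf mixed-integer solver can be used instead of a hand-written branch-and-bound. The sketch below assumes SciPy ≥ 1.9, which provides scipy.optimize.milp; all coefficients are placeholders chosen only so that the example is feasible.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Decision vector rho = [rho_{k->0}, rho_{k->1}, ..., rho_{k->N}, rho_{k->N+1}]
# for N = 3 MEC nodes; every coefficient below is a placeholder, not a patent value.
N = 3
n_var = N + 2

f = np.array([4e-3, 1.5e-3, 1.2e-3, 1.8e-3, 2.5e-3])     # eq. (23): energy per data unit
u = np.array([2e-3, 1e-3, 1e-3, 1e-3, 1.5e-3])            # eq. (24): delay per data unit
a1 = np.array([0.0, 8e6, 8e6, 8e6, 0.0])                  # eq. (25): CPU cycles per unit, MEC tier
a2 = np.array([1e7, 0.0, 0.0, 0.0, 0.0])                  # eq. (26): CPU cycles per unit, local
a3 = np.array([0.0, 0.0, 0.0, 0.0, 5e6])                  # eq. (27): CPU cycles per unit, cloud
T, F_n, F_0, F_N1, M_k = 0.5, 1.2e9, 4e8, 1e10, 250

A = np.vstack([u, a1, a2, a3])                            # inequality part of eq. (28)
b = np.array([T, F_n, F_0, F_N1])                         # [T F_n F_0 F_{N+1}]; M_k kept as equality

res = milp(c=f,
           constraints=[LinearConstraint(A, -np.inf, b),
                        LinearConstraint(np.ones((1, n_var)), M_k, M_k)],   # eq. (22)
           integrality=np.ones(n_var),                    # rho takes integer values only
           bounds=Bounds(0, np.inf))
print("optimal split rho =", res.x, " minimum energy E_total =", res.fun)
```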
Through simulation, the multi-node cooperative computation offloading method is compared with two traditional offloading methods: single-cloud offloading and node mutual-transmission computation offloading. Taking the overall network energy consumption as the metric, the network energy consumption of the three offloading methods is evaluated for different processed data volumes. For the multi-node cooperative offloading model, 3 MEC nodes are set, and ρ_0, ρ_1, ρ_2, ρ_3, ρ_4 denote the numbers of user data units allocated to the IoT local user side, the three MEC nodes and the cloud node respectively. The three MEC nodes have different CPU parameters and are located at distances d_1, d_2, d_3 from the IoT local user side; the network energy consumption of the data task of one IoT local user side is analysed.
The network bandwidth from the user side to the MEC nodes is varied; the network bandwidth affects the data transmission rate, and the transmission time of the i-th MEC node follows

t_{E,i} = L_{E,i} / r_{E,i},

where L_{E,i} denotes the amount of the task transmitted to the i-th MEC node and r_{E,i} denotes the data transmission rate of the i-th MEC node. Three different cases of network bandwidth are set, with the corresponding transmission rates listed in Table 2:
table 2 transmission rate table for three situations
Figure BDA0002717413830000232
Figure BDA0002717413830000241
The basic parameters used for the simulation are listed in Table 3 below:
Table 3 Simulation parameters
[Table 3 is reproduced as an image in the original document.]
The relevant comparison results are as follows:
Energy consumption comparison of the three computation offloading models
With the computed data volume M_k in the range of 1000 to 2500 kbit, the network energy consumption of the three computation offloading models is analysed. As shown in FIG. 2, the network energy consumption of the multi-node cooperative computation offloading model is lower than that of the single-cloud offloading model and of the node mutual-transmission offloading model. When the data volume offloaded by the IoT local user side is less than 1500 kbit, the network energy consumption of the parallel-transmission multi-node cooperative model and of the node mutual-transmission model is essentially the same; when the offloaded task volume exceeds this value, the network energy consumption of the multi-node cooperative model is lower than that of the mutual-transmission model. The analysis shows that when the offloaded data volume is small, the proposed offloading model has a network optimization effect similar to the traditional node mutual-transmission model, but it still has an advantage in network delay because the computing tasks of the multi-node cooperative model are transmitted in parallel. When the offloaded data volume is large, the multi-node cooperative computation offloading model has clear advantages in both network energy consumption and network delay.
The task allocation of each node of the multi-node cooperative computation offloading model is shown in FIG. 3, which presents the resource allocation of the nodes for offloaded data volumes in the range of 1000 to 2500 kbit. The analysis shows that as the offloaded data volume increases, the number of nodes participating in the resource allocation gradually increases; when the IoT local user data volume is not large, the data is processed only between the local server and the MEC servers and is not offloaded to the cloud node. The larger the amount of tasks to be processed, the greater the demand on the number of nodes. When the number of IoT local user tasks is larger, the proposed method has greater advantages and helps to realize the overall optimization of the network.
When the network bandwidth is changed, the influence of the network transmission rate on the network energy consumption is considered. FIG. 4 compares the energy consumption under different network bandwidths: the network energy consumption in case 3 is the smallest, the network transmission rate in case 3 is the largest and the transmission rate in case 1 is the smallest, i.e. the larger the network transmission rate, the smaller the overall network energy consumption. In the multi-node cooperative computation offloading model all computing tasks are transmitted simultaneously, and the larger the transmission rate of the network, the larger the data volume that can be transmitted simultaneously and the larger the total amount the IoT local user side can offload. The simulation results show that the total amount of data processed in case 1 is 5000 kbit, in case 2 it is 7500 kbit, and in case 3 it is 17500 kbit. In summary, a higher network transmission rate increases the overall data volume the network can process, and at the same time reduces the network energy consumption for the same computed data volume.
When the data volume processed by the IoT local user side is very large, the overall data transmission efficiency of the network needs to be improved. With the data volume of the processing task in the range of 10000 to 50000 kbit and the network data transmission rate increased to 2 Gbit/s, the task allocation of the nodes is shown in FIG. 5.
When the data volume of the IoT local user is large and the data transmission rate of the network is increased, the analysis of FIG. 5 shows that as the IoT local user data volume grows, the data volume offloaded to the third MEC node (allocation ρ_3) increases markedly, because the CPU energy consumption of that node is the smallest. When the computing task is large, the CPU energy consumption becomes the main factor of the network energy consumption; meanwhile, the total amount of data offloaded to the cloud node keeps increasing as the offloaded data volume grows, which reflects the superiority of the edge-cloud cooperation mechanism for large data volumes.
The invention has the following advantages:
1. the method comprises the steps that data tasks are transmitted to an edge by an IoT local user side and a cloud server executes computing tasks, and comprehensively considers computing energy consumption formed by the fact that the IoT local user side distributes the computing tasks to each node to execute, transmission energy consumption of the IoT local user side to unload the computing tasks to each node and the like. In contrast to past studies, network computations and transmissions are considered herein comprehensively. The energy consumed by the computation of the local user side of the internet of things, the MEC node and the cloud is covered in the former, the energy consumed by the computation of the local user side of the cloud mainly comprises transmission energy consumption between the IoT user side and the MEC node and between the IoT user side and the cloud node, the overall energy consumption of the network is taken as an optimization target, and the network delay is comprehensively considered.
2. Through multi-MEC-node cooperation combined with the partial offloading theory, the computing tasks of the IoT local user side are transmitted to multiple nodes in parallel for execution, which avoids the energy loss caused by relaying data between nodes. In terms of delay, because the user's computing tasks are offloaded in parallel, the placement of the data is determined by jointly considering the network bandwidth and the node parameters; since the data are transmitted in parallel, the delays of the individual parts are not simply accumulated, and the total network delay is obtained by analyzing the time each node needs to receive and process its data. An integer linear programming problem is formed from the network energy consumption optimization objective, and the task offloading of a single user is analyzed with a branch-and-bound algorithm to minimize the overall network energy consumption.
3. The simulation results show that as the offloaded data volume of the IoT user side increases, more MEC nodes are required; compared with traditional computation offloading schemes, the multi-node cooperation model has clear advantages in both energy consumption and delay, and these advantages become more pronounced as the offloaded data volume grows. Regarding node parameter settings, the overall network energy consumption is determined by parameters such as the network bandwidth and the CPU energy consumption, and the optimal network energy consumption is achieved by selecting the offloading nodes for the IoT user's data.
To sum up, in order to realize green communication in the smart home scenario, the invention first builds a multi-node cooperative computing offloading model. In this model, the IoT local user side divides the computing task according to a given rule, allocates the divided data reasonably, and transmits it to multiple nodes in parallel for execution. On this basis, two traditional offloading methods, a single-cloud computing offloading model and a node-to-node relay offloading model, are analyzed; with the overall network energy consumption as the optimization objective and the delay as a constraint, the resource allocation among the MEC nodes is determined by a branch-and-bound algorithm and compared with the two traditional models. When the offloaded task volume is large, the multi-node cooperative offloading model with data segmentation and parallel transmission has lower network energy consumption and better delay characteristics; the analysis of the MEC node parameters shows that the CPU parameters of the nodes have a large influence on the network energy consumption. For large data volumes, the multi-MEC-node and edge-cloud cooperation model exhibits better network characteristics, and the network bandwidth and information transmission rate have a certain influence on the data offloading behavior of the network. In the multi-node cooperative offloading model, the divided data tasks are transmitted in parallel, so the higher the data transmission rate, the larger the amount of computing tasks that can be processed and the lower the overall network energy consumption.
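As an illustrative sketch only (the function name, argument layout and per-node decomposition are assumptions of this description, not part of the claimed method), the energy model that the above comparison relies on can be written in a few lines of Python: it evaluates the total network energy of one candidate split of a user's task across the local device, the N MEC nodes and the cloud, following the decomposition total energy = computing energy + transmission energy used throughout this document.

def total_energy(rho, unit_bits, C, m, p, r):
    """rho[n]   : number of data units placed on node n (0 = local, 1..N = edge, N+1 = cloud)
       unit_bits: size of one data unit in bits
       C[n]     : CPU revolutions needed per bit at node n
       m[n]     : energy consumed per CPU revolution at node n
       p[n]     : transmission power from the user to node n (unused for n = 0)
       r[n]     : transmission rate from the user to node n (unused for n = 0)"""
    energy = 0.0
    for n, units in enumerate(rho):
        bits = units * unit_bits
        energy += bits * C[n] * m[n]          # computing energy at node n
        if n > 0 and bits > 0:                # offloaded parts also pay transmission energy
            energy += p[n] * bits / r[n]      # E_tx = power * time, time = bits / rate
    return energy

A search procedure such as the branch-and-bound method described above can then compare candidate splits by this value.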
It will be understood by those skilled in the art that the foregoing detailed description is only exemplary of the spirit and concepts of the invention, and should not be taken as limiting the scope of the invention, which is defined by the appended claims, as various modifications and substitutions can be made therein without departing from the spirit and concepts of the invention.

Claims (4)

1. A multi-node cooperative computing unloading method based on energy consumption minimization is characterized by comprising the following steps:
S1: constructing a system model, specifically a local-edge-cloud edge computing network, in which K IoT local user terminals are served by N wireless base stations, each base station being equipped with an edge server, giving N edge nodes; the energy consumption in the local-edge-cloud edge computing network is: total energy consumption = computing energy consumption + transmission energy consumption, where the computing energy consumption includes the computing energy consumption of the IoT local users, the cooperating edge nodes and the cloud server, and the transmission energy consumption includes the wireless transmission energy consumption between the IoT users and the edge nodes as well as the transmission energy consumption between the IoT local users and the cloud server; the network delay of the local-edge-cloud edge computing network comprises computation delay and transmission delay, and the network energy consumption is minimized on the premise of meeting the network delay requirement; in the constructed local-edge-cloud edge computing network model, the computation model of the network is defined as A_k(R_k, s_k), where R_k represents the task data volume of local user terminal k, k denotes the kth user terminal with k ∈ {1,2,...,K}, and s_k represents the task execution time of user k; the computing energy consumption of user k can then be expressed as formula (1):
E_k^c = C_k m_k R_k    (1)
wherein C_k represents the number of CPU revolutions required for 1-bit data to perform the computing task, and m_k represents the energy consumed by the CPU per revolution;
when the computing task cannot be executed locally at the IoT user side, it needs to be offloaded to a suitable node for execution, and the offloading causes a certain transmission energy consumption; the transmission energy consumption is related to the transmission time and the transmission power of the task and is represented by formula (2):
E_k^t = p_k t_k    (2)
wherein t_k represents the transmission time of the computing task of user k and p_k represents the transmission power between user k and the offloading node; the overall energy consumption of user k for executing the task is the sum of the transmission energy consumption and the computing energy consumption, represented by formula (3):
E_k = C_k m_k R_k + p_k t_k    (3)
setting parameters
Figure FDA0003904077860000022
For the CPU revolution number required by the 1bit task of the user k when the local, edge and cloud nodes execute,
Figure FDA0003904077860000023
respectively representing the energy consumed by each turn of the CPU when the computing task of the user k is executed at the local, edge and cloud ends; setting a data unit
Figure FDA0003904077860000024
Expressing IoT local user data in the form of data units, and dividing the data of user k into M k A data unit represented as
Figure FDA0003904077860000025
Setting a parameter ρ for each node, wherein ρ_{k→0} represents the number of data units computed locally by user k; since there are N MEC nodes in the network, with n ∈ {1,2,...,N}, ρ_{k→n} represents the number of data units offloaded from IoT local user k to MEC_n for execution, and ρ_{k→N+1} represents the number of data units offloaded from the IoT local user side to the cloud node for execution, where N+1 denotes the cloud node; for the selection between the IoT local user side and the MEC nodes, a parameter β_{m,n} is set, indicating that the m-th block of the computing task is offloaded to node n for execution;
the IoT local user side unloads the computing task to a plurality of MECs and cloud nodes in a blocking manner, and for one data unit, one data unit can only be unloaded to one node, which is represented as
Figure FDA0003904077860000026
While an edge node receives multiple data units, denoted as
Figure FDA0003904077860000027
when n = 0, the computing task is executed locally at the IoT user side, and when n = N+1, the computing task is executed at the cloud node;
data for the kth user is as shown in equation 4):
Figure FDA0003904077860000031
let F 0 ,F n ,F N+1 Respectively representing the computing capacities of local nodes, edge nodes and cloud nodes, namely the number of CPU revolutions required by executing a computing task; in the formula (4)
Figure FDA0003904077860000032
Which indicates the size of the task to be performed,
Figure FDA0003904077860000033
representing the number of CPU revolutions required by the execution of a 1-bit task of a user k, representing that the task is executed on an IoT local user side when N =0, representing that the task is executed on an MEC node when N = {1,2.. N }, and representing that the task is executed on a cloud node when N = N + 1;
S2: constructing the objective function of the multi-node cooperative computing offloading model, so as to minimize the overall network energy consumption under the condition that the delay meets the time constraint;
S3: optimizing the objective function described in step S2 based on a branch-and-bound algorithm.
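The following Python fragment is an illustrative sketch of the β/ρ bookkeeping in step S1 and is not claim language; the function name and the example assignment are assumptions. Each of the M_k data units is assigned to exactly one node (column 0 = local, columns 1..N = edge nodes, column N+1 = cloud), and ρ_{k→n} is obtained by counting the units assigned to node n.

import numpy as np

def beta_to_rho(beta):
    # beta: (M_k, N+2) 0/1 matrix, beta[m, n] = 1 if data unit m is executed on node n
    beta = np.asarray(beta)
    assert np.all(beta.sum(axis=1) == 1), "each data unit must be offloaded to exactly one node"
    return beta.sum(axis=0)                  # rho[n] = number of data units at node n

# Example: 4 data units, 2 edge nodes -- unit 0 local, units 1-2 on edge node 1, unit 3 on cloud
beta = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]
print(beta_to_rho(beta))                     # -> [1 2 0 1]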
2. The method of claim 1, wherein the computation delay of the local-edge-cloud edge computing network is determined by the computation amount of each node, the number of CPU revolutions and the node computing capacity; when the computing task is executed at the IoT local user side, the computation delay of the kth user's data is represented by formula (5):
s_{k→0} = ρ_{k→0} R̄_k C_k^0 / F_0    (5)
the computation delay at the edge is the maximum of the computation delays of the individual edge nodes, where the computation delay of a single edge node is shown in formula (6):
s_{k→n} = ρ_{k→n} R̄_k C_k^n / F_n ,  n ∈ {1,2,...,N}    (6)
the computation delay executed in the cloud is shown in equation (7):
s_{k→N+1} = ρ_{k→N+1} R̄_k C_k^{N+1} / F_{N+1}    (7)
setting a data unit
Figure FDA0003904077860000037
Is expressed by t k Wherein t is k >0, represented by the formula (8):
Figure FDA0003904077860000041
wherein r_{k,n} represents the data transmission rate from IoT user k to the selected node n; by determining the number of data units ρ_{k→n} that the user offloads to each node, the total transmission time of the offloading process can be calculated;
the transmission time from the IoT local user side to the nodes is the transmission time corresponding to the node with the largest offloading task, and this node satisfies formula (9):
t_k = max_{n∈{1,...,N+1}} { ρ_{k→n} R̄_k / r_{k,n} } ≤ T    (9)
wherein T represents the delay to meet the QoS requirement of the user;
in the transmission model from the IoT local user terminal k to the selected node, the transmission rate from the IoT user k to the node n is represented by equation (10):
Figure FDA0003904077860000043
where W is the channel bandwidth, p_{k,n} is the transmission power between user k and node n, and h_k is the channel characteristic between the two; PL_k satisfies a large-scale fading characteristic, is related to the transmission distance, and is expressed as
PL(d) = PL(d_0) + 10 n log_{10}(d/d_0) + X_σ ,
where d represents the transmission distance, d_0 denotes a reference distance, n denotes the path loss exponent, and X_σ denotes a zero-mean Gaussian random variable with variance σ^2; W_E and W_C represent the bandwidth between the IoT local users and the edge nodes and between the users and the cloud node, respectively; when n ∈ {1,2,...,N}, p_{k,n} and PL_{k,n} represent the transmission power and transmission loss between user k and MEC_n, and when n = N+1 they represent the transmission power and transmission loss between IoT local user k and the cloud node;
the transmission power from the IoT local user terminal k to the node n is represented by equation (11):
Figure FDA0003904077860000045
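As an illustrative sketch of the link model in claim 2 (not part of the claims: the numerical parameter values, the dB bookkeeping and the explicit noise power are assumptions introduced here), the achievable rate r_{k,n} can be obtained from a log-distance path-loss model with log-normal shadowing and a Shannon-capacity expression, after which the transmission time follows the form of formula (8).

import math, random

def path_loss_db(d, d0=1.0, pl_d0_db=40.0, n_exp=3.0, sigma_db=4.0):
    # log-distance path loss with log-normal shadowing X_sigma, all in dB
    return pl_d0_db + 10.0 * n_exp * math.log10(d / d0) + random.gauss(0.0, sigma_db)

def rate_bps(bandwidth_hz, tx_power_w, d, noise_w=1e-13):
    rx_power_w = tx_power_w * 10.0 ** (-path_loss_db(d) / 10.0)   # received power after path loss
    return bandwidth_hz * math.log2(1.0 + rx_power_w / noise_w)   # Shannon capacity of the link

units, unit_bits = 3, 500e3                      # 3 data units of 500 kbit each
r = rate_bps(bandwidth_hz=20e6, tx_power_w=0.1, d=50.0)
t = units * unit_bits / r                        # transmission time of the offloaded units
print("rate %.1f Mbit/s, transfer time %.1f ms" % (r / 1e6, t * 1e3))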
3. The multi-node cooperative computing offloading method according to claim 2, wherein in step S2, the energy consumption when the task is executed at the IoT local user side is expressed as formula (12):
E_L = ρ_{k→0} R̄_k C_k^0 m_k^0    (12)
wherein a selection parameter ρ_{k→n}, n ∈ {1,2,...,N}, is set to represent the selection of the edge nodes; the overall energy consumption of the edge nodes includes the computing energy consumption and the transmission energy consumption and is expressed as formula (13):
E_E = Σ_{n=1}^{N} ( ρ_{k→n} R̄_k C_k^n m_k^n + p_{k,n} t_{k,n} )    (13)
when the computing task is partially offloaded to the cloud server, the overall energy consumption of the cloud node is represented by equation (14):
E_C = ρ_{k→N+1} R̄_k C_k^{N+1} m_k^{N+1} + p_{k,N+1} t_{k,N+1}    (14)
the overall energy consumption in the local-edge-cloud edge computing network is the sum of the energy consumption of the IoT local user side, the edge and the cloud, and is represented by formula (15):
E_total = E_L + E_E + E_C    (15);
the network delay of the local-edge-cloud edge computing network includes a computation delay and a transmission delay, where the computation delay of the IoT local user of the kth user is represented by equation (16):
s_{k→0} = ρ_{k→0} R̄_k C_k^0 / F_0    (16)
the computation delay at the edge and cloud is the maximum of the computation delays of the MEC nodes and the cloud node, as shown in formula (17):
s_k^E = max_{n∈{1,...,N+1}} { ρ_{k→n} R̄_k C_k^n / F_n }    (17)
the overall computation delay of the local-edge-cloud-edge computing network is expressed by formula (18):
s_k = max_{n∈{0,1,...,N+1}} { ρ_{k→n} R̄_k C_k^n / F_n }    (18)
the overall time delay of the local-edge-cloud-edge computing network is the sum of the computing time delay and the transmission time delay, and is expressed as a formula (20):
D_k = s_k + t_k    (20);
the constraint conditions of the multi-node cooperative computing unloading model are shown in the formulas (21, 21-1-21-7):
min E_total    (21),
D_k ≤ T    (21-1),
t_{k,n} > 0    (21-2),
Σ_{n=0}^{N+1} β_{m,n} = 1 ,  m ∈ {1,2,...,M_k}    (21-3),
ρ_{k→n} = Σ_{m=1}^{M_k} β_{m,n}    (21-4),
ρ_{k→n} R̄_k C_k^n ≤ F_n ,  n ∈ {1,2,...,N}    (21-5),
ρ_{k→0} R̄_k C_k^0 ≤ F_0    (21-6),
ρ_{k→N+1} R̄_k C_k^{N+1} ≤ F_{N+1}    (21-7),
In these formulas, constraints (21-1) and (21-2) are transmission time constraints, where (21-1) indicates that the overall network delay should be smaller than the delay limit of the user side; (21-3) and (21-4) describe the allocation after the user side divides the data, where (21-3) indicates that one data unit can only be offloaded to one node and (21-4) gives the number of data units processed by a given edge node; and (21-5) to (21-7) respectively represent the computing capacity limits of the MEC nodes, the IoT local node and the cloud node.
4. The multi-node cooperative computing offloading method according to claim 3, wherein in step S3, with formula (21) as the objective function, ρ_{k→0}, ρ_{k→1}, …, ρ_{k→N}, ρ_{k→N+1} in the formula are regarded as the variables, and the objective function is treated as a linear programming problem in these variables, where the variables ρ satisfy the condition shown in formula (22):
ρ k→0k→1 +…+ρ k→Nk→N+1 =M k (22),
the above objective function (21) is then expressed in the form shown in formula (23):
E_total = v_0 ρ_{k→0} + v_1 ρ_{k→1} + … + v_N ρ_{k→N} + v_{N+1} ρ_{k→N+1}    (23),
wherein v_0, v_1, …, v_N, v_{N+1} denote the coefficients of the variables ρ; let f = [v_0 v_1 … v_N v_{N+1}]^T, where f represents the coefficient vector of the variables in the objective function;
according to equation (21-1), equation (21) is converted to that shown as equation (24):
a_{00} ρ_{k→0} + a_{01} ρ_{k→1} + … + a_{0N} ρ_{k→N} + a_{0,N+1} ρ_{k→N+1} ≤ T    (24),
wherein a_{00}, a_{01}, …, a_{0N}, a_{0,N+1} represent the coefficients of the variables ρ in constraint (21-1); according to the capacity constraints (21-5) to (21-7) of formula (21), formula (21) is converted into the following formulas (25) to (27), respectively:
a_{10} ρ_{k→0} + a_{11} ρ_{k→1} + … + a_{1N} ρ_{k→N} + a_{1,N+1} ρ_{k→N+1} ≤ F_n    (25),
a_{20} ρ_{k→0} + a_{21} ρ_{k→1} + … + a_{2N} ρ_{k→N} + a_{2,N+1} ρ_{k→N+1} ≤ F_0    (26),
a_{30} ρ_{k→0} + a_{31} ρ_{k→1} + … + a_{3N} ρ_{k→N} + a_{3,N+1} ρ_{k→N+1} ≤ F_{N+1}    (27),
equations (22), (24), (25), (26) and (27) convert the constraints of the objective function (21) into a standard form with respect to the variables ρ, as shown in formula (28):
min f^T ρ
s.t. Aρ ≤ b ,  ρ ≥ 0 and integer    (28)
where A represents the constraint matrix formed by the constraint system composed of equations (24), (25), (26) and (27), and b = [T F_n F_0 F_{N+1} M_k]^T, where b represents the right-hand-side vector of the constraint system composed of equations (24), (25), (26) and (27).
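A minimal sketch of solving the standard form (28) is given below for illustration; the concrete coefficient values are invented for the example, and SciPy's mixed-integer solver (which applies LP relaxation with branch-and-bound) stands in for a hand-written branch-and-bound routine.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

M_k = 10                                    # number of data units to place
f = np.array([5.0, 2.0, 3.0, 4.0])          # per-unit energy cost: local, MEC1, MEC2, cloud
A = np.array([[0.8, 1.2, 1.0, 2.5],         # per-unit delay coefficients, right-hand side T
              [1.0, 0.0, 0.0, 0.0],         # local capacity row, right-hand side F_0
              [0.0, 1.0, 1.0, 0.0],         # edge capacity row, right-hand side F_n
              [0.0, 0.0, 0.0, 1.0]])        # cloud capacity row, right-hand side F_{N+1}
b = np.array([15.0, 3.0, 6.0, 8.0])

res = milp(c=f,
           constraints=[LinearConstraint(A, -np.inf, b),           # A rho <= b
                        LinearConstraint(np.ones(4), M_k, M_k)],   # all M_k units must be placed
           integrality=np.ones(4),                                 # rho is integer
           bounds=Bounds(0, np.inf))                               # rho >= 0
print(res.x, res.fun)                       # optimal split of the units and its total cost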
CN202011077355.2A 2020-10-10 2020-10-10 Multi-node cooperative computing unloading method based on energy consumption minimization Active CN112235387B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011077355.2A CN112235387B (en) 2020-10-10 2020-10-10 Multi-node cooperative computing unloading method based on energy consumption minimization

Publications (2)

Publication Number Publication Date
CN112235387A CN112235387A (en) 2021-01-15
CN112235387B true CN112235387B (en) 2022-12-13

Family

ID=74111800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011077355.2A Active CN112235387B (en) 2020-10-10 2020-10-10 Multi-node cooperative computing unloading method based on energy consumption minimization

Country Status (1)

Country Link
CN (1) CN112235387B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949919B (en) * 2021-02-25 2024-03-19 内蒙古科技大学包头师范学院 Energy-saving-targeted computing and unloading model optimization method
CN115277770B (en) * 2022-07-20 2023-04-25 华北电力大学(保定) Unmanned aerial vehicle information collection method based on joint optimization of node access and flight strategy

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 The combined optimization method of task unloading and resource allocation in mobile edge calculations network
CN110351352A (en) * 2019-07-03 2019-10-18 中山大学 Edge calculations or mist calculate micro- computing cluster forming method based on incentive mechanism under environment
CN110545584A (en) * 2019-08-20 2019-12-06 浙江科技学院 Communication processing method of full-duplex mobile edge computing communication system
CN111726854A (en) * 2020-04-24 2020-09-29 浙江工业大学 Method for reducing calculation unloading energy consumption of Internet of things

Also Published As

Publication number Publication date
CN112235387A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
Ren et al. Collaborative cloud and edge computing for latency minimization
Zhang et al. Dynamic task offloading and resource allocation for mobile-edge computing in dense cloud RAN
CN110493360B (en) Mobile edge computing unloading method for reducing system energy consumption under multiple servers
Dai et al. Joint load balancing and offloading in vehicular edge computing and networks
CN108809695B (en) Distributed uplink unloading strategy facing mobile edge calculation
CN111930436B (en) Random task queuing unloading optimization method based on edge calculation
CN112600921B (en) Heterogeneous mobile edge network-oriented dynamic task unloading method
Labidi et al. Joint multi-user resource scheduling and computation offloading in small cell networks
CN109151864B (en) Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network
Cui et al. A novel offloading scheduling method for mobile application in mobile edge computing
CN111475274B (en) Cloud collaborative multi-task scheduling method and device
CN110121212B (en) Uplink transmission method for periodic URLLC service
Oueis et al. On the impact of backhaul network on distributed cloud computing
CN112235387B (en) Multi-node cooperative computing unloading method based on energy consumption minimization
CN110545584A (en) Communication processing method of full-duplex mobile edge computing communication system
CN113784373A (en) Combined optimization method and system for time delay and frequency spectrum occupation in cloud edge cooperative network
Tanzil et al. A distributed coalition game approach to femto-cloud formation
Cheng et al. Energy-efficient resource allocation for UAV-empowered mobile edge computing system
Nguyen et al. Joint computation offloading, SFC placement, and resource allocation for multi-site MEC systems
Tian et al. Asynchronous federated learning empowered computation offloading in collaborative vehicular networks
Huang et al. A Practical Approach for Load Balancing in LTE Networks.
Al-Abiad et al. Task offloading optimization in NOMA-enabled dual-hop mobile edge computing system using conflict graph
Han et al. Research on multinode collaborative computing offloading algorithm based on minimization of energy consumption
Noghani et al. A generic framework for task offloading in mmWave MEC backhaul networks
Lin et al. Joint Optimization of Offloading and Resource Allocation for SDN‐Enabled IoV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant