CN114024970A - Power internet of things work load distribution method based on edge calculation


Info

Publication number
CN114024970A
CN114024970A (application CN202111144620.9A)
Authority
CN
China
Prior art keywords
edge
layer
computing
delay
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111144620.9A
Other languages
Chinese (zh)
Inventor
赵梦晴
赵涛
王兴
安宁
王雷
孟凡博
王东东
李欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinzhou Electric Power Supply Co Of State Grid Liaoning Electric Power Supply Co ltd
State Grid Corp of China SGCC
State Grid Liaoning Electric Power Co Ltd
Original Assignee
Jinzhou Electric Power Supply Co Of State Grid Liaoning Electric Power Supply Co ltd
State Grid Corp of China SGCC
State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinzhou Electric Power Supply Co Of State Grid Liaoning Electric Power Supply Co ltd, State Grid Corp of China SGCC, State Grid Liaoning Electric Power Co Ltd
Priority to CN202111144620.9A
Publication of CN114024970A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0876 Aspects of the degree of configuration automation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method for distributing the workload of the electric power Internet of Things based on edge computing comprises the following steps: an edge-computing-based electric power Internet of Things architecture is established, and the field layer transmits the acquired data to the nearest edge computing layer for processing; a service delay minimization workload model is defined for the SDN network layer, which collects the network link parameters and the idle edge computing nodes and is responsible for global control and scheduling; according to the service delay, a task offloading algorithm finally decides whether to offload a task to an available computing node or to the local edge computing layer. Peripheral edge servers are thereby used to process the task requests of the terminals in a distributed manner, which reduces delay, relieves the pressure on the core network, effectively controls network congestion, allows diversified services to be deployed automatically, and makes optimal use of limited network resources.

Description

Power internet of things work load distribution method based on edge calculation
Technical Field
The invention relates to a workload distribution method for the electric power Internet of Things based on edge computing.
Background
The power industry is a basic industry that bears on the national economy and people's livelihood. To meet the needs of economic and social development, the popularization of the Internet of Things (IoT) and of intelligent terminal applications ensures that the smart grid can operate in an orderly and efficient manner.
With the gradual development of electric power Internet of Things services, the number of applications that a single node must process keeps growing and their delay requirements become diversified. Although a single edge node can handle a large volume of ubiquitous services, when service terminals issue requests frequently, for example when an inspection terminal moves frequently or when acquisition terminals upload large amounts of data simultaneously in an abnormal environment, the limited computing resources of a single edge node cause task queuing, and the delay requirements of all services can no longer be met.
Disclosure of Invention
The invention aims to solve the above technical problem and provides a workload distribution method for the electric power Internet of Things based on edge computing. Through a deployed SDN network layer, peripheral edge servers are used to process the task requests of terminals in a distributed manner, which reduces delay, relieves the pressure on the core network, effectively controls network congestion, allows diversified services to be deployed automatically, and makes optimal use of limited network resources.
The technical solution of the invention is as follows:
a method for distributing the work load of the power Internet of things based on edge calculation is characterized by comprising the following specific steps:
stp1, establishing an electric power Internet of Things architecture based on edge computing, the architecture comprising a field layer, an edge computing layer, an SDN network layer and a QoS application layer; the field devices in the field layer comprise client devices or sensor devices; the edge computing layer consists of gateways and edge servers, the edge servers being deployed at the edge, close to the field devices; the SDN network layer comprises switches and an SDN controller, and the gateways in the edge computing layer exchange data with the SDN controller through corresponding TSN switches;
stp2, the field layer uploads the collected data to the nearest edge computing layer;
stp3, the gateways in the edge computing layer process simple tasks, while the edge servers serve as the general interface for data communication and are responsible for processing complex tasks;
stp4, the SDN network layer is responsible for global control and scheduling: the SDN controller first builds a global topology map and collects network link parameters and idle computing nodes; when a task computing request is received, the SDN controller searches for available computing nodes, establishes a service delay minimization workload model and, through the task offloading algorithm, offloads the task to an available computing node or to the local edge computing layer.
Further, the service delay minimization workload model establishing process is as follows:
the set of edge nodes (EN) in the area is I, denoted {EN_1, EN_2, EN_3, ..., EN_I}, and the set of terminal equipments (UE) is J, denoted {UE_1, UE_2, UE_3, ..., UE_J}, where UE_j hosts the APP set K_j. The request of the k-th APP is represented by the vector w_jk = [l_jk, ω_jk], where l_jk is the amount of data to be transmitted for the k-th APP on the j-th terminal and ω_jk is the workload of the k-th APP task on the j-th terminal, i.e. the number of instructions the CPU has to execute; according to statistics of the workload over a period of time, ω_jk obeys a Poisson distribution. The service delay d(X) is defined as the time from the generation of a request on a UE to the completion of its processing on an EN, and the optimization objective is as follows:
P1: min d(X)
d(X) = Σ_{j∈J} d_j
where d(X) is the time from the generation of the requests on the UEs to the completion of their processing on the ENs, i.e. the sum over all UEs of the time from sending a request to the completion of its processing on an EN;
the service delay of a terminal is the maximum of the APP delays on that UE. d_ijk denotes the delay of assigning the workload ω_jk of the k-th APP on UE_j to EN_i, so the task assignment problem of UE_j is a K_j → EN_i mapping problem, i.e. mapping the k-th APP on the j-th UE onto the i-th EN. Considering the complete APP set of all UEs, {w_jk : j ∈ J, k ∈ K_j}, the assignment is a J × K → EN_i mapping problem, and the terminal delay is
d_j = max_{k∈K_j} Σ_{i∈I} x_ijk · d_ijk,
the network delay comprises the transmission delay, determined by the port rate, and the propagation delay, determined by the physical distance. B_j is the port rate of UE_j, i.e. the amount of data that can be transmitted per unit time, r_ij is the distance from UE_j to EN_i, and c is the propagation speed of the wireless or wired channel, so the network delay is
d_ijk^net = l_jk / B_j + r_ij / c
The computing delay d_ijk^comp is the delay caused by the CPU computing rate. An EN can process requests in two ways, one based on queuing theory and one in which processing starts only after all requests have arrived; the computing delay is expressed as
d_ijk^comp = ω_jk / v_ik
V = (v_ik)_{I×K} is the VM allocation matrix of the ENs, whose element v_ik is the CPU processing rate of VM_k in EN_i, and v_i is the CPU processing rate of EN_i. The sum of the computing power of all VMs in an EN must not exceed the actual computing power of that EN, i.e. the following constraint is satisfied:
Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i,
X = (x_ijk)_{I×J×K} is a three-dimensional array representing the mapping between the APP requests of the UEs and the ENs; its elements are defined as
x_ijk = 1 if the k-th APP request of UE_j is assigned to EN_i, and x_ijk = 0 otherwise.
A task ω_jk can only be assigned to one EN for processing, so the following constraint holds:
Σ_{i∈I} x_ijk = 1,  x_ijk ∈ {0, 1}.
The original problem can therefore be expressed as
min_{X,V} d(X)  s.t.  Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i,  Σ_{i∈I} x_ijk = 1,  x_ijk ∈ {0, 1}.
An improved particle swarm resource allocation algorithm is proposed, taking balanced task allocation as the condition of the edge node resource allocation problem, i.e. solving the problem
min_V d(V)  s.t.  Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i.
For convenience of solution the matrix is normalized: let
p_ik = v_ik / v_i,
which gives a resource allocation matrix P whose element p_ik is the proportion of the total computing resources of edge node EN_i occupied by VM_k, so the problem becomes
min_P d(P)  s.t.  Σ_{k∈K} p_ik ≤ 1,  0 ≤ p_ik ≤ 1.
The optimization experience of all particles is stored in the form of the pheromone used in the ant colony algorithm, and the velocity of the particle swarm is influenced through path selection. The particle attributes are mainly position and velocity: the position of particle ε is defined as a resource allocation matrix P_ε, representing a feasible solution of the resource allocation problem, and its velocity is defined as a matrix U_ε, representing the direction of the particle's motion. The velocity update formula is
U_ε(n+1) = g[ w·U_ε(n) + c1·r1·(Pb_ε(n) − P_ε(n)) + c2·r2·(Gb_ε(n) − P_ε(n)) ],
where w is the inertia weight, c1 and c2 are learning factors, r1 and r2 are random numbers in the interval (0, 1), Pb_ε(n) is the individual best position found by particle ε in the first n iterations, and Gb_ε(n) is the global best position found by the swarm in the first n iterations. The position update formula is
P_ε(n+1) = P_ε(n) + U_ε(n+1)
The function g(·) limits the velocity to the range [−u_min, u_max], i.e. u_ik ∈ [−u_min, u_max], where u_max is the maximum particle velocity; clamping each velocity component to this interval ensures that the particle position does not cross the boundary. Here p_ik^ε(n), an element of P_ε, is the position of particle ε after the n-th iteration, and u_ik^ε(n), an element of U_ε, is its velocity after the n-th iteration. Since the objective of the problem is the minimum service delay, the fitness function is the reciprocal of the service delay function and is expressed as
f(ε) = 1 / d(P_ε).
When the elite particle cannot be updated in time, the algorithm has fallen into a local optimum; the elite particle corresponds to the minimum service delay, and the optimal offloading position is selected accordingly.
Furthermore, the field device comprises a device detection device, a device inspection device, a line detection device, a video monitoring device, and an intelligent home or remote meter reading service terminal.
Further, the complex tasks are field device monitoring, collected data uploading and storing, and data calculation.
The invention has the beneficial effects that:
the centralized computing power is changed into distributed computing power, and an edge server is deployed on the edge side close to the equipment, so that the purpose of reducing time delay is achieved. The SDN network layer is responsible for overall control and scheduling, can quickly complete the unloading decision of the calculation task request, makes a data forwarding rule according to the actual service requirement and provides a basis for task unloading. The edge calculation layer is mainly used for processing complex tasks with high data volume, is also responsible for scheduling, arranging and other work among a plurality of task centers, can be matched with edge calculation, flexibly meets different requirements, can acquire network resources in real time, arranges services quickly, and greatly improves the utilization rate of the whole network resources.
Drawings
FIG. 1 is a schematic structural view of the present invention;
Detailed Description
As shown in fig. 1, a method for distributing the workload of the electric power Internet of Things based on edge computing specifically comprises the following steps:
stp1, establishing an electric power Internet of Things architecture based on edge computing, the architecture comprising a field layer, an edge computing layer, an SDN network layer and a QoS application layer; the field devices in the field layer comprise client devices or sensor devices, including equipment detection devices, equipment inspection devices, line detection devices, video monitoring devices, and smart home or remote meter reading service terminals; the edge computing layer consists of gateways and edge servers, the edge servers being deployed at the edge, close to the field devices; the SDN network layer comprises switches and an SDN controller, and the gateways in the edge computing layer exchange data with the SDN controller through corresponding TSN switches;
stp2, the field layer uploads the collected data to the nearest edge computing layer;
stp3, the gateways in the edge computing layer process simple tasks, while the edge servers serve as the general interface for data communication and are responsible for processing complex tasks, namely field device monitoring, uploading and storing of collected data, and data computation;
stp4, the SDN network layer is responsible for global control and scheduling: the SDN controller first builds a global topology map and collects network link parameters and idle computing nodes; when a task computing request is received, the SDN controller searches for available computing nodes, establishes a service delay minimization workload model and, through the task offloading algorithm, offloads the task to an available computing node or to the local edge computing layer.
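As an illustration of the decision flow in stp4, the following Python sketch shows how an SDN controller could compare idle peer computing nodes against the local edge computing layer using a delay estimate of the kind developed below. It is a minimal sketch, not the patented implementation: the ComputeNode fields, the estimate_delay helper and all numeric values are assumptions introduced only for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ComputeNode:
    name: str
    port_rate: float   # B_j, data volume transmitted per unit time (MB/s)
    distance: float    # r_ij, distance to the requesting terminal (km)
    cpu_rate: float    # v_ik, processing rate of the VM serving this APP type
    idle: bool         # collected by the SDN controller from the global topology

def estimate_delay(node: ComputeNode, data_size: float, workload: float,
                   propagation_speed: float = 500.0) -> float:
    """Service delay estimate: transmission + propagation + computing delay."""
    network_delay = data_size / node.port_rate + node.distance / propagation_speed
    computing_delay = workload / node.cpu_rate
    return network_delay + computing_delay

def offload_decision(local_edge: ComputeNode, candidates: List[ComputeNode],
                     data_size: float, workload: float) -> ComputeNode:
    """Pick the node (local edge layer or an idle peer) with the smallest delay."""
    best = local_edge
    best_delay = estimate_delay(local_edge, data_size, workload)
    for node in candidates:
        if not node.idle:
            continue
        delay = estimate_delay(node, data_size, workload)
        if delay < best_delay:
            best, best_delay = node, delay
    return best

if __name__ == "__main__":
    local = ComputeNode("EN_local", port_rate=100.0, distance=0.5, cpu_rate=50.0, idle=True)
    peers = [ComputeNode("EN_2", 1000.0, 2.0, 200.0, True),
             ComputeNode("EN_3", 1000.0, 1.0, 150.0, False)]
    target = offload_decision(local, peers, data_size=5.0, workload=400.0)
    print("offload task to", target.name)
```

In this toy setting the busy node EN_3 is skipped and the task goes to whichever of the remaining nodes gives the smaller estimated service delay.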
The service delay minimization workload model establishing process is as follows:
the set of edge nodes (EN) in the area is I, denoted {EN_1, EN_2, EN_3, ..., EN_I}, and the set of terminal equipments (UE) is J, denoted {UE_1, UE_2, UE_3, ..., UE_J}, where UE_j hosts the APP set K_j. The request of the k-th APP is represented by the vector w_jk = [l_jk, ω_jk], where l_jk is the amount of data to be transmitted for the k-th APP on the j-th terminal and ω_jk is the workload of the k-th APP task on the j-th terminal, i.e. the number of instructions the CPU has to execute; according to statistics of the workload over a period of time, ω_jk obeys a Poisson distribution. The service delay d(X) is defined as the time from the generation of a request on a UE to the completion of its processing on an EN, and the optimization objective is as follows:
P1: min d(X)
d(X) = Σ_{j∈J} d_j
where d(X) is the time from the generation of the requests on the UEs to the completion of their processing on the ENs, i.e. the sum over all UEs of the time from sending a request to the completion of its processing on an EN.
The service delay of a terminal is the maximum of the APP delays on that UE. Assuming that an APP request on a UE is an indivisible task, each APP request is allocated to only one EN for processing. d_ijk denotes the delay of assigning the workload ω_jk of the k-th APP on UE_j to EN_i, so the task assignment problem of UE_j is a K_j → EN_i mapping problem, i.e. mapping the k-th APP on the j-th UE onto the i-th EN. Considering the complete APP set of all UEs, {w_jk : j ∈ J, k ∈ K_j}, the assignment is a J × K → EN_i mapping problem, and the terminal delay is
d_j = max_{k∈K_j} Σ_{i∈I} x_ijk · d_ijk.
The network delay comprises the transmission delay, determined by the port rate, and the propagation delay, determined by the physical distance. B_j is the port rate of UE_j, i.e. the amount of data that can be transmitted per unit time, r_ij is the distance from UE_j to EN_i, and c is the propagation speed of the wireless or wired channel. In a practical environment the data packets sent by an application are on the order of KB to MB, the port sending rate is on the order of 100 MB/s to GB/s, and a port with a high sending rate can be selected; the channel distance is on the order of kilometres, and because of the limited coverage of a wireless router, 1 km involves 3 to 5 routing and forwarding hops. Taking into account the interference of buildings in the channel and the store-and-forward processing of intermediate gateways, the effective propagation speed of the channel is 100 km/s to 1000 km/s. In the model analysis B_j is assumed to be large enough that l_jk/B_j << r_ij/c, so the network delay is dominated by the propagation term; the network delay considered in the following is
d_ijk^net = l_jk / B_j + r_ij / c
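As a worked example with illustrative values inside the ranges quoted above (not figures from the patent): for l_jk = 1 MB, B_j = 1 GB/s, r_ij = 1 km and c = 100 km/s, the transmission delay is 1 MB / (1 GB/s) = 1 ms, while the propagation delay is 1 km / (100 km/s) = 10 ms, so the propagation term does indeed dominate.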
The computing delay d_ijk^comp is the delay caused by the CPU computing rate. An EN can process requests in two ways, one based on queuing theory and one in which processing starts only after all requests have arrived. For convenience of analysis it is assumed that an EN starts processing after all requests have arrived, and the computing delay is taken as the average over the tasks processed by the CPU in a period of time. An EN can process many kinds of APP requests; because different requests are processed in different ways, the EN is divided into several virtual machines (VMs) that handle the requests of different APPs, in order to improve the efficiency with which the EN processes heterogeneous requests, to reduce the interference caused by mixing heterogeneous working modes, and to reduce the task computing delay. A VM can be dynamically started and deleted in the EN as needed, which simplifies the work of developers and reduces the programming complexity of serving multiple types of services on one physical server. The computing delay can be expressed as
d_ijk^comp = ω_jk / v_ik
V = (v_ik)_{I×K} is the VM allocation matrix of the ENs, whose element v_ik is the CPU processing rate of VM_k in EN_i, and v_i is the CPU processing rate of EN_i. By reasonably allocating the proportions of the different VMs on an EN and adjusting the CPU resources assigned to the different types of applications, the service delay of the UEs is minimized. The sum of the computing power of all VMs in an EN must not exceed the actual computing power of that EN, i.e. the following constraint is satisfied:
Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i.
X = (x_ijk)_{I×J×K} is a three-dimensional array representing the mapping between the APP requests of the UEs and the ENs; its elements are defined as
x_ijk = 1 if the k-th APP request of UE_j is assigned to EN_i, and x_ijk = 0 otherwise.
A task ω_jk can only be assigned to one EN for processing, so the following constraint holds:
Σ_{i∈I} x_ijk = 1,  x_ijk ∈ {0, 1}.
Considering the special case of only one UE and ignoring the network transmission delay between the UE and the ENs, this workload distribution problem is equivalent to a completion-time scheduling problem, and the original problem can be expressed as
min_{X,V} d(X)  s.t.  Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i,  Σ_{i∈I} x_ijk = 1,  x_ijk ∈ {0, 1}.
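To make the objective concrete, the following Python sketch evaluates d(X) for a given assignment under the delay model above. It is a minimal sketch under stated assumptions: the computing delay of a task is taken as ω_jk / v_ik, the simple reading used above, and all function and variable names are introduced only for this example.

```python
import numpy as np

def service_delay(X, l, omega, B, r, V, c=500.0):
    """Total service delay d(X) = sum_j max_k sum_i x_ijk * d_ijk.

    X     : (I, J, K) 0/1 assignment array, x_ijk = 1 if APP k of UE_j runs on EN_i
    l     : (J, K) data volumes l_jk
    omega : (J, K) workloads omega_jk (CPU instructions)
    B     : (J,)  port rates B_j
    r     : (I, J) distances r_ij
    V     : (I, K) VM processing rates v_ik
    c     : propagation speed of the channel
    """
    I, J, K = X.shape
    total = 0.0
    for j in range(J):
        app_delays = []
        for k in range(K):
            d_jk = 0.0
            for i in range(I):
                if X[i, j, k]:
                    d_net = l[j, k] / B[j] + r[i, j] / c
                    d_comp = omega[j, k] / V[i, k]  # simple omega/v reading of the computing delay
                    d_jk += d_net + d_comp
            app_delays.append(d_jk)
        total += max(app_delays)                    # terminal delay = max over its APPs
    return total

# tiny example: 2 ENs, 2 UEs, 1 APP class
X = np.zeros((2, 2, 1)); X[0, 0, 0] = 1; X[1, 1, 0] = 1
d = service_delay(X, l=np.full((2, 1), 2.0), omega=np.full((2, 1), 100.0),
                  B=np.array([100.0, 100.0]), r=np.ones((2, 2)),
                  V=np.full((2, 1), 50.0))
print("d(X) =", round(d, 4))
```

Swapping in a different computing-delay expression only requires changing the d_comp line; the rest of the objective evaluation is unaffected.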
An improved particle swarm resource allocation algorithm is proposed, taking balanced task allocation as the condition of the edge node resource allocation problem, i.e. solving the problem
min_V d(V)  s.t.  Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i.
For convenience of solution the matrix is normalized: let
p_ik = v_ik / v_i,
which gives a resource allocation matrix P whose element p_ik is the proportion of the total computing resources of edge node EN_i occupied by VM_k, so the problem becomes
min_P d(P)  s.t.  Σ_{k∈K} p_ik ≤ 1,  0 ≤ p_ik ≤ 1.
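A small sketch of the normalization step, under the same illustrative naming as above (the function name and the values are assumptions for the example): each row of the VM allocation matrix V is divided by the node capacity v_i to obtain the resource proportion matrix P.

```python
import numpy as np

def normalize_allocation(V, v_total):
    """p_ik = v_ik / v_i; rows then satisfy 0 <= p_ik <= 1 and sum_k p_ik <= 1."""
    P = V / v_total[:, None]
    assert np.all(P >= 0) and np.all(P.sum(axis=1) <= 1 + 1e-9)
    return P

V = np.array([[20.0, 25.0], [40.0, 10.0]])   # v_ik: VM rates on EN_1, EN_2
v_total = np.array([50.0, 60.0])             # v_i: total CPU rate of each EN
print(normalize_allocation(V, v_total))
```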
Traditional particle swarm optimization easily falls into local optima because of the lack of information interaction among particles. To address this problem, an improved particle swarm algorithm is proposed: the optimization experience of all particles is stored in the form of the pheromone used in the ant colony algorithm, and the velocity of the particle swarm is influenced through path selection, so that fast convergence is maintained while the loss of population diversity due to premature convergence is avoided.
The particle attributes are mainly position and velocity: the position of particle ε is defined as a resource allocation matrix P_ε, representing a feasible solution of the resource allocation problem, and its velocity is defined as a matrix U_ε, representing the direction of the particle's motion. The velocity update formula is
U_ε(n+1) = g[ w·U_ε(n) + c1·r1·(Pb_ε(n) − P_ε(n)) + c2·r2·(Gb_ε(n) − P_ε(n)) ],
where w is the inertia weight, c1 and c2 are learning factors, r1 and r2 are random numbers in the interval (0, 1), Pb_ε(n) is the individual best position found by particle ε in the first n iterations, and Gb_ε(n) is the global best position found by the swarm in the first n iterations. The position update formula is
P_ε(n+1) = P_ε(n) + U_ε(n+1)
The function g(·) limits the velocity to the range [−u_min, u_max], i.e. u_ik ∈ [−u_min, u_max], where u_max is the maximum particle velocity; clamping each velocity component to this interval ensures that the particle position does not cross the boundary. Here p_ik^ε(n), an element of P_ε, is the position of particle ε after the n-th iteration, and u_ik^ε(n), an element of U_ε, is its velocity after the n-th iteration. Since the objective of the problem is the minimum service delay, the fitness function is the reciprocal of the service delay function and is expressed as
f(ε) = 1 / d(P_ε).
When the elite particle cannot be updated in time, the algorithm has fallen into a local optimum; the elite particle corresponds to the minimum service delay, and the optimal offloading position is selected accordingly.
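The particle update rules above can be prototyped as in the sketch below. It is a minimal sketch under stated assumptions: positions are resource proportion matrices P_ε, velocities are clamped element-wise by g(·), fitness is the reciprocal of a user-supplied service delay function, and the pheromone-based guidance of the improved algorithm is omitted, so the code reduces to a standard inertia-weight particle swarm expressed in the patent's notation; all names and parameter values are illustrative.

```python
import numpy as np

def clamp(U, u_min, u_max):
    """g(.): limit each velocity component to [-u_min, u_max]."""
    return np.clip(U, -u_min, u_max)

def improved_pso(delay_fn, shape, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, u_min=0.1, u_max=0.1, seed=0):
    """Minimise service delay over resource proportion matrices P (fitness = 1/d)."""
    rng = np.random.default_rng(seed)
    P = rng.random((n_particles, *shape))            # positions P_eps
    P /= P.sum(axis=-1, keepdims=True)               # initialise with sum_k p_ik <= 1
    U = np.zeros_like(P)                             # velocities U_eps
    Pb = P.copy()                                    # individual best positions
    pb_fit = np.array([1.0 / delay_fn(p) for p in P])
    gb_idx = pb_fit.argmax()
    Gb, gb_fit = Pb[gb_idx].copy(), pb_fit[gb_idx]   # global best (elite particle)
    for _ in range(n_iter):
        r1, r2 = rng.random(P.shape), rng.random(P.shape)
        U = clamp(w * U + c1 * r1 * (Pb - P) + c2 * r2 * (Gb - P), u_min, u_max)
        P = np.clip(P + U, 0.0, 1.0)                 # position update, kept in [0, 1]
        for e in range(n_particles):
            fit = 1.0 / delay_fn(P[e])
            if fit > pb_fit[e]:
                Pb[e], pb_fit[e] = P[e].copy(), fit
                if fit > gb_fit:
                    Gb, gb_fit = P[e].copy(), fit
    return Gb, 1.0 / gb_fit                          # elite position and its delay

# toy delay function: delay grows when proportions deviate from a fixed target
target = np.array([[0.5, 0.3], [0.2, 0.6]])
best_P, best_delay = improved_pso(lambda P: 1.0 + ((P - target) ** 2).sum(),
                                  shape=target.shape)
print("best delay:", round(best_delay, 4))
```

In a fuller implementation the toy delay function would be replaced by the service delay model d(·) described above, and the pheromone table would bias the velocity update when the elite particle stagnates.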
The above description is only exemplary of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A method for distributing the work load of the power Internet of things based on edge calculation is characterized by comprising the following specific steps:
stp1, establishing an electric power Internet of Things architecture based on edge computing, the architecture comprising a field layer, an edge computing layer, an SDN network layer and a QoS application layer; the field devices in the field layer comprise client devices or sensor devices; the edge computing layer consists of gateways and edge servers, the edge servers being deployed at the edge, close to the field devices; the SDN network layer comprises switches and an SDN controller, and the gateways in the edge computing layer exchange data with the SDN controller through corresponding TSN switches;
stp2, the field layer uploads the collected data to the nearest edge computing layer;
stp3, the gateways in the edge computing layer process simple tasks, while the edge servers serve as the general interface for data communication and are responsible for processing complex tasks;
stp4, the SDN network layer is responsible for global control and scheduling: the SDN controller first builds a global topology map and collects network link parameters and idle computing nodes; when a task computing request is received, the SDN controller searches for available computing nodes, establishes a service delay minimization workload model and, through the task offloading algorithm, offloads the task to an available computing node or to the local edge computing layer.
2. The method for distributing the workload of the power internet of things based on the edge computing as claimed in claim 1, wherein: the service delay minimization workload model establishing process is as follows:
the set of edge nodes (EN) in the area is I, denoted {EN_1, EN_2, EN_3, ..., EN_I}, and the set of terminal equipments (UE) is J, denoted {UE_1, UE_2, UE_3, ..., UE_J}, where UE_j hosts the APP set K_j. The request of the k-th APP is represented by the vector w_jk = [l_jk, ω_jk], where l_jk is the amount of data to be transmitted for the k-th APP on the j-th terminal and ω_jk is the workload of the k-th APP task on the j-th terminal, i.e. the number of instructions the CPU has to execute; according to statistics of the workload over a period of time, ω_jk obeys a Poisson distribution. The service delay d(X) is defined as the time from the generation of a request on a UE to the completion of its processing on an EN, and the optimization objective is as follows:
P1: min d(X)
d(X) = Σ_{j∈J} d_j
where d(X) is the time from the generation of the requests on the UEs to the completion of their processing on the ENs, i.e. the sum over all UEs of the time from sending a request to the completion of its processing on an EN;
the service delay of a terminal is the maximum of the APP delays on that UE. d_ijk denotes the delay of assigning the workload ω_jk of the k-th APP on UE_j to EN_i, so the task assignment problem of UE_j is a K_j → EN_i mapping problem, i.e. mapping the k-th APP on the j-th UE onto the i-th EN. Considering the complete APP set of all UEs, {w_jk : j ∈ J, k ∈ K_j}, the assignment is a J × K → EN_i mapping problem, and the terminal delay is
d_j = max_{k∈K_j} Σ_{i∈I} x_ijk · d_ijk,
the network delay comprises the transmission delay, determined by the port rate, and the propagation delay, determined by the physical distance. B_j is the port rate of UE_j, i.e. the amount of data that can be transmitted per unit time, r_ij is the distance from UE_j to EN_i, and c is the propagation speed of the wireless or wired channel, so the network delay is
d_ijk^net = l_jk / B_j + r_ij / c
The computing delay d_ijk^comp is the delay caused by the CPU computing rate. An EN can process requests in two ways, one based on queuing theory and one in which processing starts only after all requests have arrived; the computing delay is expressed as
d_ijk^comp = ω_jk / v_ik
V = (v_ik)_{I×K} is the VM allocation matrix of the ENs, whose element v_ik is the CPU processing rate of VM_k in EN_i, and v_i is the CPU processing rate of EN_i. The sum of the computing power of all VMs in an EN must not exceed the actual computing power of that EN, i.e. the following constraint is satisfied:
Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i,
X = (x_ijk)_{I×J×K} is a three-dimensional array representing the mapping between the APP requests of the UEs and the ENs; its elements are defined as
x_ijk = 1 if the k-th APP request of UE_j is assigned to EN_i, and x_ijk = 0 otherwise.
A task ω_jk can only be assigned to one EN for processing, so the following constraint holds:
Σ_{i∈I} x_ijk = 1,  x_ijk ∈ {0, 1}.
The original problem can therefore be expressed as
min_{X,V} d(X)  s.t.  Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i,  Σ_{i∈I} x_ijk = 1,  x_ijk ∈ {0, 1}.
An improved particle swarm resource allocation algorithm is proposed, taking balanced task allocation as the condition of the edge node resource allocation problem, i.e. solving the problem
min_V d(V)  s.t.  Σ_{k∈K} v_ik ≤ v_i,  0 ≤ v_ik ≤ v_i.
For convenience of solution the matrix is normalized: let
p_ik = v_ik / v_i,
which gives a resource allocation matrix P whose element p_ik is the proportion of the total computing resources of edge node EN_i occupied by VM_k, so the problem becomes
min_P d(P)  s.t.  Σ_{k∈K} p_ik ≤ 1,  0 ≤ p_ik ≤ 1.
The optimization experience of all particles is stored in the form of the pheromone used in the ant colony algorithm, and the velocity of the particle swarm is influenced through path selection. The particle attributes are mainly position and velocity: the position of particle ε is defined as a resource allocation matrix P_ε, representing a feasible solution of the resource allocation problem, and its velocity is defined as a matrix U_ε, representing the direction of the particle's motion. The velocity update formula is
U_ε(n+1) = g[ w·U_ε(n) + c1·r1·(Pb_ε(n) − P_ε(n)) + c2·r2·(Gb_ε(n) − P_ε(n)) ],
where w is the inertia weight, c1 and c2 are learning factors, r1 and r2 are random numbers in the interval (0, 1), Pb_ε(n) is the individual best position found by particle ε in the first n iterations, and Gb_ε(n) is the global best position found by the swarm in the first n iterations. The position update formula is
P_ε(n+1) = P_ε(n) + U_ε(n+1)
The function g(·) limits the velocity to the range [−u_min, u_max], i.e. u_ik ∈ [−u_min, u_max], where u_max is the maximum particle velocity; clamping each velocity component to this interval ensures that the particle position does not cross the boundary.
Here p_ik^ε(n), an element of P_ε, is the position of particle ε after the n-th iteration, and u_ik^ε(n), an element of U_ε, is its velocity after the n-th iteration. Since the objective of the problem is the minimum service delay, the fitness function is the reciprocal of the service delay function and is expressed as
f(ε) = 1 / d(P_ε).
When the elite particle cannot be updated in time, the algorithm has fallen into a local optimum; the elite particle corresponds to the minimum service delay, and the optimal offloading position is selected accordingly.
3. The method for distributing the workload of the power internet of things based on the edge computing as claimed in claim 1, wherein: the field equipment comprises an equipment detection device, an equipment inspection device, a line detection device, a video monitoring device and an intelligent home or remote meter reading service terminal.
4. The method for distributing the workload of the power internet of things based on the edge computing as claimed in claim 1, wherein: the complex tasks are field device monitoring, collected data uploading and storing, and data calculation.
CN202111144620.9A 2021-09-28 2021-09-28 Power internet of things work load distribution method based on edge calculation Pending CN114024970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111144620.9A CN114024970A (en) 2021-09-28 2021-09-28 Power internet of things work load distribution method based on edge calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111144620.9A CN114024970A (en) 2021-09-28 2021-09-28 Power internet of things work load distribution method based on edge calculation

Publications (1)

Publication Number Publication Date
CN114024970A true CN114024970A (en) 2022-02-08

Family

ID=80055047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111144620.9A Pending CN114024970A (en) 2021-09-28 2021-09-28 Power internet of things work load distribution method based on edge calculation

Country Status (1)

Country Link
CN (1) CN114024970A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225675A (en) * 2022-07-18 2022-10-21 国网信息通信产业集团有限公司 Charging station intelligent operation and maintenance system based on edge calculation
CN116668447A (en) * 2023-08-01 2023-08-29 贵州省广播电视信息网络股份有限公司 Edge computing task unloading method based on improved self-learning weight
CN116684483A (en) * 2023-08-02 2023-09-01 北京中电普华信息技术有限公司 Method for distributing communication resources of edge internet of things proxy and related products
CN117939572A (en) * 2024-03-25 2024-04-26 国网江苏省电力有限公司 Electric power Internet of things terminal access method
CN117939572B (en) * 2024-03-25 2024-05-28 国网江苏省电力有限公司 Electric power Internet of things terminal access method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170086191A1 (en) * 2015-09-23 2017-03-23 Google Inc. Distributed software defined wireless packet core system
KR20180123228A (en) * 2017-05-08 2018-11-16 충북대학교 산학협력단 System and method for load balancing on distributed datastore in sdn controller cluster
US20190261187A1 (en) * 2016-11-04 2019-08-22 Huawei Technologies Co., Ltd. Communication Method, Terminal, Access Network Device, And Core Network Device
CN110365753A (en) * 2019-06-27 2019-10-22 北京邮电大学 Internet of Things service low time delay load allocation method and device based on edge calculations
CN110891093A (en) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for selecting edge computing node in delay sensitive network
EP3826368A1 (en) * 2019-11-19 2021-05-26 Commissariat à l'énergie atomique et aux énergies alternatives Energy efficient discontinuous mobile edge computing with quality of service guarantees
CN113268341A (en) * 2021-04-30 2021-08-17 国网河北省电力有限公司信息通信分公司 Distribution method, device, equipment and storage medium of power grid edge calculation task

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170086191A1 (en) * 2015-09-23 2017-03-23 Google Inc. Distributed software defined wireless packet core system
US20190261187A1 (en) * 2016-11-04 2019-08-22 Huawei Technologies Co., Ltd. Communication Method, Terminal, Access Network Device, And Core Network Device
KR20180123228A (en) * 2017-05-08 2018-11-16 충북대학교 산학협력단 System and method for load balancing on distributed datastore in sdn controller cluster
CN110365753A (en) * 2019-06-27 2019-10-22 北京邮电大学 Internet of Things service low time delay load allocation method and device based on edge calculations
EP3826368A1 (en) * 2019-11-19 2021-05-26 Commissariat à l'énergie atomique et aux énergies alternatives Energy efficient discontinuous mobile edge computing with quality of service guarantees
CN110891093A (en) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for selecting edge computing node in delay sensitive network
CN113268341A (en) * 2021-04-30 2021-08-17 国网河北省电力有限公司信息通信分公司 Distribution method, device, equipment and storage medium of power grid edge calculation task

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225675A (en) * 2022-07-18 2022-10-21 国网信息通信产业集团有限公司 Charging station intelligent operation and maintenance system based on edge calculation
CN116668447A (en) * 2023-08-01 2023-08-29 贵州省广播电视信息网络股份有限公司 Edge computing task unloading method based on improved self-learning weight
CN116668447B (en) * 2023-08-01 2023-10-20 贵州省广播电视信息网络股份有限公司 Edge computing task unloading method based on improved self-learning weight
CN116684483A (en) * 2023-08-02 2023-09-01 北京中电普华信息技术有限公司 Method for distributing communication resources of edge internet of things proxy and related products
CN116684483B (en) * 2023-08-02 2023-09-29 北京中电普华信息技术有限公司 Method for distributing communication resources of edge internet of things proxy and related products
CN117939572A (en) * 2024-03-25 2024-04-26 国网江苏省电力有限公司 Electric power Internet of things terminal access method
CN117939572B (en) * 2024-03-25 2024-05-28 国网江苏省电力有限公司 Electric power Internet of things terminal access method

Similar Documents

Publication Publication Date Title
Shu et al. Multi-user offloading for edge computing networks: A dependency-aware and latency-optimal approach
CN114024970A (en) Power internet of things work load distribution method based on edge calculation
Grosu et al. Noncooperative load balancing in distributed systems
Islambouli et al. Optimized 3D deployment of UAV-mounted cloudlets to support latency-sensitive services in IoT networks
CN112039965B (en) Multitask unloading method and system in time-sensitive network
Chamola et al. An optimal delay aware task assignment scheme for wireless SDN networked edge cloudlets
CN110784366B (en) Switch migration method based on IMMAC algorithm in SDN
Wu et al. Computation offloading method using stochastic games for software defined network-based multi-agent mobile edge computing
CN109947574B (en) Fog network-based vehicle big data calculation unloading method
Li Resource optimization scheduling and allocation for hierarchical distributed cloud service system in smart city
Vakilian et al. Using the cuckoo algorithm to optimizing the response time and energy consumption cost of fog nodes by considering collaboration in the fog layer
Zhang et al. Theoretical analysis on edge computation offloading policies for IoT devices
CN114567598A (en) Load balancing method and device based on deep learning and cross-domain cooperation
Dao et al. Pattern-identified online task scheduling in multitier edge computing for industrial IoT services
Guan et al. A novel mobility-aware offloading management scheme in sustainable multi-access edge computing
CN113452956A (en) Intelligent distribution method and system for power transmission line inspection tasks
Maia et al. A multi-objective service placement and load distribution in edge computing
Tham et al. A load balancing scheme for sensing and analytics on a mobile edge computing network
Beraldi et al. Power of random choices made efficient for fog computing
Kaur et al. Packet optimization of software defined network using lion optimization
Xu et al. Online learning algorithms for offloading augmented reality requests with uncertain demands in MECs
Moreira et al. Task allocation framework for software-defined fog v-RAN
Liu et al. Scalable traffic management for mobile cloud services in 5G networks
CN116302404A (en) Resource decoupling data center-oriented server non-perception calculation scheduling method
CN112256415B (en) Micro cloud load balancing task scheduling method based on PSO-GA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination