CN115037620A - Edge-oriented intelligent gateway resource allocation method and equipment


Info

Publication number: CN115037620A
Authority: CN (China)
Prior art keywords: app, resource allocation, edge, intelligent gateway, gateway
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202210538531.0A
Other languages: Chinese (zh)
Other versions: CN115037620B (en)
Inventors: 沈奕菲, 罗华峰, 阮黎翔, 王松, 李心宇, 张胜, 陆熠晨, 方芳, 孙文文, 陈明, 曹文斌, 钱政旭
Current Assignee (the listed assignees may be inaccurate): Nanjing University; State Grid Zhejiang Electric Power Co Ltd; Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Original Assignee: Nanjing University; State Grid Zhejiang Electric Power Co Ltd; Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Application filed by Nanjing University, State Grid Zhejiang Electric Power Co Ltd, Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd filed Critical Nanjing University
Priority claimed from CN202210538531.0A
Publication of CN115037620A
Application granted; publication of CN115037620B
Legal status: Active

Classifications

    • H04L41/0823: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L12/66: Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L67/10: Protocols in which an application is distributed across nodes in the network


Abstract

The invention discloses an edge-oriented intelligent gateway resource allocation method and device. The resource allocation method comprises the following steps: establishing a system model comprising a resource allocation model and an availability analysis model; converting the resource allocation problem into an optimization problem with a centralized solution according to the system model; and solving the optimization problem with a distributed solution and outputting the allocation scheme. By combining edge computing, container technology and a microservice architecture, and by flexibly controlling how business-function APPs obtain resources, the invention ensures the long-term stable operation of every APP deployed on the intelligent gateway, improves resource utilization and reduces network latency.

Description

Edge-oriented intelligent gateway resource allocation method and equipment
Technical Field
The invention relates to edge computing, container technology and microservice architecture, and in particular to a resource allocation method and device for an edge-oriented intelligent gateway.
Background
Edge computing refers to an open platform at the network edge, close to the object or data source, that fuses core network, computing, storage and application capabilities. It provides intelligent services nearby and meets the key requirements of industry digitalization for agile connectivity, real-time services, data optimization, application intelligence, security and privacy protection. With the trend toward the Internet of Everything, edge data is growing explosively, which is both a challenge and an opportunity. The main advantages of edge computing include:
1) Lower latency. Data processing and analysis happen in real time or faster, close to the source rather than in a remote data center or cloud.
2) Lower cost. Managing data on local devices costs an enterprise far less than cloud and data-center networks.
3) Reduced network traffic. As the number of Internet-of-Things devices grows, data generation keeps setting records; network bandwidth becomes more limited, the cloud can be overwhelmed, and a larger data bottleneck results.
4) More efficient applications. With lower latency, applications run faster and more efficiently.
5) Personalization. Continuous learning at the edge allows models to be adjusted to individual needs, giving a personalized interactive experience.
6) Security and privacy protection. Data at the network edge often involves personal privacy; the traditional cloud computing model uploads such data to a central cloud, increasing the risk of leaking users' private data.
In edge computing, research on identity-authentication protocols should draw on the strengths of existing schemes while accounting for the distributed and mobile characteristics of edge computing, strengthening unified authentication, cross-domain authentication and handover authentication so as to protect user data and privacy across different trust domains and heterogeneous network environments.
Container technology is an operating-system virtualization technology in computer science; compared with a traditional operating system, a container allows an application and its dependencies to run in resource-isolated processes. As a lightweight, OS-level virtualization technique, all the components needed to run an application can be packaged as a single image that can be reused without affecting any process outside the container. Docker, the representative of the new generation of container technology, aims to provide a standardized runtime environment. Its main advantages include:
1) Lightweight: multiple Docker containers running on one machine share that machine's operating-system kernel; they start quickly and occupy very little compute and memory. Images are constructed from filesystem layers and share common files, which minimizes disk usage and speeds up image downloads.
2) Standard: Docker containers are based on open standards and run on all mainstream Linux distributions, Microsoft Windows, and any infrastructure including VMs, bare-metal servers and clouds.
3) Secure: Docker isolates applications not only from one another but also from the underlying infrastructure. Docker provides strong isolation by default, so a problem in one application or container does not spread to the entire machine.
The microservice architecture is a method of developing a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms, usually an HTTP-based resource API. These services are built around business functions and can be deployed independently through fully automated deployment machinery. They should rely on as little centralized management as possible and may use different programming languages and data stores as needed. The advantages of microservices include easier heterogeneity, freer adoption of new technology, mutual reinforcement of system architecture and organizational structure, better team building, easy scaling, simple deployment (update and rollback), high reusability, high resilience, and easier replacement of old components.
In recent years, some grid and provincial companies have carried out exploration and practice in remote operation and maintenance and state monitoring of secondary substation equipment, realizing remote operation and maintenance of substation automation equipment to a certain extent. Problems remain, however, including insufficient collection of operation and maintenance information from the automation equipment, business functions that are hard to extend because systems were custom-developed, and an insufficient degree of intelligence at the application level. It is therefore necessary to research new technologies for commissioning, operating and maintaining substation automation equipment: introduce new internet technologies such as edge computing and virtualization into operation and maintenance management and services; build an operation and maintenance system for substation automation equipment that is convenient to extend, flexible in function deployment, and open for platform sharing; realize full acquisition and on-site edge processing of the equipment's heterogeneous operation and maintenance information; solve problems such as difficult fault localization, inflexible extension of service functions, and a low degree of intelligence; create a new cloud-edge collaborative mode for automation-equipment operation and maintenance services; meet the requirements of unattended substations and improved commissioning and maintenance efficiency; and strengthen the ability to observe, analyze and control substation automation equipment.
Disclosure of Invention
The invention aims to provide a resource allocation method for an edge-oriented intelligent gateway that comprehensively utilizes three technologies: edge computing, containers and microservices. Through flexible control of how business-function APPs obtain resources, the method ensures the long-term stable operation of every APP deployed on the intelligent gateway, improving resource utilization and reducing network latency.
In order to achieve the above object, a first aspect of the present invention provides a resource allocation method for an edge-oriented intelligent gateway, comprising the steps of:
S1, establishing a system model, where the system model comprises a resource allocation model and an availability analysis model;
S2, converting the resource allocation problem into an optimization problem using a centralized solution according to the system model;
S3, solving the optimization problem using a distributed solution and outputting the allocation scheme.
Further, step S1 includes:
s11, establishing a resource allocation model: let the EN set and the APP set owned by the gateway be $MS$ and $NS$, with sizes $M$ and $N$ respectively, and let each $EN_j$ have $c_j$ computing units; let $x_{i,j}$ denote the number of computing units $EN_j$ allocates to $APP_i$, so that the allocation vector of $APP_i$ is $x_i=(x_{i,1},x_{i,2},x_{i,3},\dots,x_{i,M})$; let $p_j$ denote the unit price of $EN_j$, and define for the whole system the EN price vector $p=(p_1,p_2,\dots,p_j,\dots,p_M)$; let $U_i(x_i,p)$ denote the availability function of $APP_i$, determined by the resource allocation vector $x_i$ it occupies and the price vector $p$;
the limitation imposed on the gateway's N APPs by each EN's computing resources can be expressed as the constraint:

$$\sum_{i\in NS}x_{i,j}\le c_j,\quad\forall j\in MS$$

s12, establishing an availability analysis model: denote the revenue $APP_i$ can generate from its acquired resources as $u_i(x_i)$; in the subsequent model this value is used as an input to compute a concrete resource allocation scheme; define the revenue $APP_i$ can obtain from one computing unit of $EN_j$ as $a_{i,j}$, from which:

$$u_i(x_i)=\sum_{j\in MS}a_{i,j}\,x_{i,j}$$

in an actual production environment, the time of each request and reply sent by a user comprises three parts: the round-trip delay $d^{u}_{i}$ between the user and the edge intelligent gateway, the round-trip network delay $d^{net}_{i,j}$ between the edge intelligent gateway and the edge computing node EN, and the processing delay $d^{proc}_{i,j}$ at the EN; in most cases $d^{u}_{i}$ is very small and is ignored; defining the maximum delay $APP_i$ can tolerate as $D^{max}_{i}$, one obtains:

$$d^{net}_{i,j}+d^{proc}_{i,j}\le D^{max}_{i}$$

analyzing the processing delay of the EN with an M/G/1 queueing model, and assuming the workload is uniformly distributed over the computing units, the processing time $d^{proc}_{i,j}$ of $EN_j$ when serving $APP_i$ is calculated by the M/G/1 average-response-time formula [equation image in source], in which $\mu_{i,j}$ denotes the rate at which a single computing unit of $EN_j$ processes $APP_i$ requests and $\lambda_{i,j}$ denotes the rate at which $APP_i$ issues requests to $EN_j$; to ensure queue stability it is required that

$$\lambda_{i,j}<\mu_{i,j}\,x_{i,j}$$

according to the constraint $d^{net}_{i,j}+d^{proc}_{i,j}\le D^{max}_{i}$, an expression for $a_{i,j}$ is obtained [equation image in source], in which $r_i$ denotes the revenue of each successful response to $APP_i$; after setting the maximum tolerable delay $D^{max}_{i}$ and determining from the candidate allocation scheme the corresponding values of $x_{i,j}$, $\lambda_{i,j}$ and the remaining measured quantities, the availability each $APP_i$ realizes from its acquired computing resources can be calculated through that formula.
Further, in step S11, for all APPs deployed via Docker in the gateway, the goal is to maximize availability under the budget constraint; overall, the following two conditions must be satisfied:
1) under the equilibrium price vector $p^{*}$, $x^{*}_{i}$ is the optimal resource allocation of $APP_i$ given its budget $B_i$:

$$x^{*}_{i}\in\arg\max_{x_i\ge 0}\{\,u_i(x_i):p^{*}\cdot x_i\le B_i\,\}$$

2) all resources are fully utilized:

$$\sum_{i\in NS}x^{*}_{i,j}=c_j,\quad\forall j\in MS$$
further, the specific content of step S2 is as follows:
will be u in the system model i (x i ) Using sigma j a i,j x i,j Is shown in which
Figure BDA0003649294390000045
When p represents a price vector, a i,j /p j Is defined as APP i Occupation of EN by costing cost 1 j Obtaining the income; and for each APP deployed at the gateway, there is an EN with the highest profit, and MBB represents the highest profit rate that can be obtained by an APP in the current EN set:
Figure BDA0003649294390000046
requirement set D of each APP i (p) includes all compounds capable of giving itEN for MBB is formulated as:
Figure BDA0003649294390000047
in order to maximize the profit, each APP tries to invest all the budgets and, for a given price, its own D i (p) consumption at EN and guarantee of clearing budget; using χ to represent the resource allocation matrix, converting into a convex function optimization problem:
Figure BDA0003649294390000048
the constraint conditions include:
Figure BDA0003649294390000051
according to the KKT condition, the optimal allocation of resources also needs to satisfy the following conditions:
Figure BDA0003649294390000052
further, in step S3, the intelligent gateway simultaneously applies a double decomposition method and a proportional dynamic control strategy to obtain a resource allocation scheme, and determines a specific resource allocation scheme according to the final total profit.
Further, the dual decomposition method equivalently transforms the convex optimization problem into its Lagrangian dual:

$$\min_{p\ge 0}\;\sum_{i\in NS}\max_{x_i\ge 0}\Bigl(B_i\log u_i(x_i)-\sum_{j\in MS}p_j\,x_{i,j}\Bigr)+\sum_{j\in MS}p_j\,c_j$$

the subproblem each APP needs to compute is:

$$\max_{x_i\ge 0}\;B_i\log u_i(x_i)-\sum_{j\in MS}p_j\,x_{i,j}$$
further, an equation with a CES function is used for double decomposition, and the specific solution includes:
61) initializing parameters including a price p (0) p for the first EN 0 Step length alpha (0) and tolerance value gamma, and the loop iteration time t is 0;
62) the edge intelligent gateway broadcasts and informs all the APPs deployed at the current gateway of the price of p (t);
63) each APP calculates the optimal demand x for all known ENs at the number of iterations t by the following equation i,j (t) and sending it to the intelligent gateway, where ρ is a parameter in the CES function that is approximately 1 but strictly less than 1;
Figure BDA0003649294390000061
64) the edge intelligent gateway updates the price vector:
Figure BDA0003649294390000062
65) when in use
Figure BDA0003649294390000063
Or when the iteration times t are overlarge, outputting the final balance price vector p * And optimal planning X *
Further, the content of the proportional dynamic control strategy is as follows: in each iteration, each APP updates its bid on each EN in proportion to the availability obtained from it in the last iteration, that is:

$$b_{i,j}(t+1)=B_i\cdot\frac{a_{i,j}\,x_{i,j}(t)}{u_i(x_i(t))}$$

the price of an EN is equivalent to the sum of all APPs' bids on it, i.e. $p_j(t)=\sum_i b_{i,j}(t)$;
In each iteration, $App_i$ needs to execute the following algorithm to update its bid vector and output it to the edge intelligent gateway:
71) sort all ENs by $a_{i,j}/b_{i,j}$ in descending order, outputting the sorted array:

$$L_i=\{i_1,i_2,\dots,i_M\}$$

72) find the largest $k$ satisfying the inequality [equation image in source];
73) set $b^{(t+1)}_{i,i_l}=0$ for $l>k$, and for $1\le l\le k$ set $b^{(t+1)}_{i,i_l}$ according to the formula [equation image in source], in which $a_{i,i_k}/b_{i,i_k}$ represents the current rate of return of $App_i$ on $EN_{i_k}$, $b^{(t)}_{i,i_k}$ represents the bid of $App_i$ on $EN_{i_k}$ in the last iteration, and $b^{(t+1)}_{i,i_l}$ represents the final bid of the current APP on $EN_{i_l}$ in this iteration; after receiving all bids of the current round, the gateway broadcasts the price vector to the APPs deployed in the Docker containers for the next round of iteration; when the change of the price vector between iterations is small enough, the iteration process ends, and the final price vector, the resource allocation and the revenue obtained by each APP are output.
The invention has the following beneficial effects: by combining edge computing, container technology and a microservice architecture, and by flexibly controlling how business-function APPs obtain resources, the invention ensures the long-term stable operation of every APP deployed on the intelligent gateway, improves resource utilization and reduces network latency.
Drawings
FIG. 1 is a flow chart of a resource allocation plan according to an embodiment of the present invention;
fig. 2 is a framework diagram of an embodiment of the business-function APP deployment and flexible management-and-control technology provided by an embodiment of the present invention;
fig. 3 is an interaction diagram between the participants when an edge intelligent gateway is used, according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but it should be understood that the following descriptions of the specific embodiments are only for the purpose of clearly understanding the technical solutions of the present invention, and are not intended to limit the present invention.
Referring to fig. 1, the resource allocation method for an edge-oriented intelligent gateway provided by the present invention includes the following steps:
Step S1, establishing a system model including a resource allocation model and an availability analysis model.
S1-1, establishing a resource allocation model: for the edge intelligent gateway, all business-function APPs deployed at the gateway via Docker perform their actual computing through the edge computing nodes (ENs) owned by the gateway. The invention assumes that the computing power each node provides is fixed and does not change over time, and that each APP is responsible for an independent business function. As shown in fig. 2, several APPs and their dependency files are deployed through Docker and managed uniformly by the APP-management microservice deployed at the edge intelligent gateway. This microservice automatically allocates specific resources such as computing units and memory to the deployed APPs, monitors the running state of each service, guarantees service stability, and realizes centralized management of resources.
For the resource allocation model, assume the EN set and APP set owned by the gateway are $MS$ and $NS$, their sizes are $M$ and $N$ respectively, and each $EN_j$ has $c_j$ computing units. Even when an EN contains multiple types of computing units (e.g., GPUs and CPUs), it can be divided into groups each containing only a single type, in which case each group can still be treated as a separate EN.
The objective of the model is to improve resource utilization. Two aspects reflect resource utilization: the amount of computing resources the gateway allocates to each APP, and the total revenue the APPs generate. To avoid resource abuse and improve efficiency, APPs are charged for occupying any number of computing units in an EN, and each APP sets its own budget according to its priority and demand. When an EN is provided by a third-party company, the occupation cost can be set as a function of the quoted price; if the EN is provided within the system, a corresponding price function can be abstracted from the number of computing units each EN owns.
Let $x_{i,j}$ denote the number of computing units $EN_j$ allocates to $APP_i$; the resource allocation vector of $APP_i$ is $x_i=(x_{i,1},x_{i,2},x_{i,3},\dots,x_{i,M})$. Define $B_i$ as the maximum budget provided to $APP_i$ for consumption.
Let $p_j$ denote the unit price of $EN_j$; for the whole system the EN price vector $p=(p_1,p_2,\dots,p_M)$ can be defined.
Let $U_i(x_i,p)$ denote the availability function of $APP_i$, determined by the computing units $x_i$ it occupies and the resource prices $p$.
The limitation imposed on the gateway's N APPs by the computing resources each EN owns can be expressed as the constraint:

$$\sum_{i\in NS}x_{i,j}\le c_j,\quad\forall j\in MS$$

For simplicity of expression in subsequent equations, the $c_j$ computing units owned by each EN are normalized, letting $c_j=1$, and the other parameters corresponding to each EN (price, resource allocation, etc.) are rescaled accordingly. The normalized total computing resource constraint becomes:

$$\sum_{i\in NS}x_{i,j}\le 1,\quad\forall j\in MS$$
For all APPs deployed via Docker in the gateway, the goal is to maximize availability under the budget constraint; overall, the following two conditions must be met:
1. under the equilibrium price vector $p^{*}$, $x^{*}_{i}$ is the optimal resource allocation of $APP_i$ given its budget $B_i$:

$$x^{*}_{i}\in\arg\max_{x_i\ge 0}\{\,u_i(x_i):p^{*}\cdot x_i\le B_i\,\}$$

2. all resources are fully utilized:

$$\sum_{i\in NS}x^{*}_{i,j}=1,\quad\forall j\in MS$$

The first condition lets each APP acquire the resources that maximize its availability; the second lets each EN be fully utilized. Under the joint action of these two conditions all APPs compete for maximum benefit, and behind the competition it is actually price that plays the regulating role.
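The two conditions can be checked mechanically for a toy market. The sketch below uses assumed numbers, the linear utility $u_i(x_i)=\sum_j a_{i,j}x_{i,j}$ from the availability model, and capacities normalized to 1; it verifies that a candidate price vector and allocation satisfy both conditions:

```python
import numpy as np

a = np.array([[1.0, 2.0],   # a[i, j]: per-unit availability APP i gets from EN j
              [2.0, 1.0]])
B = np.array([1.0, 1.0])    # budgets
p = np.array([1.0, 1.0])    # candidate equilibrium prices
X = np.array([[0.0, 1.0],   # candidate allocation x[i, j]
              [1.0, 0.0]])

bang = a / p                            # benefit per unit of cost, a_ij / p_j
mbb = bang.max(axis=1, keepdims=True)   # each APP's maximum bang-per-buck

# Condition 1: each APP spends its whole budget, and only on MBB nodes.
# (boolean <= is implication: x_ij > 0 must imply a_ij/p_j == MBB_i)
spends_on_mbb = np.all((X > 0) <= np.isclose(bang, mbb))
budget_cleared = np.allclose(X @ p, B)

# Condition 2: all (normalized) resources fully used.
market_clears = np.allclose(X.sum(axis=0), 1.0)

print(spends_on_mbb, budget_cleared, market_clears)  # → True True True
```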
S1-2, establishing an availability analysis model:
Denote the revenue $APP_i$ can generate from its acquired resources as $u_i(x_i)$; in the subsequent model this value is used as an input to compute a concrete resource allocation scheme.
Define the revenue $APP_i$ can obtain from one computing unit of $EN_j$ as $a_{i,j}$, from which it can be deduced that:

$$u_i(x_i)=\sum_{j\in MS}a_{i,j}\,x_{i,j}$$

The following illustrates how this input is computed in an actual production environment. Typically, the services provided by edge computing are sensitive only to delay, for example remote monitoring and fault-location functions implemented in a substation with Internet-of-Things sensors. For simplicity, assume the transmission bandwidth is sufficiently large and the request payload small, so that transmission delay is ignored and only propagation and processing delays are considered.
In an actual production environment, the time of each request and reply sent by the user includes three parts: the round-trip delay $d^{u}_{i}$ between the user and the edge intelligent gateway, the round-trip network delay $d^{net}_{i,j}$ between the edge intelligent gateway and the edge computing node EN, and the processing delay $d^{proc}_{i,j}$ at the EN. In most cases $d^{u}_{i}$ is very small and is ignored in this example. Defining the maximum delay $APP_i$ can tolerate as $D^{max}_{i}$, the following can be obtained:

$$d^{net}_{i,j}+d^{proc}_{i,j}\le D^{max}_{i}$$

It follows directly that there is a maximum request rate $EN_j$ can handle for $APP_i$ while still meeting this delay bound [the bound and its case condition appear as equation images in the source].
The processing delays of the various ENs are modeled with an M/G/1 queueing model, assuming the workloads are evenly distributed across the computing units. The average response time of $EN_j$ for $APP_i$'s computation can then be expressed by the M/G/1 average-response-time formula [equation image in source], in which $\mu_{i,j}$ denotes the rate at which a single computing unit of $EN_j$ serves $APP_i$ and $\lambda_{i,j}$ denotes the rate at which $APP_i$ issues requests to $EN_j$. To guarantee the stability of the queue, it is necessary to guarantee that

$$\lambda_{i,j}<\mu_{i,j}\,x_{i,j}$$

According to the constraint $d^{net}_{i,j}+d^{proc}_{i,j}\le D^{max}_{i}$, an expression for $a_{i,j}$ is obtained [equation image in source], in which $r_i$ denotes the revenue of each successful response to $APP_i$; the remaining four quantities in the formula can be obtained through measurement, and from them the availability each $APP_i$ realizes from its acquired computing resources is calculated.
Step S2, converting the resource allocation problem into an optimization problem using a centralized solution according to the system model.
In order to better analyze the model, $u_i(x_i)$ in the system model is written as:

$$u_i(x_i)=\sum_{j\in MS}a_{i,j}\,x_{i,j}$$

When $p$ denotes the price vector, $a_{i,j}/p_j$ can be defined as the benefit (availability) $APP_i$ obtains by occupying $EN_j$ at a cost of 1. Each APP deployed at the gateway has an EN with the highest benefit; the abbreviation MBB denotes the highest rate of return (maximum bang-per-buck) a given APP can obtain in the current EN set:

$$\alpha_i=\max_{j\in MS}\frac{a_{i,j}}{p_j}$$

The demand set $D_i(p)$ of each APP should include all the ENs that can supply MBB to it, expressed by the formula:

$$D_i(p)=\Bigl\{\,j\in MS:\frac{a_{i,j}}{p_j}=\alpha_i\,\Bigr\}$$

In order to maximize its availability, each APP tries to invest its entire budget at ENs in its own $D_i(p)$ for the given prices, emptying its budget. The problem can be converted into a convex optimization problem:

$$\max_{\chi}\;\sum_{i\in NS}B_i\log u_i(x_i)$$

with constraints:

$$\sum_{i\in NS}x_{i,j}\le 1\;\;(\forall j\in MS),\qquad x_{i,j}\ge 0$$

According to the KKT conditions, the optimal resource allocation must also satisfy:

$$p_j\ge\frac{B_i\,a_{i,j}}{u_i(x_i)}\;\;\forall i,j,\qquad x_{i,j}>0\Rightarrow p_j=\frac{B_i\,a_{i,j}}{u_i(x_i)}$$
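In standard market-equilibrium theory, the centralized convex program of step S2 takes the budget-weighted log-utility (Eisenberg-Gale style) form; since the source's exact objective sits behind an equation image, that standard form is an assumption here. A minimal numpy sketch with made-up $a$ and $B$ shows why it rewards equilibrium allocations: the allocation where each APP buys only from its maximum bang-per-buck EN scores higher than a naive even split:

```python
import numpy as np

a = np.array([[1.0, 2.0],    # a[i, j]: per-unit availability of APP i on EN j
              [2.0, 1.0]])
B = np.array([1.0, 1.0])     # budgets

def eg_objective(X):
    # Budget-weighted log utilities: sum_i B_i * log(sum_j a_ij * x_ij).
    return float(np.sum(B * np.log((a * X).sum(axis=1))))

X_eq = np.array([[0.0, 1.0],     # each APP fully on its best (MBB) node
                 [1.0, 0.0]])
X_even = np.full((2, 2), 0.5)    # naive even split of each EN

print(eg_objective(X_eq) > eg_objective(X_even))  # → True
```

Here `eg_objective` at `X_eq` equals $2\log 2$, while the even split only reaches $2\log 1.5$, so the objective indeed prefers the equilibrium allocation.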
step S3, calculate the optimization problem using the distributed solution and output the allocation.
Due to the fact that complex conditions possibly occurring in reality are responded, the two algorithms are provided for solving the optimization problem, the intelligent gateway simultaneously applies the two algorithms to obtain the resource allocation scheme, and the specific resource allocation scheme is determined according to the final total income. As shown in fig. 3, the APP at the edge intelligent network as the client performs the functions of obtaining the price vector and updating the resource allocation.
S3-1, dual decomposition method
The convex optimization problem can be equivalently expressed through its Lagrangian; using the Lagrange multiplier method, the coupled capacity constraints are moved into the objective with multipliers $p_j\ge 0$:

$$L(\chi,p)=\sum_{i\in NS}B_i\log u_i(x_i)+\sum_{j\in MS}p_j\Bigl(1-\sum_{i\in NS}x_{i,j}\Bigr)$$

Thus, the subproblem each APP needs to compute is:

$$\max_{x_i\ge 0}\;B_i\log u_i(x_i)-\sum_{j\in MS}p_j\,x_{i,j}$$

The dual decomposition is carried out with the utility replaced by a CES function, and the specific solution comprises the following steps:
(i) Initialize the parameters, including the initial EN price $p(0)=p_0$, a relatively small step size $\alpha(0)$ and the tolerance value $\gamma$; the loop iteration counter is $t=0$.
(ii) The edge intelligent gateway broadcasts the price $p(t)$ to all APPs deployed at the current gateway.
(iii) At iteration $t$, each APP calculates its optimal demand $x_{i,j}(t)$ for all known ENs by the CES demand formula below and sends it to the intelligent gateway, where $\rho$ is a parameter of the CES function that is close to, but strictly less than, 1:

$$x_{i,j}(t)=\frac{B_i\,\bigl(a_{i,j}/p_j(t)\bigr)^{1/(1-\rho)}}{\sum_{k\in MS}p_k(t)\,\bigl(a_{i,k}/p_k(t)\bigr)^{1/(1-\rho)}}$$

(iv) The edge intelligent gateway updates the price vector:

$$p_j(t+1)=\Bigl[p_j(t)+\alpha(t)\Bigl(\sum_{i\in NS}x_{i,j}(t)-1\Bigr)\Bigr]^{+}$$

(v) When $\lVert p(t+1)-p(t)\rVert<\gamma$, or when the iteration count $t$ becomes too large, output the final equilibrium price vector $p^{*}$ and the optimal plan $X^{*}$.
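The broadcast-demand-update loop of steps (i)-(v) can be sketched as a numpy price-adjustment iteration. The CES demand expression and the projected gradient price update used here are standard textbook forms assumed for illustration (the source gives its formulas only as equation images), and the values of `a`, `B` and `rho` are made up:

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [2.0, 1.0]])     # per-unit availability a[i, j]
B = np.array([1.0, 1.0])       # budgets
rho = 0.9                      # CES parameter, close to but strictly below 1
s = 1.0 / (1.0 - rho)          # substitution exponent, here 10

def ces_demand(p):
    # Optimal CES spend split: x_ij = B_i (a_ij/p_j)^s / sum_k p_k (a_ik/p_k)^s
    w = (a / p) ** s
    return B[:, None] * w / (p * w).sum(axis=1, keepdims=True)

def dual_decomposition(p0, alpha=0.02, gamma=1e-8, max_iter=100_000):
    p = p0.astype(float)
    for _ in range(max_iter):
        X = ces_demand(p)
        # Gradient step on prices: raise them where demand exceeds capacity 1,
        # projected onto positive prices.
        p_new = np.maximum(p + alpha * (X.sum(axis=0) - 1.0), 1e-9)
        if np.linalg.norm(p_new - p) < gamma:   # tolerance test from step (v)
            return p_new, ces_demand(p_new)
        p = p_new
    return p, ces_demand(p)

p_star, X_star = dual_decomposition(np.array([1.5, 0.5]))
print(np.round(p_star, 3), np.round(X_star.sum(axis=0), 3))
```

For this symmetric toy market the loop settles at equal prices with both ENs exactly cleared, matching the equilibrium conditions of the model.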
S3-2, proportional dynamic adjustment strategy
In each iteration, each APP updates its bid for each EN in proportion to the utility obtained from it in the previous iteration. Namely:

b_{i,j}(t+1) = B_i · a_{i,j} x_{i,j}(t) / ∑_{k∈MS} a_{i,k} x_{i,k}(t)
Since the capacities of all ENs are normalized, the price of an EN equals the sum of all APP bids on it, i.e., p_j(t) = ∑_i b_{i,j}(t).
In each iteration, each APP needs to execute the following algorithm to update its own bid vector and output it to the edge intelligent gateway:
(i) Sort all ENs in descending order of a_{i,j}/b_{i,j}, and output the sorted array:

L_i = { i_1, i_2, …, i_M }
(ii) find the largest k that satisfies the following inequality:
Figure BDA0003649294390000125
(iii) For l > k, set the bid to zero:

b_{i,i_l}(t+1) = 0

and for 1 ≤ l ≤ k:
Figure BDA0003649294390000131
In the above formula, a_{i,i_k}/b_{i,i_k}(t) denotes the current rate of return of APP_i on EN i_k, b_{i,i_k}(t) denotes the bid of APP_i on EN i_k in the previous iteration, and b_{i,i_l}(t+1) denotes the final bid of APP_i on EN i_l in this iteration. The gateway broadcasts all received bid vectors to the APPs deployed in the Docker containers for the next iteration. When the change of the price vector between iterations is small enough, the iteration process ends, and the final price vector, the resource allocation, and the profit obtained by each APP are output.
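A minimal sketch of the proportional dynamic adjustment, assuming the linear availability u_i = ∑_j a_{i,j} x_{i,j} from step S2 and unit-normalized EN capacities; the top-k bid-pruning refinement of steps (i)–(iii) is omitted, so only the basic proportional bid update is shown:

```python
import numpy as np

def proportional_bidding(a, B, gamma=1e-9, max_iter=10000):
    """Iterate the proportional bid update until the price vector stabilizes.

    a[i, j]: profit per unit of EN j for APP i; B[i]: budget of APP i.
    EN capacities are normalized to 1, so p_j(t) = sum_i b_{i,j}(t) and
    APP i receives the fraction b_{i,j}/p_j of EN j.
    """
    N, M = a.shape
    b = np.outer(B, np.ones(M) / M)          # start by spreading each budget evenly
    for _ in range(max_iter):
        p = b.sum(axis=0)                    # price = sum of bids on each EN
        x = b / p                            # fraction of each EN won by each APP
        u = a * x                            # utility APP i obtains from each EN
        b_new = B[:, None] * u / u.sum(axis=1, keepdims=True)  # proportional update
        if np.abs(b_new.sum(axis=0) - p).max() < gamma:
            return p, x, b_new
        b = b_new
    return p, x, b
```

Each row of the bid matrix sums to that APP's budget at every iteration, so the total price mass ∑_j p_j always equals the total budget ∑_i B_i.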
Based on the same technical concept as the method embodiments, another embodiment of the present invention provides a computer apparatus, including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and, when executed by the processors, implement the steps in the method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. An edge-oriented intelligent gateway resource allocation method, characterized by comprising the following steps:
s1, establishing a system model, wherein the system model comprises a resource allocation model and an availability analysis model;
s2, converting the resource allocation problem into an optimization problem by using a centralized solution according to the system model;
and S3, calculating the optimization problem by using a distributed solution and outputting a distribution scheme.
2. The method for resource allocation to an edge-oriented intelligent gateway according to claim 1, wherein the step S1 includes:
s11, establishing a resource allocation model: let MS and NS denote the EN set and the APP set owned by the gateway, with sizes M and N respectively; each EN_j has c_j computing units; let x_{i,j} denote the computing units of EN_j allocated to APP_i, so the resource vector allocated to APP_i is x_i = (x_{i,1}, x_{i,2}, x_{i,3}, ..., x_{i,M}); let p_j denote the price of EN_j, and define the EN price vector of the whole system as p = (p_1, p_2, ..., p_j, ..., p_M); let U_i(x_i, p) denote the availability function of APP_i, determined by the resource allocation vector x_i it occupies and the price vector p;
the limitation imposed by each EN's computing resources on the N APPs owned by the gateway can be expressed as the following constraint:

∑_{i∈NS} x_{i,j} ≤ c_j, ∀ j ∈ MS
s12, establishing an availability analysis model: the profit that APP_i can generate from the acquired resources is denoted as u_i(x_i); in the subsequent model this value is used as an input to calculate a specific resource allocation scheme; the profit that APP_i can obtain from one computing unit of EN_j is defined as a_{i,j}, from which it follows:
Figure FDA0003649294380000012
in an actual production environment, the time of each request and reply sent by a user comprises three parts: the round-trip delay d^{ue}_i between the user and the edge intelligent gateway, the round-trip network delay d^{net}_{i,j} between the edge intelligent gateway and the edge computing node EN_j, and the processing delay d^{proc}_{i,j} at the EN; in most cases d^{ue}_i and d^{net}_{i,j} are both very small and are ignored; the maximum delay that APP_i can tolerate is defined as d^{max}_i, obtaining:

d^{proc}_{i,j} ≤ d^{max}_i
the processing delay at an EN is analyzed based on an M/G/1 queuing model; assuming that the workload is uniformly distributed over its computing units, the processing delay of EN_j when serving APP_i is calculated by the following formula:
Figure FDA0003649294380000021
in the above formula, μ_{i,j} denotes the rate at which a single computing unit of EN_j processes requests of APP_i, and λ_{i,j} denotes the rate of requests issued by APP_i to EN_j; to ensure queue stability it is required that

λ_{i,j} < x_{i,j} · μ_{i,j}

according to the constraint that the processing delay must not exceed the maximum delay APP_i can tolerate, the result obtained is:
Figure FDA0003649294380000024
in the above formula, r_i denotes the profit obtained from each successful response of APP_i; by setting
Figure FDA0003649294380000025
and confirming the resource allocation scheme, the corresponding values of x_{i,j}, μ_{i,j} and λ_{i,j} are calculated; the availability that each APP_i realizes from the acquired computing resources can then be computed by the above formula.
3. The edge-oriented intelligent gateway resource allocation method according to claim 2, wherein in step S11, the goal of each APP deployed via Docker in the gateway is to maximize its availability under its budget constraint; as a whole, the following two conditions must be satisfied:
1) under the equilibrium price vector p*, x_i* is the optimal resource allocation of APP_i:

x_i* = argmax { U_i(x_i, p*) : ∑_{j∈MS} p*_j x_{i,j} ≤ B_i, x_i ≥ 0 }
where B_i is defined as the maximum budget that APP_i may spend on resources;
2) all resources are fully utilized:

∑_{i∈NS} x*_{i,j} = c_j, ∀ j ∈ MS
4. The edge-oriented intelligent gateway resource allocation method according to claim 2, wherein the specific content of step S2 is as follows:

u_i(x_i) in the system model is represented by ∑_{j∈MS} a_{i,j} x_{i,j}; when p denotes the price vector, a_{i,j}/p_j is defined as the profit that APP_i obtains by spending a unit cost of 1 to occupy EN_j; for each APP deployed at the gateway there exists an EN with the highest profit, and MBB denotes the highest rate of return that an APP can obtain over the current EN set:

MBB_i(p) = max_{j∈MS} a_{i,j}/p_j
the demand set D_i(p) of each APP includes all ENs that can supply it with the MBB, formulated as:

D_i(p) = { j ∈ MS : a_{i,j}/p_j = max_{k∈MS} a_{i,k}/p_k }
in order to maximize its profit, each APP tries to invest its entire budget, under the given prices, in the ENs of its own D_i(p), guaranteeing budget clearing; using χ to denote the resource allocation matrix, this is transformed into a convex optimization problem:

max_χ ∑_{i∈NS} B_i ln ( ∑_{j∈MS} a_{i,j} x_{i,j} )
the constraint conditions include:

∑_{i∈NS} x_{i,j} ≤ c_j, ∀ j ∈ MS;   x_{i,j} ≥ 0, ∀ i ∈ NS, ∀ j ∈ MS
according to the KKT conditions, the optimal resource allocation must also satisfy the following condition:

x_{i,j} > 0 ⇒ a_{i,j}/p_j = max_{k∈MS} a_{i,k}/p_k
5. The method of claim 4, wherein in step S3 the intelligent gateway obtains candidate resource allocation schemes by applying a dual decomposition method and a proportional dynamic adjustment strategy simultaneously, and determines the final resource allocation scheme according to the total profit achieved.
6. The method of claim 5, wherein the dual decomposition method equivalently transforms the convex optimization problem into:

max_{χ ≥ 0} ∑_{i∈NS} B_i ln u_i(x_i)   s.t. ∑_{i∈NS} x_{i,j} ≤ c_j, ∀ j ∈ MS
the sub-problem that each APP needs to compute is:

max_{x_i ≥ 0} B_i ln u_i(x_i) − ∑_{j∈MS} p_j x_{i,j}
7. The method of claim 6, wherein a CES utility function is used to perform the dual decomposition, and the specific solution includes:

61) initializing parameters, including the initial EN price vector p(0) = p_0, a relatively small step size α(0), and a tolerance value γ; the loop iteration counter t = 0;
62) the edge intelligent gateway broadcasts the price p(t) to all APPs deployed at the current gateway;
63) at iteration t, each APP calculates its optimal demand x_{i,j}(t) for all known ENs by the following equation and sends it to the intelligent gateway, where ρ is a parameter of the CES function that is close to, but strictly less than, 1;

x_{i,j}(t) = B_i (a_{i,j}^ρ / p_j(t))^{1/(1-ρ)} / ∑_{k∈MS} p_k(t) (a_{i,k}^ρ / p_k(t))^{1/(1-ρ)}
64) the edge intelligent gateway updates the price vector:
p_j(t+1) = max{ p_j(t) − α(t) ( c_j − ∑_{i∈NS} x_{i,j}(t) ), 0 }
65) when |p_j(t+1) − p_j(t)| < γ for all j ∈ MS, or when the iteration count t becomes too large, output the final equilibrium price vector p* and the optimal allocation X*.
8. The method of claim 5, wherein the content of the proportional dynamic adjustment strategy is as follows: in each iteration, each APP updates its own bid for each EN in proportion to the utility obtained from it in the previous iteration, that is:

b_{i,j}(t+1) = B_i · a_{i,j} x_{i,j}(t) / ∑_{k∈MS} a_{i,k} x_{i,k}(t)
the price of an EN equals the sum of all APP bids on it, i.e., p_j(t) = ∑_i b_{i,j}(t).
9. The method for resource allocation to an edge-oriented intelligent gateway according to claim 8, wherein in each iteration, each APP needs to execute the following algorithm to update its own bid vector and output it to the edge intelligent gateway:
71) sort all ENs in descending order of a_{i,j}/b_{i,j}, and output the sorted array:

L_i = { i_1, i_2, ..., i_M }
72) finding the largest k that satisfies the following inequality:
Figure FDA0003649294380000051
73) for l > k, set the bid to zero:

b_{i,i_l}(t+1) = 0

and for 1 ≤ l ≤ k:
Figure FDA0003649294380000053
in the above formula, a_{i,i_k}/b_{i,i_k}(t) denotes the current rate of return of APP_i on EN i_k, b_{i,i_k}(t) denotes the bid of APP_i on EN i_k in the previous iteration, and b_{i,i_l}(t+1) denotes the final bid of APP_i on EN i_l in this iteration; after receiving all bids of the current round, the gateway broadcasts the price vector to the APPs deployed in the Docker containers for the next round of iteration; when the change of the price vector between iterations is small enough, the iteration process ends, and the final price vector, the resource allocation, and the profit obtained by each APP are output.
10. An edge-oriented intelligent gateway resource allocation device, comprising:
a processor; and
a memory storing computer instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-9.
CN202210538531.0A 2022-05-18 2022-05-18 Resource allocation method and equipment for edge intelligent gateway Active CN115037620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210538531.0A CN115037620B (en) 2022-05-18 2022-05-18 Resource allocation method and equipment for edge intelligent gateway

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210538531.0A CN115037620B (en) 2022-05-18 2022-05-18 Resource allocation method and equipment for edge intelligent gateway

Publications (2)

Publication Number Publication Date
CN115037620A true CN115037620A (en) 2022-09-09
CN115037620B CN115037620B (en) 2024-05-10

Family

ID=83121147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210538531.0A Active CN115037620B (en) 2022-05-18 2022-05-18 Resource allocation method and equipment for edge intelligent gateway

Country Status (1)

Country Link
CN (1) CN115037620B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965867B1 (en) * 1998-04-29 2005-11-15 Joel Jameson Methods and apparatus for allocating, costing, and pricing organizational resources
US20080103793A1 (en) * 2006-10-27 2008-05-01 Microsoft Corporation Sequence of algorithms to compute equilibrium prices in networks
US20120095940A1 (en) * 2010-10-13 2012-04-19 Microsoft Corporation Pricing mechanisms for perishable time-varying resources
CN109041130A (en) * 2018-08-09 2018-12-18 北京邮电大学 Resource allocation methods based on mobile edge calculations
CN110147915A (en) * 2018-02-11 2019-08-20 陕西爱尚物联科技有限公司 A kind of method and its system of resource distribution
CN110380891A (en) * 2019-06-13 2019-10-25 中国人民解放军国防科技大学 Edge computing service resource allocation method and device and electronic equipment
US20200007460A1 (en) * 2018-06-29 2020-01-02 Intel Corporation Scalable edge computing
US20200104184A1 (en) * 2018-09-27 2020-04-02 Intel Corporation Accelerated resource allocation techniques
CN111935205A (en) * 2020-06-19 2020-11-13 东南大学 Distributed resource allocation method based on alternative direction multiplier method in fog computing network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIJIA CHEN, YANQIANG DI: "Intelligent Cloud Training System based on Edge Computing and Cloud Computing", 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE *
CEN BOWEI, CAI ZEXIANG: "Microservice Modeling and Computing Resource Configuration Method for Edge Computing Terminals of the Electric Power Internet of Things", Automation of Electric Power Systems *

Also Published As

Publication number Publication date
CN115037620B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
Priya et al. Resource scheduling algorithm with load balancing for cloud service provisioning
Hosseinioun et al. A new energy-aware tasks scheduling approach in fog computing using hybrid meta-heuristic algorithm
Lin et al. On scientific workflow scheduling in clouds under budget constraint
US10404067B2 (en) Congestion control in electric power system under load and uncertainty
CN104657220A (en) Model and method for scheduling for mixed cloud based on deadline and cost constraints
Mechalikh et al. PureEdgeSim: A simulation framework for performance evaluation of cloud, edge and mist computing environments
Dias et al. Parallel computing applied to the stochastic dynamic programming for long term operation planning of hydrothermal power systems
Long et al. Agent scheduling model for adaptive dynamic load balancing in agent-based distributed simulations
Ralha et al. Multiagent system for dynamic resource provisioning in cloud computing platforms
Sebastio et al. Optimal distributed task scheduling in volunteer clouds
CN107317836A (en) One kind mixing cloud environment lower time appreciable request scheduling method
CN109815009B (en) Resource scheduling and optimizing method under CSP
Saravanan et al. Enhancing investigations in data migration and security using sequence cover cat and cover particle swarm optimization in the fog paradigm
CN115134371A (en) Scheduling method, system, equipment and medium containing edge network computing resources
CN105808341A (en) Method, apparatus and system for scheduling resources
Ivanovic et al. Elastic grid resource provisioning with WoBinGO: A parallel framework for genetic algorithm based optimization
Nguyen et al. Optimizing resource utilization in NFV dynamic systems: New exact and heuristic approaches
CN116991558A (en) Computing power resource scheduling method, multi-architecture cluster, device and storage medium
Taghinezhad-Niar et al. QoS-aware online scheduling of multiple workflows under task execution time uncertainty in clouds
Yin et al. An improved ant colony optimization job scheduling algorithm in fog computing
Medishetti et al. An Improved Dingo Optimization for Resource Aware Scheduling in Cloud Fog Computing Environment
Chen et al. Data-driven task offloading method for resource-constrained terminals via unified resource model
CN115037620B (en) Resource allocation method and equipment for edge intelligent gateway
Cao et al. Online cost-rejection rate scheduling for resource requests in hybrid clouds
Kontos et al. Cloud-Native Applications' Workload Placement over the Edge-Cloud Continuum.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant