CN115037620A - Edge-oriented intelligent gateway resource allocation method and equipment - Google Patents
- Publication number: CN115037620A
- Application number: CN202210538531.0A
- Authority
- CN
- China
- Prior art keywords
- app
- resource allocation
- edge
- intelligent gateway
- gateway
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Classifications
- H04L41/0823 — Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L12/66 — Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
- H04L41/145 — Network analysis or design involving simulating, designing, planning or modelling of a network
- H04L67/10 — Protocols in which an application is distributed across nodes in the network
Abstract
The invention discloses a resource allocation method and device for an edge-oriented intelligent gateway. The resource allocation method of the invention comprises the following steps: establishing a system model comprising a resource allocation model and an availability analysis model; converting the resource allocation problem into an optimization problem with a centralized solution according to the system model; and computing the optimization problem with a distributed solution and outputting the allocation scheme. The invention combines edge computing, container technology and the microservice architecture, ensures long-term stable operation of each APP deployed in the intelligent gateway through flexible management and control of how the business-function APPs obtain resources, and thereby improves resource utilization and reduces network delay.
Description
Technical Field
The invention relates to edge computing, container technology and the microservice architecture, and in particular to a resource allocation method and device for an edge-oriented intelligent gateway.
Background
Edge computing refers to an open platform that fuses network, computing, storage and application capabilities at the edge of the network, close to the objects or data sources, so that edge intelligent services can be provided nearby and the key requirements of industry digitization for agile connection, real-time service, data optimization, application intelligence, security and privacy protection can be met. With the trend toward the interconnection of everything, edge data is growing explosively, which is both a challenge and an opportunity. Significant advantages of edge computing include: 1) real-time processing: edge computing can process and analyze data in real time or faster; bringing data processing closer to the source, rather than to an external data center or cloud, reduces latency; 2) lower cost: an enterprise's data management solution on local devices costs much less than one on cloud and data center networks; 3) reduced network traffic: as the number of Internet-of-Things devices increases, data generation keeps growing at record-setting rates; network bandwidth therefore becomes more limited, overwhelming the cloud and creating larger data bottlenecks; 4) improved application efficiency: with lower latency, applications run more efficiently and more quickly; 5) personalization: through edge computing, models can learn continuously and be adjusted to personal needs, bringing a personalized interactive experience; 6) security and privacy protection: data at the network edge involves personal privacy, and the traditional cloud computing model requires uploading such private data to a cloud computing center, which increases the risk of leaking users' private data.
In edge computing, research on identity authentication protocols should draw on the advantages of existing schemes while taking into account characteristics of edge computing such as distribution and mobility, strengthening research on unified authentication, cross-domain authentication and handover authentication so as to protect users' data and privacy across different trust domains and heterogeneous network environments.
Container technology is an operating-system virtualization technology; compared with a traditional operating system, a container allows an application and its dependencies to run in resource-isolated processes. As a lightweight operating-system-level virtualization technique, all components necessary to run an application can be packaged as a single image that can be reused without affecting any process outside the container. Docker, the representative of the new generation of container technology, aims to provide a standardized runtime environment, and its major advantages include: 1) lightweight: multiple Docker containers running on one machine share that machine's operating system kernel; they start quickly and occupy very little compute and memory. Images are constructed from file-system layers and share common files, which minimizes disk usage and speeds up image downloads; 2) standards-based: Docker containers are based on open standards and can run on all mainstream Linux distributions, Microsoft Windows, and any infrastructure including VMs, bare-metal servers and clouds; 3) security: the isolation Docker gives applications is not limited to isolating them from each other; they are also independent of the underlying infrastructure. Docker provides strong isolation by default, so an application problem, or a problem in a single container, does not spread to the entire machine.
The microservice architecture is a method of developing a single application as a set of small services, each running in its own process and communicating through a lightweight mechanism, usually an HTTP-based resource API. These services are built around business functions and can be deployed independently through a fully automated deployment mechanism. They should employ as little centralized management as possible and may use different programming languages and data stores as needed. Advantages of the microservice approach include easier heterogeneity, freer adoption of new technology, mutual reinforcement of system architecture and organizational structure, better team building and training, easy scaling, simple deployment (update and rollback), high reusability, high resilience, and easier replacement of old components.
In recent years, some grid and provincial companies have carried out exploration and practice in remote operation and maintenance and state monitoring of secondary equipment in substations, and have to some extent realized remote operation and maintenance of substation automation equipment. Problems remain, however, including insufficient acquisition of operation and maintenance information from the automation equipment, difficulty in extending business functions because systems are custom-developed, and an insufficient degree of intelligence at the application level. It is therefore necessary to research new technology for commissioning, operating and maintaining substation automation equipment: introduce new internet technologies such as edge computing and virtualization into operation and maintenance management and services; construct a substation automation equipment operation and maintenance system with convenient extension, flexible function deployment and an open, shared platform; realize full acquisition and on-site edge processing of the heterogeneous operation and maintenance information of automation equipment; solve problems such as difficulty in accurately locating faults, inflexible extension of service functions and a low degree of intelligence; create a new cloud-edge collaborative mode for automation equipment operation and maintenance services; meet the requirements of unattended substations and improved commissioning and operation efficiency; and strengthen the ability to observe, analyze and control substation automation equipment.
Disclosure of Invention
The invention aims to provide a resource allocation method for an edge-oriented intelligent gateway that comprehensively utilizes three technologies: edge computing, containers and microservices. The method ensures the long-term stable operation of each APP deployed in the intelligent gateway through flexible control of how the business-function APPs obtain resources, thereby improving resource utilization and reducing network delay.
In order to achieve the above object, a first aspect of the present invention provides a resource allocation method for an edge-oriented intelligent gateway, which includes the steps of:
S1, establishing a system model, wherein the system model comprises a resource allocation model and an availability analysis model;
S2, converting the resource allocation problem into an optimization problem by using a centralized solution according to the system model;
S3, calculating the optimization problem by using a distributed solution and outputting an allocation scheme.
Further, the step S1 includes:
S11, establishing a resource allocation model: let the EN set and the APP set owned by the gateway be MS and NS, with sizes M and N respectively, and let each EN_j have c_j computing units; let x_{i,j} denote the number of computing units EN_j allocates to APP_i, so the allocation vector of APP_i is x_i = (x_{i,1}, x_{i,2}, x_{i,3}, …, x_{i,M}); let p_j denote the unit price of EN_j and define, for the whole system, the EN price vector p = (p_1, p_2, …, p_j, …, p_M); let U_i(x_i, p) denote the availability function of APP_i, determined by the resource allocation vector x_i it occupies and the price vector p;
the limitation of the N APPs owned by the gateway by each EN-owned computing resource can be expressed as the following constraint:
S12, establishing an availability analysis model: denote the revenue APP_i can generate from its acquired resources as u_i(x_i); in the subsequent model this value is used as an input to calculate a specific resource allocation scheme. Define the revenue APP_i can obtain from one computing unit of EN_j as a_{i,j}, from which it follows that u_i(x_i) = Σ_j a_{i,j} x_{i,j};
In an actual production environment, the time of each request and reply sent by a user comprises three parts: the round-trip delay d^{ue}_i between the user and the edge intelligent gateway, the round-trip network delay d^{en}_{i,j} between the edge intelligent gateway and edge computing node EN_j, and the processing delay d^{proc}_{i,j} at the EN; in most cases d^{ue}_i is very small and is ignored. Define the maximum delay APP_i can tolerate as D^{max}_i, obtaining d^{en}_{i,j} + d^{proc}_{i,j} ≤ D^{max}_i;
Analyze the processing delay of each EN based on an M/G/1 queuing model. Assuming the workload is uniformly distributed over the computing units, the processing time d^{proc}_{i,j} of EN_j when serving APP_i is calculated by the formula d^{proc}_{i,j} = 1/(μ_{i,j} − λ_{i,j}/x_{i,j});
In the above formula, μ_{i,j} denotes the rate at which a single computing unit of EN_j processes APP_i requests, and λ_{i,j} denotes the rate of requests APP_i issues to EN_j, which is required to satisfy λ_{i,j} < x_{i,j} μ_{i,j} to ensure queue stability;
Each successful response to APP_i yields revenue r_i, so the per-unit revenue can be written as a_{i,j} = r_i (μ_{i,j} − 1/(D^{max}_i − d^{en}_{i,j})). By setting the maximum tolerable delay D^{max}_i and confirming the corresponding values of x_{i,j}, μ_{i,j} and λ_{i,j} computed by the resource allocation scheme, the availability each APP_i achieves from its acquired computing resources can be calculated through the above formulas.
Further, in said step S11, for all APPs deployed with Docker in the gateway, the goal is to maximize their availability under the budget constraint; overall, the following two conditions must be satisfied:
1) every APP obtains an optimal allocation at the given prices: x_i maximizes U_i(x_i, p) subject to the budget constraint Σ_j p_j x_{i,j} ≤ B_i, where B_i is the budget of APP_i;
2) all resources are fully utilized: Σ_i x_{i,j} = c_j for every EN_j with p_j > 0.
Further, the specific content of step S2 is as follows:
express u_i(x_i) in the system model as Σ_j a_{i,j} x_{i,j}, where the a_{i,j} are the per-unit revenues defined above;
When p denotes the price vector, a_{i,j}/p_j is defined as the revenue APP_i obtains by occupying EN_j at a cost of 1; for each APP deployed at the gateway there is an EN with the highest return, and MBB denotes the highest rate of return an APP can obtain over the current EN set: MBB_i = max_j a_{i,j}/p_j;
The demand set D_i(p) of each APP includes all ENs capable of giving it the MBB and is formulated as D_i(p) = {j : a_{i,j}/p_j = MBB_i}. In order to maximize its profit, each APP tries to invest its entire budget and, at a given price, consume at the ENs in its own D_i(p), guaranteeing that the budget is cleared. Using χ to represent the resource allocation matrix, this converts into the convex optimization problem max_χ Σ_i B_i ln(Σ_j a_{i,j} x_{i,j}), where B_i is the budget of APP_i;
the constraint conditions include Σ_i x_{i,j} ≤ c_j for every EN_j and x_{i,j} ≥ 0;
According to the KKT conditions, the optimal allocation of resources also needs to satisfy B_i a_{i,j}/u_i(x_i) ≤ p_j for all i, j, with equality whenever x_{i,j} > 0, and Σ_i x_{i,j} = c_j whenever p_j > 0.
Further, in step S3, the intelligent gateway simultaneously applies a dual decomposition method and a proportional dynamic control strategy to obtain resource allocation schemes, and determines the specific resource allocation scheme according to the final total profit.
Further, the double decomposition method equivalently transforms the convex function optimization problem into:
The sub-problem that each APP needs to compute is max_{x_i ≥ 0} (B_i ln u_i(x_i) − Σ_j p_j x_{i,j});
Further, the dual decomposition is performed with u_i replaced by a CES (constant elasticity of substitution) function, and the specific solution includes:
61) initializing the parameters, including the initial price p(0) = p_0 of each EN, the step size α(0) and the tolerance value γ, with the loop iteration count t = 0;
62) the edge intelligent gateway broadcasts the price p(t) to all APPs deployed at the current gateway;
63) at iteration t, each APP calculates its optimal demand x_{i,j}(t) for all known ENs by the equation x_{i,j}(t) = B_i w_{i,j}(t) / (p_j(t) Σ_k w_{i,k}(t)), where w_{i,j}(t) = (a_{i,j}/p_j(t))^{ρ/(1−ρ)}, and sends it to the intelligent gateway; here ρ is a parameter in the CES function that is close to 1 but strictly less than 1;
64) the edge intelligent gateway updates the price vector: p_j(t+1) = max{p_j(t) + α(t)(Σ_i x_{i,j}(t) − c_j), 0};
65) when ‖p(t+1) − p(t)‖ ≤ γ, or when the iteration count t becomes too large, output the final equilibrium price vector p* and the optimal plan X*.
Further, the content of the dynamic control strategy according to the proportion is as follows: in each iteration, each APP will proportionally update its own bid for each EN according to the total availability obtained in the last iteration case, that is:
The price of an EN is equivalent to the sum of all APP bids on it, i.e. p_j(t) = Σ_i b_{i,j}(t);
In each iteration, APP_i needs to execute the following algorithm to update its bid vector and output it to the edge intelligent gateway:
71) sort all ENs in descending order of a_{i,j}/b_{i,j} and output the sorted array L_i = {i_1, i_2, …, i_M};
72) finding the largest k that satisfies the following inequality:
In the above formula, a_{i,i_k}/b_{i,i_k}(t−1) represents the rate of return of the current APP_i on EN i_k, b_{i,i_k}(t−1) represents the bid of the current APP_i for EN i_k in the last iteration, and b_{i,i_l}(t) represents the current APP's final bid for EN i_l in this iteration. After receiving all bids of the current round, the gateway broadcasts the price vector to the APPs deployed in Docker containers for the next round of iteration; when the change of the price vector between iterations is small enough, the iteration process ends, and the final price vector, the resource allocation and the revenue obtained by each APP are output.
The invention has the following beneficial effects: the invention combines edge computing, container technology and the microservice architecture, ensures long-term stable operation of each APP deployed in the intelligent gateway through flexible management and control of how the business-function APPs obtain resources, and thereby improves resource utilization and reduces network delay.
Drawings
FIG. 1 is a flow chart of a resource allocation scheme according to an embodiment of the present invention;
fig. 2 is a framework diagram of the service-function APP flexible management and control technique provided by an embodiment of the present invention;
fig. 3 is an interaction diagram between the client and the gateway when the edge intelligent gateway is used, according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but it should be understood that the following descriptions of the specific embodiments are only for the purpose of clearly understanding the technical solutions of the present invention, and are not intended to limit the present invention.
Referring to fig. 1, the resource allocation method for an edge-oriented intelligent gateway provided by the present invention includes the following steps:
and step S1, establishing a system model including a resource allocation model and an availability analysis model.
S1-1, establishing a resource allocation model: for the edge intelligent gateway, all business-function APPs deployed at the gateway with Docker perform their actual computing through the computing nodes (ENs) owned by the gateway. The invention assumes that the computing power each computing node can provide is fixed and does not change over time, and that each APP is responsible for an independent business function. As shown in fig. 2, several APPs and their dependency files are deployed through Docker and managed uniformly by the APP-management microservice deployed at the edge intelligent gateway. This microservice is responsible for automatically allocating specific resources such as computing units and memory to the deployed APPs, monitoring the running state of each service, ensuring service stability and realizing centralized management of resources.
For the resource allocation model, assume the EN set and the APP set owned by the gateway are MS and NS, respectively, that the numbers of ENs and APPs are M and N, respectively, and that each specific EN_j has c_j computing units. Even when an EN contains multiple types of computing units (e.g., GPUs and CPUs), it may be divided into groups, each having only a single type of computing unit, in which case each group can still be treated as a separate EN.
The objective of the model is to improve resource utilization. The standard embodying resource utilization has two aspects: the amount of computing resources the gateway allocates to each APP, and the total revenue the APPs generate. To avoid resource abuse and improve utilization efficiency, APPs are required to pay for occupying any number of computing units in an EN, and each APP needs to set its own expenditure according to its priority and demand. When an EN is provided by a third-party company, the occupation cost can be set as a function of the concrete price; if the EN is provided within the system, a corresponding price function can be abstracted from the number of computing units each EN owns.
Let x_{i,j} denote the number of computing units EN_j allocates to APP_i, so the allocation vector of APP_i is x_i = (x_{i,1}, x_{i,2}, x_{i,3}, …, x_{i,M}). Define B_i as the maximum resource budget provided to APP_i for consumption.
Let p_j denote the unit price of EN_j; for the whole system the EN price vector p = (p_1, p_2, …, p_M) can be defined.
Let U_i(x_i, p) denote the availability function of APP_i, determined by the allocation x_i it occupies and the resource prices p.
The limitation imposed by the computing resources of each EN on the N APPs owned by the gateway can be expressed as the constraint Σ_i x_{i,j} ≤ c_j for every EN_j ∈ MS.
To simplify the expressions and subsequent equations, the c_j computing units owned by each EN are normalized: let x̂_{i,j} = x_{i,j}/c_j, and scale the other parameters corresponding to each EN, including price and resource allocation, accordingly. The normalized total computing resource constraint above then becomes Σ_i x_{i,j} ≤ 1 for every EN_j.
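The normalization step above can be sketched in a few lines; the function name and data layout here are illustrative assumptions, not part of the invention:

```python
def normalize(capacities, alloc, prices):
    """Normalize each EN's capacity to 1: divide every allocation x[i][j]
    by c_j and scale the unit price p_j by c_j, so per-EN load becomes
    sum_i x_hat[i][j] <= 1 while every APP's cost stays unchanged."""
    x_hat = [[x_ij / c for x_ij, c in zip(row, capacities)] for row in alloc]
    p_hat = [p * c for p, c in zip(prices, capacities)]
    return x_hat, p_hat
```

Scaling the price by c_j keeps the product p_j·x_{i,j} invariant, which is why the budget constraints used later are unaffected by the normalization.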
For all APPs deployed with Docker in the gateway, the goal is to maximize their availability under the budget constraint; overall, the following two conditions must be met:
1. Every APP obtains an optimal allocation at the given prices: x_i maximizes U_i(x_i, p) subject to the budget constraint Σ_j p_j x_{i,j} ≤ B_i;
2. All resources are fully utilized: Σ_i x_{i,j} = 1 for every EN_j with p_j > 0.
These two conditions ensure, respectively, that each APP acquires the resources that maximize its availability and that each EN is fully utilized. Under their joint action, all APPs compete for maximum benefit, and behind that competition it is the price that actually plays the regulating role.
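The two conditions can be checked mechanically for a candidate allocation and price vector. A minimal sketch, assuming the linear availability u_i = Σ_j a_{i,j} x_{i,j} introduced below and normalized capacities; all names are illustrative:

```python
def is_market_equilibrium(A, budgets, x, p, tol=1e-6):
    """Condition 1: each APP exhausts its budget and spends it only on ENs
    with the best availability-per-price ratio a_ij / p_j.
    Condition 2: every EN with a positive price is fully allocated."""
    n, m = len(A), len(A[0])
    for i in range(n):
        spend = sum(p[j] * x[i][j] for j in range(m))
        if abs(spend - budgets[i]) > tol:
            return False                      # budget not exhausted
        best = max(A[i][j] / p[j] for j in range(m))
        if any(x[i][j] > tol and A[i][j] / p[j] < best - tol for j in range(m)):
            return False                      # spent on a sub-optimal EN
    for j in range(m):
        if p[j] > tol and abs(sum(x[i][j] for i in range(n)) - 1.0) > tol:
            return False                      # priced resource not fully used
    return True
```

On a symmetric two-APP, two-EN instance the assignment in which each APP buys only its preferred EN at unit prices passes both checks, while an even split fails the first one.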
S1-2, establishing an availability analysis model:
Denote the revenue APP_i can generate from its acquired resources as u_i(x_i); in the subsequent model this value is used as an input to calculate a specific resource allocation scheme.
Define the revenue APP_i can obtain from one computing unit of EN_j as a_{i,j}, and it can be deduced that u_i(x_i) = Σ_j a_{i,j} x_{i,j}.
the following illustrates how this entry is computed in an actual production environment, and typically, the services provided by edge computing are only sensitive to time delay, such as remote monitoring and fault location functions implemented in a substation using internet of things sensors. For simplicity of explanation, it is assumed that the transmission bandwidth is sufficiently large and the amount of requested data is small, and therefore the transmission delay is ignored, and only the propagation delay and the processing delay are considered.
In the actual production environment, the time of each request and reply sent by the user includes three parts: the round-trip delay d^{ue}_i between the user and the edge intelligent gateway, the round-trip network delay d^{en}_{i,j} between the edge intelligent gateway and edge computing node EN_j, and the processing delay d^{proc}_{i,j} at the EN. In most cases d^{ue}_i is very small and is ignored in this example. Define the maximum delay APP_i can tolerate as D^{max}_i; the following can be obtained: d^{en}_{i,j} + d^{proc}_{i,j} ≤ D^{max}_i.
it can be directly concluded that EN j Capable of handling the maximum number of requestsIf it is not
The processing delays of the various ENs are modeled with an M/G/1 queuing model, assuming the workload is evenly distributed across the computing units. The average response time of EN_j for APP_i may then be expressed as d^{proc}_{i,j} = 1/(μ_{i,j} − λ_{i,j}/x_{i,j}).
In the above formula, μ_{i,j} denotes the rate at which a single computing unit of EN_j processes APP_i requests, and λ_{i,j} denotes the rate at which APP_i issues requests to EN_j. To guarantee the stability of the queue, it is necessary that λ_{i,j} < x_{i,j} μ_{i,j}.
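A sketch of this response-time model, treating each computing unit as an independent single-server queue with the load spread evenly (a simplifying assumption; the function name is illustrative):

```python
def avg_response_time(x_units, mu, lam):
    """Average processing time when lam requests/s are spread evenly over
    x_units computing units, each serving mu requests/s: 1/(mu - lam/x)."""
    per_unit = lam / x_units
    if per_unit >= mu:
        raise ValueError("unstable queue: need lam < x_units * mu")
    return 1.0 / (mu - per_unit)
```

The stability guard mirrors the condition λ_{i,j} < x_{i,j} μ_{i,j} stated above.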
Here r_i denotes the revenue of each successful response to APP_i, so that a_{i,j} = r_i (μ_{i,j} − 1/(D^{max}_i − d^{en}_{i,j})). The values r_i, μ_{i,j}, λ_{i,j} and d^{en}_{i,j} can be obtained through measurement, and from them the availability each APP_i achieves from its acquired computing resources is calculated.
Step S2, converting the resource allocation problem into an optimization problem using a centralized solution according to the system model.
In order to better analyze the model, u_i(x_i) in the system model is expressed as Σ_j a_{i,j} x_{i,j}, where the a_{i,j} are the per-unit revenues defined above.
When p denotes the price vector, a_{i,j}/p_j can be defined as the benefit (availability) APP_i acquires by occupying EN_j at a cost of 1. Each APP deployed at the gateway has an EN with the highest return, and the abbreviation MBB (maximum bang-per-buck) denotes the highest rate of return that APP can obtain over the current EN set: MBB_i = max_j a_{i,j}/p_j.
The demand set D_i(p) of each APP should include all ENs that can supply it the MBB, expressed by the formula D_i(p) = {j : a_{i,j}/p_j = MBB_i}. In order to maximize its availability, each APP tries to invest its entire budget, for a given price consuming at the ENs in its own D_i(p) and guaranteeing the budget is emptied. This can be converted into the convex optimization problem max_χ Σ_i B_i ln(Σ_j a_{i,j} x_{i,j}), where the constraint conditions include Σ_i x_{i,j} ≤ 1 for every EN_j and x_{i,j} ≥ 0.
According to the KKT conditions, the optimal allocation of resources also needs to satisfy B_i a_{i,j}/u_i(x_i) ≤ p_j for all i, j, with equality whenever x_{i,j} > 0, and Σ_i x_{i,j} = 1 whenever p_j > 0.
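As a sanity check on these conditions, the special case of a single normalized EN has a closed-form optimum: each APP's share is proportional to its budget, and the equilibrium price equals the total budget. A sketch under that assumption (names illustrative):

```python
def eg_single_en(budgets):
    """One normalized EN: the optimum of max sum_i B_i ln(a_i * x_i)
    s.t. sum_i x_i = 1 is x_i = B_i / sum(B), with price p = sum(B);
    then B_i * a_i / u_i = B_i / x_i = p, matching the KKT condition."""
    price = sum(budgets)
    alloc = [b / price for b in budgets]
    return price, alloc
```

Note that the per-unit revenues a_i cancel out in this one-EN case: only the budgets determine the shares.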
step S3, calculate the optimization problem using the distributed solution and output the allocation.
In order to cope with complex conditions that may occur in practice, two algorithms are provided for solving the optimization problem; the intelligent gateway applies both simultaneously to obtain resource allocation schemes and determines the specific scheme according to the final total revenue. As shown in fig. 3, the APPs at the edge intelligent gateway act as clients, obtaining the price vector and updating the resource allocation.
S3-1, dual decomposition method
The convex function optimization problem can be equivalently expressed as:
using the lagrange multiplier method, the constraint is converted to:
thus, the sub-problem that each APP needs to compute is:
The dual decomposition is carried out with u_i replaced by a CES function, and the specific solution comprises the following steps:
(i) initialize the parameters, including: the initial price p(0) = p_0 of each EN, a relatively small step size α(0) and the tolerance value γ; the loop iteration count t = 0.
(ii) The edge intelligent gateway broadcasts the price p(t) to all APPs deployed at the current gateway.
(iii) At iteration t, each APP calculates its optimal demand x_{i,j}(t) for all known ENs by the following equation and sends it to the intelligent gateway, where ρ is a parameter in the CES function that is close to, but strictly less than, 1.
(iv) The edge intelligent gateway updates the price vector:
(v) When ‖p(t+1) − p(t)‖ < γ, or when the iteration count t grows too large, output the final equilibrium price vector p* and the optimal allocation X*.
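The loop (i)–(v) can be sketched in code. Because the patent's demand and price-update equations appear only as images, this is a minimal illustrative sketch under stated assumptions: a CES utility with parameter ρ close to but below 1 (giving the closed-form budget-constrained demand below) and a diminishing-step subgradient price update; the function names, step-size schedule, and price floor are our own choices, not the patent's.

```python
import numpy as np

def ces_demand(a_i, p, budget, rho):
    """Closed-form demand of one APP with CES utility under a budget.
    Assumption: u_i(x) = (sum_j (a_ij * x_j)**rho)**(1/rho), rho < 1."""
    w = (a_i ** rho / p) ** (1.0 / (1.0 - rho))
    return budget * w / np.dot(p, w)          # spends the whole budget

def dual_decomposition(a, c, budgets, rho=0.95, p0=1.0,
                       alpha0=0.1, gamma=1e-6, max_iter=5000):
    """Steps (i)-(v): the gateway broadcasts prices, APPs reply with their
    demands, and the gateway nudges prices toward market clearing
    (illustrative projected-subgradient update)."""
    n, m = a.shape
    p = np.full(m, p0)                        # (i) initial prices
    for t in range(max_iter):
        # (ii)-(iii) each APP computes its optimal demand at price p(t)
        X = np.array([ces_demand(a[i], p, budgets[i], rho) for i in range(n)])
        excess = X.sum(axis=0) - c            # aggregate demand minus capacity
        alpha = alpha0 / (1 + t)              # diminishing step size
        p_new = np.maximum(1e-9, p + alpha * excess)  # (iv) price update
        if np.linalg.norm(p_new - p) < gamma:         # (v) equilibrium reached
            return p_new, X
        p = p_new
    return p, X
```

At each round the gateway plays step (iv), the per-APP demand computation corresponds to step (iii), and the loop exits at step (v) once the price change falls below γ.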
S3-2, proportional dynamic adjustment strategy
In each iteration, each APP proportionally updates its bid for each EN according to the total availability it obtained in the previous iteration, namely:
Since the capacities of all ENs are normalized, the price of an EN equals the sum of all APP bids on it, i.e. p_j(t) = Σ_i b_{i,j}(t).
In each iteration, each APP needs to execute the following algorithm to update its own bid vector and output it to the edge intelligent gateway:
(i) Sort all ENs in descending order of a_{i,j}/b_{i,j} and output the sorted array:
L_i = {i_1, i_2, ..., i_M}
(ii) find the largest k that satisfies the following inequality:
In the above formula, the first quantity denotes the current APP_i's rate of return on EN number i_k, the second denotes APP_i's bid on EN number i_k in the previous iteration, and the third denotes the current APP's final bid on EN number i_l in this iteration. The gateway broadcasts all received bid vectors to the APPs deployed in Docker containers for the next iteration. When the change in the price vector between iterations is small enough, the iteration process ends, and the final price vector, the resource allocation, and the profit obtained by each APP are output.
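The proportional update in S3-2 resembles the well-known proportional-response dynamics for Fisher-type markets. Since the update equation itself is only available as an image, the sketch below assumes the standard form b_{i,j}(t+1) = B_i · a_{i,j} x_{i,j}(t) / u_i(t), with linear availability and unit EN capacities; all names are illustrative.

```python
import numpy as np

def proportional_response(a, budgets, iters=200, tol=1e-9):
    """Proportional-response bidding: each APP re-splits its budget across
    ENs in proportion to the availability each EN contributed in the
    previous round. Assumes u_i = sum_j a_ij * x_ij and unit capacities."""
    n, m = a.shape
    b = np.outer(budgets, np.full(m, 1.0 / m))  # start with uniform bids
    for _ in range(iters):
        p = b.sum(axis=0)                       # p_j(t) = sum_i b_ij(t)
        x = b / p                               # share of each EN won per APP
        gains = a * x                           # availability gained per EN
        u = gains.sum(axis=1, keepdims=True)    # total availability per APP
        b_new = budgets[:, None] * gains / u    # proportional bid update
        if np.abs(b_new - b).max() < tol:       # bid change small enough
            b = b_new
            break
        b = b_new
    p = b.sum(axis=0)
    return p, b / p                             # final prices and allocation
```

Note that each APP's new bids still sum to its budget B_i by construction, so the iteration preserves the budget-clearing property the text requires.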
Based on the same technical concept as the method embodiment, according to another embodiment of the present invention, there is provided a computer apparatus including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, which when executed by the processors implement the steps in the method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.
Claims (10)
1. An edge-oriented intelligent gateway resource allocation method, characterized by comprising the following steps:
s1, establishing a system model, wherein the system model comprises a resource allocation model and an availability analysis model;
s2, converting the resource allocation problem into an optimization problem by using a centralized solution according to the system model;
and S3, calculating the optimization problem by using a distributed solution and outputting a distribution scheme.
2. The method for resource allocation to an edge-oriented intelligent gateway according to claim 1, wherein the step S1 includes:
s11, establishing a resource allocation model: let the EN set and APP set owned by the gateway be MS and NS, of sizes M and N respectively, and let each EN_j have c_j computing units; let x_{i,j} denote the computing units that EN_j allocates to APP_i, so the resource vector allocated to APP_i is x_i = (x_{i,1}, x_{i,2}, x_{i,3}, ..., x_{i,M}); let p_j denote the price of EN_j, and for the whole system define the EN price vector p = (p_1, p_2, ..., p_j, ..., p_M); let U_i(x_i, p) denote the availability function of APP_i, determined by the resource allocation vector x_i it occupies and the price vector p;
the limitation imposed by each EN's computing resources on the N APPs owned by the gateway can be expressed as the following constraint: Σ_{i∈NS} x_{i,j} ≤ c_j for every j ∈ MS;
s12, establishing an availability analysis model: the revenue APP_i can generate from its acquired resources is denoted u_i(x_i); in the subsequent model, this value is used as an input to compute the specific resource allocation scheme; the availability APP_i can derive from one computing unit of EN_j is defined as a_{i,j}, from which it follows:
in an actual production environment, the time of each request and reply sent by a user comprises three parts: the round-trip delay between the user and the edge intelligent gateway, the round-trip network delay between the edge intelligent gateway and the edge computing node EN, and the processing delay at the EN; in most cases the round-trip delays are very small and are ignored; the maximum delay APP_i can tolerate is defined as a threshold, obtaining:
the processing delay of an EN is analyzed based on the M/G/1 queuing model; assuming the workload is uniformly distributed over the computing units, the processing delay of EN_j while serving APP_i is calculated by the following formula:
in the above formula, μ_{i,j} denotes the occupancy ratio of a single computing unit of EN_j while processing APP_i, and λ_{i,j} denotes the rate of requests issued by APP_i to EN_j, which must satisfy the following to ensure queue stability:
in the above formula, r_i denotes the revenue of each successful response to APP_i; by setting the delay threshold and confirming that the resource allocation scheme yields the corresponding values of x_{i,j}, μ_{i,j} and the processing delay, the availability each APP_i realizes from its acquired computing resources can be calculated by the above formula.
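Claim 2's M/G/1 processing-delay formula is likewise only available as an image; as a stand-in sketch (an assumption, not the patent's exact expression), the mean response time of an M/G/1 queue can be computed with the standard Pollaczek–Khinchine formula:

```python
def mg1_response_time(lam, mu, scv=1.0):
    """Mean response time of an M/G/1 queue via Pollaczek-Khinchine:
    E[T] = 1/mu + lam * E[S^2] / (2 * (1 - rho)), with rho = lam / mu.
    scv is the squared coefficient of variation of the service time
    (scv=1.0 reduces to M/M/1). Requires lam < mu for queue stability."""
    rho = lam / mu
    if rho >= 1.0:
        raise ValueError("unstable queue: request rate must stay below service rate")
    es = 1.0 / mu                            # mean service time
    es2 = (1.0 + scv) / mu ** 2              # second moment of service time
    wait = lam * es2 / (2.0 * (1.0 - rho))   # mean waiting time in queue
    return es + wait
```

With λ = 0.5, μ = 1 and exponential service (scv = 1), this reduces to the M/M/1 result 1/(μ − λ) = 2.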
3. The edge-oriented intelligent gateway resource allocation method according to claim 2, wherein in step S11, for all APPs deployed with Docker in the gateway, the goal of each is to maximize its availability under its budget constraint; as a whole, the following two conditions must be satisfied:
1) each APP spends within its budget, where B_i is defined as the maximum resource limit provided for APP_i to consume;
2) all resources are fully utilized:
4. the method for resource allocation to an edge-oriented intelligent gateway according to claim 2, wherein the specific content of the step S2 is as follows:
in the system model, u_i(x_i) is expressed as Σ_j a_{i,j} x_{i,j}, in which
when p denotes the price vector, a_{i,j}/p_j is defined as the income APP_i obtains by occupying EN_j at a cost of 1; each APP deployed at the gateway has an EN with the highest profit, and MBB denotes the highest rate of return an APP can obtain over the current EN set: MBB_i = max_{j∈MS} a_{i,j}/p_j;
the demand set D_i(p) of each APP includes all ENs that can supply it with MBB, formulated as D_i(p) = { j ∈ MS : a_{i,j}/p_j = MBB_i }; to maximize its profit, each APP tries to invest its entire budget, at the given prices, in the ENs in its own D_i(p), guaranteeing the budget is exhausted; using χ to denote the resource allocation matrix, this is transformed into a convex function optimization problem:
the constraint conditions include:
according to the KKT condition, the optimal allocation of resources also needs to satisfy the following condition:
5. The method according to claim 4, wherein in step S3, the intelligent gateway obtains candidate resource allocation schemes by applying both a dual decomposition method and a proportional dynamic adjustment strategy, and determines the specific resource allocation scheme according to the final total profit.
7. The method according to claim 6, wherein an equation with a CES function is used to perform the dual decomposition, and the specific solution comprises:
61) initializing parameters, including the initial EN price p(0) = p_0, a relatively small step size α(0) and a tolerance value γ, with the loop iteration counter t = 0;
62) the edge intelligent gateway broadcasts the price p(t) to all APPs deployed at the current gateway;
63) at iteration t, each APP calculates its optimal demand x_{i,j}(t) for all known ENs by the following equation and sends it to the intelligent gateway, where ρ is a parameter in the CES function that is close to, but strictly less than, 1;
64) the edge intelligent gateway updates the price vector:
8. The method according to claim 5, wherein the content of the proportional dynamic adjustment strategy is as follows: in each iteration, each APP proportionally updates its own bid for each EN according to the total availability obtained in the previous iteration, that is:
the price of an EN is equivalent to the sum of all APP bids on it, i.e. p_j(t) = Σ_i b_{i,j}(t).
9. The method for resource allocation to an edge-oriented intelligent gateway according to claim 8, wherein in each iteration, each APP needs to execute the following algorithm to update its own bid vector and output it to the edge intelligent gateway:
71) sort all ENs in descending order of a_{i,j}/b_{i,j} and output the sorted array:
L_i = {i_1, i_2, ..., i_M}
72) finding the largest k that satisfies the following inequality:
in the above formula, the first quantity denotes the current APP_i's rate of return on EN number i_k, the second denotes APP_i's bid on EN number i_k in the previous iteration, and the third denotes the current APP's final bid on EN number i_l in this iteration; after receiving all bids of the current round, the gateway broadcasts the price vector to the APPs deployed in Docker containers for the next iteration; when the change in the price vector between iterations is small enough, the iteration process ends, and the final price vector, the resource allocation, and the profit obtained by each APP are output.
10. An edge-oriented intelligent gateway resource allocation device, comprising:
a processor; and
a memory storing computer instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210538531.0A CN115037620B (en) | 2022-05-18 | 2022-05-18 | Resource allocation method and equipment for edge intelligent gateway |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115037620A true CN115037620A (en) | 2022-09-09 |
CN115037620B CN115037620B (en) | 2024-05-10 |
Family
ID=83121147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210538531.0A Active CN115037620B (en) | 2022-05-18 | 2022-05-18 | Resource allocation method and equipment for edge intelligent gateway |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115037620B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6965867B1 (en) * | 1998-04-29 | 2005-11-15 | Joel Jameson | Methods and apparatus for allocating, costing, and pricing organizational resources |
US20080103793A1 (en) * | 2006-10-27 | 2008-05-01 | Microsoft Corporation | Sequence of algorithms to compute equilibrium prices in networks |
US20120095940A1 (en) * | 2010-10-13 | 2012-04-19 | Microsoft Corporation | Pricing mechanisms for perishable time-varying resources |
CN109041130A (en) * | 2018-08-09 | 2018-12-18 | 北京邮电大学 | Resource allocation methods based on mobile edge calculations |
CN110147915A (en) * | 2018-02-11 | 2019-08-20 | 陕西爱尚物联科技有限公司 | A kind of method and its system of resource distribution |
CN110380891A (en) * | 2019-06-13 | 2019-10-25 | 中国人民解放军国防科技大学 | Edge computing service resource allocation method and device and electronic equipment |
US20200007460A1 (en) * | 2018-06-29 | 2020-01-02 | Intel Corporation | Scalable edge computing |
US20200104184A1 (en) * | 2018-09-27 | 2020-04-02 | Intel Corporation | Accelerated resource allocation techniques |
CN111935205A (en) * | 2020-06-19 | 2020-11-13 | 东南大学 | Distributed resource allocation method based on alternative direction multiplier method in fog computing network |
Non-Patent Citations (2)
Title |
---|
ZHIJIA CHEN, YANQIANG DI: "Intelligent Cloud Training System based on Edge Computing and Cloud Computing", 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE *
CEN BOWEI, CAI ZEXIANG: "Microservice modeling and computing resource configuration method for edge computing terminals of the power Internet of Things", 电力***自动化 *
Also Published As
Publication number | Publication date |
---|---|
CN115037620B (en) | 2024-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Priya et al. | Resource scheduling algorithm with load balancing for cloud service provisioning | |
Hosseinioun et al. | A new energy-aware tasks scheduling approach in fog computing using hybrid meta-heuristic algorithm | |
Lin et al. | On scientific workflow scheduling in clouds under budget constraint | |
US10404067B2 (en) | Congestion control in electric power system under load and uncertainty | |
CN104657220A (en) | Model and method for scheduling for mixed cloud based on deadline and cost constraints | |
Mechalikh et al. | PureEdgeSim: A simulation framework for performance evaluation of cloud, edge and mist computing environments | |
Dias et al. | Parallel computing applied to the stochastic dynamic programming for long term operation planning of hydrothermal power systems | |
Long et al. | Agent scheduling model for adaptive dynamic load balancing in agent-based distributed simulations | |
Ralha et al. | Multiagent system for dynamic resource provisioning in cloud computing platforms | |
Sebastio et al. | Optimal distributed task scheduling in volunteer clouds | |
CN107317836A (en) | One kind mixing cloud environment lower time appreciable request scheduling method | |
CN109815009B (en) | Resource scheduling and optimizing method under CSP | |
Saravanan et al. | Enhancing investigations in data migration and security using sequence cover cat and cover particle swarm optimization in the fog paradigm | |
CN115134371A (en) | Scheduling method, system, equipment and medium containing edge network computing resources | |
CN105808341A (en) | Method, apparatus and system for scheduling resources | |
Ivanovic et al. | Elastic grid resource provisioning with WoBinGO: A parallel framework for genetic algorithm based optimization | |
Nguyen et al. | Optimizing resource utilization in NFV dynamic systems: New exact and heuristic approaches | |
CN116991558A (en) | Computing power resource scheduling method, multi-architecture cluster, device and storage medium | |
Taghinezhad-Niar et al. | QoS-aware online scheduling of multiple workflows under task execution time uncertainty in clouds | |
Yin et al. | An improved ant colony optimization job scheduling algorithm in fog computing | |
Medishetti et al. | An Improved Dingo Optimization for Resource Aware Scheduling in Cloud Fog Computing Environment | |
Chen et al. | Data-driven task offloading method for resource-constrained terminals via unified resource model | |
CN115037620B (en) | Resource allocation method and equipment for edge intelligent gateway | |
Cao et al. | Online cost-rejection rate scheduling for resource requests in hybrid clouds | |
Kontos et al. | Cloud-Native Applications' Workload Placement over the Edge-Cloud Continuum. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||