CN110968920A - Method for placing chain type service entity in edge computing and edge computing equipment - Google Patents


Info

Publication number
CN110968920A
Authority
CN
China
Prior art keywords
user
service entity
edge
service
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911204131.0A
Other languages
Chinese (zh)
Other versions
CN110968920B (en
Inventor
严永辉
张胜
王黎明
施霄航
喻伟
钱柱中
周惯衡
吴甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Jiangsu Fangtian Power Technology Co Ltd
Original Assignee
Nanjing University
Jiangsu Fangtian Power Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University, Jiangsu Fangtian Power Technology Co Ltd filed Critical Nanjing University
Priority to CN201911204131.0A priority Critical patent/CN110968920B/en
Publication of CN110968920A publication Critical patent/CN110968920A/en
Application granted granted Critical
Publication of CN110968920B publication Critical patent/CN110968920B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for placing chained service entities in edge computing, and an edge computing device. The method comprises the following steps: A. constructing a network model, a time delay model and a cost model of the edge computing environment; the network model comprises the edge servers in the network, the users, and the service entity chains to be executed by the users; the time delay model comprises the computation delay, the queuing delay and the transmission delay of the service entities on the edge servers; the transmission delay comprises the transmission delay between servers and between a server and a user; B. obtaining a placement scheme for the chained service entities by combining the objective function and constraint conditions of the chained service entity placement problem in edge computing with a heuristic algorithm based on the K-Means clustering algorithm and a greedy algorithm. The invention obtains the placement scheme of the chained service entities through this heuristic algorithm and can obtain a good result at low time complexity.

Description

Method for placing chain type service entity in edge computing and edge computing equipment
Technical Field
The invention relates to the field of edge computing and server deployment and distribution, in particular to a method for placing chained service entities in edge computing.
Background
With fifth-generation mobile communication (5G) technology maturing, the Internet of Things (IoT) is developing rapidly. 5G can provide the reliability, low delay, security and ubiquitous mobility that the Internet of Things requires. The rise of the Internet of Things has also driven the development of various new services, such as Virtual Reality (VR), Augmented Reality (AR), speech recognition, Natural Language Processing (NLP), Language Understanding (LUIS), and the like. These applications usually involve highly complex deep learning algorithms, and running them generally requires a network with high bandwidth and low latency. However, because end devices have limited power and computing capability, how to execute these deep learning applications becomes a major problem.
Due to the characteristics of the Internet of Things, the traditional cloud computing model cannot meet these requirements: the volume of data at the network edge is huge, which leads to large amounts of unnecessary bandwidth and computing resource usage; privacy protection is also a major concern; and wireless transmission consumes substantial energy. Since the network edge generates an enormous amount of data, it is more efficient to process the data directly at the edge, which calls for a network structure such as edge computing.
Unlike cloud computing, which collects data and concentrates it in a cloud center for processing, edge computing performs data computation on network edge nodes close to users and data sources. Compared with cloud computing, edge computing reduces congestion at the network center, offers lower network delay, and achieves higher efficiency and security.
Many applications in edge computing consist of chained service entities, so the placement of chained service entities in edge computing is of particular concern. A service entity can be defined as the bundle of a user's personal data and the processing logic for that data; it maintains the user's state and performs computation-intensive tasks. In these new applications, service entities usually appear in chained form: a later task takes the output of the preceding task as its input, and can start only after the preceding task completes. For example, some augmented reality applications include the following chained tasks: shot capture, decoding, pre-processing, object classification, generating virtual objects, and so on. Different users' service entity chains are not exactly the same, but may have overlapping parts. Fig. 1 shows an example of chained service entities: user 1 has application A to execute and user 2 has application B, each consisting of a series of chained service entities (modules 1, 2, 3 and 4a for application A; modules 1, 2, 3, 4b and 5b for application B); applications A and B share coincident service entities (modules 1, 2, 3), and the input data is processed by the corresponding modules to produce output.
For the user, the response time of a service entity consists mainly of network transmission time and application server computation time; to reduce response time, each service entity should be placed on an appropriate edge server. Placing the service entity chains of multiple users is not a simple matter: placing service entities on different edge servers causes different delays and different workloads on each server, and an improper placement leaves server loads extremely unbalanced, which in turn causes globally high delay. Therefore, how to place chained service entities in an edge computing network so as to minimize total user delay is a problem worth studying and in urgent need of a solution.
Disclosure of Invention
The purpose of the invention is as follows: to remedy the defects of the prior art, the invention provides a method for placing chained service entities in edge computing, and an edge computing device.
The technical scheme is as follows: in order to solve the above technical problem, the present invention provides a method for placing chained service entities in edge computing, which comprises the following steps:
A. constructing a network model, a time delay model and a cost model of the edge computing environment; the network model comprises an edge server, a user and a service entity chain to be executed by the user in the network; the time delay model comprises the calculation time delay, the queuing time delay and the transmission time delay of the service entity on the edge server; the transmission delay comprises the transmission delay between servers and between a server and a user;
B. and obtaining a placement scheme of the chained service entities by combining an objective function and constraint conditions of the chained service entity placement problem in edge calculation and through a heuristic algorithm based on a K-Means clustering algorithm and a greedy algorithm.
Preferably, the method for placing chained service entities in edge computing further includes a step C: analyzing the time complexity of the heuristic algorithm.
Preferably, the cost model in step A includes constraining the sum of the costs of the service entities on all edge servers, and constraining the number of service entities that each edge server computes simultaneously at most; the objective function of the chained service entity placement problem in edge computing in step B aims to minimize user response time.
Preferably, in step A, the network model of the edge computing environment includes N edge servers and M users, and any user u_i contains a service entity chain L_i to be executed; where the set of edge servers is S = {s_1, s_2, ..., s_N} and the set of users is U = {u_1, u_2, ..., u_M}; M and N are positive integers greater than 1; where E = {e_1, e_2, ..., e_K} is the set of service entities of all users; and where

$$L_i = \left(e_{f(i,1)}, e_{f(i,2)}, \ldots, e_{f(i,l_i)}\right)$$

represents the service entity chain contained in user u_i's mobile application, the mapping f(i, j) gives the index in the set E of the jth service entity in user u_i's service entity chain, and l_i is the number of service entities included in the chain L_i.
Further preferably, the delay model includes user u_i's total delay:

T_i = T_c + T_l + T_t

where user u_i's service entity computation delay T_c is:

$$T_c = \sum_{k=1}^{l_i} \sum_{j=1}^{N} x_{i,k}^{s_j}\, t_{e_{f(i,k)}}^{s_j}$$

where user u_i's queuing delay T_l is the time from when a service entity is put into the task queue until its task is executed;

where user u_i's transmission delay T_t is:

$$T_t = \sum_{j=1}^{N} x_{i,1}^{s_j}\, d(s_j, u_i) + \sum_{k=1}^{l_i - 1} \sum_{j=1}^{N} \sum_{j'=1}^{N} x_{i,k}^{s_j}\, x_{i,k+1}^{s_{j'}}\, d(s_j, s_{j'}) + \sum_{j=1}^{N} x_{i,l_i}^{s_j}\, d(s_j, u_i)$$

where t_{e_i}^{s_j} is the computation time of service entity e_i on server s_j; where x_{i,k}^{s_j} indicates whether user u_i's kth service entity e_{f(i,k)} is computed on server s_j (1 if so, otherwise no); where f(i, j) is the index in E of the jth service entity in user u_i's service entity chain;

where d(s_i, s_j) is the transmission delay between servers s_i and s_j, and d(s_i, u_j) is the transmission delay between server s_i and user u_j.
Further preferably, the objective function of the chained service entity placement problem in edge computing in step B is:

$$\min \sum_{i=1}^{M} T_i$$

where T_i is user u_i's total delay;

the constraint conditions include:

the sum of the service entity costs on all edge servers must not exceed C:

$$\sum_{j=1}^{N} \sum_{k=1}^{K} y_{e_k}^{s_j}\, c_{e_k}^{s_j} \le C$$

and each edge server simultaneously computes at most Q service entities:

$$\sum_{j=1}^{M} z_{u_j}^{s_k} \le Q, \quad \forall k \in \{1, 2, \ldots, N\}$$

where C is the upper limit of the sum of the costs over all edge servers; c_{e_k}^{s_j} is the cost of placing service entity e_k on server s_j; y_{e_k}^{s_j} indicates whether server s_j has loaded service entity module e_k (1 if so, otherwise no); Q is the maximum number of service entities each edge server computes simultaneously; and z_{u_j}^{s_k} indicates whether user u_j has a service entity computing on server s_k at the same moment (1 if so, otherwise no).
Preferably, in the heuristic algorithm in step B, clustering is performed with the K-Means clustering algorithm according to the geographic locations of the servers and users in the edge network, and the servers and users in each cluster are processed by the following steps:
(31) abstracting all users whose service entities are to be executed along a timeline into a task line list Taskline; at initialization, all users are added to the Taskline list in random order;
(32) cyclically taking users out of the Taskline list from front to back; for each user, hypothetically placing the next service entity on every edge server that has loaded the corresponding service entity module, and selecting the server that adds the shortest response time for the user, thereby obtaining the placement of that service entity and the response time it adds; at the same time updating the state of the user and of the selected server; the loop stops when no user remains in the Taskline list;
(33) putting the placement of every user's service entity chain into a placement set, and obtaining and returning the placement scheme of the chained service entities.
Further preferably, updating the state of the user and of the selected server in step (32) comprises: if the user has no further service entity to execute, deleting the user from the Taskline list; if service entities remain, first computing the time the current service entity needs to finish its computation, then re-inserting the user into the Taskline list in ascending order of that remaining time.
Further preferably, step (33) further includes computing the average response time of all users from each user's response time and returning it as output.
Preferably, the time complexity of the heuristic algorithm is O (| S | · | U |), where S is an edge server set and U is a user set.
The invention also provides an edge computing device, which comprises:
a processor; and
a memory storing computer executable instructions which, when executed by the processor, implement the steps of any of the methods described above.
Has the advantages that: the invention provides a chained service entity placement method for the edge computing environment that targets minimized user response time. By constructing a network model, a delay model and a cost model of the edge network environment, and combining the objective function and constraints of the chained service entity placement problem in edge computing with a heuristic algorithm based on the K-Means clustering algorithm and a greedy algorithm, it obtains a placement scheme for the chained service entities and can reach an approximately optimal result at low time complexity.
Drawings
FIG. 1 is an example of a chained service entity;
FIG. 2 is a schematic block diagram of a method for placing chained service entities in edge computing according to this embodiment;
FIG. 3 is a schematic diagram, given without loss of generality, of a set of edge servers in an edge computing environment of this embodiment (indicating inter-server distances and the module types each server holds);
FIG. 4 is a schematic diagram of the edge server set (the servers in the dotted box are clustered) obtained by K-Means clustering in FIG. 3.
Detailed Description
The present invention will be described in further detail with reference to examples, which are not intended to limit the present invention.
The invention provides a method for placing chained service entities in edge computing: first, a network model, a delay model and a cost model of the edge computing environment are constructed; then a placement scheme for the chained service entities is obtained by combining the objective function and constraint conditions of the chained service entity placement problem in edge computing with a heuristic algorithm based on the K-Means clustering algorithm and a greedy algorithm. In this embodiment, as shown in fig. 2, the method specifically includes the following steps:
(1) modeling a chained service entity placement problem in edge computing
The method comprises the steps of constructing a network model, a time delay model and a cost model of the edge computing environment. Firstly, modeling is carried out on an edge network environment, wherein the network model comprises an edge server in a network, a user and a service entity chain to be executed by the user. And meanwhile, the network delay and the load capacity of the server are abstracted into symbolic representations.
(2) Defining a chained service entity placement problem in edge computing:
the problem is described first and then an objective function and various constraints are defined.
(3) Obtaining the chained service entity placement scheme through the heuristic algorithm based on the K-Means clustering algorithm and the greedy algorithm, which performs well in solving this problem.
(4) Analyzing temporal complexity of heuristic algorithms
In this embodiment, the process of modeling the placement problem of the chained service entity in edge computing specifically includes:
(11) Establishing a network model: in this embodiment, the set S = {s_1, s_2, ..., s_N} denotes the N edge servers, and the set U = {u_1, u_2, ..., u_M} denotes the M users in the network. Assuming each user has a mobile application to execute that is made up of chained service entities, the set E = {e_1, e_2, ..., e_K} denotes the set of service entities of all users, and the sequence

$$L_i = \left(e_{f(i,1)}, e_{f(i,2)}, \ldots, e_{f(i,l_i)}\right)$$

represents the service entity chain contained in user u_i's mobile application, where the mapping f(i, j) gives the index in the set E of the jth service entity in user u_i's chain, and l_i denotes the number of service entities in L_i. Obviously L_i is a subset of E, satisfying:

$$\{e_{f(i,1)}, e_{f(i,2)}, \ldots, e_{f(i,l_i)}\} \subseteq E$$
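As a concrete illustration of the model just defined, the following sketch (not part of the patent; the class names and module strings are hypothetical) shows minimal data structures for the server set S, user set U, entity set E and the chains L_i, including the overlapping-module situation of fig. 1:

```python
# Minimal data structures for the network model; an illustrative sketch only.
from dataclasses import dataclass, field

@dataclass
class Server:
    sid: int
    x: float = 0.0          # geographic position, used later for clustering
    y: float = 0.0
    modules: set = field(default_factory=set)  # service-entity module types loaded

@dataclass
class User:
    uid: int
    x: float = 0.0
    y: float = 0.0
    chain: list = field(default_factory=list)  # L_i: indices f(i, 1..l_i) into E

# Global entity set E; users 1 and 2 share the first three modules (fig. 1)
E = ["decode", "preprocess", "classify", "render_A", "render_B"]
u1 = User(uid=1, chain=[0, 1, 2, 3])   # application A: shared modules plus 4a
u2 = User(uid=2, chain=[0, 1, 2, 4])   # application B: shared modules plus 4b

# Every chain L_i must be a subset of E, as the model requires
assert all(0 <= k < len(E) for u in (u1, u2) for k in u.chain)
```

The chains are stored as index lists, matching the mapping f(i, j) from chain positions to indices in E.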
the time delay model comprises the calculation time delay, the queuing time delay and the transmission time delay of the service entity on the edge server; the transmission delay comprises transmission delay between servers and between a server and a user.
(12) Establishing a delay model: the delay in the network model consists of three parts: the computation time of service entities on edge servers (the computation delay), the queuing time (the queuing delay), and the transmission delay between servers and between servers and users. That is, user u_i's total delay T_i is:

T_i = T_c + T_l + T_t

For the computation delay T_c: use t_{e_i}^{s_j} to denote the computation time of service entity e_i on server s_j, and x_{i,k}^{s_j} to indicate whether user u_i's kth service entity e_{f(i,k)} is computed on server s_j; x_{i,k}^{s_j} = 1 means that user u_i's kth service entity e_{f(i,k)} is placed on server s_j, and otherwise it is not placed on that server. Thus user u_i's service entity computation time is:

$$T_c = \sum_{k=1}^{l_i} \sum_{j=1}^{N} x_{i,k}^{s_j}\, t_{e_{f(i,k)}}^{s_j}$$

For the queuing delay T_l: each server maintains a time-ordered task queue, and the queuing delay T_l is the time from when a service entity is put into the task queue until its task is executed.

For the transmission delay T_t: the transmission delay is determined mainly by the network environment, the data size, the transmission distance and other factors. Define the transmission delay between servers s_i and s_j as d(s_i, s_j), and that between server s_i and user u_j as d(s_i, u_j). In the network model it is assumed that the network environment of all nodes is the same and that frame size has no influence on the transmission delay, i.e., the transmission delay is determined entirely by the transmission distance. At the same time d(s_i, s_j) = d(s_j, s_i), and d(s_i, s_j) = 0 when i = j. Then user u_i's transmission delay is:

$$T_t = \sum_{j=1}^{N} x_{i,1}^{s_j}\, d(s_j, u_i) + \sum_{k=1}^{l_i - 1} \sum_{j=1}^{N} \sum_{j'=1}^{N} x_{i,k}^{s_j}\, x_{i,k+1}^{s_{j'}}\, d(s_j, s_{j'}) + \sum_{j=1}^{N} x_{i,l_i}^{s_j}\, d(s_j, u_i)$$
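The delay model can be sketched in a few lines. This is an illustration under assumptions, not the patent's code: the placement is given as a list assigning each entity in the chain to a server index, `d_ss`/`d_su` are server-server and server-user distance matrices, `compute_time[e][s]` plays the role of t_e^s, and the queuing delay is passed in as a single aggregate value:

```python
# Sketch of T_i = T_c + T_l + T_t for one user, given a fixed placement.
def total_delay(chain_placement, compute_time, chain, d_ss, d_su, user, queue_wait=0.0):
    # T_c: computation times of the chain's entities on their assigned servers
    Tc = sum(compute_time[e][s] for e, s in zip(chain, chain_placement))
    # T_t: user -> first server, consecutive server hops, last server -> user
    Tt = d_su[chain_placement[0]][user]
    Tt += sum(d_ss[a][b] for a, b in zip(chain_placement, chain_placement[1:]))
    Tt += d_su[chain_placement[-1]][user]
    # T_l: queuing delay, taken here as a given aggregate waiting time
    return Tc + queue_wait + Tt

# Toy instance: 2 servers, 1 user, chain of entities 0 -> 1
compute_time = {0: [2.0, 1.0], 1: [3.0, 1.5]}
d_ss = [[0.0, 4.0], [4.0, 0.0]]
d_su = [[1.0], [5.0]]
# Both entities on server 0: Tc = 2 + 3, Tt = 1 + 0 + 1
print(total_delay([0, 0], compute_time, [0, 1], d_ss, d_su, user=0))  # -> 7.0
```

Note that keeping the whole chain on one server removes the inter-server hop terms, mirroring how d(s_i, s_j) vanishes when i = j in the model.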
(13) Establishing a cost model: the cost model in this embodiment constrains the sum of the costs of the service entities on all edge servers, and constrains the number of service entities each edge server computes simultaneously at most. Once a service entity is placed on an edge server, the server must bear the static costs of maintaining it. Since service entities belonging to different modules require different loads and the configurations of the edge servers differ, assume that placing service entity e_k on server s_j costs c_{e_k}^{s_j}, and constrain the sum of the static costs of all servers to be at most C, i.e., the sum of the service entity costs on all edge servers must not exceed C. Service entities running computations on edge servers also consume resources, and in this embodiment each edge server is constrained to compute at most Q service entities simultaneously.

The symbols mentioned herein and their meanings are:
S, N: the set of edge servers and their number;
U, M: the set of users and their number;
E, K: the set of service entities of all users and its size;
L_i, l_i: user u_i's service entity chain and its length;
f(i, j): the index in E of the jth service entity in user u_i's chain;
t_{e_i}^{s_j}: the computation time of service entity e_i on server s_j;
x_{i,k}^{s_j}: whether user u_i's kth service entity is computed on server s_j;
d(·, ·): the transmission delay between two nodes;
c_{e_k}^{s_j}: the cost of placing service entity e_k on server s_j;
y_{e_k}^{s_j}: whether server s_j has loaded service entity module e_k;
z_{u_j}^{s_k}: whether user u_j has a service entity computing on server s_k at the same moment;
C: the upper limit of the cost sum over all edge servers;
Q: the maximum number of service entities each edge server computes simultaneously.

Here y_{e_k}^{s_j} = 1 indicates that server s_j has loaded service entity module e_k, and otherwise s_j has not loaded e_k; z_{u_j}^{s_k} = 1 indicates that user u_j has a service entity computing on server s_k at that moment, and otherwise u_j has none.
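The two cost-model constraints can be sketched as plain feasibility checks. This is an illustration only: the dict layout and the helper names `respects_cost_cap`/`respects_capacity` are assumptions, with `y`, `c` and `z` keyed by (server, entity) and (server, user) pairs as in the symbol list above:

```python
# Feasibility checks for the cost model; an illustrative sketch only.
def respects_cost_cap(y, c, C):
    """Sum of c_{e_k}^{s_j} over loaded modules (y_{e_k}^{s_j} = 1) must not exceed C."""
    return sum(c[s, e] for (s, e), loaded in y.items() if loaded) <= C

def respects_capacity(z, Q):
    """Each server s_k computes for at most Q users simultaneously."""
    per_server = {}
    for (s, u), active in z.items():
        per_server[s] = per_server.get(s, 0) + (1 if active else 0)
    return all(n <= Q for n in per_server.values())

y = {(0, 0): 1, (0, 1): 1, (1, 0): 0}            # server 0 loads modules 0 and 1
c = {(0, 0): 3.0, (0, 1): 2.0, (1, 0): 5.0}      # static placement costs
z = {(0, 0): 1, (0, 1): 1, (1, 2): 0}            # two users active on server 0
print(respects_cost_cap(y, c, C=6.0), respects_capacity(z, Q=2))  # True True
```

A candidate placement would be rejected whenever either check fails, which is how the heuristic keeps the cost sum under C and each server under its Q-entity limit.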
In this embodiment, the process of defining the placement problem of the chained service entities in edge computing is as follows:
(21) Problem description (the chained service entity placement problem in edge computing): given an edge computing network comprising a set S of edge servers and a set U of users, any user u_i contains a service entity chain L_i to be executed. In this embodiment, the objective function of the chained service entity placement problem aims to minimize the users' response time; that is, a placement of the service entities is sought that minimizes user response time.

(22) Defining the objective function:

$$\min \sum_{i=1}^{M} T_i$$

(23) Defining the constraints:

The sum of the service entity costs on all edge servers must not exceed C:

$$\sum_{j=1}^{N} \sum_{k=1}^{K} y_{e_k}^{s_j}\, c_{e_k}^{s_j} \le C$$

Each edge server computes at most Q service entities simultaneously:

$$\sum_{j=1}^{M} z_{u_j}^{s_k} \le Q, \quad \forall k \in \{1, 2, \ldots, N\}$$
and based on the network model, the time delay model and the cost model of the edge computing environment, combining an objective function and constraint conditions of the chain type service entity placement problem in edge computing, and obtaining a placement scheme of the chain type service entity through a heuristic algorithm based on a K-Means clustering algorithm and a greedy algorithm.
The heuristic specifically comprises clustering with the K-Means clustering algorithm according to the geographic locations of the servers and users in the edge network, and then processing the servers and users in each cluster by the following steps:
(31) Selecting an appropriate data structure for the users and their service entities: in this embodiment, all users whose service entities are to be executed along a timeline are abstracted into a task line list Taskline; at the start no user has yet placed a service entity on an edge server for execution and there is no waiting time, so at initialization all users are added to the Taskline list in random order. The Taskline list is a sequence of elements implemented with an array or a linked list;
(32) Cyclically taking users out of the Taskline list from front to back; for each user, hypothetically placing the next service entity on every edge server loaded with the corresponding service entity module, and selecting the server that adds the shortest response time for the user, thereby obtaining the placement of that service entity and the response time it adds; at the same time updating the state of the user and of the selected server; the loop stops when no user remains in the Taskline list.
In this embodiment, updating the state of the user and of the selected server comprises: if the user has no further service entity to execute, deleting the user from the Taskline list; if service entities remain, first computing the time the current service entity needs to finish its computation, then re-inserting the user into the Taskline list in ascending order of that remaining time.
(33) Combining the outputs of the clusters to obtain the final output, i.e., putting the placement of every user's service entity chain into a placement set, and obtaining and returning the placement scheme of the chained service entities. In this embodiment, the average response time of all users is computed from each user's response time, and finally the average response time is returned together with the placement set.
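Steps (31)-(33) can be condensed into a short sketch. This is an illustration, not the patent's code: the heap-based Taskline, the data layout, and the `added_time` callback are simplifying assumptions; in particular, `added_time` stands in for the full transmission, queuing and computation delay calculation:

```python
# Greedy Taskline placement within one cluster; an illustrative sketch only.
import heapq

def greedy_place(users, servers, added_time):
    """users: {uid: list of remaining entity modules, consumed in order}
    servers: {sid: set of loaded modules}
    added_time(uid, module, sid) -> response time this placement would add."""
    taskline = [(0.0, uid) for uid in sorted(users)]   # (remaining time, user)
    heapq.heapify(taskline)
    placement = {}                                     # (uid, position) -> sid
    while taskline:
        t, uid = heapq.heappop(taskline)
        if not users[uid]:
            continue                                   # nothing left: drop the user
        pos = sum(1 for (u, _p) in placement if u == uid)  # next chain position
        module = users[uid].pop(0)
        # candidate servers: those loaded with the needed module (step 32)
        candidates = [s for s, mods in servers.items() if module in mods]
        best = min(candidates, key=lambda s: added_time(uid, module, s))
        placement[(uid, pos)] = best
        if users[uid]:                                 # re-insert, ordered by finish time
            heapq.heappush(taskline, (t + added_time(uid, module, best), uid))
    return placement

users = {1: ["m1", "m2"], 2: ["m1"]}
servers = {0: {"m1"}, 1: {"m1", "m2"}}
plan = greedy_place(users, servers, lambda uid, m, s: 1.0 if s == 0 else 2.0)
print(plan)  # {(1, 0): 0, (2, 0): 0, (1, 1): 1}
```

The min-heap keeps the "remaining time from small to large" ordering of the Taskline, so re-insertion after each placement is O(log n).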
In this embodiment, the time complexity analysis process of the heuristic algorithm is as follows:
the heuristic algorithm based on the greedy algorithm and the K-Means clustering algorithm consists of two parts, wherein the former part is a clustering process, and the latter part is the greedy algorithm, namely the time complexity of the K-Means clustering algorithm and the time complexity of the greedy algorithm are included, and the method specifically comprises the following steps:
(41) in the K-Means clustering algorithm: the time complexity of the K-Means clustering algorithm is O (l.K)2- | S | + | U |)), wherein a constant l is the convergence iteration frequency of the K-Means clustering algorithm, and since l and K are constants, the complexity of the algorithm is mainly determined by the data quantity | S | + | U |, so the original formula can be simplified to O (| S | + | U |);
(42) in the greedy algorithm: if the servers and users are evenly distributed, i.e., there are | S |/K servers and | U |/K users in each cluster, the time complexity of the greedy algorithm is O (K |/K) · (| U |/K) ·), i.e., O (| S |. U |).
Therefore, the total time complexity of the heuristic algorithm is O (| S |. U |), wherein S is the edge server set, U is the user set, K is the number of clusters in the K-Means clustering algorithm, and O is the progressive upper bound sign. Therefore, the heuristic algorithm provided by the invention has lower time complexity, and can obtain a better placing scheme of the service entity in shorter time during specific application, namely can obtain an approximately optimal result in lower time complexity.
The embodiment also provides an edge computing device, which comprises a processor; and a memory storing computer-executable instructions that, when executed by the processor, implement the steps of any of the methods provided by the present embodiments.
The present embodiments also provide a computer-readable storage medium storing one or more programs for: when one or more of the programs are executed by an edge computing device, the edge computing device implements the steps of any of the methods provided by the present embodiments.
The following describes, with reference to fig. 3 and fig. 4 and without loss of generality, an example of the edge computing network environment of this embodiment and of obtaining the placement of chained service entities through the heuristic algorithm based on the K-Means clustering algorithm and the greedy algorithm provided in this embodiment:
Fig. 3 provides a structured edge computing network environment, where each circle represents an edge server node, the letter next to a circle is the edge server's label, the numbers inside each circle are the numbers of the service entity modules that node can provide, a line between two circles represents the link between those two edge servers, and the number next to a line is the distance between them.
According to the heuristic algorithm provided by the invention, clustering is first performed with the K-Means clustering algorithm according to the geographic positions of the users and edge servers in fig. 3. As seen in fig. 4 (edge servers within a dotted box form one cluster), after K-Means clustering, A, B, C form one cluster (denoted Cluster1) and D, E, F form another (denoted Cluster2). Without loss of generality, the placement of a task entity within one cluster is simulated as follows:
Assume that in Cluster1 the user set is U = {u_1, u_2, u_3, u_4}, and that the Taskline list (a generic element sequence, implementable with an array or a linked list) at some moment is as follows, where the number after the slash is the remaining time the user's current service entity needs to finish its computation (updated each time a new service entity is placed):

u3/3  u2/5  u1/7  u4/12

At this point user u_3 is taken from the Taskline. Assume its current service entity is the 2nd in its chain, i.e., e_{f(3,2)}, placed on edge server B, and that the next service entity e_{f(3,3)} requires module number 3. Looking at all edge servers in fig. 3, A, C and E have this module loaded and all are linked to B, but E is excluded first because it is not in Cluster1. Now, comprehensively considering transmission delay, queuing delay and computation delay, the algorithm chooses between A and C the server on which placing e_{f(3,3)} adds the shorter response time for u_3. Assuming C is much more powerful than A (i.e., C's computation delay is smaller than A's) and has more ample resources (i.e., C's queuing delay is smaller than A's), the heuristic algorithm of this embodiment chooses to place this service entity on C even though A is closer to B (i.e., A's transmission delay is smaller than C's). The placement of this service entity is thus obtained and recorded, and user u_3's added response time is computed. Then the states of user u_3 and of the server it selected are updated: in this example, after this placement completes, the remaining time u_3's current service entity needs changes because of the newly placed service entity (assume it becomes 9), and the Taskline list is updated to maintain the property that "the remaining time the current service entity needs runs from small to large"; the updated Taskline list is:

u2/5  u1/7  u3/9  u4/12
This completes the placement of one service entity in Cluster1. The process repeats cyclically in each cluster until no user remains in the corresponding Taskline list and the loop terminates. Finally, the outputs of the clusters are combined to obtain the final placement set of the chained service entities and the average response time of all users, which completes the whole process of multi-user chained service entity placement in the edge computing environment of this embodiment.
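The clustering stage of this example can be sketched as follows. This is a toy pure-Python K-Means with naive initialisation, given for illustration only; the coordinates are made up and merely mimic two geographically separated groups like {A, B, C} and {D, E, F}:

```python
# Tiny K-Means over 2-D positions; an illustrative sketch only.
def kmeans(points, k, iters=10):
    centers = points[:k]                       # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign each point to nearest center
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [                            # move each center to its cluster mean
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Three nodes near the origin, three far away: expect two clusters of three
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(cl) for cl in clusters))  # [3, 3]
```

In the full heuristic, each resulting cluster would then run the greedy Taskline placement independently, and the per-cluster placement sets would be merged into the final output.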
While the invention has been described in connection with the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various changes and modifications within the spirit and scope of the appended claims.

Claims (10)

1. A method for placing chained service entities in edge computing is characterized by comprising the following steps:
A. constructing a network model, a time delay model and a cost model of the edge computing environment; the network model comprises an edge server, a user and a service entity chain to be executed by the user in the network; the time delay model comprises the calculation time delay, the queuing time delay and the transmission time delay of the service entity on the edge server; the transmission delay comprises the transmission delay between servers and between a server and a user;
B. obtaining a placement scheme of the chained service entities through a heuristic algorithm based on the K-Means clustering algorithm and a greedy algorithm, in combination with the objective function and constraint conditions of the chained-service-entity placement problem in edge computing.
2. The method for placing chained service entities in edge computing according to claim 1, wherein:
the cost model in step A comprises a constraint on the sum of the costs of the service entities on all edge servers and a constraint on the maximum number of service entities each edge server computes simultaneously;
the objective function of the chained-service-entity placement problem in edge computing in step B takes minimizing user response time as its goal.
3. The method for placing chained service entities in edge computing according to claim 1, wherein: in step A, the network model of the edge computing environment comprises N edge servers and M users, and any user u_i contains a chain of service entities L_i to be executed; the set of edge servers is S = {s_1, s_2, ..., s_N} and the set of users is U = {u_1, u_2, ..., u_M}, where M, N are positive integers greater than 1; E = {e_1, e_2, ..., e_K} is the set of service entities of all users; and

L_i = (e_{f(i,1)}, e_{f(i,2)}, ..., e_{f(i,l_i)})

represents user u_i's chain of service entities contained in its mobile application, where the mapping f(i,j) gives the subscript in the set E of the element e corresponding to the j-th service entity in user u_i's service entity chain, and l_i denotes the number of service entities contained in the chain L_i.
4. The method for placing chained service entities in edge computing according to claim 3, wherein: in the delay model, the total delay of user u_i is:

T_i = T_c + T_l + T_t
wherein user u_i's service-entity computation delay T_c is:

T_c = Σ_{k=1}^{l_i} Σ_{j=1}^{N} x^{f(i,k)}_{s_j} · t^{e_{f(i,k)}}_{s_j}
wherein user u_i's queuing delay T_l is the time from when a service entity is put into the task queue until its task begins executing;
wherein user u_i's transmission delay T_t is:

T_t = d(u_i, s_{p(i,1)}) + Σ_{k=1}^{l_i−1} d(s_{p(i,k)}, s_{p(i,k+1)}) + d(s_{p(i,l_i)}, u_i)

where s_{p(i,k)} denotes the server on which e_{f(i,k)} is placed;
wherein t^{e_i}_{s_j} is the computation time of service entity e_i on server s_j;
wherein x^{f(i,k)}_{s_j} indicates whether user u_i's k-th service entity e_{f(i,k)} is placed on server s_j for computation: a value of 1 means yes, any other value means no; wherein f(i,j) is the subscript in E of the element corresponding to the j-th service entity in user u_i's service entity chain;
wherein d(s_i, s_j) is the transmission delay between servers s_i and s_j, and d(s_i, u_j) is the transmission delay between server s_i and user u_j.
5. The method for placing chained service entities in edge computing according to claim 3, wherein:
the objective function of the chained-service-entity placement problem in edge computing in step B is:

min (1/M) · Σ_{i=1}^{M} T_i
wherein T_i is the total delay of user u_i;
the constraint conditions include:
the sum of the service entity costs on all edge servers must not exceed C:

Σ_{j=1}^{N} Σ_{k=1}^{K} y^{e_k}_{s_j} · c^{e_k}_{s_j} ≤ C
and each edge server simultaneously computes at most Q service entities:

Σ_{j=1}^{M} z^{u_j}_{s_k} ≤ Q, for every server s_k ∈ S
wherein C is the upper limit on the sum of the costs over all edge servers; c^{e_k}_{s_j} denotes the cost of placing service entity e_k on server s_j;
wherein y^{e_k}_{s_j} indicates whether server s_j has the service entity module e_k loaded: a value of 1 means yes, any other value means no;
wherein Q is the maximum number of service entities each edge server computes simultaneously; z^{u_j}_{s_k} indicates whether user u_j has a service entity computing on server s_k at the same moment: a value of 1 means yes, any other value means no.
6. The method for placing chained service entities in edge computing according to claim 1, wherein: in the heuristic algorithm of step B, the servers and users in the edge network are clustered with the K-Means clustering algorithm according to their geographical positions, and the processing of the servers and users within each cluster comprises the following steps:
(31) abstracting all users that have service entities to be executed along the timeline into a task-line list, Taskline, and adding all users to the Taskline list in random order at initialization;
(32) taking users out of the Taskline list cyclically from head to tail; for each user, assuming its next service entity is placed on each edge server that has the next service-entity module loaded, and selecting the server that adds the shortest response time for the user, thereby obtaining the placement of that service entity and the response time added for the user; updating the state of the user and the state of the selected server at the same time; the loop stops when no user remains in the Taskline list;
(33) putting the placing schemes of the service entity chains of all the users into a placing set, and obtaining and returning the placing schemes of the chain service entities.
7. The method for placing chained service entities in edge computing according to claim 6, wherein updating the state of the user and the state of the selected server in step (32) comprises: if the user has no further service entity to execute, deleting the user from the Taskline list; if service entities remain, first calculating the time required for the current service entity to finish computing, and reinserting the user into the Taskline list in ascending order of that time.
8. The method for placing chained service entities in edge computing according to claim 6, wherein: the step (33) further comprises calculating the average response time of all users according to the response time of each user and returning the average response time to output.
9. The method for placing chained service entities in edge computing according to claim 1, wherein the time complexity of the heuristic algorithm is O(|S|·|U|), where S is the set of edge servers and U is the set of users.
10. An edge computing device, comprising:
a processor; and
a memory storing computer-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 9.
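As an informal illustration of the claimed heuristic (claims 1 and 6-8), and not part of the patent text, the procedure — K-Means clustering of geographic positions followed by the Taskline-driven greedy loop in each cluster — can be sketched in Python. All function names, the tuple layout of the Taskline, and the toy `demo_place_one` cost model are assumptions made for this sketch.

```python
import bisect
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's K-Means over 2-D (x, y) positions, standing in for
    step B's geographic clustering of servers and users."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [(sum(x for x, _ in c) / len(c),
                    sum(y for _, y in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def place_all(taskline, place_one):
    """Per-cluster main loop (steps (31)-(33)): pop the user whose current
    entity finishes soonest, place its next entity greedily via `place_one`,
    then drop the user (chain finished) or reinsert it in sorted order.
    Returns each user's accumulated response time."""
    response = {}
    while taskline:
        _, user, todo = taskline.pop(0)
        added, todo = place_one(user, todo)      # greedy server choice
        response[user] = response.get(user, 0) + added
        if todo:                                 # entities left in the chain
            bisect.insort(taskline, (added, user, todo))
    return response

# Toy stand-in for the greedy server choice: each entity adds unit delay.
def demo_place_one(user, todo):
    return 1, todo[1:]

times = place_all([(0, "u1", ["e1", "e2"]), (0, "u2", ["e3"])],
                  demo_place_one)
avg = sum(times.values()) / len(times)   # average response time over users
```

Combining the per-cluster outputs, as in step (33), yields the final placement set and the average response time; the whole loop touches each user-entity pair once per server scan, consistent with the O(|S|·|U|) bound of claim 9.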
CN201911204131.0A 2019-11-29 2019-11-29 Method for placing chain type service entity in edge computing and edge computing equipment Active CN110968920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911204131.0A CN110968920B (en) 2019-11-29 2019-11-29 Method for placing chain type service entity in edge computing and edge computing equipment


Publications (2)

Publication Number Publication Date
CN110968920A true CN110968920A (en) 2020-04-07
CN110968920B CN110968920B (en) 2022-06-14


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153145A (en) * 2020-09-26 2020-12-29 江苏方天电力技术有限公司 Method and device for unloading calculation tasks facing Internet of vehicles in 5G edge environment
CN112153147A (en) * 2020-09-25 2020-12-29 南京大学 Method for placing chained service entities based on entity sharing in mobile edge environment
CN112202603A (en) * 2020-09-25 2021-01-08 南京大学 Interactive service entity placement method in edge environment
CN113301151A (en) * 2021-05-24 2021-08-24 南京大学 Low-delay containerized task deployment method and device based on cloud edge cooperation
CN113572848A (en) * 2020-08-18 2021-10-29 北京航空航天大学 Online service placement method with data refreshing based on value space estimation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595079A (en) * 2018-04-23 2018-09-28 国家电网公司 A kind of project of transmitting and converting electricity surveying method shared based on multiple terminals cloud platform
CN110290011A (en) * 2019-07-03 2019-09-27 中山大学 Dynamic Service laying method based on Lyapunov control optimization in edge calculations
CN110418353A (en) * 2019-07-25 2019-11-05 南京邮电大学 A kind of edge calculations server laying method based on particle swarm algorithm





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant