CN113961266A - Task unloading method based on bilateral matching under edge cloud cooperation - Google Patents

Task unloading method based on bilateral matching under edge cloud cooperation

Info

Publication number
CN113961266A
Authority
CN
China
Prior art keywords
server
user
task
unloading
matching
Prior art date
Legal status
Granted
Application number
CN202111195259.2A
Other languages
Chinese (zh)
Other versions
CN113961266B (en)
Inventor
田淑娟
丁文健
朱江
黄凌翔
裴瑞宏
刘新杰
Current Assignee
Xiangtan University
Original Assignee
Xiangtan University
Priority date
Filing date
Publication date
Application filed by Xiangtan University
Priority to CN202111195259.2A
Publication of CN113961266A
Application granted
Publication of CN113961266B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44594 Unloading
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

A task offloading method based on bilateral matching under edge-cloud collaboration comprises the following steps: 1) acquiring parameters of the edge servers; 2) acquiring parameters in the task offloading request; 3) calculating the time for offloading the task to an edge server and the time for offloading the task to the cloud server; 4) constructing a satisfaction function of the tasks and the servers, and performing initial offloading matching; 5) performing optimal-satisfaction bilateral matching between the tasks and all edge servers and the cloud server. The satisfaction function value combines the offloading delay and the server's price according to user-specified weights. The invention considers the situation in which a server offloads and processes several tasks at the same time, and prices the user tasks through a reasonable price game; this effectively improves the computing efficiency of the edge/cloud servers on offloaded tasks and effectively improves the quality of service of the system.

Description

Task offloading method based on bilateral matching under edge-cloud collaboration
Technical Field
The invention relates to a task offloading method, and in particular to a task offloading method based on bilateral matching under edge-cloud collaboration.
Background
The cloud is widely used because of its powerful computing and storage capabilities. However, network bandwidth is limited, and offloading large amounts of data to the cloud center can congest or even break down the backhaul links. The cloud center is also far from mobile user terminals, and for delay-sensitive tasks the delay of long-distance transmission to the cloud center is unacceptable to users. Mobile edge computing was proposed to solve these problems: it extends the capabilities of the cloud to the edge of the network, providing services close to the data source and faster responses to the network. A user can choose its own offloading policy for a task, but when multiple users simultaneously choose to offload their tasks to the same edge server, the user tasks again suffer high delay. Therefore, under the current "end-edge-cloud" collaboration paradigm, how a user can offload a task to a server in a way that both meets the user's delay limit and maximizes the satisfaction of the user and of the edge/cloud servers is a key and difficult problem.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a multi-task offloading method based on bilateral matching and game-based pricing. The method can price users reasonably, improve the computing efficiency of the servers on offloaded tasks, and effectively improve the satisfaction of both the users and the edge/cloud servers.
To solve the above technical problem, the invention provides the following technical solution:
A task offloading method based on bilateral matching under edge-cloud collaboration comprises the following steps:
1) acquiring parameters of the edge servers, including the clock frequency of each edge server, its total channel bandwidth, the channel bandwidth it allocates to a user, and the data rate at which a task is offloaded to it; the total channel bandwidth of each edge server is constant, while the channel bandwidth allocated to a user is variable;
2) acquiring the parameters in the task offloading request, including the size of the task data, the maximum allowable delay for completing the task, the transmission power of user m, and user m's preference information for time and price; the preference information consists of user m's weight for delay and user m's weight for price;
3) calculating the time for offloading the task to an edge server and the time for offloading the task to the cloud server;
3.1) calculating the total time for offloading the task to an edge server;
3.2) calculating the total time for offloading the task to the cloud server;
4) constructing a satisfaction function of the tasks and the servers, and performing initial offloading matching;
4.1) performing initial offloading matching for the task according to the satisfaction function value;
the satisfaction function value is a weighted combination of the delay of offloading the user's task to the server and the minimum price at which the server accepts the task, where the weight of the delay and the weight of the price in the user satisfaction function are determined by each user individually and therefore differ from user to user (an illustrative sketch of one possible form is given after this list of steps);
4.2) the user sorts the servers according to the satisfaction function values and sends an offloading request to the server with the highest current value to initialize the matching; that is, the user makes its first selection of an edge server or the cloud server for the initial matching, during which no interference is present;
4.3) judging whether the server with the current highest satisfaction function value meets the user's requirements; if so, go to step 4.4), otherwise go to step 4.5); the user's requirements are that offloading the user's task to the current server satisfies the user's maximum delay constraint, i.e. the offloading delay does not exceed the maximum allowable delay for completing the task, and that the price the user pays to offload the task to the server can be accepted by the server, i.e. it is not lower than the server's minimum price;
4.4) the user and the server record the initial matching result;
4.5) the user selects the next server in its satisfaction ranking, sends it an offloading request, and returns to step 4.3);
5) performing optimal-satisfaction bilateral matching between the tasks and all edge servers and the cloud server.
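The publication renders the satisfaction function only as an embedded equation image, so its exact form is not recoverable from the text. As a concrete illustration, the sketch below assumes a simple weighted sum over normalized delay and price; the function name, parameter names and the normalization are assumptions, not the patent's formula.

```python
def satisfaction(delay_s: float, price: float,
                 delay_weight: float, price_weight: float,
                 max_delay_s: float, max_price: float) -> float:
    """Illustrative satisfaction score for one (user task, server) pair.

    Assumption: satisfaction decreases with both the offloading delay and the
    server's minimum acceptable price, weighted by user-chosen preferences
    (delay_weight + price_weight is taken to be 1 here). The patent states only
    that the value combines delay and price with per-user weights.
    """
    norm_delay = min(delay_s / max_delay_s, 1.0)   # 0 = instant, 1 = at the delay limit
    norm_price = min(price / max_price, 1.0)       # 0 = free, 1 = at the user's budget
    return delay_weight * (1.0 - norm_delay) + price_weight * (1.0 - norm_price)
```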
In the invention, the total time for offloading a task to an edge server comprises the transmission time and the execution time; the total time is the sum of the two. The transmission time is the task data size divided by the transmission rate, and the transmission rate is determined by the channel bandwidth allocated to user m by edge server n, the transmission power of user m, the channel gain of task m executed on edge server n, the background noise power N of the server, and the interference that the other tasks on edge server n impose on the current task m. The interference is accumulated over the other users a, using the binary offloading decision of user a, which indicates whether user a and user m are offloaded to the same edge server, together with the channel gain of task a offloaded to edge server n. The execution time is the total number of CPU cycles needed to complete the task divided by the clock frequency of edge server n.
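A minimal sketch of the edge-offloading delay described above, assuming the usual Shannon-capacity form for the transmission rate; since the patent's own equations appear only as images, the symbol names and the log2-based rate expression are assumptions consistent with the quantities listed (bandwidth, transmit power, channel gain, noise, interference).

```python
import math

def edge_offload_time(data_bits: float, cpu_cycles: float,
                      bandwidth_hz: float, tx_power_w: float,
                      channel_gain: float, noise_w: float,
                      interference_w: float, clock_hz: float) -> float:
    """Total time = transmission time + execution time on edge server n.

    Assumed rate model: r = b * log2(1 + p * g / (N + I)), where I is the
    aggregate interference from other tasks offloaded to the same server.
    """
    rate_bps = bandwidth_hz * math.log2(
        1.0 + tx_power_w * channel_gain / (noise_w + interference_w))
    t_transmit = data_bits / rate_bps      # upload the task data to the edge server
    t_execute = cpu_cycles / clock_hz      # run the task on the edge server's CPU
    return t_transmit + t_execute

def interference(others: list[tuple[int, float, float]], server_id: int) -> float:
    """Sum p_a * g_a over every other user a currently offloading to server n.

    `others` holds (chosen_server_id, tx_power_w, channel_gain) for each other user.
    """
    return sum(p * g for sid, p, g in others if sid == server_id)
```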
In the invention, the total time for offloading a task to the cloud server is the sum of the time for the task to be transmitted from the user to the edge node, the time for the task to be uploaded from the edge node to the cloud server through the core network, and the execution time on the cloud server. The first term is determined by the data transmission rate at which user m uploads the task to the cloud, the second by the transmission rate allocated to user m in the core-network-to-cloud-server stage, and the execution time by the clock frequency of the cloud server.
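The cloud path is described as three additive stages, which translates directly into the sketch below; the function and parameter names are illustrative.

```python
def cloud_offload_time(data_bits: float, cpu_cycles: float,
                       uplink_rate_bps: float, core_rate_bps: float,
                       cloud_clock_hz: float) -> float:
    """Three-stage delay for offloading to the cloud server:
    user -> edge node, edge node -> cloud over the core network, then execution."""
    t_to_edge = data_bits / uplink_rate_bps    # user uploads the task to the edge node
    t_core = data_bits / core_rate_bps         # edge node forwards it over the core network
    t_execute = cpu_cycles / cloud_clock_hz    # the cloud server runs the task
    return t_to_edge + t_core + t_execute
```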
In the invention, step 5) specifically comprises:
5.1) updating the bandwidth allocation of the edge servers according to the users' initial matching results; when k tasks are offloaded to an edge server, each task receives an equal share of that edge server's total bandwidth, i.e. each task is allocated W/k, where W is the edge server's total bandwidth;
5.2) updating the task offloading delay according to the bandwidth allocation obtained in step 5.1);
5.3) updating and re-sorting the users' satisfaction function values for all servers; each user selects the server with the highest current value and sends it an offloading request;
5.4) judging whether the server selected in step 5.3) already has matched objects; if it does, go to step 5.5), otherwise go to step 5.6);
5.5) all matched users connected to the server take part in the price game to obtain a suggested price for the current user;
if the user's price is higher than the suggested price, or the user accepts the suggested price and the suggested price is higher than the server's minimum price, the server accepts the task offloading matching request; otherwise the server rejects the request;
after accepting the task offloading matching request the server recalculates the task offloading delay; if the user's maximum delay constraint is still met, the matching is kept, yielding the optimal-satisfaction bilateral matching result; otherwise, return to step 4) and match again;
5.6) if the user's price is higher than the server's minimum price, the server accepts the task offloading matching request; if the user's price is lower than the server's minimum price, the server feeds a suggested price back to the user, and the user recalculates and re-sorts its satisfaction with the current server using the suggested price;
if, after the user accepts the server's suggested price, this server is still the user's best server, the server accepts the task offloading matching request; otherwise the server rejects the matching request.
In the invention, the price game comprises the following steps:
5.5.1) constructing the satisfaction function matrix between the server and all of its matched objects;
5.5.2) calculating the mean of the satisfaction function values over all of the server's matched objects;
5.5.3) judging whether the server's satisfaction function value for the current user is higher than the mean obtained in the previous step; if it is, go to step 5.5.4); otherwise go to step 5.5.5);
5.5.4) the suggested price the server offers the user is the user's own price, i.e. the user's price is unchanged, and the server accepts the task offloading matching request;
5.5.5) the server's suggested price is the mean of the satisfaction function values of the remaining matched objects.
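Steps 5.5.1) to 5.5.5) can be sketched as follows. The mapping from the remaining objects' mean satisfaction to a concrete suggested price in the last branch is an assumption; the patent states only that the suggested price is taken from that mean.

```python
def price_game(server_satisfaction: dict[str, float], current_user: str,
               user_price: float) -> tuple[float, bool]:
    """Return (suggested_price, accepted_immediately) for the current user.

    server_satisfaction maps each matched user (including current_user) to the
    server's satisfaction value for that user, i.e. one row of the
    satisfaction matrix of step 5.5.1).
    """
    mean_all = sum(server_satisfaction.values()) / len(server_satisfaction)  # step 5.5.2)
    if server_satisfaction[current_user] > mean_all:                         # step 5.5.3)
        return user_price, True              # step 5.5.4): keep the user's own price
    others = [v for u, v in server_satisfaction.items() if u != current_user]
    mean_others = sum(others) / len(others) if others else mean_all
    # step 5.5.5): suggested price derived from the remaining objects' mean
    # satisfaction (assumed here to be used directly as the price level)
    return mean_others, False
```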
Compared with the prior art, the invention has the following advantages: it considers the situation in which a server offloads and processes several tasks at the same time, and prices the user tasks through a reasonable price game; this effectively improves the computing efficiency of the edge/cloud servers on offloaded tasks and effectively improves the quality of service of the system.
Drawings
Fig. 1 is a basic architecture diagram of the task offloading method based on bilateral matching under edge-cloud collaboration of the present invention.
Fig. 2 is a flowchart of the task offloading method based on bilateral matching under edge-cloud collaboration of the present invention.
Fig. 3 is a flowchart of steps 1) to 3) in embodiment 1.
Fig. 4 is a flowchart of a user initiating an offloading task in the present invention.
Fig. 5 is a flowchart of step 5) in embodiment 1.
Fig. 6 is a flowchart of the user price game in embodiment 1.
Fig. 7 is a flowchart of the delay check phase.
Fig. 8 is a diagram of the final allocated bandwidth and task offloading decisions.
Detailed Description
In order to facilitate an understanding of the present invention, the present invention will be described more fully and in detail with reference to the preferred embodiments, but the scope of the present invention is not limited to the specific embodiments described below.
It should be particularly noted that when an element is described as being "fixed to", "connected to" or "communicating with" another element, it can be directly fixed to, connected to or communicating with that element, or indirectly so through other intermediate connecting components.
Unless otherwise defined, all terms of art used hereinafter have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present invention.
Example 1
A task offloading method based on bilateral matching under edge-cloud collaboration, as shown in fig. 1 and fig. 2, comprises the following steps:
1) Obtaining the parameters of the edge servers and the cloud server. The parameters of each edge server n include its clock frequency, its total channel bandwidth, the channel bandwidth it allocates to a user, and the data rate at which a task is offloaded to it; the total channel bandwidth of each edge server is constant, while the channel bandwidth allocated to a user is variable. The parameters of the cloud server include the clock frequency of the cloud server, the cloud server's uplink bandwidth, the channel gain of the cloud server, the data transmission rate at which user m uploads a task to the cloud, and the transmission rate allocated to user m in the core-network-to-cloud-server stage.
2) Obtaining the parameters in the task offloading request, including the size of the task data (in bits), the number of CPU (central processing unit) cycles required to process the task, the maximum allowable delay for completing the task, the transmission power of user m, and user m's preference information for time and price; the preference information consists of user m's weight for delay and user m's weight for price.
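The parameters gathered in steps 1) and 2) can be grouped into simple containers for the later computations; the class and field names below are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class EdgeServer:
    clock_hz: float            # CPU clock frequency of edge server n
    total_bandwidth_hz: float  # fixed total channel bandwidth W
    min_price: float           # lowest price at which the server accepts a task

@dataclass
class CloudServer:
    clock_hz: float            # CPU clock frequency of the cloud server
    uplink_bandwidth_hz: float # cloud uplink bandwidth
    channel_gain: float        # channel gain of the cloud server

@dataclass
class TaskRequest:
    data_bits: float           # task data size
    cpu_cycles: float          # CPU cycles needed to process the task
    max_delay_s: float         # maximum allowable completion delay
    tx_power_w: float          # transmission power of user m
    delay_weight: float        # user preference weight for delay
    price_weight: float        # user preference weight for price
```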
3) Calculating the time for offloading the task to an edge server and the time for offloading the task to the cloud server.
3.1) The total time for offloading a task to an edge server comprises the transmission time and the execution time; the total time is the sum of the two. The transmission time is the task data size divided by the transmission rate, and the transmission rate is determined by the channel bandwidth allocated to user m by edge server n, the transmission power of user m, the channel gain of task m executed on edge server n, the background noise power N of the server, and the interference that the other tasks on edge server n impose on the current task m. The interference is accumulated over the other users a, using the binary offloading decision of user a, which indicates whether user a and user m are offloaded to the same edge server, together with the channel gain of task a offloaded to edge server n. The execution time is the total number of CPU cycles needed to complete the task divided by the clock frequency of edge server n.
3.2) Calculating the total time for offloading the task to the cloud server.
The total time for offloading a task to the cloud server is the sum of the time for the task to be transmitted from the user to the edge node, the time for the task to be uploaded from the edge node to the cloud server through the core network, and the execution time on the cloud server; the first term is determined by the data transmission rate at which user m uploads the task to the cloud, the second by the transmission rate allocated to user m in the core-network-to-cloud-server stage, and the execution time by the clock frequency of the cloud server. The flowchart of steps 1) to 3) is shown in fig. 3.
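As a toy usage example, a user can compare the two paths before matching by reusing the edge_offload_time and cloud_offload_time sketches given in the summary above; all numbers here are made up.

```python
task = dict(data_bits=2e6, cpu_cycles=5e8)  # 2 Mbit of data needing 0.5 Gcycles

t_edge = edge_offload_time(task["data_bits"], task["cpu_cycles"],
                           bandwidth_hz=5e6, tx_power_w=0.2,
                           channel_gain=1e-5, noise_w=1e-9,
                           interference_w=0.0,   # no interference before matching
                           clock_hz=4e9)
t_cloud = cloud_offload_time(task["data_bits"], task["cpu_cycles"],
                             uplink_rate_bps=8e6, core_rate_bps=50e6,
                             cloud_clock_hz=10e9)
print(f"edge: {t_edge:.3f} s, cloud: {t_cloud:.3f} s")
```

The resulting delay then feeds the satisfaction function together with each server's price.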
4) Constructing the satisfaction function of the tasks and the servers and performing the initial offloading matching; the specific steps are shown in fig. 4.
4.1) Performing initial offloading matching for the tasks according to the satisfaction function values.
The satisfaction function value is a weighted combination of the delay of offloading the user's task to the server and the minimum price at which the server accepts the task, where the weight of the delay and the weight of the price in the user satisfaction function are determined by each user individually and therefore differ from user to user.
4.2) The user sorts the servers by satisfaction function value; any sorting method such as bubble sort, insertion sort or quick sort can be used. The server with the largest satisfaction value is placed first in the sequence, the second largest second, and so on. The user sends an offloading request to the server with the highest value to initialize the matching; that is, the user makes its first choice of an edge server or the cloud server for the initial matching, during which there is no interference because no other tasks are yet present on the edge servers or the cloud server.
4.3) Judging whether the server with the current highest satisfaction function value meets the user's requirements; if so, go to step 4.4), otherwise go to step 4.5). The user's requirements are that offloading the user's task to the current server satisfies the user's maximum delay constraint, i.e. the offloading delay does not exceed the maximum allowable delay for completing the task, and that the price the user pays to offload the task to the server can be accepted by the server, i.e. it is not lower than the server's minimum price.
4.4) The user and the server record the initial matching result.
4.5) The user selects the next server in its satisfaction ranking, sends it an offloading request, and returns to step 4.3).
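Steps 4.1) to 4.5) amount to a first matching round in which each user walks down its own satisfaction ranking until a server meets both the delay bound and the price condition. A sketch under the same illustrative assumptions as before (the .offer_price and .min_price attributes are assumed names):

```python
def initial_matching(users, servers, delay_fn, satisfaction_fn):
    """First-round matching without interference (steps 4.1 to 4.5).

    users: objects with .max_delay_s and .offer_price
    servers: objects with .min_price
    delay_fn(user, server) -> offloading delay; satisfaction_fn(user, server) -> value
    Returns {user: server} for every user that found an acceptable server.
    """
    matching = {}
    for user in users:
        ranked = sorted(servers, key=lambda s: satisfaction_fn(user, s), reverse=True)
        for server in ranked:                              # step 4.2): best server first
            meets_delay = delay_fn(user, server) <= user.max_delay_s
            price_ok = user.offer_price >= server.min_price  # step 4.3)
            if meets_delay and price_ok:
                matching[user] = server                    # step 4.4): record the match
                break                                      # else step 4.5): try the next server
    return matching
```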
5) Performing optimal-satisfaction bilateral matching between the tasks and all edge servers and the cloud server, i.e. the optimal bilateral matching when interference is taken into account, as shown in fig. 5.
5.1) Updating the bandwidth allocation of the edge servers according to the users' initial matching results; when k tasks are offloaded to an edge server, each task receives an equal share of that edge server's bandwidth, i.e. each task is allocated W/k, where W is the total bandwidth of the edge server.
5.2) Updating the task offloading delay according to the bandwidth allocation obtained in step 5.1).
5.3) Updating and re-sorting the users' satisfaction function values for all servers; each user selects the server with the highest current value and sends it an offloading request.
5.4) Judging whether the server selected in step 5.3) already has matched objects; if it does, go to step 5.5), otherwise go to step 5.6).
5.5) All matched users connected to the server take part in a price game, which yields a suggested price for the current user.
If the user's price is higher than the suggested price, or the user accepts the suggested price, the server accepts the task offloading matching request; otherwise the server rejects the request.
As shown in fig. 7, after accepting the task offloading matching request the server recalculates the task offloading delay; if the user's maximum delay constraint is still met the matching is kept, yielding the optimal-satisfaction bilateral matching result; otherwise, return to step 4) and match again.
5.6) If the user's price is higher than the server's minimum price, the server accepts the task offloading matching request; if the user's price is lower than the server's minimum price, the server feeds a suggested price back to the user, and the user recalculates and re-sorts its satisfaction with the current server using the suggested price.
If, after the user accepts the server's suggested price, this server is still the user's best server, the server accepts the task offloading matching request, yielding the optimal-satisfaction bilateral matching result; otherwise the server rejects the matching request.
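The refinement of step 5) repeatedly re-divides each edge server's bandwidth among its current tasks and re-checks the delay bound. The loop below is a compressed sketch that omits the price game and uses the same illustrative attribute names as earlier, with the unmatch branch standing in for "return to step 4)".

```python
def refine_matching(matching, edge_servers, delay_fn, max_rounds=10):
    """Steps 5.1) and 5.2) plus the delay re-check of fig. 7, as a sketch.

    matching: {user: server}; delay_fn(user, server, bandwidth_hz) -> delay.
    Users whose delay bound breaks under the shared bandwidth are unmatched
    again, standing in for the 'return to step 4)' branch.
    """
    for _ in range(max_rounds):
        changed = False
        for server in edge_servers:
            tasks = [u for u, s in matching.items() if s is server]
            if not tasks:
                continue
            share = server.total_bandwidth_hz / len(tasks)   # step 5.1): W / k
            for user in tasks:
                if delay_fn(user, server, share) > user.max_delay_s:  # step 5.2) re-check
                    del matching[user]        # send this user back to re-matching
                    changed = True
        if not changed:
            break
    return matching
```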
6) Finally, the bandwidth ultimately allocated to each user by each server is calculated from the optimal-satisfaction bilateral matching result and each user's final offloading decision is determined, as shown in fig. 8. This comprises steps 6.1) to 6.4):
6.1) Obtaining, through the preceding steps, an optimal-satisfaction bilateral matching result that satisfies the delay requirements.
6.2) Counting, from the matching result, the total number of tasks offloaded to each edge server.
6.3) The tasks on the same edge server equally share that edge server's total bandwidth; the bandwidth allocated to each task is calculated, and the offloading decisions of these tasks are updated to the number of that edge server.
6.4) Allocating bandwidth to the tasks offloaded to the cloud server according to their task sizes, and updating their offloading decisions to the cloud server.
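Steps 6.1) to 6.4) reduce to counting the tasks matched to each edge server, splitting that server's bandwidth evenly, and recording the offloading decisions; a sketch with illustrative names (the size-proportional split of the cloud uplink is an assumed reading of step 6.4):

```python
def finalize_allocation(matching, edge_servers, cloud_server):
    """Compute each user's final bandwidth and offloading decision (step 6).

    Returns {user: (decision, bandwidth_hz)}, where decision is the edge
    server's number or the string "cloud".
    """
    result = {}
    for n, server in enumerate(edge_servers):
        tasks = [u for u, s in matching.items() if s is server]
        if not tasks:
            continue
        share = server.total_bandwidth_hz / len(tasks)   # equal split, step 6.3)
        for user in tasks:
            result[user] = (n, share)
    cloud_tasks = [u for u, s in matching.items() if s is cloud_server]
    total_bits = sum(u.data_bits for u in cloud_tasks) or 1.0
    for user in cloud_tasks:
        # step 6.4): cloud bandwidth split in proportion to task size (assumed)
        result[user] = ("cloud",
                        cloud_server.uplink_bandwidth_hz * user.data_bits / total_bits)
    return result
```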
In this embodiment, the price game is shown in fig. 6 and comprises the following steps:
5.5.1) Constructing the satisfaction function matrix between the server and all of its matched objects.
5.5.2) Calculating the mean of the satisfaction function values over all of the server's matched objects.
5.5.3) Judging whether the server's satisfaction function value for the current user is higher than the mean obtained in the previous step; if it is, go to step 5.5.4); otherwise go to step 5.5.5).
5.5.4) The suggested price the server offers the user is the user's own price, i.e. the user's price is unchanged, and the server accepts the task offloading matching request.
5.5.5) The server's suggested price is the mean of the satisfaction function values of the remaining matched objects.
The invention considers the situation in which a server offloads and processes several tasks at the same time, and prices the user tasks through a reasonable price game; this effectively improves the computing efficiency of the edge/cloud servers on offloaded tasks and effectively improves the quality of service of the system.

Claims (5)

1. A task offloading method based on bilateral matching under edge-cloud collaboration, characterized by comprising the following steps:
1) acquiring parameters of the edge servers, including the clock frequency of each edge server, its total channel bandwidth, the channel bandwidth it allocates to a user, and the data rate at which a task is offloaded to it; the total channel bandwidth of each edge server is constant, while the channel bandwidth allocated to a user is variable;
2) acquiring the parameters in the task offloading request, including the size of the task data, the maximum allowable delay for completing the task, the transmission power of user m, and user m's preference information for time and price; the preference information consists of user m's weight for delay and user m's weight for price;
3) calculating the time for offloading the task to an edge server and the time for offloading the task to the cloud server;
3.1) calculating the total time for offloading the task to an edge server;
3.2) calculating the total time for offloading the task to the cloud server;
4) constructing a satisfaction function of the tasks and the servers, and performing initial offloading matching;
4.1) performing initial offloading matching for the task according to the satisfaction function value;
the satisfaction function value is a weighted combination of the delay of offloading the user's task to the server and the minimum price at which the server accepts the task, weighted respectively by the delay weight and the price weight in the user satisfaction function;
4.2) the user sorts the servers according to the satisfaction function values and sends an offloading request to the server with the highest current value to initialize the matching; that is, the user makes its first selection of an edge server or the cloud server for the initial matching, during which no interference is present;
4.3) judging whether the server with the current highest satisfaction function value meets the user's requirements; if so, go to step 4.4), otherwise go to step 4.5); the user's requirements are that offloading the user's task to the current server satisfies the user's maximum delay constraint, i.e. the offloading delay does not exceed the maximum allowable delay for completing the task, and that the price the user pays to offload the task to the server can be accepted by the server, i.e. it is not lower than the server's minimum price;
4.4) the user and the server record the initial matching result;
4.5) the user selects the next server in its satisfaction ranking, sends it an offloading request, and returns to step 4.3);
5) performing optimal-satisfaction bilateral matching between the tasks and all edge servers and the cloud server.
2. The task offloading method based on bilateral matching under edge-cloud collaboration according to claim 1, characterized in that: the total time for offloading to an edge server comprises the transmission time and the execution time, the total time being the sum of the two; the transmission time is the task data size divided by the transmission rate, and the transmission rate is determined by the channel bandwidth allocated to user m by edge server n, the transmission power of user m, the channel gain of task m executed on edge server n, the background noise power N of the server, and the interference that the other tasks on edge server n impose on the current task m; the interference is accumulated over the other users a, using the binary offloading decision of user a, which indicates whether user a and user m are offloaded to the same edge server, together with the channel gain of task a offloaded to edge server n; the execution time is the total number of CPU cycles needed to complete the task divided by the clock frequency of edge server n.
3. The task offloading method based on bilateral matching under edge-cloud collaboration according to claim 1, characterized in that: the total time for offloading the task to the cloud server is the sum of the time for the task to be transmitted from the user to the edge node, the time for the task to be uploaded from the edge node to the cloud server through the core network, and the execution time on the cloud server; the first term is determined by the data transmission rate at which user m uploads the task to the cloud, the second by the transmission rate allocated to user m in the core-network-to-cloud-server stage, and the execution time by the clock frequency of the cloud server.
4. The task offloading method based on bilateral matching under edge-cloud collaboration according to claim 1, characterized in that step 5) specifically comprises:
5.1) updating the bandwidth allocation of the edge servers according to the users' initial matching results; when k tasks are offloaded to an edge server, each task receives an equal share of that edge server's total bandwidth, i.e. each task is allocated W/k, where W is the edge server's total bandwidth;
5.2) updating the task offloading delay according to the bandwidth allocation obtained in step 5.1);
5.3) updating and re-sorting the users' satisfaction function values for all servers; each user selects the server with the highest current value and sends it an offloading request;
5.4) judging whether the server selected in step 5.3) already has matched objects; if it does, go to step 5.5), otherwise go to step 5.6);
5.5) all matched users connected to the server take part in the price game to obtain a suggested price for the current user;
if the user's price is higher than the suggested price, or the user accepts the suggested price and the suggested price is higher than the server's minimum price, the server accepts the task offloading matching request; otherwise the server rejects the request;
after accepting the task offloading matching request the server recalculates the task offloading delay; if the user's maximum delay constraint is still met, the matching is kept, yielding the optimal-satisfaction bilateral matching result; otherwise, return to step 4) and match again;
5.6) if the user's price is higher than the server's minimum price, the server accepts the task offloading matching request; if the user's price is lower than the server's minimum price, the server feeds a suggested price back to the user, and the user recalculates and re-sorts its satisfaction with the current server using the suggested price;
if, after the user accepts the server's suggested price, this server is still the user's best server, the server accepts the task offloading matching request; otherwise the server rejects the matching request.
5. The task offloading method based on bilateral matching under edge-cloud collaboration according to claim 4, characterized in that the price game comprises the following steps:
5.5.1) constructing the satisfaction function matrix between the server and all of its matched objects;
5.5.2) calculating the mean of the satisfaction function values over all of the server's matched objects;
5.5.3) judging whether the server's satisfaction function value for the current user is higher than the mean obtained in the previous step; if it is, go to step 5.5.4); otherwise go to step 5.5.5);
5.5.4) the suggested price the server offers the user is the user's own price, i.e. the user's price is unchanged, and the server accepts the task offloading matching request;
5.5.5) the server's suggested price is the mean of the satisfaction function values of the remaining matched objects.
CN202111195259.2A 2021-10-14 2021-10-14 Task unloading method based on bilateral matching under edge cloud cooperation Active CN113961266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111195259.2A CN113961266B (en) 2021-10-14 2021-10-14 Task unloading method based on bilateral matching under edge cloud cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111195259.2A CN113961266B (en) 2021-10-14 2021-10-14 Task unloading method based on bilateral matching under edge cloud cooperation

Publications (2)

Publication Number Publication Date
CN113961266A true CN113961266A (en) 2022-01-21
CN113961266B CN113961266B (en) 2023-08-22

Family

ID=79463842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111195259.2A Active CN113961266B (en) 2021-10-14 2021-10-14 Task unloading method based on bilateral matching under edge cloud cooperation

Country Status (1)

Country Link
CN (1) CN113961266B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116155618A (en) * 2023-04-04 2023-05-23 天云融创数据科技(北京)有限公司 Data maintenance method and system based on big data and artificial intelligence
CN116208669A (en) * 2023-04-28 2023-06-02 湖南大学 Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2919438A1 (en) * 2014-03-10 2015-09-16 Deutsche Telekom AG Method and system to estimate user desired delay for resource allocation for mobile-cloud applications
CN109992419A (en) * 2019-03-29 2019-07-09 长沙理工大学 A kind of collaboration edge calculations low latency task distribution discharging method of optimization
CN111182570A (en) * 2020-01-08 2020-05-19 北京邮电大学 User association and edge computing unloading method for improving utility of operator
CN111585916A (en) * 2019-12-26 2020-08-25 国网辽宁省电力有限公司电力科学研究院 LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation
CN111913723A (en) * 2020-06-15 2020-11-10 合肥工业大学 Cloud-edge-end cooperative unloading method and system based on assembly line
CN113163006A (en) * 2021-04-16 2021-07-23 三峡大学 Task unloading method and system based on cloud-edge collaborative computing
US20210266834A1 (en) * 2020-02-25 2021-08-26 South China University Of Technology METHOD OF MULTI-ACCESS EDGE COMPUTING TASK OFFLOADING BASED ON D2D IN INTERNET OF VEHICLES (IoV) ENVIRONMENT

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2919438A1 (en) * 2014-03-10 2015-09-16 Deutsche Telekom AG Method and system to estimate user desired delay for resource allocation for mobile-cloud applications
CN109992419A (en) * 2019-03-29 2019-07-09 长沙理工大学 A kind of collaboration edge calculations low latency task distribution discharging method of optimization
CN111585916A (en) * 2019-12-26 2020-08-25 国网辽宁省电力有限公司电力科学研究院 LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation
CN111182570A (en) * 2020-01-08 2020-05-19 北京邮电大学 User association and edge computing unloading method for improving utility of operator
US20210266834A1 (en) * 2020-02-25 2021-08-26 South China University Of Technology METHOD OF MULTI-ACCESS EDGE COMPUTING TASK OFFLOADING BASED ON D2D IN INTERNET OF VEHICLES (IoV) ENVIRONMENT
CN111913723A (en) * 2020-06-15 2020-11-10 合肥工业大学 Cloud-edge-end cooperative unloading method and system based on assembly line
CN113163006A (en) * 2021-04-16 2021-07-23 三峡大学 Task unloading method and system based on cloud-edge collaborative computing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHUJUAN TIAN et al.: "A dynamic task offloading algorithm based on greedy matching in vehicle network", Elsevier *
ZHANG Haibo; LUAN Qiuji; ZHU Jiang; HE Xiaofan: "V2X task offloading scheme based on mobile edge computing", Journal of Electronics & Information Technology, no. 11

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116155618A (en) * 2023-04-04 2023-05-23 天云融创数据科技(北京)有限公司 Data maintenance method and system based on big data and artificial intelligence
CN116155618B (en) * 2023-04-04 2023-06-23 天云融创数据科技(北京)有限公司 Data maintenance method and system based on big data and artificial intelligence
CN116208669A (en) * 2023-04-28 2023-06-02 湖南大学 Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system
CN116208669B (en) * 2023-04-28 2023-06-30 湖南大学 Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system

Also Published As

Publication number Publication date
CN113961266B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN112492626A (en) Method for unloading computing task of mobile user
Yang et al. Efficient resource allocation for mobile-edge computing networks with NOMA: Completion time and energy minimization
CN113961266A (en) Task unloading method based on bilateral matching under edge cloud cooperation
CN111240701A (en) Task unloading optimization method for end-edge-cloud collaborative computing
CN110098969B (en) Fog computing task unloading method for Internet of things
CN112888002B (en) Game theory-based mobile edge computing task unloading and resource allocation method
CN112689303B (en) Edge cloud cooperative resource joint allocation method, system and application
CN110489176B (en) Multi-access edge computing task unloading method based on boxing problem
Dao et al. SGCO: Stabilized green crosshaul orchestration for dense IoT offloading services
CN113220356B (en) User computing task unloading method in mobile edge computing
CN108924254B (en) User-centered distributed multi-user computing task unloading method
CN111800812B (en) Design method of user access scheme applied to mobile edge computing network of non-orthogonal multiple access
CN110647403A (en) Cloud computing resource allocation method in multi-user MEC system
CN110740473A (en) management method for mobile edge calculation and edge server
CN113687876B (en) Information processing method, automatic driving control method and electronic device
CN112084034B (en) MCT scheduling method based on edge platform layer adjustment coefficient
CN113938394A (en) Monitoring service bandwidth allocation method and device, electronic equipment and storage medium
CN111580943B (en) Task scheduling method for multi-hop unloading in low-delay edge calculation
CN111542091B (en) Wireless and computing resource joint allocation method for network slice
CN113365290A (en) Greedy strategy-based game theory calculation unloading method in world fusion network
CN110190982B (en) Non-orthogonal multiple access edge computation time and energy consumption optimization based on fair time
CN111343238A (en) Method for realizing joint calculation and bandwidth resource allocation in mobile edge calculation
CN114880046B (en) Low-orbit satellite edge computing and unloading method combining unloading decision and bandwidth allocation
CN114268994A (en) Price-based distributed unloading method and device for mobile edge computing network
CN113784372A (en) Joint optimization method for terminal multi-service model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant