CN111611063B - Cloud-aware mobile fog computing system task unloading method based on 802.11p - Google Patents

Cloud-aware mobile fog computing system task unloading method based on 802.11p

Info

Publication number
CN111611063B
Authority
CN
China
Prior art keywords
vehicle
task
tasks
computing
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010458226.1A
Other languages
Chinese (zh)
Other versions
CN111611063A (en)
Inventor
吴琼
葛红梅
倪渊之
武贵路
夏思洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN202010458226.1A
Publication of CN111611063A
Application granted
Publication of CN111611063B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an 802.11p-based task unloading method for a cloud-aware mobile fog computing system, which fully considers the characteristics of a mobile fog computing system, matches real-world scenarios, has low computational complexity, reduces computation time, and thereby improves computing efficiency. In the technical scheme of the invention, considering that tasks transmitted through the different-priority access queues of 802.11p have different computing requirements, a task unloading model based on a semi-Markov decision process is established to represent the task unloading process in terms of the defined states, actions, rewards and transition probabilities; a value iteration algorithm is used to solve the Bellman equation and obtain the optimal action in each state, i.e. the optimal unloading policy.

Description

Cloud-aware mobile fog computing system task unloading method based on 802.11p
Technical Field
The invention relates to the technical field of vehicle-mounted networks, and in particular to an 802.11p-based task unloading method for a cloud-aware mobile fog computing system.
Background
In application fields such as unmanned driving and intelligent traffic-safety early warning, a large number of computation-intensive delay-sensitive tasks and non-delay-sensitive tasks need to be processed. However, current vehicle-mounted computers have limited computing power and cannot meet such a large amount of intensive computation, so task unloading techniques were introduced: when a task arrives, the target vehicle unloads it to several nearby vehicles for joint processing. To achieve this goal, vehicle-mounted fog technology was introduced. Vehicle-mounted fog refers to vehicles processing tasks cooperatively; each vehicle can both issue computing tasks and process them. Because tasks are processed close to the requesting vehicle, vehicle-mounted fog technology can well meet the computing requirements of delay-sensitive tasks. However, as the volume of computing tasks generated by vehicle applications keeps increasing, the computing power of a vehicle-mounted fog computing system alone cannot fully meet such massive and intensive task-processing demands, so a remote cloud is required. The remote cloud has powerful computing capabilities, and it has been proposed to unload the computing tasks generated by vehicles to the remote cloud and return the computation results to the requesting vehicle. However, the distance between the remote cloud and the requesting vehicle is large, uploading tasks consumes a large amount of energy and link resources, and uploading all computing tasks to the remote cloud indiscriminately would cause excessive cost. A mobile fog computing framework has therefore been proposed that combines vehicle-mounted fog technology with the remote cloud, unloading the tasks generated by vehicle-mounted applications to a mobile fog computing system in order to handle large numbers of computation-intensive delay-sensitive and non-delay-sensitive tasks.
In an 802.11p-based vehicle-mounted network, the tasks transmitted over 802.11p can be divided into low-priority and high-priority tasks; tasks transmitted through access queues of different priorities have different computing requirements and different delay requirements. High-priority tasks require a higher quality of service and can therefore be unloaded either to the vehicle-mounted fog or to the remote cloud for computation; low-priority tasks have lower quality-of-service requirements and are either unloaded to the vehicle-mounted fog or rejected by the computing system. Existing research based on the mobile fog computing framework does not fully consider the heterogeneity of the computing demands of tasks with different priorities or the variability of computing resources, so energy and link resources are allocated unreasonably during computation, task queuing times become too long, and the problems of excessive computation time and low working efficiency arise.
Disclosure of Invention
In order to solve the problems in existing research based on the mobile fog computing framework, namely that the computing method and results do not meet actual requirements, computation time is too long and working efficiency is low, the invention provides an 802.11p-based task unloading method for a cloud-aware mobile fog computing system; the characteristics of the mobile fog computing system are fully considered, the computing method matches real-world scenarios, the computational complexity is low, computation time is reduced, and computing efficiency is improved.
The technical scheme of the invention is as follows: the 802.11p-based task unloading method for a cloud-aware mobile fog computing system comprises the following steps:
the method is characterized in that:
S1: defining a state set X of the system;
the state set is represented as:
X = {x | x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, e)}
where K is the total number of computing units of the system in the current state, s_{i,j} is the number of priority-i tasks being processed by j computing units, N is the maximum number of computing units that can be allocated to one task, and e is the specific event;
S2: defining the action set A_c of the system;
the action set of the system is represented as:
A_c = {0, 1, 2, …, N} when e = A_1; A_c = {-1, 1, 2, …, N} when e = A_2; A_c = {-1} when e ∈ {D_{i,j}, F_{+1}, F_{-1}}
where D_{i,j} indicates that a priority-i task processed by j computing units is complete and leaves the system; F_{+1} indicates that a vehicle arrives at the system; F_{-1} indicates that a vehicle leaves the system; A_i (i = 1, 2) indicates that a vehicle in the system makes a priority-i task request; the action 0 indicates that the system uploads the task to the remote cloud, -1 indicates that the system takes no action, and j indicates that the system allocates j computing units to process the task;
S3: defining a reward model r(x, a) of the system;
the reward includes an immediate reward and a cost, expressed as:
r(x, a) = h(x, a) - g(x, a)
where r(x, a) is the long-term reward of the system, h(x, a) is the immediate reward of the system, and g(x, a) is the overhead of the system until the next decision;
S4: defining the state transition probability P(k | x, a) of the system; P(k | x, a) is the probability of transitioning to state k after taking action a in state x;
S5: normalizing the reward r(x, a), the transition probability P(k | x, a) and the discount rate to obtain a Bellman equation:
the long-term reward after normalization is denoted r̃(x, a), the discount factor after normalization is denoted α̃, and the normalized transition probability is denoted p̃(k | x, a); the resulting Bellman equation is
v(x) = max_{a ∈ A_c} { r̃(x, a) + α̃ Σ_{k ∈ X} p̃(k | x, a) v(k) };
S6: solving the optimal task unloading scheme by a value iteration method according to the Bellman equation.
It is further characterized in that:
the expression for the immediate reward h(x, a) of the system is given piecewise (reproduced as an image in the original publication); in it, η is the time-saving price, T is the time the task-generating vehicle would need to process the task on its own, D_1 is the transmission time between the vehicle-mounted fog and the remote cloud, Ts_i is the transmission time required for the requesting vehicle to transmit a priority-i task to the vehicle-mounted fog, D_t(j) is the time needed by j computing units to process the task, φ is the penalty for the system rejecting a low-priority task, and ξ is the penalty incurred when a busy vehicle leaves the system;
the overhead g(x, a) incurred by the system until the next decision is expressed as:
g(x, a) = c(x, a) / (α + β(x, a))
where β(x, a) is the average event rate after taking action a in state x, α is the continuous-time discount factor, and c(x, a) is the overhead rate;
the explicit expression for the overhead rate c(x, a) is reproduced as an image in the original publication;
depending on the type of event currently occurring, the state transition probability P(k | x, a) is given case by case (the explicit expressions are reproduced as images in the original publication):
1) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, A_1), i.e. a high-priority task arrives at the system;
2) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, A_2), a = j, i.e. a low-priority task arrives at the system;
3) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, D_{i,j}), i = 1, 2; j = 1, 2, …, N; a = -1, i.e. a priority-i task processed by j computing units completes and leaves the system;
4) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, F_{+1}), a = -1, i.e. a vehicle arrives at the system;
5) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, F_{-1}), a = -1, i.e. a vehicle leaves the system;
where λ_i (i = 1, 2) is the arrival rate of priority-i tasks at the system, μ_t is the service rate of a computing unit processing a task, λ_v is the arrival rate of vehicles, and μ_v is the departure rate of vehicles;
the average event rate β(x, a) is likewise given under the different events and actions (its explicit expression is reproduced as an image in the original publication);
in step S5, the Bellman equation takes the form given above, with the normalized long-term reward r̃(x, a), the normalized discount factor α̃ and the normalized transition probability p̃(k | x, a) obtained from r(x, a), α and P(k | x, a) by the normalization; their explicit expressions are reproduced as images in the original publication.
the task unloading method of the cloud-aware mobile fog computing system based on 802.11p, provided by the invention, has the advantages that the computing requirements of tasks transmitted by access queues with different priorities of 802.11p are different, the task unloading process is represented by a task unloading model based on a semi-Markov decision process based on the defined state, action, reward and transition probability, a value iteration algorithm is used for solving a Bellman equation, and the optimal action, namely the optimal unloading strategy, in different states is obtained; in the technical scheme of the invention, the system characteristics of the mobile fog computing system, namely the isomerism of computing requirements of different tasks and the variability of computing resources are considered, the whole computing process is completely based on the real scene of the system, and the computing result is ensured to meet the real requirements; meanwhile, the strategy is calculated based on the semi-Markov decision model, so that the technical scheme of the invention has the advantages of simple calculation process, easy understanding, reduced calculation time and improved calculation efficiency; when the optimal task unloading scheme is solved, the method of the Bellman equation is adopted, the discount rate is considered, the strategy obtained based on the technical scheme of the invention is ensured not only to consider the current return but also to consider the future return, the technical scheme of the invention is further ensured to have long-term consideration, the obtained optimal unloading scheme accords with the real demand of vehicle-mounted network system calculation, and the scheme of the invention has higher practicability.
Drawings
FIG. 1 is a model block diagram of an 802.11p based cloud aware mobile fog computing system;
FIG. 2 is a schematic diagram showing the probability of each action of the system varying with the maximum number of vehicles in the system;
FIG. 3 is a comparison chart of the long-term average profit of the system under various schemes when the maximum number of vehicles in the system changes.
Detailed Description
As shown in fig. 1, locally moving vehicles constitute the vehicle-mounted fog 2; each vehicle can both issue task requests and process them, and the centralized server at a remote location is called the remote cloud 1. In the technical scheme of the invention, the whole system transmits tasks using the 802.11p EDCA (Enhanced Distributed Channel Access) mechanism, in which the AC_1 queue, whose access parameters are smaller, is the queue of high-priority tasks and is used to transmit delay-sensitive tasks, while the AC_2 queue is the queue of low-priority tasks and is used to transmit non-delay-sensitive tasks. The system model has the following features: each vehicle may issue a high-priority or low-priority task request while running vehicle-mounted applications and leaves the system when its computation request is completed; some vehicles enter the computing system, such as vehicle C7, and some vehicles leave the system, such as vehicle C5. The system works as follows: when a vehicle generates a high-priority task, the computing system may upload the task to the remote cloud 1 for processing or unload it to the vehicle-mounted fog 2 for processing; if it is processed in the vehicle-mounted fog 2, the system must further determine how many computing units to allocate to the task. When a vehicle generates a low-priority task, the computing system can only unload the task to the vehicle-mounted fog 2 for processing or reject it. In fig. 1 the vehicle-mounted fog contains 6 vehicles in total: C1, C2, C3, C4, C5 and C6; C1 generates a high-priority task, and since computing resources in the vehicle-mounted fog 2 are relatively abundant, the system allocates two computing resources, C2 and C3, to process it.
In the technical scheme of the invention, the 802.11p-based task unloading method for a cloud-aware mobile fog computing system comprises the following steps.
S1: defining a state set X of the system; in the semi-Markov decision model, each state indicates the total number of computing units in the system, the number of tasks being processed by different numbers of computing units, and the event currently occurring;
the state set is represented as:
X = {x | x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, e)}
where K is the total number of computing units of the system in the current state, s_{i,j} is the number of priority-i tasks being processed by j computing units, N is the maximum number of computing units that can be allocated to one task, and e is the specific event.
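For readers who want to experiment with the model, the following is a minimal Python sketch of one possible in-memory representation of a state x = (K, s_{1,1}, …, s_{2,N}, e); the class name, field names and helper methods are illustrative assumptions and are not part of the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class State:
    """State x = (K, s_{1,1}, ..., s_{1,N}, ..., s_{2,N}, e) of the SMDP model."""
    K: int                                       # total number of computing units in the system
    s: Tuple[Tuple[int, ...], Tuple[int, ...]]   # s[i-1][j-1] = number of priority-i tasks served by j units
    event: str                                   # event e: "A1", "A2", "D", "F_ARRIVE" or "F_LEAVE"
    event_args: Tuple[int, ...] = ()             # e.g. (i, j) for a departure event D_{i,j}

    def busy_units(self) -> int:
        """Computing units currently occupied: sum over i, j of j * s_{i,j}."""
        return sum(j * n for row in self.s for j, n in enumerate(row, start=1))

    def idle_units(self) -> int:
        """Computing units still available for a new allocation."""
        return self.K - self.busy_units()
```

For example, with N = 3, a system of 6 units in which one high-priority task is served by two units and a high-priority request has just arrived would be written State(K=6, s=((0, 1, 0), (0, 0, 0)), event="A1"), giving busy_units() = 2 and idle_units() = 4.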
S2: defining the action set A_c of the system;
in the semi-Markov decision model, each decision of the system is an action chosen from the action set, and the action set of the system under different events is represented as:
A_c = {0, 1, 2, …, N} when e = A_1; A_c = {-1, 1, 2, …, N} when e = A_2; A_c = {-1} when e ∈ {D_{i,j}, F_{+1}, F_{-1}}
where D_{i,j} indicates that a priority-i task processed by j computing units is complete and leaves the system; F_{+1} indicates that a vehicle arrives at the system; F_{-1} indicates that a vehicle leaves the system; A_i (i = 1, 2) indicates that a vehicle in the system makes a priority-i task request; the action 0 indicates that the system uploads the task to the remote cloud, -1 indicates that the system takes no action, and j indicates that the system allocates j computing units to process the task;
that is, if the system decides to process a task in the vehicle-mounted fog 2, it must further determine how many computing units to allocate: at least one and at most N computing units are allocated to process the task, as illustrated by the sketch below.
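Continuing the State sketch above, the helper below enumerates the feasible actions in a given state, using the integer encoding from the description (0 = upload to the remote cloud, -1 = take no action or reject, j = allocate j computing units); the additional constraint that no more than the currently idle units can be allocated is an assumption made for the illustration.

```python
def feasible_actions(state: State, N: int) -> list:
    """Feasible actions in state x under the encoding used in the description."""
    free = min(N, state.idle_units())
    if state.event == "A1":                       # a high-priority task request arrives
        return [0] + list(range(1, free + 1))     # upload to cloud, or allocate 1..free units
    if state.event == "A2":                       # a low-priority task request arrives
        return [-1] + list(range(1, free + 1))    # reject, or allocate 1..free units
    return [-1]                                   # D_{i,j}, F_{+1}, F_{-1}: no decision to make
```

For the fig. 1 situation with two idle vehicles and N = 2, a high-priority request yields the actions [0, 1, 2] and a low-priority request yields [-1, 1, 2].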
S3: defining a reward model r(x, a) of the system; the benefit of the technical scheme of the invention mainly comes from saving task-processing time, and in order to obtain a larger long-term average benefit the system needs to find the optimal task unloading policy;
in the semi-Markov decision model, the system takes an action when a particular event occurs, and at that moment the system obtains a reward; the reward includes an immediate reward and a cost, expressed as:
r(x, a) = h(x, a) - g(x, a)
where r(x, a) is the long-term reward of the system, h(x, a) is the immediate reward of the system, and g(x, a) is the overhead incurred by the system until the next decision;
the expression for the immediate reward h(x, a) of the system is given piecewise (reproduced as an image in the original publication); in it, η is the time-saving price, T is the time the task-generating vehicle would need to process the task on its own, D_1 is the transmission time between the vehicle-mounted fog and the remote cloud, Ts_i is the transmission time required for the requesting vehicle to transmit a priority-i task to the vehicle-mounted fog, D_t(j) is the time needed by j computing units to process the task, φ is the penalty for the system rejecting a low-priority task, and ξ is the penalty incurred when a busy vehicle leaves the system;
the overhead g(x, a) incurred by the system until the next decision is expressed as:
g(x, a) = c(x, a) / (α + β(x, a))
where β(x, a) is the average event rate after taking action a in state x, α is the continuous-time discount factor, and c(x, a) is the overhead rate;
the explicit expression for the overhead rate c(x, a) is reproduced as an image in the original publication.
The transmission time Ts_i required for the requesting vehicle to transmit a priority-i task to the vehicle-mounted fog, and the probability generating function of the time consumed by the vehicle to transmit a priority-i task to the vehicle-mounted fog, are given by expressions that are reproduced as images in the original publication,
where TR(z) is the probability generating function of the average transmission time, G_{i,m}(z) is the probability generating function of the AC_i queue back-off time when the number of retransmissions is m, the transmission probability of the AC_i queue also enters the expression (its symbol is reproduced as an image in the original publication), and L_i is the retransmission limit of the AC_i queue.
The probability generating function TR(z) of the average transmission time is expressed in terms of the average transmission time T_tr (both expressions are reproduced as images in the original publication),
where PHY_h and MAC_h are the physical-layer and MAC-layer header lengths respectively, R_b and R_d are the basic data rate and the data rate, σ is the propagation delay, and E[P] is the average task size.
When the number of retransmissions is m, the probability generating function G_{i,m}(z) of the AC_i queue back-off time is given by an expression that is reproduced as an image in the original publication,
where R_i is the number of times the contention window of the AC_i queue can be doubled, H_i(z) is the probability generating function of the average time required for the back-off counter to decrease by one unit, and W_{i,m}, the maximum contention-window value of the AC_i queue when the back-off stage is m, is given by an expression that is also reproduced as an image,
where CW_{i,min} is the minimum contention-window value of the AC_i queue.
The expression for H_i(z), the average time required for the back-off counter to decrease by one unit, is reproduced as an image in the original publication,
where p_{bi} is the blocking probability of the AC_i queue and AIFS_i is the arbitration inter-frame space, whose expression is:
AIFS_i = SIFS + AIFSN[i] × T_slot, i = 1, 2
where SIFS is the short inter-frame space, AIFSN[i] is the arbitration inter-frame space number of the AC_i queue, and T_slot is the slot time.
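As a small numerical illustration of the AIFS formula above, the sketch below evaluates AIFS_i = SIFS + AIFSN[i] × T_slot; the numeric values (SIFS = 32 µs and T_slot = 13 µs, typical for 802.11p at 10 MHz, and AIFSN[1] = 2, AIFSN[2] = 3) are assumptions for illustration, not values fixed by the patent.

```python
SIFS_US = 32.0        # short inter-frame space in microseconds (assumed 802.11p value)
T_SLOT_US = 13.0      # slot time in microseconds (assumed 802.11p value)
AIFSN = {1: 2, 2: 3}  # assumed AIFS numbers for AC_1 (high priority) and AC_2 (low priority)

def aifs_us(i: int) -> float:
    """AIFS_i = SIFS + AIFSN[i] * T_slot, in microseconds."""
    return SIFS_US + AIFSN[i] * T_SLOT_US

for i in (1, 2):
    print(f"AIFS_{i} = {aifs_us(i):.0f} us")
# With these assumed numbers: AIFS_1 = 58 us, AIFS_2 = 71 us, and
# A = AIFSN[2] - AIFSN[1] = 1 extra sensing slot for the low-priority queue.
```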
The expression for the blocking probability p_{bi} of the AC_i queue is reproduced as an image in the original publication,
where A is the number of additional time slots for which the AC_2 queue must sense the channel compared with the AC_1 queue, expressed as:
A = AIFSN[2] - AIFSN[1],
and the expression for the transmission probability of the AC_i queue is likewise reproduced as an image in the original publication,
where ρ_i is the server utilization of the AC_i queue and p_{ai} is the task arrival probability, whose expression is also reproduced as an image in the original publication.
The long-term reward r(x, a) of the system is thus computed from the immediate reward h(x, a), which is based on the time-saving price η, the time T that the task-generating vehicle would need to process the task on its own, the transmission time D_1 between the vehicle-mounted fog and the remote cloud, the transmission time Ts_i required for the requesting vehicle to transmit a priority-i task to the vehicle-mounted fog, the time D_t(j) required by j computing units to process the task, and so on; that is, in the technical scheme of the invention the system benefit is measured by the task-processing time saved, and the optimal unloading policy obtained with this scheme is the most efficient unloading policy; a sketch of how the reward pieces fit together is given below.
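To make the reward structure concrete, here is a minimal sketch of r(x, a) = h(x, a) - g(x, a); the form g = c/(α + β) follows the overhead expression given above, while the sample immediate reward η·(T - Ts_i - D_t(j)) is only one plausible reading of the piecewise h(x, a), whose exact expression is available only as an image, so treat the numbers and that reading as assumptions.

```python
def discounted_overhead(c_rate: float, beta: float, alpha: float) -> float:
    """g(x, a) = c(x, a) / (alpha + beta(x, a)): overhead rate accumulated over an
    exponentially distributed sojourn and discounted at continuous-time rate alpha."""
    return c_rate / (alpha + beta)

def long_term_reward(h: float, c_rate: float, beta: float, alpha: float) -> float:
    """r(x, a) = h(x, a) - g(x, a)."""
    return h - discounted_overhead(c_rate, beta, alpha)

# Illustrative numbers only: unloading a task that would take T = 2.0 s locally,
# with Ts_i = 0.3 s transmission and D_t(j) = 0.5 s fog processing, at price eta = 1.0.
eta, T, Ts_i, D_t_j = 1.0, 2.0, 0.3, 0.5
h_example = eta * (T - Ts_i - D_t_j)          # assumed reading of the immediate reward
print(long_term_reward(h_example, c_rate=0.2, beta=3.0, alpha=0.1))
```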
S4: defining the state transition probability P(k | x, a) of the system; in the semi-Markov decision model, the transition probability is the ratio of the occurrence rate of the next event to the average event rate; P(k | x, a) is the probability of transitioning to state k after taking action a in state x; depending on the type of event currently occurring, the state transition probability P(k | x, a) is given case by case (the explicit expressions are reproduced as images in the original publication):
1) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, A_1), i.e. a high-priority task arrives at the system;
2) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, A_2), a = j, i.e. a low-priority task arrives at the system;
3) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, D_{i,j}), i = 1, 2; j = 1, 2, …, N; a = -1, i.e. a priority-i task processed by j computing units completes and leaves the system;
4) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, F_{+1}), a = -1, i.e. a vehicle arrives at the system;
5) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, F_{-1}), a = -1, i.e. a vehicle leaves the system;
where λ_i (i = 1, 2) is the arrival rate of priority-i tasks at the system, μ_t is the service rate of a computing unit processing a task, λ_v is the arrival rate of vehicles, and μ_v is the departure rate of vehicles;
under different events and actions, the average event rate β(x, a) is given case by case (its explicit expression is reproduced as an image in the original publication); an assumed illustration of its general structure is sketched below.
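Because the explicit expression for β(x, a) is available only as an image, the sketch below merely illustrates the general structure expected of such a quantity, namely the total rate of the events that can occur next (task arrivals, service completions, vehicle arrivals and departures) built from the rates λ_1, λ_2, λ_v, μ_v and μ_t listed above; the exact composition used in the patent may differ, so this is purely an assumed illustration (it reuses the State sketch given earlier).

```python
def total_event_rate(state: State, lam1: float, lam2: float,
                     lam_v: float, mu_v: float, mu_t: float) -> float:
    """Assumed illustration of beta(x, a): the sum of the rates of the events that can
    occur next in state x (the patent's exact expression is given only as an image)."""
    rate = lam1 + lam2 + lam_v + mu_v            # task arrivals, vehicle arrival, vehicle departure
    for row in state.s:                          # service completions: a priority-i task served by
        for j, n in enumerate(row, start=1):     # j units is assumed to complete at rate j * mu_t
            rate += n * j * mu_t
    return rate
```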
S5: normalizing the reward r(x, a), the transition probability P(k | x, a) and the discount rate to obtain a Bellman equation:
the long-term reward after normalization is denoted r̃(x, a), the discount factor after normalization is denoted α̃, and the transition probability after normalization is denoted p̃(k | x, a); their explicit expressions are reproduced as images in the original publication. The resulting Bellman equation is
v(x) = max_{a ∈ A_c} { r̃(x, a) + α̃ Σ_{k ∈ X} p̃(k | x, a) v(k) }.
S6: solving the optimal task unloading scheme by a value iteration method according to the Bellman equation;
the pseudo-code of the value iteration algorithm is reproduced as an image in the original publication; a sketch of the procedure is given below.
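Since the pseudo-code itself is only available as an image, the following Python sketch shows a generic value iteration for a discounted SMDP of this type; the normalization inside follows the standard uniformization construction (with a constant y ≥ β(x, a) for all x, a), which the normalization step of S5 presumably resembles, so every formula in the sketch should be read as an assumption rather than the patent's exact expressions.

```python
def value_iteration(states, actions, r, beta, P, alpha, y, eps=1e-6, max_iter=10_000):
    """Generic value iteration for a discounted SMDP (assumed standard uniformization).

    states  : iterable of hashable states x
    actions : actions(x) -> feasible actions in state x
    r       : r(x, a)    -> long-term reward h(x, a) - g(x, a)
    beta    : beta(x, a) -> average event rate in x under a
    P       : P(k, x, a) -> transition probability to state k
    alpha   : continuous-time discount rate
    y       : uniformization constant with y >= beta(x, a) for all x, a
    """
    states = list(states)
    lam = y / (y + alpha)                                  # normalized discount factor (assumed form)
    v = {x: 0.0 for x in states}
    for _ in range(max_iter):
        v_new, policy = {}, {}
        for x in states:
            best_val, best_a = float("-inf"), None
            for a in actions(x):
                b = beta(x, a)
                r_tld = r(x, a) * (alpha + b) / (alpha + y)      # normalized reward (assumed form)
                total = 0.0                                      # expected value of the next state
                for k in states:                                 # under normalized transitions
                    p_tld = (b / y) * P(k, x, a) + (1.0 - b / y) * (1.0 if k == x else 0.0)
                    total += p_tld * v[k]
                val = r_tld + lam * total
                if val > best_val:
                    best_val, best_a = val, a
            v_new[x], policy[x] = best_val, best_a
        if max(abs(v_new[x] - v[x]) for x in states) < eps:
            break
        v = v_new
    return v_new, policy
```

The returned policy dictionary gives, for each state, the maximizing action; this is how the optimal strategy π*(x) discussed in the example below (upload, reject, or allocate 1 or 2 computing units) is read off.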
assume that only 2 computing units (only two idle vehicles) remain in the on-vehicle fog 2 in the current state; if a high priority task arrives at the system, the system may transmit this task to be processed in the remote cloud 1, or in the on-board fog 2; if processing in the on-board fog 2 is selected, the system needs to further decide how many computing resources to allocate to process the task; if a low priority task arrives, the system can only choose to process in the vehicle fog, or reject the task request. Specifically, for each state of the system, a state value function under different actions is calculated by using a Bellman equation
for the state value function v(x); if, when the last iteration finishes, it is found that for a certain state the state value function is largest when 1 computing unit is allocated to process the task, then in that state the system will choose to allocate the task to 1 computing unit, i.e. in that state the optimal action is: allocate the task to 1 computing unit for processing; if, when the last iteration finishes, the state value function of a certain state is largest when the task is allocated to 2 computing units for processing, then in that state the optimal action is: allocate the task to 2 computing units for processing; the optimal strategy π*(x) is the set of all optimal actions.
Referring to fig. 2 of the drawings, the abscissa is the maximum number of vehicles K in the system (maximum number of vehicles in the MFC system) and the ordinate is the action probability. Fig. 2 shows the probability of each action of the system, computed with the technical scheme of the invention, as the maximum number of vehicles in the system changes:
A0 represents the trend of the probability that the system unloads a high-priority task to the remote cloud as the maximum number of vehicles K changes;
A1 represents the trend of the probability that the system allocates one resource unit to process a high-priority task as K changes;
A2 represents the trend of the probability that the system allocates two resource units to process a high-priority task as K changes;
a0 represents the trend of the probability that a low-priority task is rejected as K changes;
a1 represents the trend of the probability that the system allocates one resource unit to process a low-priority task as K changes;
a2 represents the trend of the probability that the system allocates two resource units to process a low-priority task as K changes.
As can be seen, as K increases, A0 and a0 gradually decrease, because computing resources increase and the system tends to unload tasks into the vehicle-mounted fog; moreover, as K increases further, computing resources become sufficient, and in order to obtain a larger long-term reward the system allocates as many resources to each task as possible, so A1 and a1 decrease while A2 and a2 increase. It can also be seen that A2 and a2 are nearly identical, because when both high- and low-priority tasks are allocated two resources the rewards obtained do not differ much.
Referring to fig. 3 of the drawings, the abscissa is the maximum number of vehicles K in the system (maximum number of vehicles in the MFC system) and the ordinate is the long-term expected reward. Fig. 3 compares the long-term average benefit of the system under different schemes as the maximum number of vehicles changes, using the same test environment as fig. 2. SMDP is the task unloading policy proposed by the invention, and GA is a typical greedy-algorithm allocation policy. As can be seen from the figure, the unloading policy provided by the invention enables the system to obtain a larger long-term benefit; since the benefit of the technical scheme mainly comes from saving task-processing time, obtaining a larger long-term average benefit means that the technical scheme of the invention has higher computing efficiency than the other schemes.

Claims (1)

1. An 802.11p-based task unloading method for a cloud-aware mobile fog computing system, comprising the following steps:
the method is characterized in that:
S1: defining a state set X of the system;
the state set is represented as:
X = {x | x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, e)}
where K is the total number of computing units of the system in the current state, s_{i,j} is the number of priority-i tasks being processed by j computing units, N is the maximum number of computing units that can be allocated to one task, and e is the specific event;
S2: defining the action set A_c of the system;
the action set of the system is represented as:
A_c = {0, 1, 2, …, N} when e = A_1; A_c = {-1, 1, 2, …, N} when e = A_2; A_c = {-1} when e ∈ {D_{i,j}, F_{+1}, F_{-1}}
where D_{i,j} indicates that a priority-i task processed by j computing units is complete and leaves the system; F_{+1} indicates that a vehicle arrives at the system; F_{-1} indicates that a vehicle leaves the system; A_i (i = 1, 2) indicates that a vehicle in the system makes a priority-i task request; the action 0 indicates that the system uploads the task to the remote cloud, -1 indicates that the system takes no action, and j indicates that the system allocates j computing units to process the task;
S3: defining a reward model r(x, a) of the system;
the reward includes an immediate reward and a cost, expressed as:
r(x, a) = h(x, a) - g(x, a)
where r(x, a) is the long-term reward of the system, h(x, a) is the immediate reward of the system, and g(x, a) is the overhead of the system until the next decision;
S4: defining the state transition probability P(k | x, a) of the system; P(k | x, a) is the probability of transitioning to state k after taking action a in state x;
S5: normalizing the reward r(x, a), the transition probability P(k | x, a) and the discount rate to obtain a Bellman equation:
the long-term reward after normalization is denoted r̃(x, a), the discount factor after normalization is denoted α̃, and the normalized transition probability is denoted p̃(k | x, a); the resulting Bellman equation is
v(x) = max_{a ∈ A_c} { r̃(x, a) + α̃ Σ_{k ∈ X} p̃(k | x, a) v(k) };
S6: solving the optimal task unloading scheme by a value iteration method according to the Bellman equation;
the overhead g(x, a) incurred by the system until the next decision is expressed as:
g(x, a) = c(x, a) / (α + β(x, a))
where β(x, a) is the average event rate after taking action a in state x, α is the continuous-time discount factor, and c(x, a) is the overhead rate;
depending on the type of event currently occurring, the state transition probability P(k | x, a) is given case by case (the explicit expressions are reproduced as images in the original publication):
1) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, A_1), i.e. a high-priority task arrives at the system;
2) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, A_2), a = j, i.e. a low-priority task arrives at the system;
3) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, D_{i,j}), i = 1, 2; j = 1, 2, …, N; a = -1, i.e. a priority-i task processed by j computing units completes and leaves the system;
4) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, F_{+1}), a = -1, i.e. a vehicle arrives at the system;
5) x = (K, s_{1,1}, …, s_{1,N}, …, s_{2,N}, F_{-1}), a = -1, i.e. a vehicle leaves the system;
where λ_i (i = 1, 2) is the arrival rate of priority-i tasks at the system, μ_t is the service rate of a computing unit processing a task, λ_v is the arrival rate of vehicles, μ_v is the departure rate of vehicles, and β(x, a) is the average event rate after taking action a in state x;
the expression for the immediate reward h(x, a) of the system is given piecewise (reproduced as an image in the original publication); in it, η is the time-saving price, T is the time the task-generating vehicle would need to process the task on its own, D_1 is the transmission time between the vehicle-mounted fog and the remote cloud, Ts_i is the transmission time required for the requesting vehicle to transmit a priority-i task to the vehicle-mounted fog, D_t(j) is the time needed by j computing units to process the task, φ is the penalty for the system rejecting a low-priority task, and ξ is the penalty incurred when a busy vehicle leaves the system;
the explicit expression for the overhead rate c(x, a) is reproduced as an image in the original publication;
the average event rate β(x, a) is likewise given under the different events and actions (its explicit expression is reproduced as an image in the original publication);
in step S5, the Bellman equation takes the form given above, with the normalized long-term reward r̃(x, a), the normalized discount factor α̃ and the normalized transition probability p̃(k | x, a) obtained from r(x, a), α and P(k | x, a) by the normalization; their explicit expressions are reproduced as images in the original publication.
CN202010458226.1A 2020-05-27 2020-05-27 Cloud-aware mobile fog computing system task unloading method based on 802.11p Active CN111611063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010458226.1A CN111611063B (en) 2020-05-27 2020-05-27 Cloud-aware mobile fog computing system task unloading method based on 802.11p

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010458226.1A CN111611063B (en) 2020-05-27 2020-05-27 Cloud-aware mobile fog computing system task unloading method based on 802.11p

Publications (2)

Publication Number Publication Date
CN111611063A CN111611063A (en) 2020-09-01
CN111611063B true CN111611063B (en) 2023-04-11

Family

ID=72200596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010458226.1A Active CN111611063B (en) 2020-05-27 2020-05-27 Cloud-aware mobile fog computing system task unloading method based on 802.11p

Country Status (1)

Country Link
CN (1) CN111611063B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326076B (en) * 2021-05-28 2022-10-18 江南大学 Vehicle-mounted fog-assisted vehicle fleet task unloading method based on semi-Markov decision process
CN116233017A (en) * 2022-12-23 2023-06-06 中国联合网络通信集团有限公司 Time delay guaranteeing method, time delay guaranteeing device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017049977A1 (en) * 2015-09-25 2017-03-30 中兴通讯股份有限公司 Method, apparatus, and system for selecting target area, OBU, and RSU
CN108921437A (en) * 2018-07-10 2018-11-30 电子科技大学 A fog-computing-based method for scheduling multiple computing tasks among multiple vehicles
CN108990016A (en) * 2018-08-17 2018-12-11 电子科技大学 A multi-vehicle cooperative computing task unloading and transmission method
CN109067842A (en) * 2018-07-06 2018-12-21 电子科技大学 A computing task unloading method for the Internet of Vehicles
CN109831522A (en) * 2019-03-11 2019-05-31 西南交通大学 An SMDP-based dynamic resource optimization management system and method for connected-vehicle cloud and fog systems
CN110489218A (en) * 2019-07-26 2019-11-22 江南大学 Vehicle-mounted fog computing system task unloading method based on a semi-Markov decision process

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10735518B2 (en) * 2017-06-26 2020-08-04 Veniam, Inc. Systems and methods for self-organized fleets of autonomous vehicles for optimal and adaptive transport and offload of massive amounts of data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017049977A1 (en) * 2015-09-25 2017-03-30 中兴通讯股份有限公司 Method, apparatus, and system for selecting target area, OBU, and RSU
CN109067842A (en) * 2018-07-06 2018-12-21 电子科技大学 A computing task unloading method for the Internet of Vehicles
CN108921437A (en) * 2018-07-10 2018-11-30 电子科技大学 A fog-computing-based method for scheduling multiple computing tasks among multiple vehicles
CN108990016A (en) * 2018-08-17 2018-12-11 电子科技大学 A multi-vehicle cooperative computing task unloading and transmission method
CN109831522A (en) * 2019-03-11 2019-05-31 西南交通大学 An SMDP-based dynamic resource optimization management system and method for connected-vehicle cloud and fog systems
CN110489218A (en) * 2019-07-26 2019-11-22 江南大学 Vehicle-mounted fog computing system task unloading method based on a semi-Markov decision process

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Research and implementation of the MAC-layer protocol for a V2V network ***; 严功博; China Master's Theses Full-text Database (Electronic Journals), Engineering Science and Technology II; CNKI; 2019-07-15; page 13, section 2.2.1 *
An 802.11p-based performance analysis model for platoon communication at traffic intersections; 夏思洋, 吴琼, 倪渊之, 武贵路, 李正权; Computer Science; CNKI; 2021-01-12; full text *
Performance analysis and optimization of the 802.11p-based Internet of Vehicles; 葛红梅; China Master's Theses Full-text Database (Electronic Journals), Engineering Science and Technology II; CNKI; 2022-01-15; full text *
Analysis and research of communication protocols for unmanned driving; 夏思洋; China Master's Theses Full-text Database (Electronic Journals), Engineering Science and Technology II; CNKI; 2022-01-15; full text *
Performance analysis of link transmission and channel access in the Internet of Vehicles; 武贵路; China Doctoral Dissertations Full-text Database (Electronic Journals), Engineering Science and Technology II; CNKI; 2019-05-15; page 5, section 1.2.1 *
Optimization method for resource allocation in vehicular cloud computing ***; 董晓丹, 吴琼; Journal of China Academy of Electronics and Information Technology; CNKI; 2020-01-20; full text *

Also Published As

Publication number Publication date
CN111611063A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN110198278B (en) Lyapunov optimization method for vehicle networking cloud and edge joint task scheduling
CN110717300B (en) Edge calculation task allocation method for real-time online monitoring service of power internet of things
CN110109745B (en) Task collaborative online scheduling method for edge computing environment
CN111211830B (en) Satellite uplink bandwidth resource allocation method based on Markov prediction
CN111611063B (en) Cloud-aware mobile fog computing system task unloading method based on 802.11p
CN110928658A (en) Cooperative task migration system and algorithm of vehicle-side cloud cooperative architecture
CN110489218B (en) Vehicle-mounted fog computing system task unloading method based on semi-Markov decision process
CN109640290B (en) Differentiated service method, device and equipment based on EDCA mechanism in Internet of vehicles
CN112860429B (en) Cost-effective optimization system and method for task offloading in mobile edge computing system
CN109617836B (en) Intelligent bandwidth allocation method and system for satellite data transmission
CN113641417B (en) Vehicle security task unloading method based on branch-and-bound method
CN110149401B (en) Method and system for optimizing edge calculation task
CN114301730B (en) Gateway scheduling method for vehicle-mounted Ethernet to CAN network
CN115629873A (en) System and method for controlling unloading of vehicle-road cloud cooperative tasks and stability of task queue
CN114928611B (en) IEEE802.11p protocol-based energy-saving calculation unloading optimization method for Internet of vehicles
CN108833486B (en) Hybrid dynamic task scheduling method for complex vehicle-mounted fog computing system environment
CN116744367A (en) Unloading method based on double-layer unloading mechanism and multi-agent algorithm under Internet of vehicles
CN116527674A (en) Method for managing and scheduling heterogeneous computing power resources
CN113452625B (en) Deep reinforcement learning-based unloading scheduling and resource allocation method
Gallardo et al. QoS mechanisms for the MAC protocol of IEEE 802.11 WLANs
CN112055382B (en) Service access method based on refined distinction
CN112492652B (en) Method, device and system for allocating computing power service of edge equipment
CN115118783A (en) Task unloading method based on heterogeneous communication technology ultra-reliable low-delay reinforcement learning
Xie et al. A novel collision probability based adaptive contention windows adjustment for QoS fairness on ad hoc wireless networks
Ahmed et al. A QoS-aware scheduling with node grouping for IEEE 802.11 ah

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant