CN109561148B - Distributed task scheduling method based on directed acyclic graph in edge computing network - Google Patents


Info

Publication number
CN109561148B
CN109561148B (application CN201811462177.8A)
Authority
CN
China
Prior art keywords
tasks
task
node
processor
directed acyclic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811462177.8A
Other languages
Chinese (zh)
Other versions
CN109561148A (en)
Inventor
刘昊霖
曹乐
裴廷睿
邓清勇
田淑娟
朱江
李梦瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangtan University
Original Assignee
Xiangtan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangtan University
Priority to CN201811462177.8A
Publication of CN109561148A
Application granted
Publication of CN109561148B
Legal status: Active

Classifications

    • H04L 67/1004 Server selection for load balancing
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/5072 Grid computing
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • H04L 41/142 Network analysis or design using statistical or mathematical methods

Abstract

The invention provides a distributed task scheduling method based on a directed acyclic graph in an edge computing network. First, following a central-node scheduling mode, the data parameters of the tasks to be scheduled (represented as a directed acyclic graph) and the performance parameters of the edge node processors are obtained from the mobile terminal and the edge network central control node, respectively. Then, the tasks with in-degree 0 in the directed acyclic graph form a task sequence; these tasks are deleted from the graph, the graph is updated, and the process repeats until every task of the original graph belongs to the task sequence of some layer. Finally, the tasks in each layer's task sequence are allocated to the node processors for execution with the objective of optimizing task scheduling time, task communication time, and queuing time. The invention is suitable for processing edge computing network data of different scales, can process tasks with precedence constraints in the directed acyclic graph in a linearized manner, and can optimize the overall scheduling time.

Description

Distributed task scheduling method based on directed acyclic graph in edge computing network
Technical Field
The invention belongs to the field of mobile edge computing, and particularly relates to a distributed task scheduling method based on a directed acyclic graph in an edge computing network.
Background
The Internet of Things (IoT) is an important component of the new generation of information technology and an important stage in the development of the information era. With the growing number of mobile devices, the increasing computing performance of edge devices, and rising user demands, edge computing has become an important extension of the Internet of Things. Edge computing refers to providing services nearby, on the side close to the object or data source, through an open platform that integrates networking, computing, storage, and application capabilities. The biggest difference between edge computing and cloud computing is that edge devices, such as intelligent terminals, automobiles, household appliances, factories, and wireless base stations, are used to serve users. By making use of these edge devices, edge computing aims to relieve the pressure on cloud computing, reduce the energy consumption of data computation, improve the smoothness and speed of data transmission, and meet the real-time requirements of clients and servers.
With the rapid development of edge computing in Internet of Things applications, more and more terminal devices join the edge computing mode. Under the traditional terminal processing mode, if a complex application is computed entirely locally, the task execution time is too long, the energy consumption too large, and the computation precision too low to meet the operational requirements of the application. Under the centralized cloud computing mode, if all the data of a terminal's complex application are transmitted to a distant cloud computing center for processing, accurate computation can be supported, but a large number of terminals transmitting data to the cloud increases the load on the cloud computing center and the network links and raises the terminals' communication energy consumption; moreover, the long-distance and unstable backbone network between the terminal and the cloud server causes high transmission delay, making it difficult to meet the low-latency requirements of terminal applications.
In summary, to meet the Quality of Service (QoS) requirements of complex user applications, the edge computing platform pushes cloud services to the edge of the network. When the terminal processes a complex application, an offloading decision is made: part of the application is executed on the terminal, while the computation-heavy part is offloaded to nearby edge devices for execution. This improves the execution efficiency of the application, guarantees the real-time performance of data processing, saves network bandwidth, and reduces the terminal's operating energy consumption. To realize edge computing for a complex application, the program to be executed on the terminal is decomposed into a task sequence whose precedence dependencies are represented by a directed acyclic graph (DAG); the tasks are then distributed to different edge device processors according to a task load balancing model, reducing the execution energy consumption and execution time of the terminal's complex program.
Disclosure of Invention
The invention provides a distributed task scheduling method based on a directed acyclic graph in an edge computing network. Its main advantages are that tasks with precedence constraints in the directed acyclic graph can be processed in a linearized manner and that the overall scheduling time can be optimized.
1. A distributed task scheduling method based on a directed acyclic graph in an edge computing network, characterized in that the method at least comprises the following steps:
step 1, set up the edge computing network scenario, where the edge service node processors in the network are denoted by P = {p_1, p_2, ..., p_t, ..., p_m}, p_1 represents the local terminal processor, the data processing speed of processor p_t is denoted by C_t, and the channel bandwidth between the terminal and a node processor is W_vp; the values C_t, sorted from largest to smallest, form a queue Y;
step 2, input the directed acyclic graph (DAG) G = {V, E} of the tasks to be processed at the terminal, where the node set V = {v_1, v_2, ..., v_i, ..., v_n} represents the tasks to be processed by the edge computing network, and the edge set E represents the dependency relationships among tasks: a subsequent task can start processing only after its preceding tasks have completed;
step 3, traverse the nodes in graph G and find the nodes with in-degree 0, i.e., the tasks v_i that have no preceding tasks; sort these tasks by their data size S_i from largest to smallest to form a task sequence Q_j, in which the tasks no longer have precedence dependencies among them; here j denotes the layer number of the task sequence (a smaller j means a higher priority for the tasks in that sequence), the initial value of j is 1, and K_j denotes the number of tasks in the sequence;
step 4, delete from graph G the nodes corresponding to the tasks in Q_j and update the in-degrees of the remaining nodes to generate a new graph G; set j = j + 1;
step 5, repeat steps 3 and 4 until no node remains in graph G;
step 6, with the optimization of task scheduling time as the objective, allocate the tasks in the task sequence sets obtained above to the node processors in the edge computing network using a load-balancing-based task scheduling method; when scheduling succeeds, a_{i,t} = 1, otherwise a_{i,t} = 0.
2. The distributed task scheduling method based on a directed acyclic graph in an edge computing network as claimed in claim 1, characterized in that the tasks in the DAG are divided, layer by layer, into task sequence sets without precedence dependencies by finding the nodes with in-degree 0; if the layer numbers satisfy x < y, the tasks in Q_y must wait until all tasks in Q_x have been processed before they can be processed.
3. The method of claim 1, wherein the tasks are distributed to different node processors for parallel processing using a load balancing method, so as to reduce the scheduling time of a single-layer task sequence and thereby optimize the latest completion time of each layer of tasks.
4. The distributed task scheduling method based on a directed acyclic graph in an edge computing network according to claim 1, wherein the load-balancing task scheduling method at least comprises the following steps:
1) set the layer number j to its initial value, j = 1;
2) compare the number of tasks in Q_j with the total number of processors; if the number of pending tasks K_j in Q_j satisfies K_j ≤ m, select the first K_j node processors in queue Y and execute 4); otherwise execute 3);
3) according to the total number of processors m, take the first m tasks of Q_j for processing;
4) calculate the transmission time of each task to be processed in the current layer to the different node processors: T^trans_{i,t} = S_i / r_{i,t}, where r_{i,t} denotes the channel transmission rate, r_{i,t} = W_vp · log2(1 + H_{i,t} · g_{i,t} / σ²); H_{i,t} denotes the transmission power, g_{i,t} the channel gain, σ² the noise power, and W_vp the channel bandwidth;
5) calculate the processing time of each task to be processed in the current layer on the different node processors: T^proc_{i,t} = S_i / C_t;
6) calculate the average waiting time of the current layer's tasks while queued: X_i = S_i / λ, where λ denotes the average sub-channel rate;
7) obtain the total scheduling time of each current-layer task on each currently available node processor, D_{i,t} = T^trans_{i,t} + T^proc_{i,t} + X_i, and form the matrix B = [D_{i,t}], in which each row corresponds to a task and each column to a processor;
8) starting from the 1st row of the matrix, select in that row the entry with the minimum total scheduling time and take the corresponding processor as the execution processor of that row's task; if the processor selected for a row's task has already been selected by a task in a previous row, select the unselected processor with the minimum total scheduling time as the execution processor of that row's task; after the selection is finished, move to the next row and repeat this step until the tasks corresponding to all rows of the matrix have been assigned execution processors;
9) during task processing, if Q_j still contains unprocessed tasks, then whenever there is an idle processor, allocate the unprocessed tasks in Q_j to the idle processor in order;
10) if Q_j no longer contains unprocessed tasks, set j = j + 1 and jump to step 2) to perform the task allocation process of the next layer, until all tasks in Q have been processed.
Compared with the prior art, the method has the advantages that:
A distributed task scheduling method based on a directed acyclic graph in an edge computing network is provided, so that tasks with precedence constraints in the directed acyclic graph are linearized, and the tasks are then scheduled to the most suitable node processors by jointly optimizing communication time, processing time, and queuing time.
Drawings
FIG. 1 is a summary flow chart of the present invention;
FIG. 2 is a task layering flow diagram in the present invention;
FIG. 3 is a diagram of a task assignment method in the present invention;
FIG. 4 is an edge computing task scheduling model.
Detailed Description
The present invention is described in further detail below with reference to the attached drawings.
Parallel tasks: a complex application running on an intelligent terminal can be represented by a directed acyclic graph (DAG), defined as G = {V, E}.
Parallel task scheduling generally consists of three steps: first, determine the priority of the task nodes; then, determine the overall task scheduling objective; finally, allocate the tasks, from highest to lowest priority, to suitable processors according to the overall scheduling objective.
Taking a game application running on a smart terminal as an example, the DAG in FIG. 4 is the task decomposition of the game application, and the three receiving points are nearby edge computing devices.
Step one, determine task priorities. The invention uses DAG task layering to determine task priority; the construction method is as follows:
a) traverse the nodes in graph G and find the nodes with in-degree 0, i.e., the tasks v_i that have no preceding tasks; sort these tasks by their data size S_i from largest to smallest to form a task sequence Q_j, in which the tasks no longer have precedence dependencies among them; here j denotes the layer number of the task sequence (a smaller j means a higher priority), the initial value of j is 1, and K_j denotes the number of tasks in the sequence. The specific method for finding the tasks with in-degree 0 is as follows: in the DAG, a degree matrix is maintained between tasks; if task 1 has an in-degree relation with task 2 the entry is recorded as 1, if an out-degree relation as 0, and if no relation as infinity; the relations among the tasks are recorded in this matrix. At this point the node with in-degree 0 is n_1, which is placed in Q_1, so the first-layer task set is {n_1};
b) delete from graph G the nodes corresponding to the tasks in Q_j and update the in-degrees of the remaining nodes to generate a new graph G; set j = j + 1;
c) repeating the steps a) and b) until no node exists in the graph G.
After the above steps, the first-layer task set is {n_1}; the second-layer task set is {n_6, n_2, n_5, n_4, n_3}; the third-layer task set is {n_7, n_8, n_9}; the fourth-layer task set is {n_10}.
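The layering procedure in steps a)-c) can be sketched in Python. The edge set below is a hypothetical DAG chosen only so that its layering reproduces the four-layer result stated above (the actual edges of FIG. 4 are not reproduced in this text), and the task data sizes S_i are likewise illustrative.

```python
from collections import defaultdict

def layer_tasks(nodes, edges, size):
    """Peel off in-degree-0 tasks layer by layer (steps a-c);
    within a layer, tasks are sorted by data size, descending."""
    indeg = {v: 0 for v in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    layers = []
    remaining = set(nodes)
    while remaining:
        # current layer: all remaining tasks with no unfinished predecessor
        layer = [v for v in remaining if indeg[v] == 0]
        layer.sort(key=lambda v: size[v], reverse=True)
        layers.append(layer)
        for v in layer:            # delete the layer and update in-degrees
            remaining.discard(v)
            for w in succ[v]:
                indeg[w] -= 1
    return layers

# Hypothetical DAG and sizes, consistent with the four-layer result above
nodes = [f"n{i}" for i in range(1, 11)]
edges = [("n1", "n2"), ("n1", "n3"), ("n1", "n4"), ("n1", "n5"), ("n1", "n6"),
         ("n2", "n7"), ("n3", "n8"), ("n4", "n9"),
         ("n7", "n10"), ("n8", "n10"), ("n9", "n10")]
size = {"n1": 10, "n6": 9, "n2": 8, "n5": 7, "n4": 6, "n3": 5,
        "n7": 4, "n8": 3, "n9": 2, "n10": 1}

layers = layer_tasks(nodes, edges, size)
print(layers)
# → [['n1'], ['n6', 'n2', 'n5', 'n4', 'n3'], ['n7', 'n8', 'n9'], ['n10']]
```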
Step two, determining a task overall scheduling target:
the objective of the whole task scheduling process is to minimize the task scheduling time, which includes the transmission time, the processing time and the queuing time of the task.
Step three, allocate the tasks to suitable node processors according to the layering result:
a) sort the node processors from high to low by computing speed, then compare the number of tasks to be scheduled at each layer with the number of processors. When the number of processors is greater than the current number of tasks (e.g., the first, third, and fourth layers): compare the total number of tasks of the layer with the total number of processors; if the number of tasks to be allocated in the layer is less than or equal to the number of processors, select as many node processors as there are current tasks to act as scheduling processors, the selection rule being to prefer the node processors with the strongest processing capability;
b) when the number of processors is less than the current number of tasks (e.g., the second layer): according to the number of processors m, process the first m tasks of the layer;
c) calculate, for each task of the current layer on each available processor, the transmission time, the processing time, and the queuing time:
the transmission time of each task of the current layer on each available processor is T^trans_{i,t} = S_i / r_{i,t};
the processing time of each task of the current layer on each available processor is T^proc_{i,t} = S_i / C_t;
the average waiting time of the current layer's tasks while queued is X_i = S_i / λ.
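Under the assumptions of the model, the three time components and their sum for a single task/processor pair can be computed directly; the numeric parameter values below are illustrative, not taken from the patent.

```python
import math

def channel_rate(W_vp, H, g, sigma2):
    # r_{i,t} = W_vp * log2(1 + H_{i,t} * g_{i,t} / sigma^2)
    return W_vp * math.log2(1 + H * g / sigma2)

def total_time(S_i, r_it, C_t, lam):
    t_trans = S_i / r_it   # transmission time to the node processor
    t_proc = S_i / C_t     # processing time on the node processor
    x_wait = S_i / lam     # average queuing (waiting) time
    return t_trans + t_proc + x_wait

# Illustrative values: bandwidth 10, power 2, gain 1.5, noise power 1
r = channel_rate(W_vp=10.0, H=2.0, g=1.5, sigma2=1.0)  # = 10 * log2(4) = 20.0
print(total_time(S_i=100.0, r_it=r, C_t=50.0, lam=25.0))  # → 11.0
```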
d) obtain the total scheduling time of each current-layer task on each currently available node processor, D_{i,t} = T^trans_{i,t} + T^proc_{i,t} + X_i, and form the matrix B = [D_{i,t}], in which each row corresponds to a task and each column to a processor;
e) starting from the 1st row of the matrix, select in that row the entry with the minimum total scheduling time and take the corresponding processor as the execution processor of that row's task; if the processor selected for a row's task has already been selected by a task in a previous row, select the unselected processor with the minimum total scheduling time as the execution processor of that row's task; after the selection is finished, move to the next row and repeat this step until the tasks corresponding to all rows of the matrix have been assigned execution processors;
f) during task processing, if Q_j still contains unprocessed tasks, then whenever there is an idle processor, allocate the unprocessed tasks in Q_j to the idle processor in order;
g) if Q_j no longer contains unprocessed tasks, set j = j + 1 and carry out the task allocation process of the next layer, until all tasks in Q have been processed.
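The row-wise minimum selection of steps d)-e) can be sketched as a greedy assignment over the matrix B of total scheduling times D_{i,t} (rows are the current layer's tasks, columns are the available processors); the matrix values below are illustrative.

```python
def assign_by_row_min(B):
    """For each row (task), pick the column (processor) with the smallest
    total scheduling time among processors not yet taken by an earlier row."""
    taken = set()
    assignment = []
    for row in B:
        # only processors not already chosen by a previous row are candidates
        candidates = [(d, t) for t, d in enumerate(row) if t not in taken]
        _, best = min(candidates)
        taken.add(best)
        assignment.append(best)
    return assignment

# Illustrative 3-task x 3-processor matrix of total scheduling times D_{i,t}
B = [[4.0, 2.0, 7.0],
     [3.0, 1.0, 5.0],   # its minimum (processor 1) is taken by task 0
     [6.0, 8.0, 2.0]]
print(assign_by_row_min(B))  # → [1, 0, 2]
```

With more tasks than processors, steps f)-g) would then feed the remaining tasks of the layer to whichever processor falls idle first.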

Claims (4)

1. A distributed task scheduling method based on a directed acyclic graph in an edge computing network, characterized in that the method at least comprises the following steps:
step 1, set up the edge computing network scenario, where the edge service node processors in the network are denoted by P = {p_1, p_2, ..., p_t, ..., p_m}, p_1 represents the local terminal processor, the data processing speed of processor p_t is denoted by C_t, and the channel bandwidth between the terminal and a node processor is W_vp; the values C_t, sorted from largest to smallest, form a queue Y;
step 2, input the directed acyclic graph (DAG) G = {V, E} of the tasks to be processed at the terminal, where the node set V = {v_1, v_2, ..., v_i, ..., v_n} represents the tasks to be processed by the edge computing network, and the edge set E represents the dependency relationships among tasks: a subsequent task can start processing only after its preceding tasks have completed;
step 3, traverse the nodes in graph G and find the nodes with in-degree 0, i.e., the tasks v_i that have no preceding tasks; sort these tasks by their data size S_i from largest to smallest to form a task sequence Q_j, in which the tasks no longer have precedence dependencies among them; here j denotes the layer number of the task sequence (a smaller j means a higher priority for the tasks in that sequence), the initial value of j is 1, and K_j denotes the number of tasks in the sequence;
step 4, delete from graph G the nodes corresponding to the tasks in Q_j and update the in-degrees of the remaining nodes to generate a new graph G; set j = j + 1;
step 5, repeat steps 3 and 4 until no node remains in graph G;
step 6, with the optimization of task scheduling time as the objective, allocate the tasks in the task sequence sets obtained above to the node processors in the edge computing network using a load-balancing-based task scheduling method; when scheduling succeeds, a_{i,t} = 1, otherwise a_{i,t} = 0.
2. The distributed task scheduling method based on a directed acyclic graph in an edge computing network as claimed in claim 1, characterized in that the tasks in the DAG are divided, layer by layer, into task sequence sets without precedence dependencies by finding the nodes with in-degree 0; if the layer numbers satisfy x < y, the tasks in Q_y must wait until all tasks in Q_x have been processed before they can be processed.
3. The method of claim 1, wherein the tasks are distributed to different node processors for parallel processing using a load balancing method, so as to reduce the scheduling time of a single-layer task sequence and thereby optimize the latest completion time of each layer of tasks.
4. The distributed task scheduling method based on a directed acyclic graph in an edge computing network according to claim 1, wherein the load-balancing task scheduling method at least comprises the following steps:
1) set the layer number j to its initial value, j = 1;
2) compare the number of tasks in Q_j with the total number of processors; if the number of pending tasks K_j in Q_j satisfies K_j ≤ m, select the first K_j node processors in queue Y and execute 4); otherwise execute 3);
3) according to the total number of processors m, take the first m tasks of Q_j for processing;
4) calculate the transmission time of each task to be processed in the current layer to the different node processors: T^trans_{i,t} = S_i / r_{i,t}, where r_{i,t} denotes the channel transmission rate, r_{i,t} = W_vp · log2(1 + H_{i,t} · g_{i,t} / σ²); H_{i,t} denotes the transmission power, g_{i,t} the channel gain, σ² the noise power, and W_vp the channel bandwidth;
5) calculate the processing time of each task to be processed in the current layer on the different node processors: T^proc_{i,t} = S_i / C_t;
6) calculate the average waiting time of the current layer's tasks while queued: X_i = S_i / λ, where λ denotes the average sub-channel rate;
7) obtain the total scheduling time of each current-layer task on each currently available node processor, D_{i,t} = T^trans_{i,t} + T^proc_{i,t} + X_i, and form the matrix B = [D_{i,t}], in which each row corresponds to a task and each column to a processor;
8) starting from the 1st row of the matrix, select in that row the entry with the minimum total scheduling time and take the corresponding processor as the execution processor of that row's task; if the processor selected for a row's task has already been selected by a task in a previous row, select the unselected processor with the minimum total scheduling time as the execution processor of that row's task; after the selection is finished, move to the next row and repeat this step until the tasks corresponding to all rows of the matrix have been assigned execution processors;
9) during task processing, if Q_j still contains unprocessed tasks, then whenever there is an idle processor, allocate the unprocessed tasks in Q_j to the idle processor in order;
10) if Q_j no longer contains unprocessed tasks, set j = j + 1 and jump to step 2) to perform the task allocation process of the next layer, until all tasks in Q have been processed.
CN201811462177.8A 2018-11-30 2018-11-30 Distributed task scheduling method based on directed acyclic graph in edge computing network Active CN109561148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811462177.8A CN109561148B (en) 2018-11-30 2018-11-30 Distributed task scheduling method based on directed acyclic graph in edge computing network


Publications (2)

Publication Number Publication Date
CN109561148A (en) 2019-04-02
CN109561148B (en) 2021-03-23

Family

ID=65868556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811462177.8A Active CN109561148B (en) 2018-11-30 2018-11-30 Distributed task scheduling method based on directed acyclic graph in edge computing network

Country Status (1)

Country Link
CN (1) CN109561148B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069341B (en) * 2019-04-10 2022-09-06 中国科学技术大学 Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
CN111814002A (en) * 2019-04-12 2020-10-23 阿里巴巴集团控股有限公司 Directed graph identification method and system and server
CN110134505A (en) * 2019-05-15 2019-08-16 湖南麒麟信安科技有限公司 A kind of distributed computing method of group system, system and medium
CN110231984B (en) * 2019-06-06 2021-07-16 湖南大学 Multi-workflow task allocation method and device, computer equipment and storage medium
CN111367644B (en) * 2020-03-17 2023-03-14 中国科学技术大学 Task scheduling method and device for heterogeneous fusion system
CN111427679B (en) * 2020-03-25 2023-12-22 中国科学院自动化研究所 Computing task scheduling method, system and device for edge computing
CN111782389A (en) * 2020-06-22 2020-10-16 中科边缘智慧信息科技(苏州)有限公司 Task scheduling system and method under mobile edge information service network
CN112130927B (en) * 2020-09-21 2022-09-13 安阳师范学院 Reliability-enhanced mobile edge computing task unloading method
CN111930448B (en) * 2020-09-23 2020-12-25 南京梦饷网络科技有限公司 Method, electronic device, and storage medium for service distribution
CN112181655A (en) * 2020-09-30 2021-01-05 杭州电子科技大学 Hybrid genetic algorithm-based calculation unloading method in mobile edge calculation
CN112463397B (en) * 2020-12-10 2023-02-10 中国科学院深圳先进技术研究院 Lock-free distributed deadlock avoidance method and device, computer equipment and readable storage medium
CN112884247B (en) * 2021-03-23 2023-07-14 中国人民解放军国防科技大学 Command management and control center and control method
CN113535367B (en) * 2021-09-07 2022-01-25 北京达佳互联信息技术有限公司 Task scheduling method and related device
CN115145711B (en) * 2022-09-02 2022-12-23 北京睿企信息科技有限公司 Data processing system for acquiring directed acyclic graph task result
US11887353B1 (en) 2022-09-21 2024-01-30 Zhejiang Lab Deep learning image classification oriented to heterogeneous computing device
CN115249315B (en) * 2022-09-21 2023-02-03 之江实验室 Heterogeneous computing device-oriented deep learning image classification method and device
CN115840631B (en) * 2023-01-04 2023-05-16 中科金瑞(北京)大数据科技有限公司 RAFT-based high-availability distributed task scheduling method and equipment
CN116302396B (en) * 2023-02-13 2023-09-01 上海浦东发展银行股份有限公司 Distributed task scheduling method based on directed acyclic
CN116170846B (en) * 2023-04-25 2023-07-14 天津市工业和信息化研究院(天津市节能中心、天津市工业和信息化局教育中心) Distributed edge computing system and edge computing method for data communication

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831012A (en) * 2011-06-16 2012-12-19 日立(中国)研究开发有限公司 Task scheduling device and task scheduling method in multimode distributive system
CN103235640A (en) * 2013-01-08 2013-08-07 北京邮电大学 DVFS-based energy-saving dispatching method for large-scale parallel tasks
CN103631657A (en) * 2013-11-19 2014-03-12 浪潮电子信息产业股份有限公司 Task scheduling algorithm based on MapReduce
CN104052811A (en) * 2014-06-17 2014-09-17 华为技术有限公司 Service scheduling method and device and system
CN104965754A (en) * 2015-03-31 2015-10-07 腾讯科技(深圳)有限公司 Task scheduling method and task scheduling apparatus
CN108737462A (en) * 2017-04-17 2018-11-02 华东师范大学 A kind of cloud computation data center method for scheduling task based on graph theory
CN108897625A (en) * 2018-07-06 2018-11-27 陈霖 Method of Scheduling Parallel based on DAG model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9900378B2 (en) * 2016-02-01 2018-02-20 Sas Institute Inc. Node device function and cache aware task assignment
US10567248B2 (en) * 2016-11-29 2020-02-18 Intel Corporation Distributed assignment of video analytics tasks in cloud computing environments to reduce bandwidth utilization


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"A Hybrid Task Scheduler for DAG Applications on a Cluster of Processors";Uma Boregowda;Venugopal R. Chakravarthy;《2014 Fourth International Conference on Advances in Computing and Communications》;20141230;全文 *
"HETS: Heterogeneous Edge and Task Scheduling Algorithm for Heterogeneous Computing Systems";Anum Masood等;《2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security, and 2015 IEEE 12th International Conference on Embedded Software and Systems》;20151230;全文 *
"一种改进的异构多处理器实时任务调度算法研究";尹杨美;《中国硕士学位论文全文数据库信息科技辑》;20100426;全文 *

Also Published As

Publication number Publication date
CN109561148A (en) 2019-04-02

Similar Documents

Publication Publication Date Title
CN109561148B (en) Distributed task scheduling method based on directed acyclic graph in edge computing network
CN107911478B (en) Multi-user calculation unloading method and device based on chemical reaction optimization algorithm
CN109669768B (en) Resource allocation and task scheduling method for edge cloud combined architecture
CN109885397B (en) Delay optimization load task migration algorithm in edge computing environment
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN111427679B (en) Computing task scheduling method, system and device for edge computing
CN110941667A (en) Method and system for calculating and unloading in mobile edge calculation network
CN112988345B (en) Dependency task unloading method and device based on mobile edge calculation
CN111381950A (en) Task scheduling method and system based on multiple copies for edge computing environment
CN111813506B (en) Resource perception calculation migration method, device and medium based on particle swarm optimization
CN113220356B (en) User computing task unloading method in mobile edge computing
CN110570075B (en) Power business edge calculation task allocation method and device
CN110069341B (en) Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
CN112799823B (en) Online dispatching and scheduling method and system for edge computing tasks
CN107341041B (en) Cloud task multidimensional constraint backfill scheduling method based on priority queue
CN109656713B (en) Container scheduling method based on edge computing framework
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN113821318A (en) Internet of things cross-domain subtask combined collaborative computing method and system
CN111711962A (en) Cooperative scheduling method for subtasks of mobile edge computing system
CN114564312A (en) Cloud edge-side cooperative computing method based on adaptive deep neural network
CN112860337A (en) Method and system for unloading dependent tasks in multi-access edge computing
CN110048966B (en) Coflow scheduling method for minimizing system overhead based on deadline
CN111199316A (en) Cloud and mist collaborative computing power grid scheduling method based on execution time evaluation
CN113741999B (en) Dependency-oriented task unloading method and device based on mobile edge calculation
CN113139639B (en) MOMBI-oriented smart city application multi-target computing migration method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant