CN105740059A - Particle swarm scheduling method for divisible task - Google Patents

Particle swarm scheduling method for divisible task

Info

Publication number
CN105740059A
Authority
CN
China
Prior art keywords
task
particle
subtask
fitness
population
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410768968.9A
Other languages
Chinese (zh)
Other versions
CN105740059B (en)
Inventor
尤佳莉
乔楠楠
刘学
齐卫宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Acoustics CAS
Beijing Hili Technology Co Ltd
Shanghai 3Ntv Network Technology Co Ltd
Original Assignee
Institute of Acoustics CAS
Beijing Hili Technology Co Ltd
Shanghai 3Ntv Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Acoustics CAS, Beijing Hili Technology Co Ltd, Shanghai 3Ntv Network Technology Co Ltd filed Critical Institute of Acoustics CAS
Priority to CN201410768968.9A priority Critical patent/CN105740059B/en
Publication of CN105740059A publication Critical patent/CN105740059A/en
Application granted granted Critical
Publication of CN105740059B publication Critical patent/CN105740059B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Multi Processors (AREA)

Abstract

The invention relates to a particle swarm scheduling method for a divisible task. The method comprises: after dividing the task to be scheduled into subtasks, treating each randomly generated task allocation scheme as a particle and the time performance of each allocation scheme as the fitness of the corresponding particle; computing the speed at which particles move toward one another from the differences between particle fitness values; evolving the particle swarm many times and selecting the particle with the best fitness from the results of the evolutions; and finally, taking the overhead value into account, scheduling the subtasks of the task allocation scheme corresponding to the best-fitness particle.

Description

A particle swarm scheduling method for divisible tasks
Technical field
The present invention relates to computer network technology, and in particular to a particle swarm scheduling method for divisible tasks.
Background technology
There are many common task scheduling methods. Some of them treat each task as an indivisible whole when scheduling; such methods include the following:
The Min-Min algorithm first predicts the minimum completion time of each task in the current task queue on each processor, then assigns the task with the smallest minimum completion time to the corresponding processor, updates the ready time of that processor, and removes the assigned task from the queue. The remaining tasks are assigned in the same way until the queue is empty. The Min-Min algorithm is prone to load imbalance.
The Max-Min algorithm differs from Min-Min in that, after the earliest finish time of each task on each processor has been determined, the task with the maximum earliest finish time is assigned to the corresponding processor and the ready time of that processor is updated; the remaining tasks are processed in the same way. Max-Min improves on Min-Min in terms of load balancing.
The Promethee algorithm prioritizes the pending tasks at the task end according to a user-defined criterion (for example task size, predicted execution time on the current processor, or cost; several indices can also be weighted and combined into a composite performance index). At the processor end, machine states are monitored in real time; as soon as a machine becomes idle, the task with the highest priority in the precomputed ordering is assigned to it. Simulations show that, by suitably adjusting the weights among the performance indices, the algorithm can perform well in several respects at once.
Other methods propose splitting a task into multiple subtasks and scheduling them one by one, but the tasks they analyze are limited to specific cases and do not cover the situation where a large batch of tasks arrives simultaneously and multiple partitioning schemes coexist. Such methods include the following:
A genetic algorithm for scheduling timing-dependent parallel subtasks first analyzes the timing requirements among the subtasks and sorts all subtasks by the time depth at which they are executed. It then randomly generates several "subtask-node" allocation matrices, each of which is an allocation scheme. The idea of the algorithm is to randomly generate several allocation schemes to form an initial population and to mutate and select the individuals in the population so that they improve from generation to generation, thereby obtaining new schemes with shorter completion times. After many generations the genetic algorithm yields a stable, good solution, but its complexity is high and it causes a large scheduling delay when the total number of tasks in the network is large.
The EDTS algorithm optimally schedules the N steps within a single task. It first predicts the time and energy each subtask would spend on every machine, then sets a total deadline for the whole sequence of subtasks and, under this fixed deadline and the existing timing relations, finds the most energy-efficient subtask allocation. However, EDTS splits and schedules only one task and achieves only the best performance of that task by itself; when a large number of tasks appear in the network, the mutual waiting time caused by the timing constraints among subtasks becomes long, and the local optimum of each task conflicts with the global optimum.
Summary of the invention
The object of the present invention is to overcome defects of prior-art subtask scheduling methods, such as high complexity, large scheduling delay and high energy consumption, and thus to provide a subtask scheduling method that weighs time and energy consumption together.
To achieve this object, the invention provides a particle swarm scheduling method for divisible tasks, comprising:
after dividing the task to be scheduled into subtasks, treating each randomly generated task allocation scheme as a particle and the time performance of each allocation scheme as the fitness of the corresponding particle; computing the speed at which particles move toward one another from the differences between particle fitness values; evolving the particle swarm many times and selecting the particle with the best fitness from the results of the evolutions; and finally, taking the overhead value into account, scheduling the subtasks of the task allocation scheme corresponding to the best-fitness particle.
In the above technical solution, the method specifically comprises:
Step 1), collecting the number of instructions of the task to be scheduled, the numbers of instructions of the subtasks obtained after splitting, the timing dependencies among the subtasks, and the file block size of each task;
Step 2), collecting the running speed MIPS, the electricity cost per unit time CPS, the current load and the earliest idle time EST of each server in the cluster, and the bandwidth information between servers;
Step 3), sorting the tasks to be scheduled in the queue in descending order of instruction count, and performing steps 4) to 10) in turn for the task to be scheduled at the head of the queue, until all tasks to be scheduled in the queue have been processed;
Step 4), decomposing the current task to be scheduled to obtain a subtask structure graph, and randomly generating N allocation schemes for the task to form an initial population;
wherein the subtask structure graph reflects the timing relations among the subtasks into which the task to be scheduled is decomposed;
the initial population comprises N randomly generated particles P_1, P_2, ..., P_N, each particle representing a "task-server" allocation scheme; if the total number of servers is M, each particle P_n is expressed as an L×M matrix S_n, where 1 ≤ n ≤ N, N is set according to the number of servers in the cluster and the number of subtasks contained in the task to be scheduled, and L is the number of subtasks obtained after the task is decomposed;
Step 5), calculating the time performance Makespan of each of the N particles and deriving the fitness of each particle;
Step 6), calculating the moving speed of each particle in the new round of iteration, evolving each particle once, and obtaining the next generation of the population;
Step 7), judging whether the current number of evolutions is less than a preset value; if so, returning to step 5); otherwise, going to step 8);
Step 8), selecting, from all previous evolution results, the particle P_best with the highest fitness;
Step 9), setting the weight P corresponding to the overhead performance and performing task migration based on the cost Cost;
Step 10), outputting the task migration result of step 9) and scheduling the subtasks according to this result.
In the above technical solution, in step 5), the time performance Makespan is calculated according to the following principles:
a. a subtask can start only after the finish times of all of its predecessor subtasks;
b. a subtask must receive the output files of all of its predecessor subtasks as its input files;
c. a subtask can start only when the processor it is assigned to is idle;
d. a subtask starts as early as possible once conditions a-c are satisfied;
e. the execution time of subtask Task_j is Task_MI_j / MIPS_k;
f. the Makespan of a task is the time span between the start time of its first subtask and the finish time of its last subtask.
In the above technical solution, in step 5), the fitness Fitness_n of the n-th particle is calculated according to formula (2):
Fitness_n = max{Makespan_1, Makespan_2, ..., Makespan_N} − Makespan_n    formula (2).
In the above technical solution, step 6) comprises:
labeling the particle with the maximum fitness value as P_best and its corresponding allocation matrix as S_best;
defining the moving speed of particle P_n toward P_best as V_{n,best}, computed according to formula (3):
V_{n,best} = (Fitness_best − Fitness_n) / Fitness_best    formula (3);
in one evolution of the population, each particle P_n moves one step toward P_best at speed V_{n,best}, and the resulting N new particles form the next generation of the population.
In the above technical solution, step 9) comprises:
after the particle P_best with the best fitness in the current population has been selected, calculating, for the current subtask, its time performance and overhead performance when executed on any other server, together with the gains ΔC and ΔFT of that performance relative to P_best, where C is the cost incurred when executing on a given server, FT is the finish time on that server, and C_base and FT_base are the values of C and FT computed before task migration;
for all servers, finding the server that maximizes the value computed by formula (4) and assigning the subtask to that server:
P × ΔC/C_base + (1 − P) × ΔFT/FT_base    formula (4).
The advantages of the present invention are:
1. The method of the present invention takes many performance factors into account. When evaluating a scheduling scheme it considers not only the total time performance but also indices such as the execution overhead of the tasks and the load balance within the cluster, which makes task allocation more reasonable and efficient;
2. The method of the present invention considers both the scenario in which batch task requests arrive simultaneously and the timing dependencies among subtasks; splitting tasks into subtasks also overcomes the limitation imposed by software-license resource limits on the virtual machines in a cloud environment.
Brief description of the drawings
Fig. 1 is a schematic diagram of the network environment to which the method of the invention is applied, in one embodiment;
Fig. 2 is a schematic diagram of a serial timing relation between subtasks;
Fig. 3 is a schematic diagram of a parallel timing relation between subtasks;
Fig. 4 is a schematic diagram of a mixed timing relation between subtasks;
Fig. 5 is a flow chart of the method of the invention.
Detailed description of the invention
The invention is further described below with reference to the accompanying drawings.
The method of the present invention addresses batches of divisible task requests initiated by clients. It considers the granularity of the subtasks formed after a task is split and the timing relations among the subtasks, and combines them with factors such as the computing capability and power of the different processors in the server cluster and the network bandwidth, so that the batch of tasks is scheduled with good performance both in total time and in monetary cost.
Before the method of the present invention is described in detail, the concepts involved in the method are first introduced.
The method of the present invention requires the following three preconditions:
1. A scheduler in a region is responsible for scheduling all terminal tasks in that region; a task initiated by a terminal must queue for the scheduler's decision;
2. The batch tasks handled by the present invention are divisible tasks (i.e., each can be split into multiple subtasks), and the subtasks have serial, parallel or mixed timing relations. If a subtask has predecessor subtasks, it can be executed only after all of its predecessors have completed;
3. As stated in condition 2, satisfying the timing relations among subtasks is a necessary condition for running a subtask. When a subtask and one of its predecessor subtasks are assigned to different servers, the output file of the predecessor subtask must be sent over the Internet or a LAN to the server of the successor subtask as its input file; if a subtask and its predecessor subtask are assigned to the same server, no file transfer is needed.
Figs. 2, 3 and 4 respectively show the serial, parallel and mixed timing relations among subtasks described in condition 2.
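Purely as an illustration of conditions 2 and 3, the sketch below represents the predecessor-successor relations among subtasks and the file-transfer rule in Python; the subtask indices, file sizes and bandwidth model are assumptions made for the example, not values given in the patent.

    # Hypothetical sketch: subtask precedence as a DAG, plus the rule of
    # precondition 3 (an output file is transferred only when predecessor and
    # successor run on different servers). All concrete values are illustrative.

    # predecessors[j] lists the subtasks whose output files subtask j needs
    predecessors = {
        0: [],        # entry subtask, no predecessors
        1: [0],       # serial:   0 -> 1
        2: [0],       # parallel: 0 -> 2
        3: [1, 2],    # mixed:    1 and 2 join into 3
    }

    output_file_mb = {0: 40.0, 1: 10.0, 2: 25.0, 3: 0.0}   # assumed file sizes (MB)

    def transfer_time(pred, succ, server_of, bandwidth_mbps):
        """Seconds spent moving pred's output file to succ's server.
        Zero when both subtasks are placed on the same server, as stated in
        precondition 3."""
        if server_of[pred] == server_of[succ]:
            return 0.0
        return output_file_mb[pred] * 8.0 / bandwidth_mbps   # MB -> Mbit / (Mbit/s)

    # Example placement: subtasks 0 and 1 on server 0, subtasks 2 and 3 on server 1
    server_of = {0: 0, 1: 0, 2: 1, 3: 1}
    print(transfer_time(0, 1, server_of, bandwidth_mbps=100.0))  # 0.0, same server
    print(transfer_time(2, 3, server_of, bandwidth_mbps=100.0))  # 2.0 s across servers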
The method of the present invention pursues the following two performance indices:
1. the overall completion time of the batch of tasks currently to be processed;
2. the overhead required to execute the batch of tasks, including traffic charges and the processor electricity cost.
The method of the present invention mainly considers the following three factors in performance evaluation:
1. the influence of different CPU execution speeds on task completion time;
2. the influence of the network bandwidth, determined by the network topology, on the transmission delay of I/O files;
3. the influence of CPUs of different power on energy consumption and the corresponding electricity cost.
The symbols used in the method of the invention are defined as follows (a purely illustrative grouping of these symbols in code is given after the list):
Job_i: the i-th task;
Task_j: the j-th subtask;
Server_k: the k-th server;
Job_MI_i: the number of instructions of Job_i (millions);
Makespan_i: the actual delay experienced by Job_i from the start of execution to its end;
Task_MI_j: the number of instructions of Task_j (millions);
MIPS_k: the number of instructions Server_k executes per second (millions/second);
CPS_k: the cost per second of Server_k in operating mode;
ECT_k: the earliest idle time of Server_k.
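The grouping below is only a reading aid; the class and field names are assumptions introduced for the example and do not appear in the patent.

    # Illustrative grouping of the symbols defined above; names are assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SubTask:
        task_mi: float                # Task_MI_j, millions of instructions
        predecessors: List[int] = field(default_factory=list)  # timing dependencies
        output_file_mb: float = 0.0   # size of the output file passed to successors

    @dataclass
    class Job:
        job_mi: float                 # Job_MI_i, millions of instructions
        subtasks: List[SubTask] = field(default_factory=list)

    @dataclass
    class Server:
        mips: float                   # MIPS_k, millions of instructions per second
        cps: float                    # CPS_k, cost per second in operating mode
        ect: float = 0.0              # ECT_k, earliest idle time of the server

    # e.g. a 2000-million-instruction subtask on a 500 MIPS server takes 4 seconds:
    print(SubTask(task_mi=2000.0).task_mi / Server(mips=500.0, cps=0.01).mips)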
In the embodiment shown in Fig. 1, the network contains 3 server clusters; the cluster topology and the bandwidth conditions are as shown in Fig. 1.
With reference to Fig. 5, the method of the present invention comprises the following steps:
Step 1), collecting the number of instructions of the task to be scheduled, the numbers of instructions of the subtasks obtained after splitting, the timing dependencies among the subtasks, and the file block size of each task;
Step 2), collecting the running speed MIPS, the electricity cost per unit time CPS, the current load and the earliest idle time EST of each server in the cluster, and the bandwidth information between servers;
Step 3), sorting the tasks to be scheduled in the queue in descending order of instruction count, and performing steps 4) to 10) in turn for the task to be scheduled at the head of the queue, until all tasks to be scheduled in the queue have been processed;
Step 4), decomposing the current task to be scheduled to obtain a subtask structure graph, and randomly generating N allocation schemes for the task to form an initial population;
The subtask structure graph reflects the timing relations among the subtasks into which the task to be scheduled is decomposed, i.e., the predecessor-successor relations; the subtask structure graph ensures that, in the subsequent execution and scheduling process, any subtask starts only after all of its predecessor subtasks have completed.
In the initial population, N particles P_1, P_2, ..., P_N are randomly generated, each particle representing a "task-server" allocation scheme; if the total number of servers is M, each particle P_n (1 ≤ n ≤ N) can be represented as an L×M matrix S_n, where N is set according to the number of servers in the cluster and the number of subtasks contained in the task to be scheduled, and L is the number of subtasks obtained after the task is decomposed. A minimal sketch of generating such an initial population is given below.
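In the sketch, each particle is stored for brevity as a list with one server index per subtask; this is equivalent to an L×M 0/1 matrix S_n whose row j has a single 1 in the column of the chosen server, an encoding chosen as an interpretation for the example.

    # Sketch: N random particles; particle[j] is the server chosen for subtask j
    # (equivalent to row j of the L x M matrix S_n holding a single 1).
    import random

    def random_particle(L, M):
        return [random.randrange(M) for _ in range(L)]

    def initial_population(N, L, M):
        return [random_particle(L, M) for _ in range(N)]

    population = initial_population(N=20, L=4, M=3)  # 20 particles, 4 subtasks, 3 servers
    print(population[0])                             # e.g. [2, 0, 1, 1]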
Step 5), calculating the Makespan of each of the N particles and deriving the fitness of each particle;
In this step, the time performance Makespan is calculated for each particle (i.e., for the task to be scheduled under the corresponding allocation scheme) according to the following principles (a minimal sketch of the computation is given after the list):
a. a subtask can start only after the finish times of all of its predecessor subtasks;
b. a subtask must receive the output files of all of its predecessor subtasks as its input files;
c. a subtask can start only when the processor it is assigned to is idle;
d. a subtask starts as early as possible once conditions a-c are satisfied;
e. the execution time of subtask Task_j is Task_MI_j / MIPS_k;
f. the Makespan of a task is the time span between the start time of its first subtask and the finish time of its last subtask.
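The sketch below is one illustrative reading of principles a-f, under the same assumptions as the earlier sketches (one server per subtask, output files transferred only across servers, a simple bandwidth model); it is not the patent's exact computation.

    # Illustrative Makespan computation following principles a-f. Subtasks are
    # assumed to be indexed in an order compatible with their precedence relations.
    def makespan(assignment, task_mi, mips, predecessors,
                 output_file_mb, bandwidth_mbps, server_ect):
        """assignment[j] = index of the server that runs subtask j."""
        server_free = dict(server_ect)      # earliest idle time per server (ECT_k)
        finish, first_start = {}, None
        for j, k in enumerate(assignment):
            # a, b: wait for every predecessor to finish and its output file to arrive
            ready = 0.0
            for p in predecessors[j]:
                arrive = finish[p]
                if assignment[p] != k:      # cross-server: add file transfer delay
                    arrive += output_file_mb[p] * 8.0 / bandwidth_mbps
                ready = max(ready, arrive)
            # c, d: start as early as possible once the chosen server is idle
            start = max(ready, server_free.get(k, 0.0))
            if first_start is None:
                first_start = start
            finish[j] = start + task_mi[j] / mips[k]   # e: Task_MI_j / MIPS_k
            server_free[k] = finish[j]
        # f: span from the start of the first subtask to the end of the last one
        return max(finish.values()) - first_start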
After the Makespan of each of the N particles has been obtained, the fitness Fitness_n of the n-th particle is calculated according to formula (2):
Fitness_n = max{Makespan_1, Makespan_2, ..., Makespan_N} − Makespan_n    formula (2)
The fitness of each of the N particles is obtained from this formula.
Step 6), calculating the moving speed of each particle in the new round of iteration, evolving each particle once, and obtaining the next generation of the population;
After the fitness values of the N particles have been calculated in the previous step, the particle with the maximum fitness value is labeled P_best and its corresponding allocation matrix is labeled S_best. The moving speed of particle P_n toward P_best is defined as V_{n,best}, computed according to formula (3):
V_{n,best} = (Fitness_best − Fitness_n) / Fitness_best    formula (3)
V_{n,best} can be regarded as the probability that an element of matrix S_n changes to the corresponding element of S_best. In one evolution of the population, each particle P_n moves one step toward P_best at speed V_{n,best}, and the resulting N new particles form the next generation of the population.
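A minimal sketch of formula (2), formula (3) and one evolution step, interpreting V_{n,best} as the per-element probability of copying the corresponding assignment from the best particle, as described above; the Makespan values are assumed to come from a routine such as the one sketched earlier.

    import random

    def fitnesses(makespans):
        # formula (2): Fitness_n = max_m Makespan_m - Makespan_n
        worst = max(makespans)
        return [worst - m for m in makespans]

    def evolve_once(particles, makespans):
        """particles[n] is the per-subtask server assignment of P_n; returns the next generation."""
        fits = fitnesses(makespans)
        best = max(range(len(particles)), key=lambda n: fits[n])
        s_best, f_best = particles[best], fits[best]
        next_gen = []
        for n, particle in enumerate(particles):
            # formula (3): V_{n,best} = (Fitness_best - Fitness_n) / Fitness_best
            v = 0.0 if f_best == 0 else (f_best - fits[n]) / f_best
            # move one step toward P_best: each element switches to S_best's
            # choice with probability V_{n,best}
            child = [s_best[j] if random.random() < v else g
                     for j, g in enumerate(particle)]
            next_gen.append(child)
        return next_gen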
Step 7), judging whether the current number of evolutions is less than a preset value; if so, returning to step 5); otherwise, going to step 8). The preset value is determined by the number of subtasks and the number of servers: the more subtasks and servers there are, the larger the preset value is set.
Step 8), selecting, from all previous evolution results, the particle P_best with the highest fitness.
Step 9), setting the weight P corresponding to the overhead performance (an empirical value obtained from repeated experiments) and performing task migration based on the cost Cost;
After the particle P_best with the best fitness in the current population has been selected, for the current subtask, its time performance and overhead performance when executed on any other server are calculated, together with the gains ΔC and ΔFT of that performance relative to P_best, where C is the cost incurred when executing on a given server (including traffic charges and the processor electricity cost), FT is the finish time on that server, and C_base and FT_base are the values of C and FT computed before task migration.
For all servers, the server that maximizes the value computed by formula (4) is found, and the subtask is assigned to that server:
P × ΔC/C_base + (1 − P) × ΔFT/FT_base    formula (4)
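A sketch of the cost-based migration of step 9), under the assumption that ΔC and ΔFT are the savings in cost and finish time obtained by moving the current subtask from its P_best server to a candidate server (one plausible reading of the gains described above); cost_on and finish_time_on are placeholder helpers, not functions defined in the patent.

    def migrate_subtask(servers, current_server, cost_on, finish_time_on, P):
        """Pick the server maximising formula (4):
        P * dC / C_base + (1 - P) * dFT / FT_base.
        cost_on(k) and finish_time_on(k) return the assumed cost C and finish
        time FT of running the current subtask on server k."""
        c_base = cost_on(current_server)        # C and FT before migration
        ft_base = finish_time_on(current_server)
        best_server, best_score = current_server, 0.0   # staying put scores 0
        for k in servers:
            d_c = c_base - cost_on(k)           # cost saved by moving to server k
            d_ft = ft_base - finish_time_on(k)  # finish time saved by moving
            score = P * d_c / c_base + (1 - P) * d_ft / ft_base   # formula (4)
            if score > best_score:
                best_server, best_score = k, score
        return best_server

    # Hypothetical usage with toy per-server cost and finish-time tables:
    costs = {0: 5.0, 1: 3.0, 2: 6.0}
    times = {0: 10.0, 1: 12.0, 2: 7.0}
    print(migrate_subtask([0, 1, 2], 0, costs.get, times.get, P=0.4))   # -> 2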
Step 10), outputting the task migration result of step 9) and scheduling the subtasks according to this result.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution of the invention without departing from its spirit and scope, and that all such modifications shall fall within the scope of the claims of the present invention.

Claims (6)

1. A particle swarm scheduling method for divisible tasks, comprising:
after dividing the task to be scheduled into subtasks, treating each randomly generated task allocation scheme as a particle and the time performance of each allocation scheme as the fitness of the corresponding particle; computing the speed at which particles move toward one another from the differences between particle fitness values; evolving the particle swarm many times and selecting the particle with the best fitness from the results of the evolutions; and finally, taking the overhead value into account, scheduling the subtasks of the task allocation scheme corresponding to the best-fitness particle.
2. The particle swarm scheduling method for divisible tasks according to claim 1, characterized in that the method specifically comprises:
Step 1), collecting the number of instructions of the task to be scheduled, the numbers of instructions of the subtasks obtained after splitting, the timing dependencies among the subtasks, and the file block size of each task;
Step 2), collecting the running speed MIPS, the electricity cost per unit time CPS, the current load and the earliest idle time EST of each server in the cluster, and the bandwidth information between servers;
Step 3), sorting the tasks to be scheduled in the queue in descending order of instruction count, and performing steps 4) to 10) in turn for the task to be scheduled at the head of the queue, until all tasks to be scheduled in the queue have been processed;
Step 4), decomposing the current task to be scheduled to obtain a subtask structure graph, and randomly generating N allocation schemes for the task to form an initial population;
wherein the subtask structure graph reflects the timing relations among the subtasks into which the task to be scheduled is decomposed;
the initial population comprises N randomly generated particles P_1, P_2, ..., P_N, each particle representing a "task-server" allocation scheme; if the total number of servers is M, each particle P_n is expressed as an L×M matrix S_n, where 1 ≤ n ≤ N, N is set according to the number of servers in the cluster and the number of subtasks contained in the task to be scheduled, and L is the number of subtasks obtained after the task is decomposed;
Step 5), calculating the time performance Makespan of each of the N particles and deriving the fitness of each particle;
Step 6), calculating the moving speed of each particle in the new round of iteration, evolving each particle once, and obtaining the next generation of the population;
Step 7), judging whether the current number of evolutions is less than a preset value; if so, returning to step 5); otherwise, going to step 8);
Step 8), selecting, from all previous evolution results, the particle P_best with the highest fitness;
Step 9), setting the weight P corresponding to the overhead performance and performing task migration based on the cost Cost;
Step 10), outputting the task migration result of step 9) and scheduling the subtasks according to this result.
3. The particle swarm scheduling method for divisible tasks according to claim 1, characterized in that, in step 5), the time performance Makespan is calculated according to the following principles:
a. a subtask can start only after the finish times of all of its predecessor subtasks;
b. a subtask must receive the output files of all of its predecessor subtasks as its input files;
c. a subtask can start only when the processor it is assigned to is idle;
d. a subtask starts as early as possible once conditions a-c are satisfied;
e. the execution time of subtask Task_j is Task_MI_j / MIPS_k;
f. the Makespan of a task is the time span between the start time of its first subtask and the finish time of its last subtask.
4. The particle swarm scheduling method for divisible tasks according to claim 1, characterized in that, in step 5), the fitness Fitness_n of the n-th particle is calculated according to formula (2):
Fitness_n = max{Makespan_1, Makespan_2, ..., Makespan_N} − Makespan_n    formula (2).
5. The particle swarm scheduling method for divisible tasks according to claim 1, characterized in that step 6) comprises:
labeling the particle with the maximum fitness value as P_best and its corresponding allocation matrix as S_best;
defining the moving speed of particle P_n toward P_best as V_{n,best}, computed according to formula (3):
V_{n,best} = (Fitness_best − Fitness_n) / Fitness_best    formula (3);
in one evolution of the population, each particle P_n moves one step toward P_best at speed V_{n,best}, and the resulting N new particles form the next generation of the population.
6. The particle swarm scheduling method for divisible tasks according to claim 1, characterized in that step 9) comprises:
after the particle P_best with the best fitness in the current population has been selected, calculating, for the current subtask, its time performance and overhead performance when executed on any other server, together with the gains ΔC and ΔFT of that performance relative to P_best, where C is the cost incurred when executing on a given server, FT is the finish time on that server, and C_base and FT_base are the values of C and FT computed before task migration;
for all servers, finding the server that maximizes the value computed by formula (4) and assigning the subtask to that server:
P × ΔC/C_base + (1 − P) × ΔFT/FT_base    formula (4).
CN201410768968.9A 2014-12-11 2014-12-11 Particle swarm scheduling method for divisible tasks Expired - Fee Related CN105740059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410768968.9A CN105740059B (en) 2014-12-11 2014-12-11 Particle swarm scheduling method for divisible tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410768968.9A CN105740059B (en) 2014-12-11 2014-12-11 Particle swarm scheduling method for divisible tasks

Publications (2)

Publication Number Publication Date
CN105740059A true CN105740059A (en) 2016-07-06
CN105740059B CN105740059B (en) 2018-12-04

Family

ID=56240898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410768968.9A Expired - Fee Related CN105740059B (en) 2014-12-11 2014-12-11 Particle swarm scheduling method for divisible tasks

Country Status (1)

Country Link
CN (1) CN105740059B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371908A (en) * 2016-08-31 2017-02-01 武汉鸿瑞达信息技术有限公司 Optimization method for image/video filter task distribution based on PSO (Particle Swarm Optimization)
CN107613025A (en) * 2017-10-31 2018-01-19 武汉光迅科技股份有限公司 A kind of implementation method replied based on message queue order and device
CN107729130A (en) * 2017-09-20 2018-02-23 昆明理工大学 A kind of time point based on information physical system does not know task-dynamic dispatching method
CN110928651A (en) * 2019-10-12 2020-03-27 杭州电子科技大学 Service workflow fault-tolerant scheduling method under mobile edge environment
CN111414198A (en) * 2020-03-18 2020-07-14 北京字节跳动网络技术有限公司 Request processing method and device
TWI749992B (en) * 2021-01-06 2021-12-11 力晶積成電子製造股份有限公司 Wafer manufacturing management method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604258A (en) * 2009-07-10 2009-12-16 杭州电子科技大学 A kind of method for scheduling task of embedded heterogeneous multiprocessor system
CN102724220A (en) * 2011-03-29 2012-10-10 无锡物联网产业研究院 Method and apparatus for task cooperation, and system for internet of things
CN103019822A (en) * 2012-12-07 2013-04-03 北京邮电大学 Large-scale processing task scheduling method for income driving under cloud environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604258A (en) * 2009-07-10 2009-12-16 杭州电子科技大学 A kind of method for scheduling task of embedded heterogeneous multiprocessor system
CN102724220A (en) * 2011-03-29 2012-10-10 无锡物联网产业研究院 Method and apparatus for task cooperation, and system for internet of things
CN103019822A (en) * 2012-12-07 2013-04-03 北京邮电大学 Large-scale processing task scheduling method for income driving under cloud environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XINGQUAN ZUO等: "Self-Adaptive Learning PSO-Based Deadline Constrained Task Scheduling for Hybrid IaaS Cloud", 《IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING》 *
张陶等: "基于改进粒子群算法的云计算任务调度算法", 《计算机工程与应用》 *
邓林义等: "粒子群算法求解任务可拆分项目调度问题", 《控制与决策》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371908A (en) * 2016-08-31 2017-02-01 武汉鸿瑞达信息技术有限公司 Optimization method for image/video filter task distribution based on PSO (Particle Swarm Optimization)
CN107729130A (en) * 2017-09-20 2018-02-23 昆明理工大学 A kind of time point based on information physical system does not know task-dynamic dispatching method
CN107613025A (en) * 2017-10-31 2018-01-19 武汉光迅科技股份有限公司 A kind of implementation method replied based on message queue order and device
CN110928651A (en) * 2019-10-12 2020-03-27 杭州电子科技大学 Service workflow fault-tolerant scheduling method under mobile edge environment
CN110928651B (en) * 2019-10-12 2022-03-01 杭州电子科技大学 Service workflow fault-tolerant scheduling method under mobile edge environment
CN111414198A (en) * 2020-03-18 2020-07-14 北京字节跳动网络技术有限公司 Request processing method and device
TWI749992B (en) * 2021-01-06 2021-12-11 力晶積成電子製造股份有限公司 Wafer manufacturing management method and system

Also Published As

Publication number Publication date
CN105740059B (en) 2018-12-04

Similar Documents

Publication Publication Date Title
CN108829494B (en) Container cloud platform intelligent resource optimization method based on load prediction
CN107888669B (en) Deep learning neural network-based large-scale resource scheduling system and method
CN105740059A (en) Particle swarm scheduling method for divisible task
US10031774B2 (en) Scheduling multi-phase computing jobs
CN105718479B (en) Execution strategy generation method and device under cross-IDC big data processing architecture
CN104765640B (en) A kind of intelligent Service dispatching method
CN110389816B (en) Method, apparatus and computer readable medium for resource scheduling
CN105260235A (en) Method and device for scheduling resources on basis of application scenarios in cloud platform
CN103593323A (en) Machine learning method for Map Reduce task resource allocation parameters
CN111274036A (en) Deep learning task scheduling method based on speed prediction
CN107370799B (en) A kind of online computation migration method of multi-user mixing high energy efficiency in mobile cloud environment
CN104915253A (en) Work scheduling method and work processor
Cao et al. A parallel computing framework for large-scale air traffic flow optimization
CN116450312A (en) Scheduling strategy determination method and system for pipeline parallel training
CN108132840B (en) Resource scheduling method and device in distributed system
EP4300305A1 (en) Methods and systems for energy-efficient scheduling of periodic tasks on a group of processing devices
CN102184124A (en) Task scheduling method and system
CN111930485A (en) Job scheduling method based on performance expression
Hu et al. An optimal resource allocator of elastic training for deep learning jobs on cloud
CN111506407B (en) Resource management and job scheduling method and system combining Pull mode and Push mode
CN113010319A (en) Dynamic workflow scheduling optimization method based on hybrid heuristic rule and genetic algorithm
Li Research on cloud computing resource scheduling based on machine learning
Chen et al. Pickyman: A preemptive scheduler for deep learning jobs on gpu clusters
Han et al. SPIN: BSP job scheduling with placement-sensitive execution
KR101916809B1 (en) Apparatus for placing virtual cluster and method for providing the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181204