CN109766168B - Task scheduling method and device, storage medium and computing equipment - Google Patents


Publication number
CN109766168B
Authority
CN
China
Prior art keywords
task
type
core
tasks
queue
Prior art date
Legal status
Active
Application number
CN201711099264.7A
Other languages
Chinese (zh)
Other versions
CN109766168A (en)
Inventor
朱延海
陈善佩
Current Assignee
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201711099264.7A
Publication of CN109766168A
Application granted
Publication of CN109766168B
Legal status: Active

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a task scheduling method and device, a storage medium, and a computing device, suitable for a physical core that has two logical cores. A task queue is maintained separately for each logical core. When a task needs to be allocated to a first logical core, it is determined whether the task running on the second logical core is an online task that is extremely sensitive to latency. If so, the offline tasks in the first logical core's task queue are removed from that queue. Conversely, when the first logical core is about to execute, or starts executing, an extremely latency-sensitive online task, it is determined whether the second logical core is executing an offline task. If so, the offline task currently executing on the second logical core is interrupted and task scheduling is performed again. In this way, the hyper-threading interference caused by offline tasks can be effectively eliminated without significantly reducing resource utilization.

Description

Task scheduling method and device, storage medium and computing equipment
Technical Field
The disclosure relates to the technical field of task scheduling, and in particular to task scheduling for eliminating hyper-threading interference.
Background
In a traditional data center, online applications (online tasks) are typically deployed on a dedicated set of physical machines. To cope with burst traffic during peak periods (for example, promotions such as 'Double Eleven'), these physical machines generally keep ample hardware resources in reserve.
During low-traffic periods, however, many of those physical-machine resources sit idle.
To reduce this waste, offline tasks can be scheduled to run on these physical machines during traffic valleys. This improves the physical machines' resource utilization and thereby saves cost.
In this way, online applications (online tasks) and offline applications (offline tasks) can be deployed on the same physical machine. Online applications are more latency-sensitive, while offline applications are less so.
In such a mixed-deployment environment, the quality of service (QoS) of the online tasks must be guaranteed first, and machine resource utilization improved second. Offline applications, however, can interfere with online applications and degrade the performance of the online tasks.
Offline-task interference spans multiple dimensions, such as the CPU (central processing unit), the last-level cache (LLC), memory bandwidth, the network, and storage.
From the CPU perspective, the interference of offline applications with online applications can be subdivided into two categories:
(1) On the same CPU, an offline task seizes CPU resources so that an online task cannot be scheduled in time; the resulting scheduling delay hurts the online task's performance.
(2) On the same physical core, an online task and an offline task run simultaneously; because the two hyper-threads (HT) share many hardware resources, hyper-threading interference may degrade the online application's performance.
The present disclosure relates generally to the second category of CPU interference problems discussed above.
Here, for convenience of description, an offline task is denoted "batch", an online task is denoted "LS" (Latency Sensitive), and an online task that is extremely latency-sensitive is denoted "L*".
For L* tasks, interference at the CPU level is what most urgently needs to be reduced in order to guarantee QoS.
For the second type of interference mentioned above, there are currently three main kinds of solution; each, however, has drawbacks. They are briefly described below.
The first solution is to turn off hyper-threading, which avoids hyper-threading interference at the root.
However, hyper-threading is designed precisely to exploit the idle hardware resources within a core and improve the machine's overall performance. Turning it off does avoid hyper-threading interference entirely, but it lowers machine resource utilization and thus defeats an essential purpose of mixed deployment: improving resource utilization.
The second solution isolates LS and batch tasks at the granularity of whole cores through the cpuset mechanism. A batch task and an LS task then never appear on the same core, which eliminates the hyper-threading interference caused by batch tasks.
In most cases not all cores run L* tasks, so batch tasks can be dispatched to the cores that are not running L* tasks.
Compared with the first solution this one is moderate: it eliminates the hyper-threading interference caused by batch tasks while improving machine resource utilization to a certain extent.
However, on a core to which an L* task is assigned, the cpuset restriction forbids executing batch tasks even when the L* task is not currently runnable. CPU resources are therefore still wasted to some degree.
The third solution is to find, through a judicious scheduling strategy, the most compatible LS and batch tasks and run them on sibling HTs. Because the two tasks' demands on hardware resources differ greatly, contention for shared resources is reduced and hyper-threading interference is mitigated to a certain extent.
However, a difficulty with this approach is dynamically analyzing the characteristics of each running thread in order to select the best combination.
First, while the number of threads in the running state is modest, the number of live threads can reach hundreds or even thousands, and a thread's characteristics may change continuously as it runs.
Second, determining appropriate metrics that accurately evaluate an application's characteristics is itself a hard task. The scheme is therefore difficult to implement, and its accuracy directly determines how effectively hyper-threading interference is reduced.
Therefore, a task scheduling scheme is still desired that effectively eliminates the hyper-threading interference caused by offline tasks while, unlike the first and second solutions, avoiding a significant reduction in resource utilization, and that, unlike the third solution, needs no complex dynamic task analysis and evaluation and is simpler to implement.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a task scheduling strategy, and a method of implementing it, that is easy to realize and that effectively eliminates the hyper-threading interference caused by offline tasks without significantly reducing resource utilization.
According to an aspect of the present invention, there is provided a task scheduling method, suited to a physical core having a first logical core and a second logical core, the method including: maintaining a first task queue for the first logical core; in response to a need to allocate a task to the first logical core, judging whether a second current task running on the second logical core is a first-type task; and, if the second current task is a first-type task, deleting the second-type tasks in the first task queue from the first task queue.
Preferably, the method may further include: putting the second-type tasks deleted from the first task queue into a suspension list.
Preferably, the method may further include: putting the second-type tasks in the suspension list back into the first task queue when the second current task is not a first-type task.
Preferably, only the second-type tasks in the suspension list that came from the first task queue are put back into the first task queue.
Preferably, the suspension list is a first suspension list corresponding to the first task queue; alternatively, the origin of each task in a shared suspension list is recorded in association with that task.
Preferably, the method may further include: selecting, from the first task queue, a task to be executed by the first logical core; and recording the type of the selected task in a current-task-type field corresponding to the first logical core.
Preferably, the first-type tasks are online tasks with high latency sensitivity, and the second-type tasks are offline tasks.
According to another aspect of the present invention, there is also provided a task scheduling method, suited to a physical core having a first logical core and a second logical core, the method including: in response to the first logical core being about to execute, or starting to execute, a first-type task, judging whether the second logical core is executing a second-type task; and, if it is judged that the second logical core is executing a second-type task, interrupting the currently executed second-type task and performing task scheduling again for the second logical core.
Preferably, the method may further include: in response to the task that the first logical core is about to execute, or starts to execute, not being a first-type task, judging whether the second logical core is executing the idle task; and, if it is judged that the second logical core is executing the idle task, interrupting the currently executed idle task and performing task scheduling again for the second logical core.
Preferably, the first-type tasks are online tasks with high latency sensitivity, and the second-type tasks are offline tasks.
According to another aspect of the present invention, there is also provided a task scheduling apparatus, suited to a physical core having a first logical core and a second logical core, the apparatus including: a queue maintenance device for maintaining a first task queue for the first logical core; a first judgment device for judging, in response to a need to allocate a task to the first logical core, whether a second current task running on the second logical core is a first-type task; and a task deletion device for deleting the second-type tasks in the first task queue from the first task queue when the second current task is a first-type task.
Preferably, the apparatus may further include: a task suspension device for putting the second-type tasks deleted from the first task queue into a suspension list.
Preferably, the apparatus may further include: a task put-back device for putting the second-type tasks in the suspension list back into the first task queue when the second current task is not a first-type task.
Preferably, the task put-back device puts back into the first task queue only the second-type tasks in the suspension list that came from it.
Preferably, the suspension list is a first suspension list corresponding to the first task queue; alternatively, the origin of each task in a shared suspension list is recorded in association with that task.
Preferably, the apparatus may further include: a task selection device for selecting, from the first task queue, a task to be executed by the first logical core; and a type recording device for recording the type of the selected task in a current-task-type field corresponding to the first logical core.
According to another aspect of the present invention, there is also provided a task scheduling apparatus, suited to a physical core having a first logical core and a second logical core, the apparatus including: a second judgment device for judging, in response to the first logical core being about to execute, or starting to execute, a first-type task, whether the second logical core is executing a second-type task; and a first rescheduling device for interrupting the currently executed second-type task and performing task scheduling again for the second logical core when it is judged that the second logical core is executing a second-type task.
Preferably, the apparatus may further include: a third judgment device for judging, in response to the task that the first logical core is about to execute, or starts to execute, not being a first-type task, whether the second logical core is executing the idle task; and a second rescheduling device for interrupting the currently executed idle task and performing task scheduling again for the second logical core when it is judged that the second logical core is executing the idle task.
According to another aspect of the present invention, there is also provided a computing device comprising: a processor; and a memory having executable code stored thereon, which, when executed by the processor, causes the processor to perform the above-described task scheduling method according to the present invention.
According to another aspect of the present invention, there is also provided a non-transitory machine-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the above task scheduling method according to the present invention.
Thus, during task scheduling, the hyper-threading interference caused by offline tasks can be effectively eliminated without significantly reducing resource utilization.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 schematically shows a physical core having two CPUs.
Fig. 2 schematically illustrates a situation to be avoided by the NCS scheduling policy proposed by the present disclosure.
FIG. 3 shows a schematic flow chart of a task scheduling method according to one embodiment of the present disclosure.
Fig. 4A to 4D schematically show the task combination of two CPUs and their running queues on the same physical core in four cases.
Fig. 5 shows a schematic block diagram of a task scheduling apparatus according to one embodiment of the present disclosure.
FIG. 6 shows a schematic block diagram of a task scheduling apparatus according to one embodiment of the present disclosure.
FIG. 7 shows a schematic block diagram of a computing device, according to one embodiment of the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
[Terminology]
1. Hyper-threading (Hyper-Threading, abbreviated HT)
To make efficient use of CPU resources, a small amount of extra hardware can be added so that a second thread can run on the CPU's otherwise idle resources. This is hyper-threading.
By duplicating only the necessary resources inside the CPU, two threads can run simultaneously and both make progress within one unit of time. One physical core thus presents two logical cores, one per thread, and each logical core appears to the CPU scheduler as an independent CPU. Below, a logical core is also referred to simply as an HT.
2. Sibling CPUs
Two logical cores (HTs) on the same physical core are siblings of each other, i.e., sibling CPUs.
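As a minimal illustrative sketch (not part of the patent), the sibling relationship can be modeled as a pure function. The numbering convention assumed below, logical CPU i paired with i + n_physical_cores, is one common Linux layout and is an assumption, not something the disclosure specifies:

```python
def sibling(cpu: int, n_physical_cores: int) -> int:
    """Return the sibling logical CPU on the same physical core.

    Assumes the (hypothetical) numbering where logical CPUs 0..n-1 are
    the first HT of each physical core and n..2n-1 are the second HT.
    """
    if cpu < n_physical_cores:
        return cpu + n_physical_cores
    return cpu - n_physical_cores
```

Note that the function is its own inverse: applying it twice returns the original logical CPU, as expected of a sibling relation.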
3. Hyper-threading interference
On the same physical core, the two sibling HTs share many hardware resources, such as the L1 and L2 caches, the ALU, the pipeline, and the FPU. When the two HTs compete for a shared resource, one of them must wait; the two HTs therefore interfere with each other.
Macroscopically, two cases arise:
Case 1: only thread A runs on the physical core;
Case 2: the two HTs of the physical core run thread A and thread B, respectively.
For thread A, performance in case 1 is better than in case 2. In case 2, thread B disturbs thread A, which is referred to as hyper-threading interference.
[Scheduling Scheme]
The present disclosure designs a new task scheduling scheme for eliminating the hyper-threading interference caused by batch tasks, and implements a hyper-threading-interference-eliminating scheduler (NCS) based on it.
In the technical scheme of the disclosure, tasks in the mixed-deployment environment are divided into five types:
L*: online tasks that are extremely latency-sensitive;
LS: online tasks other than L*, which are latency-sensitive;
batch: offline tasks, which are not latency-sensitive;
normal: other ordinary tasks in the system;
idle: the idle process, executed when a CPU is idle.
The aforementioned L* tasks may be distinguished from ordinary LS tasks based on predetermined rules or through predetermined mechanisms. For example, online tasks whose latency sensitivity exceeds a predetermined sensitivity threshold, or whose tolerable delay falls below a predetermined time threshold, may be classed as L* tasks, with the remainder classed as LS tasks. Some tasks may also be designated L* manually.
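As an illustrative sketch only, the rule above can be expressed as a small classifier. The threshold values and the function name are hypothetical assumptions, not values from the disclosure:

```python
# Hypothetical thresholds, for illustration only.
SENSITIVITY_THRESHOLD = 0.9     # latency-sensitivity score in [0, 1]
MAX_TOLERABLE_DELAY_MS = 5.0    # tolerable delay in milliseconds

def classify_online_task(sensitivity: float,
                         tolerable_delay_ms: float,
                         forced_l_star: bool = False) -> str:
    """Split an online task into "L*" or "LS" per the rule sketched above."""
    if forced_l_star:
        return "L*"  # manually designated as L*
    if sensitivity > SENSITIVITY_THRESHOLD or tolerable_delay_ms < MAX_TOLERABLE_DELAY_MS:
        return "L*"
    return "LS"
```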
The present disclosure proposes the NCS scheduling policy: an L* task and a batch task may not run simultaneously on two sibling HTs; when the L* task sleeps or exits, however, the batch task may continue to run. Other combinations of types are unrestricted and may run on the same core arbitrarily.
In this way, the hyper-threading interference caused by batch tasks is eliminated without imposing cpuset restrictions, and compared with the existing second solution the system's resource utilization can be improved to a greater extent.
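The NCS rule reduces to a single predicate over the pair of task types running on sibling HTs. The sketch below (function name assumed) forbids exactly the L*/batch combination and permits everything else:

```python
def combination_allowed(type_a: str, type_b: str) -> bool:
    """NCS policy: an L* task and a batch task must never run on
    sibling HTs at the same time; every other pairing is allowed."""
    return {type_a, type_b} != {"L*", "batch"}
```

Using an unordered set makes the check symmetric, mirroring the symmetry of CPU1 and CPU2 noted below.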
Fig. 1 schematically shows a physical core having two CPUs: physical core 100 carries CPU1 110 and CPU2 120.
Fig. 2 schematically illustrates the situation the proposed NCS scheduling policy must avoid: CPU1 110 on physical core 100 is executing a task of type L* while CPU2 120 is executing a task of type batch. The batch task on CPU2 120 would then cause hyper-threading interference with the L* task executing on CPU1 110, so this task combination must be avoided.
In the following description, CPU1 and CPU2 denote two sibling CPUs on the same physical core. It should be understood that CPU1 and CPU2 are symmetric: whatever this disclosure states for CPU1 also applies to CPU2, and vice versa.
Each CPU has its own run queue (rq); the corresponding data structure struct rq stores the tasks in the ready state and records scheduling-related state. For example, it may record the type of the task currently executed by the corresponding CPU, i.e., the current task type. rq1 and rq2 denote the run queues of CPU1 and CPU2, respectively.
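A minimal stand-in for the per-CPU struct rq described above might look as follows. This is a Python sketch for illustration; the field names are assumptions, not the kernel's:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class RunQueue:
    """Per-logical-CPU run queue: ready tasks plus scheduling state."""
    tasks: deque = field(default_factory=deque)        # ready (name, type) pairs
    curr_type: str = "idle"                            # type of the running task
    throttle_list: list = field(default_factory=list)  # parked batch tasks
```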
The task scheduling scheme according to the present disclosure is described in detail below with reference to fig. 3 and 4A to 4D.
FIG. 3 shows a schematic flow chart of a task scheduling method according to one embodiment of the present disclosure.
Fig. 4A to 4D schematically show the task combination of two CPUs and their running queues on the same physical core in four cases.
First, the type of each task, i.e., L*, LS, batch, normal, or idle, can be identified through a predetermined tagging mechanism.
With the Linux kernel, for example, the CPU control-group (cgroup) mechanism may be used to tag task types. It should be understood that other tagging mechanisms may be employed as needed; the invention is not limited to any specific one.
Next, the flow by which one of the two CPUs on the same physical core, for example CPU1, selects the next task to execute from its run queue rq1 and executes it is described in detail. It should be understood that the flow for selecting the next task for CPU2 is the same.
First, when the next task must be selected for CPU1 from its run queue rq1, because CPU1's current task has finished, the time slice allocated to it has ended, or it has been interrupted (hereinafter also "break"), CPU1's task scheduling is triggered at step S310 in order to select the next task for CPU1.
After CPU1's task scheduling is triggered, and before the next task is actually selected from rq1, step S320 judges whether the current task type curr-type2 recorded in CPU2's run queue rq2 is L*. The current task type recorded in rq2 is the type of the task CPU2 is currently executing.
Fig. 4A schematically shows the case where, while a task is being selected for CPU1, CPU2's current task type is L*.
As shown in fig. 4A, when the current task type curr-type2 recorded in rq2 is L*, step S330 deletes (throttles) the batch tasks in rq1 (preferably all of them) from rq1 and puts them into a throttle list (also called a "suspension list") TL1 corresponding to rq1. In other words, the batch tasks in rq1 are taken out and parked in the corresponding throttle list.
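Step S330 can be sketched as a single pass over rq1. This is illustrative Python (names assumed); the real scheduler operates on kernel run-queue structures:

```python
def throttle_batch(rq_tasks: list, throttle_list: list) -> None:
    """Move every batch task out of the run queue into its throttle
    list, so no batch task can be picked while the sibling runs L*."""
    kept = []
    for task in rq_tasks:
        _, task_type = task
        if task_type == "batch":
            throttle_list.append(task)
        else:
            kept.append(task)
    rq_tasks[:] = kept  # in-place update, mimicking queue surgery
```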
It should be appreciated that rq1 and rq2 may correspond to distinct throttle lists TL1 and TL2, respectively, so that tasks originally assigned to CPU1 are kept for CPU1 to execute later at a suitable time, and likewise for CPU2.
Alternatively, rq1 and rq2 may share a single throttle list TL. Each task placed in TL can have its origin, i.e., which run queue it came from, recorded in association with it, so that it can later be returned to its original queue for execution. Or, in some cases, tasks may be placed in the throttle list without distinction and simply reassigned to CPU1 or CPU2 when it later becomes appropriate to run them.
As a result, none of the tasks remaining in rq1 (if any) is a batch task. Even if the L* task currently executed by CPU2 runs for a long time, or CPU2 executes several L* tasks in succession, CPU1 may perform task scheduling many times during that interval without ever selecting a batch task to run.
Then, at step S350, the next task to run is selected from rq1. The selected task is guaranteed not to be a batch task and will not cause hyper-threading interference with the L* task executing on CPU2.
Thus, when the next task is selected for CPU1 while an L* task is running on CPU2, a batch task is never chosen, which avoids the hyper-threading interference that a batch task running on CPU1 would inflict on the L* task running on CPU2.
Fig. 4B schematically shows the case where CPU2's current task type is not L* when a task is selected for CPU1.
As shown in fig. 4B, when the current task type curr-type2 recorded in rq2 is not L*, step S340 puts the batch tasks in rq1's throttle list TL1 (preferably all of them, e.g., batch1, batch2, batch3 in fig. 4B) back into rq1 (unthrottle). For example, these batch tasks may all be placed at the tail of task queue rq1.
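Step S340 is the inverse operation; appending at the tail matches the placement suggested above. Again an illustrative sketch with assumed names:

```python
def unthrottle_batch(rq_tasks: list, throttle_list: list) -> None:
    """Put the parked batch tasks back at the tail of the run queue
    once the sibling is no longer executing an L* task."""
    rq_tasks.extend(throttle_list)
    throttle_list.clear()
```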
In step S350, the next task to be run is selected from rq 1.
It should be understood that when the current task type curr-type2 in rq2 is not L*, step S350 may be executed first, selecting the next task from rq1, and step S340, putting the throttled batch tasks (preferably all of them) back into rq1, executed afterwards; step S340 may even be deferred until after CPU1 starts running the selected task at step S390, as long as rq1 is non-empty before step S350 runs. Executing step S340 first avoids the situation where TL1 is non-empty but rq1 is empty, leaving no task to select at step S350; executing step S350 first allocates a task to CPU1 more quickly.
It should also be appreciated that if, the last time a task was selected for CPU1, the task CPU2 was executing was not L*, then the batch tasks in throttle list TL1 have already been put back into rq1; in that case step S340 need not be performed.
After the next task is selected for CPU1 from rq1 at step S350, the type of the task CPU1 will now execute, i.e., the current task type curr-type1, may be recorded at step S360, for example in the curr-type1 field of rq1. It should be understood that the current-task-type field may also be placed outside rq1; a corresponding field may be provided for each CPU.
Next, step S370 judges whether task scheduling needs to be performed anew for CPU2.
The conditions for rescheduling CPU2 may include, for example, at least one of the following:
(1) the current task type curr-type1 selected for CPU1 is L* while CPU2's current task type curr-type2 is batch, as shown in fig. 4C; and/or
(2) the current task type curr-type1 selected for CPU1 is not L* while CPU2's current task type curr-type2 is idle, as shown in fig. 4D.
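The two conditions above can be captured in one predicate. This is an illustrative sketch; the function name is an assumption:

```python
def sibling_needs_resched(picked_type: str, sibling_curr_type: str) -> bool:
    """Step S370: should the sibling CPU be interrupted and rescheduled?

    (1) We picked an L* task while the sibling runs a batch task.
    (2) We picked a non-L* task while the sibling runs the idle task.
    """
    if picked_type == "L*" and sibling_curr_type == "batch":
        return True
    if picked_type != "L*" and sibling_curr_type == "idle":
        return True
    return False
```

Note the asymmetry of case (2): if the picked task is L*, an idle sibling is left alone, since waking it might only find throttled batch work.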
In situation (1) shown in fig. 4C, if CPU2 were not rescheduled, the batch task executing on CPU2 would cause hyper-threading interference with the L* task about to execute on CPU1. Since the system guarantees efficient execution of L* tasks first, the batch task on CPU2 must be interrupted and rescheduled.
Thus, whenever an L* task is to be executed on CPU1, the executing batch task on CPU2 is interrupted. This prevents the batch task on CPU2 from interfering with the L* task that CPU1 has just selected or is about to execute.
Step S330 and situation (1) of step S370 thus prevent, from different directions, an L* task and a batch task from running on sibling CPUs (i.e., CPU1 and CPU2) or sibling logical cores at the same time.
In situation (2) shown in fig. 4D, neither CPU1 nor CPU2 is executing an L* task, so the problem of a batch task interfering with an L* task does not arise.
However, CPU2 may be in an idle state, executing the idle task. For example, when CPU1 previously started running an L* task, the executing batch task on CPU2 was interrupted; if all the tasks in rq2 were batch tasks, CPU2 had no non-batch task to select and began executing the idle task.
If CPU2 were still running the idle task after the L* task on CPU1 completed, resources would be used inefficiently. The idle task being executed on CPU2 is therefore interrupted and task scheduling performed again, to improve resource utilization.
If step S370 determines that task scheduling must be performed anew for CPU2, then at step S380 CPU2 is notified, for example through an inter-processor interrupt (IPI), to interrupt the currently running task and is triggered to reschedule. The task scheduling flow described above with reference to fig. 3 then starts executing for CPU2, with the roles of CPU1 and CPU2 exchanged.
In step S390, the CPU1 starts executing the task selected therefor.
It should be appreciated that CPU1 could start executing the task as soon as it is selected at step S350.
However, at least when the task selected for CPU1 is an L* task, it is preferable first to judge at step S370 whether the task CPU2 is executing is a batch task, and to interrupt that batch task accordingly, before execution begins at step S390. The batch task on CPU2 then cannot cause hyper-threading interference from the moment the L* task starts executing on CPU1.
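Putting steps S320 through S370 together, one scheduling pass for a pair of sibling run queues can be sketched end to end. This is illustrative Python only: a FIFO pick stands in for the real scheduler's selection, the run queues are modeled as plain dicts, and all names are assumptions:

```python
def pick_next(rq_self: dict, rq_sibling: dict):
    """One pass of the flow of fig. 3 for run queues modeled as dicts
    with keys 'tasks' (list of (name, type)), 'throttled', 'curr_type'."""
    if rq_sibling["curr_type"] == "L*":
        # S330: throttle every batch task out of our queue.
        rq_self["throttled"] += [t for t in rq_self["tasks"] if t[1] == "batch"]
        rq_self["tasks"] = [t for t in rq_self["tasks"] if t[1] != "batch"]
    else:
        # S340: unthrottle parked batch tasks back to the queue tail.
        rq_self["tasks"] += rq_self["throttled"]
        rq_self["throttled"] = []
    # S350/S360: pick the queue head (FIFO stand-in) and record its type.
    if rq_self["tasks"]:
        name, task_type = rq_self["tasks"].pop(0)
    else:
        name, task_type = "swapper", "idle"
    rq_self["curr_type"] = task_type
    # S370: decide whether the sibling must be interrupted (e.g. via IPI).
    resched_sibling = (
        (task_type == "L*" and rq_sibling["curr_type"] == "batch")
        or (task_type != "L*" and rq_sibling["curr_type"] == "idle")
    )
    return name, resched_sibling
```

When `resched_sibling` is true, the real system would send the IPI of step S380 and rerun this same flow with the two queues swapped.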
Next, in step S395, it is determined whether or not the task scheduling of the CPU1 needs to be triggered.
For example, the task scheduling of CPU1 needs to be triggered in several cases:
(1) The current task of the CPU1 is already executed;
(2) The time slice allocated for the current task of the CPU1 has been exhausted;
(3) The task running on CPU1 has been interrupted.
If it is determined that the task scheduling needs to be triggered, the process returns to step S310, and the next round of task scheduling flow for the CPU1 is started.
If task scheduling does not need to be triggered, the flow returns to step S390 and the current task continues executing.
So far, the task scheduling method of the present invention has been described in detail with reference to fig. 3 and fig. 4A to 4D.
The task scheduling scheme of the present invention can also be implemented by a task scheduling apparatus. Fig. 5 shows a schematic block diagram of a task scheduling apparatus according to an embodiment of the present invention. The functional modules of the task scheduling apparatus 500 may be implemented by hardware, by software, or by a combination of hardware and software implementing the principles of the present disclosure. Those skilled in the art will appreciate that the functional modules described in fig. 5 may be combined or divided into sub-modules to implement the principles described above. Accordingly, the description herein supports any possible combination, division, or further definition of the functional modules described herein.
The task scheduling apparatus 500 shown in fig. 5 may be used to implement the task scheduling method shown in fig. 3. Only the functional modules that the task scheduling apparatus 500 may have and the operations that each module may perform are briefly described below; for the details involved, reference may be made to the description above in conjunction with fig. 3, which is not repeated here.
As shown in fig. 5, the task scheduling apparatus 500 of the present disclosure may include a queue maintenance device 510, a first judgment device 520, and a task deletion device 530. The task scheduling apparatus 500 is adapted to a physical core having a first logical core and a second logical core.
Queue maintenance device 510 may be configured to maintain a first task queue for a first logical core.
The first judgment device 520 may be configured to judge, in response to a requirement to allocate a task to the first logical core, whether a second current task running on the second logical core is a first-type task.
The task deletion device 530 may be configured to delete second-type tasks in the first task queue from the first task queue if the second current task is a first-type task. Preferably, the first type of task may be an online task with high delay sensitivity, and the second type of task may be an offline task.
As shown in fig. 5, the task scheduling apparatus 500 may further include a task suspension device 540. The task suspension device 540 may be configured to place second-type tasks removed from the first task queue into a suspension list.
In addition, as shown in fig. 5, the task scheduling apparatus 500 may further include a task put-back device 550. The task put-back device 550 may be configured to put the second-type tasks in the suspension list back into the first task queue if the second current task is not a first-type task. Preferably, the task put-back device 550 puts back into the first task queue only those second-type tasks in the suspension list that came from the first task queue. To this end, the suspension list may be a first suspension list corresponding to the first task queue, or the source of each task may be recorded in association with it in the suspension list.
In addition, the task scheduling apparatus 500 may further include a task selection device 560 and a type recording device 570.
The task selection device 560 may be configured to select a task from the first task queue to be executed by the first logical core.
The type recording device 570 may be configured to record the type of the selected task in a current task type field corresponding to the first logical core.
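The cooperation of devices 510-570 can be illustrated with a toy sketch. This is an assumed model (the task representation, method names, and the way the sibling core's task type is passed in are all inventions of the example), not the patented apparatus itself.

```python
from collections import deque

ONLINE = "online"    # first-type task: high delay sensitivity
OFFLINE = "offline"  # second-type task: offline/batch work

class PerCoreQueue:
    """Toy per-logical-core scheduler mirroring devices 510-570 loosely."""

    def __init__(self):
        self.queue = deque()      # 510: the first task queue
        self.suspended = []       # 540/550: the suspension list
        self.current_type = None  # 570: current task type field

    def enqueue(self, name, task_type):
        self.queue.append((name, task_type))

    def pick_next(self, sibling_type):
        if sibling_type == ONLINE:             # 520: sibling runs a first-type task
            kept = deque()                     # 530/540: shelve second-type tasks
            for task in self.queue:
                (self.suspended if task[1] == OFFLINE else kept).append(task)
            self.queue = kept
        else:                                  # 550: sibling no longer online,
            self.queue.extend(self.suspended)  # so put suspended tasks back
            self.suspended.clear()
        if self.queue:                         # 560/570: select and record type
            name, task_type = self.queue.popleft()
            self.current_type = task_type
            return name
        return None
```

With a queue holding one offline and one online task, a sibling running an online task causes the offline task to be shelved and the online one to be picked; once the sibling stops running online work, the shelved task returns to the queue and becomes eligible again.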
Fig. 6 shows a schematic block diagram of a task scheduling apparatus according to another embodiment of the present disclosure. The task scheduling apparatus is adapted to a physical core having a first logical core and a second logical core.
As shown in fig. 6, the task scheduling apparatus 600 may include the task scheduling apparatus 500 shown in fig. 5, as well as a second judgment device 610 and a first rescheduling device 620.
The second judgment device 610 may be configured to judge, in response to the first logical core being about to execute or beginning to execute a first-type task, whether the second logical core is executing a second-type task. Preferably, the first type of task may be an online task with high delay sensitivity, and the second type of task may be an offline task.
The first rescheduling device 620 may be configured to interrupt the currently executed second-type task and perform task scheduling again for the second logical core when it is determined that the second logical core is executing a second-type task.
The task scheduling apparatus 600 may further include a third judgment device 630 and a second rescheduling device 640.
The third judgment device 630 may be configured to judge, in response to the task that the first logical core is about to execute or begins to execute not being a first-type task, whether the second logical core is executing an idle task.
The second rescheduling device 640 may be configured to interrupt the currently executed idle task and perform task scheduling again for the second logical core when it is determined that the second logical core is executing the idle task.
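A two-core toy simulation of devices 610-640 might look as follows. The class and method names are illustrative assumptions, and a real apparatus would deliver an inter-processor interrupt rather than a direct method call.

```python
# Hypothetical sketch: starting a first-type (online) task on one logical
# core interrupts an offline task on its sibling (610/620); starting a
# non-first-type task wakes an idle sibling instead (630/640).

class LogicalCore:
    def __init__(self):
        self.running = "idle"
        self.events = []

    def start(self, task_type, sibling):
        self.running = task_type
        if task_type == "online":
            if sibling.running == "offline":   # 610: judge the sibling's task
                sibling.reschedule()           # 620: interrupt and reschedule
        elif sibling.running == "idle":        # 630: the sibling sits idle
            sibling.reschedule()               # 640: wake it for new work

    def reschedule(self):
        self.events.append(("interrupted", self.running))
        self.running = "rescheduling"          # stand-in for a new scheduling round
```

Starting an online task next to an offline sibling records an interruption on the sibling; starting an offline task next to an idle sibling likewise forces the sibling back into scheduling.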
In addition, the task scheduling scheme of the present disclosure may also be implemented by a computing device. FIG. 7 shows a schematic block diagram of a computing device, according to one embodiment of the present disclosure.
As shown in fig. 7, a computing device 700 of the present disclosure may include a processor 710 and a memory 720. The memory 720 may have executable code stored thereon that, when executed by the processor 710, causes the processor 710 to perform the above-described task scheduling methods according to the present disclosure. For a specific implementation process, reference may be made to the related description above, and details are not described herein again.
The task scheduling scheme according to the present invention has been described in detail above with reference to the accompanying drawings.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the steps defined in the method of the invention described above.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A task scheduling method applicable to a physical core having a first logical core and a second logical core, characterized by comprising the following steps:
maintaining a first task queue for the first logical core;
in response to a requirement to allocate a task to the first logical core, judging whether a second current task running on the second logical core is a first-type task; and
deleting second-type tasks in the first task queue from the first task queue when the second current task is a first-type task, wherein first-type tasks are tasks with high delay sensitivity and second-type tasks are tasks with low delay sensitivity.
2. The task scheduling method according to claim 1, further comprising:
putting the second-type tasks deleted from the first task queue into a suspension list.
3. The task scheduling method according to claim 2, further comprising:
putting the second-type tasks in the suspension list back into the first task queue in the case that the second current task is not a first-type task.
4. The task scheduling method according to claim 3, wherein
only second-type tasks in the suspension list that came from the first task queue are placed back into the first task queue.
5. The task scheduling method according to claim 4, wherein
the suspension list is a first suspension list corresponding to the first task queue; or
the source of each task is recorded in association with it in the suspension list.
6. The task scheduling method according to any one of claims 1 to 5, further comprising:
selecting a task from the first task queue to be executed by a first logical core; and
recording the type of the selected task in a current task type field corresponding to the first logical core.
7. The task scheduling method according to any one of claims 1 to 5, wherein
the first type of task is an online task and the second type of task is an offline task.
8. A task scheduling method applicable to a physical core having a first logical core and a second logical core, characterized by comprising the following steps:
in response to the first logical core being about to execute or beginning to execute a first-type task, judging whether the second logical core is executing a second-type task; and
interrupting the currently executed second-type task and performing task scheduling again for the second logical core when it is determined that the second logical core is executing a second-type task, wherein first-type tasks are tasks with high delay sensitivity and second-type tasks are tasks with low delay sensitivity.
9. The task scheduling method according to claim 8, further comprising:
in response to the task that the first logical core is about to execute or begins to execute not being a first-type task, judging whether the second logical core is executing an idle task; and
interrupting the currently executed idle task and performing task scheduling again for the second logical core when it is determined that the second logical core is executing the idle task.
10. The task scheduling method according to claim 8 or 9, wherein
the first type of tasks are online tasks and the second type of tasks are offline tasks.
11. A task scheduling apparatus adapted for a physical core having a first logical core and a second logical core, the apparatus comprising:
a queue maintenance device to maintain a first task queue for the first logical core;
a first judging device, configured to, in response to a requirement for allocating a task to the first logic core, judge whether a second current task running on the second logic core is a first type task; and
a task deleting device, configured to delete second-type tasks in the first task queue from the first task queue when the second current task is a first-type task, wherein the first type of task is an online task with high delay sensitivity and the second type of task is a task with low delay sensitivity.
12. The task scheduling apparatus of claim 11, further comprising:
a task suspension device, configured to put the second-type tasks deleted from the first task queue into a suspension list.
13. The task scheduling apparatus of claim 12, further comprising:
a task put-back device, configured to put the second-type tasks in the suspension list back into the first task queue when the second current task is not a first-type task.
14. The task scheduling apparatus of claim 13, wherein
the task put-back device puts back into the first task queue only those second-type tasks in the suspension list that came from the first task queue.
15. The task scheduling apparatus of claim 14, wherein
the suspension list is a first suspension list corresponding to the first task queue; or
the source of each task is recorded in association with it in the suspension list.
16. The task scheduling apparatus according to any one of claims 11 to 15, further comprising:
a task selection device, configured to select a task to be executed by the first logical core from the first task queue; and
a type recording device, configured to record the type of the selected task in a current task type field corresponding to the first logical core.
17. A task scheduling apparatus adapted to a physical core having a first logical core and a second logical core, the task scheduling apparatus comprising:
a second judging device, configured to judge, in response to the first logical core being about to execute or beginning to execute a first-type task, whether the second logical core is executing a second-type task; and
a first rescheduling device, configured to interrupt the currently executed second-type task and perform task scheduling again for the second logical core when it is determined that the second logical core is executing a second-type task, wherein the first type of task is a task with high delay sensitivity and the second type of task is a task with low delay sensitivity.
18. The task scheduling apparatus of claim 17, further comprising:
a third judging device, configured to judge, in response to the task that the first logical core is about to execute or begins to execute not being a first-type task, whether the second logical core is executing an idle task; and
a second rescheduling device, configured to interrupt the currently executed idle task and perform task scheduling again for the second logical core when it is determined that the second logical core is executing the idle task.
19. A computing device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of claims 1 to 10.
20. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1-10.
CN201711099264.7A 2017-11-09 2017-11-09 Task scheduling method and device, storage medium and computing equipment Active CN109766168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711099264.7A CN109766168B (en) 2017-11-09 2017-11-09 Task scheduling method and device, storage medium and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711099264.7A CN109766168B (en) 2017-11-09 2017-11-09 Task scheduling method and device, storage medium and computing equipment

Publications (2)

Publication Number Publication Date
CN109766168A CN109766168A (en) 2019-05-17
CN109766168B true CN109766168B (en) 2023-01-17

Family

ID=66449478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711099264.7A Active CN109766168B (en) 2017-11-09 2017-11-09 Task scheduling method and device, storage medium and computing equipment

Country Status (1)

Country Link
CN (1) CN109766168B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111240829B (en) * 2019-12-31 2023-12-15 潍柴动力股份有限公司 Multi-core task scheduling method and device based on time slices, storage medium and electronic equipment
CN111399995A (en) * 2020-02-10 2020-07-10 山东师范大学 Adjusting method and system for guaranteeing service quality of delay sensitive program
CN111444012B (en) 2020-03-03 2023-05-30 中国科学院计算技术研究所 Dynamic resource regulation and control method and system for guaranteeing delay-sensitive application delay SLO

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104246694A (en) * 2011-12-29 2014-12-24 英特尔公司 Aggregated page fault signaling and handline
CN104281495A (en) * 2014-10-13 2015-01-14 湖南农业大学 Method for task scheduling of shared cache of multi-core processor
CN106557367A (en) * 2015-09-30 2017-04-05 联想(新加坡)私人有限公司 For device, the method and apparatus of granular service quality are provided for computing resource

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4797095B2 (en) * 2009-07-24 2011-10-19 株式会社日立製作所 Batch processing multiplexing method
US20160378545A1 (en) * 2015-05-10 2016-12-29 Apl Software Inc. Methods and architecture for enhanced computer performance

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104246694A (en) * 2011-12-29 2014-12-24 英特尔公司 Aggregated page fault signaling and handline
CN104281495A (en) * 2014-10-13 2015-01-14 湖南农业大学 Method for task scheduling of shared cache of multi-core processor
CN106557367A (en) * 2015-09-30 2017-04-05 联想(新加坡)私人有限公司 For device, the method and apparatus of granular service quality are provided for computing resource

Also Published As

Publication number Publication date
CN109766168A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109766180B (en) Load balancing method and device, storage medium, computing equipment and computing system
US11550627B2 (en) Hardware accelerated dynamic work creation on a graphics processing unit
US9870252B2 (en) Multi-threaded processing with reduced context switching
US8793695B2 (en) Information processing device and information processing method
US6006247A (en) Method and system for scheduling threads and handling exceptions within a multiprocessor data processing system
CN110489213B (en) Task processing method and processing device and computer system
US9298504B1 (en) Systems, devices, and techniques for preempting and reassigning tasks within a multiprocessor system
KR20120058605A (en) Hardware-based scheduling of gpu work
CN109766168B (en) Task scheduling method and device, storage medium and computing equipment
US20120284720A1 (en) Hardware assisted scheduling in computer system
US20130152100A1 (en) Method to guarantee real time processing of soft real-time operating system
CN109840149B (en) Task scheduling method, device, equipment and storage medium
CN111324432A (en) Processor scheduling method, device, server and storage medium
JPH10143380A (en) Multiprocessor system
US20130125131A1 (en) Multi-core processor system, thread control method, and computer product
JP5726006B2 (en) Task and resource scheduling apparatus and method, and control apparatus
CN114816777A (en) Command processing device, method, electronic device and computer readable storage medium
CN116048756A (en) Queue scheduling method and device and related equipment
US9015719B2 (en) Scheduling of tasks to be performed by a non-coherent device
US6915516B1 (en) Apparatus and method for process dispatching between individual processors of a multi-processor system
JP5376042B2 (en) Multi-core processor system, thread switching control method, and thread switching control program
JP2007193744A (en) Information processing device, program and scheduling method
CN109800064B (en) Processor and thread processing method
JPH10260850A (en) Virtual computer system
US20040103414A1 (en) Method and apparatus for interprocess communications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230609

Address after: Room 1-2-A06, Yungu Park, No. 1008 Dengcai Street, Sandun Town, Xihu District, Hangzhou City, Zhejiang Province

Patentee after: Aliyun Computing Co.,Ltd.

Address before: Box 847, four, Grand Cayman capital, Cayman Islands, UK

Patentee before: ALIBABA GROUP HOLDING Ltd.
