CN118093133A - Task scheduling method and related equipment - Google Patents


Info

Publication number
CN118093133A
CN118093133A CN202410233040.4A
Authority
CN
China
Prior art keywords
node
task
scheduling
computing
computing node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410233040.4A
Other languages
Chinese (zh)
Inventor
林海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202410233040.4A priority Critical patent/CN118093133A/en
Publication of CN118093133A publication Critical patent/CN118093133A/en
Pending legal-status Critical Current

Landscapes

  • Multi Processors (AREA)

Abstract

The disclosure provides a task scheduling method and related equipment. The method comprises the following steps: assigning a first task to a first scheduling node based on a preset rule, the first scheduling node being associated with a first computing node; and distributing the first task to the first computing node for processing.

Description

Task scheduling method and related equipment
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a task scheduling method and related equipment.
Background
Distributed task scheduling is mainly realized by having multiple nodes compete for mutual-exclusion locks: when several computing nodes contend for the same task lock, only one node can successfully acquire it, and the others fail. Moreover, the number of task locks each computing node preempts is not fixed, which may leave some nodes overloaded while others sit idle. Classifying tasks by execution cycle can reduce the number of locks, but the granularity of the execution cycle is difficult to choose: a granularity either too large or too small degrades the scheduling effect, and heavyweight tasks may end up concentrated in the same execution cycle, leaving the computing power insufficient. The resulting load imbalance across computing nodes during task processing wastes computing resources and reduces task-processing efficiency.
Disclosure of Invention
The disclosure provides a task scheduling method and related equipment, which address, at least to some extent, the technical problems of wasted computing resources and reduced task-processing efficiency caused by unbalanced computing load during task processing.
In a first aspect of the present disclosure, a task scheduling method is provided, including:
assigning a first task to a first scheduling node based on a preset rule, the first scheduling node being associated with a first computing node;
and distributing the first task to the first computing node for processing.
In a second aspect of the present disclosure, there is provided a task scheduling device, including:
a scheduling node allocation module for allocating a first task to a first scheduling node based on a preset rule, the first scheduling node being associated with a first computing node;
and the computing node distribution module is used for distributing the first task to the first computing node for processing.
In a third aspect of the disclosure, an electronic device is provided that includes one or more processors, a memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the method of the first or second aspect.
In a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium containing a computer program which, when executed by one or more processors, causes the processors to perform the method of the first or second aspect.
In a fifth aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed on a computer, cause the computer to perform the method of the first aspect.
As can be seen from the foregoing, the task scheduling method and related device provided by the present disclosure allocate a first task to a matching scheduling node according to a preset rule, and process the first task on a computing node associated with that scheduling node. Compared with distributing tasks directly to computing nodes, introducing scheduling nodes lets different tasks be dynamically assigned to suitable scheduling nodes and processed by their associated computing nodes, avoiding the resource waste and load imbalance caused by computing nodes participating directly in task allocation, and improving task-processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure or the related art, the drawings required by the embodiments or the related-art description are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present disclosure, and that those of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a task scheduling architecture according to an embodiment of the disclosure.
Fig. 2 is a schematic hardware architecture diagram of an exemplary electronic device according to an embodiment of the disclosure.
Fig. 3 to 4 are schematic diagrams of a task scheduling method of the related art.
Fig. 5 is a schematic flow chart diagram of a task scheduling method of an embodiment of the present disclosure.
Fig. 6-7 are schematic diagrams of a task scheduling method according to an embodiment of the disclosure.
Fig. 8-9 are schematic diagrams of computing node expansion according to embodiments of the present disclosure.
Fig. 10 is a schematic diagram of a task scheduling device according to an embodiment of the disclosure.
Detailed Description
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present disclosure should be given their ordinary meaning as understood by one of ordinary skill in the art to which the present disclosure pertains. The terms "first," "second," and the like, as used in embodiments of the present disclosure, do not denote any order, quantity, or importance, but are merely used to distinguish one element from another. The word "comprising" or "comprises" and the like means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may change when the absolute position of the described object changes.
It will be appreciated that prior to using the technical solutions disclosed in the embodiments of the present disclosure, the user should be informed and authorized of the type, usage range, usage scenario, etc. of the personal information related to the present disclosure in an appropriate manner according to the relevant legal regulations.
For example, in response to receiving an active request from a user, a prompt is sent to the user, explicitly informing the user that the operation it requests will require obtaining and using the user's personal information. The user can thus autonomously decide, based on the prompt, whether to provide personal information to the software or hardware (such as an electronic device, application program, server, or storage medium) that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from a user, the prompt information may be sent to the user, for example, via a popup window in which the prompt is presented as text. The popup may also carry a selection control for the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
FIG. 1 illustrates a schematic diagram of a task scheduling architecture of an embodiment of the present disclosure. Referring to fig. 1, the task scheduling architecture 100 may include a server 110, a terminal 120, and a network 130 providing a communication link. The server 110 and the terminal 120 may be connected through a wired or wireless network 130. The server 110 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, security services, CDNs, and the like.
The terminal 120 may be a hardware or software implementation. For example, when the terminal 120 is a hardware implementation, it may be a variety of electronic devices having a display screen and supporting page display, including but not limited to smartphones, tablets, e-book readers, laptop and desktop computers, and the like. When the terminal 120 is implemented in software, it may be installed in the above-listed electronic device; it may be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services) or as a single software or software module, without limitation.
It should be noted that, the task scheduling method provided in the embodiment of the present application may be executed by the terminal 120 or may be executed by the server 110. It should be understood that the number of terminals, networks, and servers in fig. 1 are illustrative only and are not intended to be limiting. There may be any number of terminals, networks, and servers, as desired for implementation.
Fig. 2 shows a schematic hardware structure of an exemplary electronic device 200 provided by an embodiment of the disclosure. As shown in fig. 2, the electronic device 200 may include: processor 202, memory 204, network module 206, peripheral interface 208, and bus 210. Wherein the processor 202, the memory 204, the network module 206, and the peripheral interface 208 are communicatively coupled to each other within the electronic device 200 via a bus 210.
The processor 202 may be a central processing unit (CPU), a task scheduler, a neural network processor (NPU), a microcontroller unit (MCU), a programmable logic device, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or one or more integrated circuits. The processor 202 may be used to perform functions related to the techniques described in this disclosure. In some embodiments, processor 202 may also include multiple processors integrated as a single logic component. For example, as shown in fig. 2, the processor 202 may include a plurality of processors 202a, 202b, and 202c.
The memory 204 may be configured to store data (e.g., instructions, computer code, etc.). As shown in fig. 2, the data stored by the memory 204 may include program instructions (e.g., program instructions for implementing the task scheduling methods of embodiments of the present disclosure) as well as data to be processed (e.g., the memory may store configuration files of other modules, etc.). The processor 202 may access the program instructions and data stored in the memory 204 and execute the program instructions to operate on the data to be processed. The memory 204 may include volatile or nonvolatile storage. In some embodiments, memory 204 may include random-access memory (RAM), read-only memory (ROM), optical disks, magnetic disks, hard disks, solid-state drives (SSD), flash memory, memory sticks, and the like.
The network module 206 may be configured to provide the electronic device 200 with communications to other external devices via a network. The network may be any wired or wireless network capable of transmitting and receiving data. For example, the network may be a wired network, a local wireless network (e.g., Bluetooth, WiFi, near-field communication (NFC), etc.), a cellular network, the Internet, or a combination of the foregoing. It will be appreciated that the type of network is not limited to the specific examples above. In some embodiments, the network module 206 may include any combination of any number of network interface controllers (NICs), radio-frequency modules, receivers, modems, routers, gateways, adapters, cellular network chips, and so on.
Peripheral interface 208 may be configured to connect electronic device 200 with one or more peripheral devices to enable information input and output. For example, the peripheral devices may include input devices such as keyboards, mice, touchpads, touch screens, microphones, various types of sensors, and output devices such as displays, speakers, vibrators, and indicators.
Bus 210 may be configured to transfer information between the various components of the electronic device 200 (e.g., processor 202, memory 204, network module 206, and peripheral interface 208), and may be, for example, an internal bus (e.g., a processor-memory bus) or an external bus (USB port, PCI-E bus).
It should be noted that, although the architecture of the electronic device 200 described above only shows the processor 202, the memory 204, the network module 206, the peripheral interface 208, and the bus 210, in a specific implementation, the architecture of the electronic device 200 may also include other components necessary to achieve normal execution. Furthermore, those skilled in the art will appreciate that the architecture of the electronic device 200 may also include only the components necessary to implement the embodiments of the present disclosure, and not all of the components shown in the figures.
A task (Job) generally refers to a pending item of work that needs to be executed automatically at some future time, such as a timed task (CronJob). When the task does not need to be persisted, it can reside in memory and be triggered by an operating-system timer (Timer); but if the process crashes or the machine restarts, the task cannot resume execution. This problem can be solved by persisting tasks to disk as local files, as the Crontab tool of the Linux operating system does. While Crontab solves the problem of resuming tasks after a process crash or machine restart, availability and extensibility problems remain: tasks still cannot resume execution when the disk is damaged or the machine goes down, and when the number of tasks is large, the computing capacity of a single machine is insufficient, requiring tasks to be manually distributed across multiple machines for scheduling. Another solution is to persist tasks to a separate database system (Database), solving the task-persistence problem through the database's own disaster-recovery capabilities; when any node goes down, task execution can quickly resume on another node. This is the distributed task scheduling scheme.
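The resume-after-restart idea above can be sketched in Python by persisting pending tasks to a database instead of keeping them only in memory. This is a minimal illustration, not part of the disclosure; the table schema, task names, and `run_at` convention are assumptions.

```python
import sqlite3
import time

def persist_task(conn, task_id, run_at):
    """Store a pending task so it can be recovered after a crash or restart."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks "
        "(id TEXT PRIMARY KEY, run_at REAL, done INTEGER DEFAULT 0)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO tasks (id, run_at, done) VALUES (?, ?, 0)",
        (task_id, run_at),
    )
    conn.commit()

def due_tasks(conn, now):
    """Recover tasks that should have run at or before `now`."""
    rows = conn.execute(
        "SELECT id FROM tasks WHERE done = 0 AND run_at <= ?", (now,)
    ).fetchall()
    return [r[0] for r in rows]

# A file path would survive restarts; :memory: keeps the demo self-contained.
conn = sqlite3.connect(":memory:")
persist_task(conn, "report-daily", run_at=time.time() - 1)   # already overdue
persist_task(conn, "cleanup", run_at=time.time() + 3600)     # due in an hour
print(due_tasks(conn, time.time()))  # ['report-daily']
```

After a restart, re-running `due_tasks` against the same (file-backed) database would recover the overdue task, which an in-memory timer cannot do.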
In a distributed task scheduling scheme, since the node that persists a task (e.g., a storage node) is not the same process, or even the same machine, as the node that executes it (e.g., a computing node), the storage node must send task data to the computing node over the network. Meanwhile, to address the availability and computing-power limitations of a single node, both storage nodes and computing nodes are generally deployed as multiple instances. Task allocation among computing nodes is typically accomplished by competing for the tasks' mutual-exclusion locks. Fig. 3 shows a schematic diagram of a related-art task scheduling method. In fig. 3, all computing nodes compete for the locks of all tasks, and the node that acquires a lock obtains the execution rights of the corresponding task. As long as there are enough computing nodes, all tasks are guaranteed to eventually execute, spread across multiple nodes. However, when k computing nodes contend for the same task lock, only one can acquire it and the other k-1 fail; over n tasks this causes n×(k-1) failed lock acquisitions in total, and the locking itself also consumes computing resources. Furthermore, the number of task locks each computing node wins is not fixed; in the extreme case, one computing node may win all the task locks, so load balancing across computing nodes cannot be guaranteed.
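The n×(k-1) failed-acquisition cost of this related-art scheme can be illustrated with a minimal Python sketch. The sequential loop below is a simplification (real nodes race concurrently over a network), and it also shows how one node can end up winning every lock, i.e., the load-balancing problem:

```python
import threading

def contend(num_tasks, num_nodes):
    """Every node tries to grab every task's mutex; count wins and failures.
    Sequential for clarity -- real nodes race concurrently."""
    locks = [threading.Lock() for _ in range(num_tasks)]
    wins = {node: 0 for node in range(num_nodes)}
    failures = 0
    for lock in locks:
        for node in range(num_nodes):
            if lock.acquire(blocking=False):  # exactly one acquisition succeeds
                wins[node] += 1
            else:
                failures += 1                 # the other k-1 attempts fail
    return wins, failures

wins, failures = contend(num_tasks=6, num_nodes=4)
print(failures)  # 6 tasks x (4 - 1) losing nodes = 18 failed acquisitions
print(wins)      # here node 0 wins every lock: load balancing is not guaranteed
```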
Another task allocation manner is shown in fig. 4, a schematic diagram of a related-art task scheduling method. In fig. 4, tasks are classified by execution cycle; computing nodes no longer contend for every task lock, but for the locks of the execution cycles, and the node that acquires a cycle's lock obtains the execution rights of all tasks in that cycle. This greatly reduces the number of locks and, correspondingly, the computing resources wasted on failed lock acquisitions. However, tasks in the same execution cycle can only be allocated to the same computing node, and the granularity of the execution cycle is difficult to determine: with coarse granularity the number of tasks per cycle is too large and the computing power is insufficient, while with fine granularity the number of cycle locks is too large. Classifying tasks by execution cycle also cannot effectively distinguish how heavy a task is: several heavyweight tasks may fall into the same cycle, leaving the computing capacity insufficient, while cycles holding only lightweight tasks leave their computing nodes idle. Because execution cycles strictly partition time, with no gaps and no overlap, generally only one computing node works in each time period while the others wait, which also causes unnecessary resource waste.
Therefore, how to balance the computing load during task processing, reduce the waste of computing resources, and improve task-processing efficiency has become a technical problem to be solved urgently.
In view of this, embodiments of the disclosure provide a task scheduling method and related equipment: a first task is distributed to a matching scheduling node according to preset rules and processed by a computing node associated with that scheduling node. Compared with distributing tasks directly to computing nodes, introducing scheduling nodes lets different tasks be dynamically assigned to suitable scheduling nodes and processed by their associated computing nodes, avoiding the resource waste and load imbalance caused by computing nodes participating directly in task allocation, and improving task-processing efficiency.
Referring to fig. 5, fig. 5 shows a schematic flow chart of a task scheduling method according to an embodiment of the present disclosure. The task scheduling method according to the embodiment of the disclosure can be deployed on the server side. In fig. 5, the task scheduling method 500 may include the following steps.
In step S510, a first task is assigned to a first scheduling node associated with a first computing node based on a preset rule.
The preset rule may refer to a preset rule or algorithm, which is used for guiding allocation and management of tasks. The preset rules can be formulated according to various factors such as the characteristics of the tasks, the performance of the scheduling nodes, the system load and the like, so that the tasks can be distributed and processed efficiently and fairly. The first task refers to a task to be processed or distributed, such as a computing task, a data processing task, or a system management task. The first scheduling node may be a node for task scheduling, which may receive and allocate tasks. The first scheduling node may be associated with one or more computing nodes and responsible for assigning tasks to the associated computing nodes for processing. The first computing node may refer to a particular computing node associated with the first scheduling node for receiving and executing tasks assigned by the first scheduling node. Among other things, computing nodes typically have computing resources and storage capabilities that are capable of independently processing data and performing computing tasks.
In actual processing, the first task is distributed to the first scheduling node according to the preset rule, and the first scheduling node then distributes it to the associated first computing node for processing. The first scheduling node performs task scheduling, while the first computing node actually executes the computing task. Compared with the related art, setting scheduling nodes between tasks and computing nodes allows different tasks to be dynamically distributed to suitable scheduling nodes and processed by their associated computing nodes, avoiding the resource waste and load imbalance caused by computing nodes participating directly in task allocation, and improving task-processing efficiency.
In step S520, the first task is allocated to the first computing node for processing.
In some embodiments, the first scheduling node is associated with at least one second computing node, the first computing node having a higher priority than the second computing node.
The computing nodes associated with each scheduling node include a master node (Leader), which obtains the execution rights of all tasks associated with that scheduling node; that is, each scheduling node's associated tasks are distributed to its master node for processing. The computing nodes associated with each scheduling node may also include slave nodes (Followers), with the master node having a higher priority than the slave nodes. A scheduling node may be associated with one or more slave nodes. The first computing node may be the master node of the first scheduling node, and the second computing node a slave node of the first scheduling node. In some embodiments, to avoid wasting resources, each scheduling node may be associated with one master node and one slave node. Each computing node can simultaneously serve as the master node or slave node of any scheduling node, and the maximum number of scheduling nodes each computing node may be associated with can be limited to ensure load balancing. Referring to fig. 6, which shows a schematic diagram of a task scheduling method according to an embodiment of the present disclosure: scheduling node 1 is associated with computing node 1 as its master node and computing node 2 as its slave node; scheduling node 2 is associated with computing node 2 as its master node and computing node 3 as its slave node; and so on. Task 1 and task 3 can be distributed to scheduling node 1 according to the preset rule, and scheduling node 1 distributes them to computing node 1 for processing; task 2 and task 5 are distributed to scheduling node 2, which distributes them to computing node 2 for processing.
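The master/slave association described above can be sketched as follows. The class and node names are illustrative assumptions; a real implementation would involve distributed coordination rather than in-process objects.

```python
class SchedulingNode:
    """Sketch of a scheduling node: tasks assigned to it run on its master
    (highest-priority) computing node, with a slave standing by."""

    def __init__(self, name, master, slave):
        self.name = name
        self.master = master
        self.slave = slave
        self.tasks = []

    def assign(self, task):
        self.tasks.append(task)
        return self.master  # associated tasks are processed by the master

sched1 = SchedulingNode("scheduling-node-1", master="compute-1", slave="compute-2")
sched2 = SchedulingNode("scheduling-node-2", master="compute-2", slave="compute-3")
print(sched1.assign("task-1"), sched1.assign("task-3"))  # compute-1 compute-1
print(sched2.assign("task-2"))                           # compute-2
```

Note how compute-2 serves simultaneously as the master of one scheduling node and the slave of another, as the text allows.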
In some embodiments, the method 500 further comprises:
in response to an abnormality of the first computing node in processing the first task, determining, by the first scheduling node, a third computing node from the second computing nodes;
and distributing the first task to the third computing node for processing.
When the first computing node serving as the master node exits abnormally or goes down while processing a task, the second computing nodes (i.e., the slave nodes) can re-elect a third computing node as the new master node, which continues executing the unfinished tasks. For example, a new master node may be re-elected from the second computing nodes based on factors related to the task-processing rate, such as CPU usage, memory usage, and network latency. Thus, even if the computing node that is processing a task fails, the task does not fail, solving the disaster-recovery problem of computing nodes.
In some embodiments, the first scheduling node determining a third computing node from the second computing nodes comprises:
determining the second computing node with the largest current resource amount as the third computing node;
or determining the second computing node with the shortest time required by the current processing task as the third computing node;
or determining the second computing node with the least number of associated tasks as the third computing node.
Specifically, the slave node among the second computing nodes with the largest current resource amount, the shortest time required for its current processing task, the fewest associated tasks, or no associated tasks may be determined as the third computing node, i.e., the new master node. The current resource amount, the time required for the current processing task, and the number of associated tasks (or whether there are any) can be given different priorities, set as required, for example: current resource amount > time required for the current processing task > number of associated tasks. In that case, the node with the largest current resource amount among the second computing nodes is determined first; if several first-candidate computing nodes tie on the largest current resource amount, a third computing node may be chosen from them at random, or the time required for the current processing task may be compared next. If several second-candidate computing nodes among the first candidates still tie on the shortest processing time, a third computing node may be chosen from them at random, or the number of associated tasks may be compared; the second-candidate computing node with the fewest associated tasks, or no associated tasks, may be determined as the third computing node.
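The priority cascade just described (resource amount first, then processing time, then associated-task count) can be sketched as a tuple-keyed selection. The dictionary fields and node names are illustrative assumptions:

```python
def pick_new_master(slaves):
    """Re-elect a master from the surviving slaves: most free resources first,
    then shortest current processing time, then fewest associated tasks."""
    return min(
        slaves,
        key=lambda s: (-s["free_resources"], s["processing_time"], s["task_count"]),
    )

slaves = [
    {"name": "compute-2", "free_resources": 8, "processing_time": 5, "task_count": 3},
    {"name": "compute-3", "free_resources": 8, "processing_time": 2, "task_count": 4},
    {"name": "compute-4", "free_resources": 6, "processing_time": 1, "task_count": 0},
]
# compute-2 and compute-3 tie on free resources, so the shorter current
# processing time decides; compute-4 loses on the highest-priority criterion.
print(pick_new_master(slaves)["name"])  # compute-3
```

Negating the resource field lets a single ascending `min` express "larger is better" for resources alongside "smaller is better" for the other criteria.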
According to the method of the embodiments of the disclosure, the matching between scheduling nodes and tasks can be dynamic: the preset rule deciding which scheduling node a task is allocated to is dynamically configurable and entirely independent of static attributes such as the execution cycle.
In some embodiments, assigning the first task to the first scheduling node based on the preset rule comprises:
determining a current amount of resources of the first scheduling node based on a current amount of remaining resources of a computing node associated with the first scheduling node;
and allocating the first task to the first scheduling node in response to the first amount of resources required to process the first task matching the current amount of resources of the first scheduling node.
The current resource amount of each scheduling node may refer to the amount of resources its associated computing nodes have available for processing tasks; for example, the current resource amount of the first scheduling node may be determined based on the current remaining resources of the first computing node associated with it. Whether to distribute a task to a scheduling node is judged by the scheduling node's current resource amount: when the first amount of resources required to process the first task is smaller than the current resource amount of the first scheduling node (or a preset proportion of it), the two are determined to match, and the first task may be distributed to the first scheduling node.
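The matching test described here can be sketched as a comparison against a preset proportion of the scheduling node's current resources. The 0.8 headroom factor below is an illustrative assumption, not a value given by the disclosure:

```python
def matches(required, current_resources, headroom=0.8):
    """A task matches a scheduling node when the resources it needs stay below
    a preset proportion of the node's current resource amount."""
    return required < current_resources * headroom

# Suppose the first scheduling node currently has 10 resource units.
print(matches(required=6, current_resources=10))  # True: 6 < 8.0
print(matches(required=9, current_resources=10))  # False: 9 >= 8.0
```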
In some embodiments, the method 500 further comprises:
The second task is allocated to the first scheduling node in response to the second amount of resources required to process the second task matching the current amount of resources of the first scheduling node.
When the second amount of resources required to process the second task is smaller than the current resource amount of the first scheduling node (or a preset proportion of it), the two are determined to match, and the second task may be distributed to the first scheduling node.
In some embodiments, the method 500 further comprises:
Responsive to a second amount of resources required to process the second task not matching a current amount of resources of the first scheduling node, assigning the second task to a second scheduling node; wherein a current amount of resources of the second scheduling node matches the second amount of resources required to process the second task.
When the second amount of resources required to process the second task is greater than or equal to the current resource amount of the first scheduling node (or a preset proportion of it), the second task is determined not to match the first scheduling node. If, at this time, the second amount of resources required is smaller than the current resource amount of the second scheduling node (or a preset proportion of it), the second task is determined to match the second scheduling node and may be allocated to it.
In this way, dynamic distribution of tasks across scheduling nodes can be realized, balancing load while avoiding resource waste. A heavyweight task (for example, one whose required amount of resources exceeds a certain value) can be allocated to its own scheduling node, while multiple lightweight tasks (for example, those whose required amount of resources is smaller than or equal to a certain value) can be allocated to the same scheduling node. This ensures sufficient computing capacity at each master node, avoids both waste and overload of computing resources, and achieves load balancing across computing nodes. As shown in FIG. 7, FIG. 7 shows a schematic diagram of a task scheduling method according to an embodiment of the present disclosure. In FIG. 7, heavyweight task 1 may be assigned to scheduling node 1, heavyweight task 2 to scheduling node 2, and lightweight tasks 1-n to scheduling node 3.
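The allocation pattern of FIG. 7 — each heavyweight task on its own scheduling node, lightweight tasks pooled on one node — can be sketched as below. The threshold `HEAVY_THRESHOLD` and the list-based node representation are illustrative assumptions, not part of the disclosure.

```python
# Sketch of FIG. 7: heavyweight tasks go to distinct scheduling nodes,
# lightweight tasks share one. HEAVY_THRESHOLD is an assumed constant.
HEAVY_THRESHOLD = 50

def distribute(tasks, nodes):
    """tasks: list of required resource amounts; nodes: list of node names.
    Returns a mapping node -> list of assigned task sizes."""
    assignment = {n: [] for n in nodes}
    heavy = [t for t in tasks if t > HEAVY_THRESHOLD]
    light = [t for t in tasks if t <= HEAVY_THRESHOLD]
    node_iter = iter(nodes)
    for t in heavy:                 # one scheduling node per heavyweight task
        assignment[next(node_iter)].append(t)
    pool = next(node_iter)          # the lightweight tasks share one node
    assignment[pool].extend(light)
    return assignment
```

With tasks of sizes [80, 70, 10, 5, 20] and three scheduling nodes, this reproduces FIG. 7: the two heavyweight tasks each occupy their own node and the three lightweight tasks land together on the third.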
In some embodiments, the method 500 further comprises:
Assigning the second task to a third scheduling node in response to a second amount of resources required to process the second task matching both a current amount of resources of the first scheduling node and a current amount of resources of the third scheduling node;
The current resource amount of the third scheduling node is larger than the current resource amount of the first scheduling node, or the waiting time of the third scheduling node for processing the second task is smaller than the waiting time of the first scheduling node for processing the second task; or the number of the tasks associated with the third scheduling node is smaller than the number of the tasks associated with the first scheduling node; or the third scheduling node is not present with executing tasks.
Specifically, when the current amounts of resources of multiple scheduling nodes can all satisfy the second amount of resources required to process the second task, a more suitable scheduling node can be selected further based on the current amount of resources, the waiting time for processing the task, the number of associated tasks, whether a task is executing, and the like, so that load balancing is taken into account while resource waste is avoided, thereby improving task-processing efficiency. For example, if the current amounts of resources of both the first scheduling node and the third scheduling node satisfy the requirement for processing the second task, the scheduling node with the larger current amount of resources, the shorter waiting time, the smaller number of associated tasks, or no executing task can be selected. Further, these criteria may carry different priorities, for example: current amount of resources > waiting time for processing tasks > number of associated tasks > whether there is an executing task. If the current amount of resources of the third scheduling node is larger than that of the first scheduling node, the second task is allocated to the third scheduling node; if the current amount of resources of the third scheduling node equals that of the first scheduling node, it is further judged whether the waiting time of the third scheduling node for processing the second task is smaller than that of the first scheduling node.
If the waiting time of the third scheduling node for processing the second task is smaller than that of the first scheduling node, the second task is allocated to the third scheduling node; if the waiting times are equal, it is further judged whether the number of tasks associated with the third scheduling node is smaller than the number associated with the first scheduling node. If so, the second task is allocated to the third scheduling node. Otherwise, if the third scheduling node has no executing task, the second task is allocated to the third scheduling node. It should be appreciated that the above example merely illustrates one way of allocating tasks to scheduling nodes and is not intended to limit the preset rule. The preset rule may be set dynamically, without limitation herein.
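A minimal sketch of the prioritized tie-break in the example above (current resources > waiting time > associated task count > whether a task is executing). The dict layout for a node's metrics is an assumption made for illustration.

```python
# Sketch of the prioritized comparison described above. Each candidate
# scheduling node is an assumed dict of metrics; the preset rule may differ.
def prefer(a, b):
    """Return the preferred of two candidate scheduling nodes a and b.
    Each is a dict with keys: resources, wait, task_count, executing."""
    if a["resources"] != b["resources"]:       # 1. larger current resources
        return a if a["resources"] > b["resources"] else b
    if a["wait"] != b["wait"]:                 # 2. shorter waiting time
        return a if a["wait"] < b["wait"] else b
    if a["task_count"] != b["task_count"]:     # 3. fewer associated tasks
        return a if a["task_count"] < b["task_count"] else b
    # 4. prefer a node with no executing task
    return a if not a["executing"] else b
```

For instance, two nodes with equal resources are separated by waiting time; only when all earlier criteria tie does the executing-task check decide.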
In some embodiments, the method 500 further comprises:
Allocating a third task of the tasks associated with the first scheduling node to a third scheduling node in response to the associated task of the first computing node being greater than or equal to a first numerical value, or the sum of the amounts of resources required by the first computing node to process the associated task being greater than or equal to a first preset amount of resources; wherein the third scheduling node is associated with a fourth computing node;
And distributing the third task to the fourth computing node for processing.
Dynamic expansion of computing nodes may be performed when the first computing node is overloaded (for example, the sum of the amounts of resources required by the first computing node to process its associated tasks is greater than or equal to a first preset amount of resources, which may be at least a portion of the current amount of resources of the first scheduling node) or when the first computing node has too many associated tasks (for example, the number of associated tasks to be processed exceeds a first numerical value). Some of the associated tasks of the first computing node may be assigned to a new third scheduling node, and processed by a fourth computing node associated with the third scheduling node (i.e., the master node of the third scheduling node). Referring to FIG. 8, FIG. 8 illustrates a schematic diagram of computing-node expansion according to an embodiment of the present disclosure. In FIG. 8, computing node 1 is the master node of scheduling node 1, and tasks 1-4 are associated with scheduling node 1. When computing node 1 is detected to be overloaded or to have too many associated tasks, tasks 3-4 can be assigned to a new scheduling node whose master node is a new computing node, and are then processed by that new computing node.
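The split of FIG. 8 can be sketched as follows. The thresholds `FIRST_VALUE` and `FIRST_PRESET`, and the halving policy, are illustrative assumptions; the disclosure only requires that some of the associated tasks move to the new scheduling node.

```python
# Sketch of FIG. 8: when a computing node is overloaded or has too many
# associated tasks, part of its scheduling node's tasks are moved to a
# newly created scheduling node. Thresholds are assumed constants.
FIRST_VALUE = 3       # max number of associated tasks before expansion
FIRST_PRESET = 100    # max total required resources before expansion

def maybe_expand(tasks):
    """tasks: list of resource amounts associated with scheduling node 1.
    Returns (kept, moved); moved goes to the new scheduling node, whose
    master is a new computing node."""
    if len(tasks) >= FIRST_VALUE or sum(tasks) >= FIRST_PRESET:
        half = len(tasks) // 2
        return tasks[:half], tasks[half:]   # move the later tasks, as in FIG. 8
    return tasks, []
```

With tasks 1-4 of sizes [10, 20, 30, 40], the task count trips the threshold and tasks 3-4 are moved, matching the figure.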
In some embodiments, the method 500 further comprises:
in response to the associated task of the first computing node being greater than or equal to a second numerical value, or the sum of the amount of resources required by the first computing node to process the associated task being greater than or equal to a second preset amount of resources, performing computing node capacity expansion on the scheduling node associated with the first computing node;
And distributing the associated tasks of the scheduling nodes associated with the first computing node to the expanded computing nodes for processing.
When the first computing node is overloaded or has too many associated tasks, the scheduling nodes associated with the first computing node may undergo computing-node expansion, with their associated tasks distributed to the expanded computing node for processing. Referring to FIG. 9, FIG. 9 illustrates a schematic diagram of computing-node expansion according to an embodiment of the present disclosure. In FIG. 9, computing node 1 is the master node of scheduling node 1 and scheduling node 2; tasks 1-2 are associated with scheduling node 1, and tasks 3-4 with scheduling node 2. When computing node 1 is detected to be overloaded or to have too many associated tasks, scheduling node 2 may be assigned to a new computing node, and scheduling node 2 then assigns tasks 3-4 to the new computing node for processing.
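The reassignment of FIG. 9 — rebinding an entire scheduling node (with its tasks) from the overloaded computing node to a newly added one — can be sketched as below. The dict-based binding and the choice of which scheduling node to move are assumptions for illustration.

```python
# Sketch of FIG. 9: computing node 1 backs scheduling nodes 1 and 2; when
# it is overloaded, one scheduling node is rebound to a new computing node
# and its tasks follow it. The dict mapping is an illustrative assumption.
def expand_computing_node(binding, overloaded, new_node):
    """binding: dict scheduling_node -> computing_node (mutated in place).
    Rebinds one scheduling node away from the overloaded computing node
    and returns the scheduling node that was moved (None if impossible)."""
    victims = [s for s, c in binding.items() if c == overloaded]
    if len(victims) < 2:
        return None     # nothing to offload without emptying the node
    moved = victims[-1]  # e.g. scheduling node 2 in FIG. 9
    binding[moved] = new_node
    return moved
```

Starting from the FIG. 9 layout, scheduling node 2 is rebound to the new computing node while scheduling node 1 stays on computing node 1.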
In this way, computing-node expansion can be realized by dynamically adjusting the tasks associated with computing nodes, which balances the load across computing nodes, avoids wasting computing resources, and improves task-processing efficiency.
It should be noted that the method of the embodiments of the present disclosure may be performed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the methods of embodiments of the present disclosure, the devices interacting with each other to accomplish the methods.
It should be noted that the foregoing describes some embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same technical concept, corresponding to the method of any embodiment, the disclosure further provides a task scheduling device, referring to fig. 10, where the task scheduling device includes:
a scheduling node allocation module for allocating a first task to a first scheduling node based on a preset rule, the first scheduling node being associated with a first computing node;
and the computing node distribution module is used for distributing the first task to the first computing node for processing.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of the various modules may be implemented in the same one or more pieces of software and/or hardware when implementing the present disclosure.
The device of the foregoing embodiment is configured to implement the corresponding task scheduling method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same technical concept, corresponding to any of the above embodiment methods, the present disclosure further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the task scheduling method according to any of the above embodiments.
The computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the task scheduling method according to any one of the foregoing embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples. Under the idea of the present disclosure, technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present disclosure. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present disclosure, and this also accounts for the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present disclosure are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The disclosed embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements, and the like, which are within the spirit and principles of the embodiments of the disclosure, are intended to be included within the scope of the disclosure.

Claims (10)

1. A task scheduling method, comprising:
assigning a first task to a first scheduling node based on a preset rule, the first scheduling node being associated with a first computing node;
And distributing the first task to the first computing node for processing.
2. The method of claim 1, wherein the first scheduling node is associated with at least one second computing node, the first computing node having a higher priority than the second computing node, the method further comprising:
in response to an abnormality occurring while the first computing node processes the first task, the first scheduling node determines a third computing node from the second computing nodes;
And distributing the first task to the third computing node for processing.
3. The method of claim 1, wherein assigning the first task to the first scheduling node based on the preset rule comprises:
Determining a current amount of resources of the first scheduling node based on a current amount of remaining resources of a computing node associated with the first scheduling node;
The first task is allocated to the first scheduling node in response to the first amount of resources required to process the first task matching the current amount of resources of the first scheduling node.
4. A method according to claim 3, further comprising:
assigning the second task to the first scheduling node in response to the second amount of resources required to process the second task matching the current amount of resources of the first scheduling node;
Or in response to the second amount of resources required to process the second task not matching the current amount of resources of the first scheduling node, assigning the second task to a second scheduling node; wherein a current amount of resources of the second scheduling node matches the second amount of resources required to process the second task;
Or in response to a second amount of resources required to process the second task matching both a current amount of resources of the first scheduling node and a current amount of resources of a third scheduling node, assigning the second task to the third scheduling node; the current resource amount of the third scheduling node is larger than the current resource amount of the first scheduling node, or the waiting time of the third scheduling node for processing the second task is smaller than the waiting time of the first scheduling node for processing the second task; or the number of the tasks associated with the third scheduling node is smaller than the number of the tasks associated with the first scheduling node; or the third scheduling node is not present with executing tasks.
5. The method of claim 2, the first scheduling node determining a third computing node from the second computing nodes, comprising:
determining the second computing node with the largest current resource amount as the third computing node;
or determining the second computing node with the shortest time required by the current processing task as the third computing node;
Or determining the second computing node with the least number of associated tasks as the third computing node.
6. The method of claim 1, further comprising:
Allocating a third task of the tasks associated with the first scheduling node to a third scheduling node in response to the associated task of the first computing node being greater than or equal to a first numerical value, or the sum of the amounts of resources required by the first computing node to process the associated task being greater than or equal to a first preset amount of resources; wherein the third scheduling node is associated with a fourth computing node;
And distributing the third task to the fourth computing node for processing.
7. The method of claim 1, further comprising:
in response to the associated task of the first computing node being greater than or equal to a second numerical value, or the sum of the amount of resources required by the first computing node to process the associated task being greater than or equal to a second preset amount of resources, performing computing node capacity expansion on the scheduling node associated with the first computing node;
And distributing the associated tasks of the scheduling nodes associated with the first computing node to the expanded computing nodes for processing.
8. A task scheduling device, comprising:
a scheduling node allocation module for allocating a first task to a first scheduling node based on a preset rule, the first scheduling node being associated with a first computing node;
and the computing node distribution module is used for distributing the first task to the first computing node for processing.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when the program is executed.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202410233040.4A 2024-02-29 2024-02-29 Task scheduling method and related equipment Pending CN118093133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410233040.4A CN118093133A (en) 2024-02-29 2024-02-29 Task scheduling method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410233040.4A CN118093133A (en) 2024-02-29 2024-02-29 Task scheduling method and related equipment

Publications (1)

Publication Number Publication Date
CN118093133A true CN118093133A (en) 2024-05-28

Family

ID=91145120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410233040.4A Pending CN118093133A (en) 2024-02-29 2024-02-29 Task scheduling method and related equipment

Country Status (1)

Country Link
CN (1) CN118093133A (en)

Similar Documents

Publication Publication Date Title
CN107360206B (en) Block chain consensus method, equipment and system
CN108632365B (en) Service resource adjusting method, related device and equipment
CN107770088B (en) Flow control method and device
CN107562512B (en) Method, device and system for migrating virtual machine
JP6241300B2 (en) Job scheduling apparatus, job scheduling method, and job scheduling program
JP5664098B2 (en) Composite event distribution apparatus, composite event distribution method, and composite event distribution program
CN108334396B (en) Data processing method and device, and resource group creation method and device
WO2016011953A1 (en) Scheduling of service resource
CN107832143B (en) Method and device for processing physical machine resources
EP3575979B1 (en) Query priority and operation-aware communication buffer management
EP4404539A1 (en) Resource scheduling method, apparatus and system, device, medium, and program product
CN111104227A (en) Resource control method and device of K8s platform and related components
US20210160312A1 (en) Service processing methods and systrems based on a consortium blockchain network
CN113835865A (en) Task deployment method and device, electronic equipment and storage medium
CN113886069A (en) Resource allocation method and device, electronic equipment and storage medium
CN111381961A (en) Method and device for processing timing task and electronic equipment
CN111930516B (en) Load balancing method and related device
CN114116173A (en) Method, device and system for dynamically adjusting task allocation
US20180095440A1 (en) Non-transitory computer-readable storage medium, activation control method, and activation control device
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN109391663B (en) Access request processing method and device
CN118093133A (en) Task scheduling method and related equipment
CN112486638A (en) Method, apparatus, device and storage medium for executing processing task
CN115658311A (en) Resource scheduling method, device, equipment and medium
JP5526748B2 (en) Packet processing device, packet distribution device, control program, and packet distribution method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination