CN117785431A - Task scheduling distribution method and device, electronic equipment and storage medium - Google Patents

Task scheduling distribution method and device, electronic equipment and storage medium

Info

Publication number
CN117785431A
CN117785431A
Authority
CN
China
Prior art keywords
task
queue
target
target task
scheduler
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410213357.1A
Other languages
Chinese (zh)
Other versions
CN117785431B (en)
Inventor
付大伟
韩克党
汤子楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunli Intelligent Technology Co ltd
Original Assignee
Yunli Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunli Intelligent Technology Co ltd
Priority to CN202410213357.1A
Publication of CN117785431A
Application granted
Publication of CN117785431B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a task scheduling and distributing method and device, an electronic device and a storage medium, relating to the technical field of computers, and comprising the following steps: when a target task in a queue to be run is acquired by the outbound queue of a target scheduler, the target scheduler is instructed to write the target task into a temporary storage queue of the server, a timer in the temporary storage queue is started to time the target task, and the target scheduler is instructed to schedule and allocate the target task; if the timer has not timed out, the task scheduling allocation result is determined based on the confirmation result received by the temporary storage queue; if the timer has timed out, the target task in the temporary storage queue is written into a failed retry queue based on the scheduler exception result received by the temporary storage queue, and the target task in the failed retry queue is pushed into the queue to be run again until the task scheduling allocation result is determined. The invention ensures that every task is executed, with no missed or repeated executions, makes scheduling bottlenecks unlikely, and improves the rate and accuracy of task scheduling and allocation.

Description

Task scheduling distribution method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a task scheduling and distributing method, a device, an electronic device, and a storage medium.
Background
In the field of computer technology, a scheduling engine provides functions for configuring, managing and executing scheduled tasks, starts and processes tasks automatically, and provides users with automatic report calculation, export and pushing. A scheduling engine generally comprises a data storage component, a scheduler, an executor and the like, and most open-source scheduling engines use a database or a distributed coordinator as a lock so that multiple schedulers can allocate tasks concurrently in multiple threads. However, because the multiple schedulers depend heavily on the lock of the database or the distributed coordinator, scheduling performance bottlenecks easily occur when the task amount is large and the amount of historical data is large, and the process in which the multiple schedulers contend for the lock also causes resource waste and single points of failure, so the rate and accuracy of task scheduling allocation are low.
Disclosure of Invention
The invention provides a task scheduling and distributing method and device, an electronic device and a storage medium, which address the defects of the prior art in which the concurrent operation of multiple schedulers depends too heavily on a database or a distributed coordinator, causing resource waste, single points of failure, frequent scheduling bottlenecks, and a low rate and accuracy of task scheduling allocation. With the invention, no database, distributed coordinator or other distributed lock is needed in the task scheduling and allocation process, resource waste and single points of failure are avoided, every task is guaranteed to be executed with no task missed and no task executed repeatedly, scheduling bottlenecks are unlikely even when the task amount and the amount of historical data are large, and the rate and accuracy of task scheduling allocation are greatly improved.
The invention provides a task scheduling and distributing method, which is applied to a server and comprises the following steps:
under the condition that a target task in a queue to be operated is acquired by an outbound queue of a target scheduler, the target scheduler is instructed to write the target task in the outbound queue into a temporary storage queue of the server, a timer in the temporary storage queue is started to start timing the target task based on a successful writing result, and the target scheduler is instructed to schedule and allocate the target task;
determining a task scheduling allocation result of the target task based on a confirmation result of the target scheduler for the target task received by the temporary storage queue, under the condition that the timer has not timed out;
and, under the condition that the timer has timed out, writing the target task in the temporary storage queue into a failed retry queue based on a scheduler exception result of the target scheduler received by the temporary storage queue, pushing the target task in the failed retry queue into the queue to be operated again under the condition that the temporary storage time of the target task in the failed retry queue reaches a first time threshold, and repeatedly executing the above steps until the task scheduling allocation result of the target task is determined.
According to the task scheduling and distributing method provided by the invention, the step of pushing the target task in the failed retry queue into the queue to be operated again is repeatedly executed until the task scheduling and distributing result of the target task is determined, and the method comprises the following steps:
re-pushing the target task in the failed retry queue into the queue to be operated, repeatedly executing the above steps, and, if the task scheduling allocation result of the target task has not been determined when the accumulated number of repeated executions reaches a first count threshold, writing the target task in the temporary storage queue into a delay queue;
and, under the condition that the delay time of the target task in the delay queue reaches a second time threshold, re-writing the target task in the delay queue into the queue to be operated, repeatedly executing the above steps, and determining the task scheduling allocation result of the target task under the condition that the accumulated number of repeated executions reaches a second count threshold.
According to the task scheduling and distributing method provided by the invention, determining the task scheduling allocation result of the target task under the condition that the accumulated number of repeated executions reaches the second count threshold comprises:
under the condition that the accumulated number of repeated executions reaches the second count threshold, if the temporary storage queue receives a confirmation success result for the target task, determining that the task scheduling allocation result is that allocation and scheduling of the target task succeeded, and writing the target task into an allocated queue, wherein the allocated queue records the allocation information of the node corresponding to the target task;
and, under the condition that the accumulated number of repeated executions reaches the second count threshold, if the temporary storage queue receives a confirmation failure result for the target task, determining that the task scheduling allocation result is that allocation and scheduling of the target task failed, and ending scheduling allocation for the target task.
The task scheduling and distributing method provided by the invention further comprises the following steps:
and under the condition that the task scheduling allocation result is determined, removing the target tasks with the same ID in the temporary storage queue, removing the timer, and removing the target tasks with the same ID in the failed retry queue.
According to the task scheduling and distributing method provided by the invention, the target task in the queue to be operated is acquired by the dequeue of the target scheduler, and the method comprises the following steps:
Acquiring task execution plans corresponding to different types of tasks respectively;
determining an execution trigger condition for executing the target task based on each task execution plan;
writing the target task into the queue to be operated under the condition that the execution triggering condition is met, and indicating the queue to be operated to distribute the target task to at least one scheduler so that each scheduler obtains the target task through a distributed contention mechanism; until the target task in the queue to be run is acquired by the dequeue of the target scheduler in each scheduler.
According to the task scheduling and distributing method provided by the invention, the execution triggering condition for executing the target task is determined based on each task execution plan, and the method comprises the following steps:
determining a daily executable task amount based on each of the task execution plans;
determining the execution triggering condition under the condition that the daily executable task quantity is less than or equal to the total task execution quantity of all currently deployed schedulers;
and, under the condition that the daily executable task quantity is larger than the total task execution quantity, outputting indication information for horizontally scaling out the schedulers, and determining the execution triggering condition based on a successful scale-out result.
The task scheduling and distributing method provided by the invention further comprises the following steps:
determining the total number of tasks to be executed in the queue to be executed;
and outputting indication information for horizontally scaling out the schedulers when the total number of the tasks exceeds a number threshold, and determining the target task popped from the queue to be operated based on a successful scale-out result.
The invention also provides a task scheduling and distributing device which is applied to the server and comprises the following components:
the task scheduling and distributing unit is used for instructing the target scheduler to write the target task in the outgoing queue into a temporary storage queue of the server under the condition that the target task in the queue to be operated is acquired by the outgoing queue of the target scheduler, starting a timer in the temporary storage queue to start timing the target task based on a successful writing result, and instructing the target scheduler to schedule and distribute the target task;
the allocation scheduling confirmation unit is used for determining a task scheduling allocation result of the target task based on a confirmation result of the target scheduler for the target task received by the temporary storage queue, under the condition that the timer has not timed out;
and the repeated scheduling allocation unit is used for writing the target task in the temporary storage queue into a failed retry queue based on the scheduler exception result of the target scheduler received by the temporary storage queue under the condition that the timer has timed out, re-pushing the target task in the failed retry queue into the queue to be operated under the condition that the temporary storage time of the target task in the failed retry queue reaches a first time threshold, and repeatedly executing the above steps until the task scheduling allocation result of the target task is determined.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the task scheduling allocation method according to any one of the above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a task scheduling allocation method as described in any of the above.
According to the task scheduling and distributing method and device, electronic device and storage medium provided by the invention, when the server determines that a target task in the queue to be operated has been acquired by the outbound queue of the target scheduler, it instructs the target scheduler to schedule and allocate the target task, instructs the target scheduler to write the target task in the outbound queue into the temporary storage queue of the server, and starts a timer in the temporary storage queue to begin timing the target task. In this way, if the timer has not timed out, the server can determine the task scheduling allocation result of the target task based on the confirmation result for the target task that the temporary storage queue receives from the target scheduler; if the timer has timed out, the server can, based on the scheduler exception result received by the temporary storage queue, write the target task in the temporary storage queue into the failed retry queue and, after the target task has remained there for a certain time, push it into the queue to be operated again for another round of scheduling allocation. Therefore, through the working mechanisms of the queue to be operated, the temporary storage queue, the failed retry queue and the outbound queue of the scheduler, the server needs no database, distributed coordinator or other distributed lock in the task scheduling and allocation process, which avoids resource waste and single points of failure, ensures that every task is executed, avoids missing a task or executing the same task repeatedly, makes scheduling bottlenecks unlikely even when the task amount and the amount of historical data are large, and greatly improves the rate and accuracy of task scheduling allocation.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a task scheduling and distributing method provided by the invention;
FIG. 2 is a schematic diagram of a task allocation process based on multi-queue scheduling according to the present invention;
FIG. 3 is a schematic diagram of the overall process of the task scheduling and distributing method provided by the invention;
FIG. 4 is a schematic diagram of a task scheduling and distributing device provided by the invention;
fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In embodiments of the present invention, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. In the text description of the present invention, the character "/" generally indicates that the front-rear associated object is an or relationship. In addition, it should be noted that, the numbers of the objects described in the present invention, such as "first", "second", etc., are merely used to distinguish the described objects, and do not have any sequence or technical meaning.
In the field of computer technology, a scheduling engine provides functions for configuring, managing and executing scheduled tasks, starts and processes tasks automatically, and provides users with automatic report calculation, export and pushing. The scheduling engine may generally include a data storage component, a scheduler, an executor and the like; in a distributed scheduling engine, a coordinator is often required to coordinate the operation of the schedulers.
For example, open-source scheduling engines such as XxlJob, DolphinScheduler and DAGScheduleX (Taier) typically use Quartz as the core scheduler and a MySQL database or ZooKeeper as the coordinator. XxlJob is a distributed task scheduling platform whose core design goals are rapid development, easy learning, light weight and easy extension. DolphinScheduler is an open-source, distributed and easily extensible visual DAG (Directed Acyclic Graph) workflow task scheduling system; it mainly addresses problems in big-data development such as complicated Extract-Transform-Load (ETL) dependencies and the inability to monitor task health intuitively, and is ready to use out of the box. DAGScheduleX is a distributed task scheduling engine capable of instance generation, instance scheduling, instance submission, instance operation and maintenance, and instance alerting for tasks.
Yarn: queues are used as the coordinator to perform resource evaluation and coordination; Yarn is a resource scheduling platform responsible for providing server computing resources to running programs, equivalent to a distributed operating system platform.
Airflow: the core is implemented with Celery, which provides a task queue as the scheduling intermediary; however, Celery does not provide the queue itself and must rely on other middleware such as RabbitMQ or a database. RabbitMQ is open-source message broker software (also known as message-oriented middleware) implementing the Advanced Message Queuing Protocol (AMQP).
It can be seen that most open source scheduling engines generally use a database or a distributed coordinator as a lock, so that task scheduling allocation is performed by multiple schedulers in a multithreaded concurrent manner.
However, because the existing multiple schedulers depend heavily on the database or the distributed coordinator, scheduling performance bottlenecks easily occur when the task amount is large and the amount of historical data is large, and the process in which the multiple schedulers contend for the database lock or the distributed coordinator also causes resource waste and single points of failure, so the rate and accuracy of task scheduling allocation are low.
To solve the above technical problems, the invention provides a task scheduling and distributing method and device, an electronic device and a storage medium, which ensure that the concurrent scheduling of multiple tasks does not depend on a distributed coordinator, a database or any other distributed lock, so that the task scheduling process is lock-free, resource waste and single points of failure are avoided, and no scheduling performance bottleneck occurs even when the task amount is large and there is much historical data. In addition, every task is guaranteed to be executed, missed tasks and repeated execution are avoided, and the rate and accuracy of task scheduling allocation are greatly improved.
The task scheduling and distributing method, the device, the electronic equipment and the storage medium of the present invention are described below with reference to fig. 1 to 5, where the execution body of the task scheduling and distributing method is a server, and the server may be one server, or may be a server cluster formed by a plurality of servers, a cloud computing center, or the like. The present invention is not particularly limited to the specific form of the server. Further, the task scheduling and distributing method can be applied to a task scheduling and distributing device arranged in the server, and the task scheduling and distributing device can be realized by software, hardware or a combination of the two. The task scheduling and distributing method will be described below by taking an execution body of the task scheduling and distributing method as an example of a server.
In order to facilitate understanding of the task scheduling assignment method provided by the embodiment of the present invention, the task scheduling assignment method provided by the present invention will be described in detail by the following several exemplary embodiments. It is to be understood that the following several exemplary embodiments may be combined with each other and that some embodiments may not be repeated for the same or similar concepts or processes.
Referring to fig. 1, a flow chart of a task scheduling and distributing method according to an embodiment of the present invention is shown in fig. 1, and the task scheduling and distributing method includes the following steps 110 to 130.
Step 110, under the condition that the target task in the queue to be operated is acquired by the dequeue of the target scheduler, instructing the target scheduler to write the target task in the dequeue into the temporary storage queue of the server, starting a timer in the temporary storage queue to start timing the target task based on the successful writing result, and instructing the target scheduler to schedule and allocate the target task.
The queue to be run may be used to ensure that different types of tasks transmitted through the application programming interface (Application Programming Interface, API) can be acquired by at least one scheduler; the target scheduler may be the scheduler whose central processing unit (Central Processing Unit, CPU) currently has the strongest performance among the at least one scheduler; and the plurality of tasks may include, but are not limited to, different types of tasks such as data export tasks, data processing tasks and data import tasks.
Specifically, in step 110, the server monitors its queue to be run in real time so as to detect each task the queue receives through the API interface. When the queue to be run receives at least one task, the server may instruct the currently deployed and started schedulers to contend for the task through a contention mechanism; when the queue to be run receives at least two tasks through the API interface, the server may first determine the target task to be popped according to a rule preset in the queue, such as first-in first-out or last-in first-out, and then let the currently deployed and started schedulers contend for that target task through the contention mechanism. The server then instructs the target scheduler that obtains the target task to schedule and allocate it, and instructs the outbound queue of that target scheduler to write the target task into the temporary storage queue of the server upon receiving it.
The process in which at least one scheduler contends for the target task may be that each scheduler requests a task from the server; when the server receives at least one task request, it may instruct the queue to be run to determine the target task currently to be popped from the at least two tasks according to a preset rule such as first-in first-out or last-in first-out, with an in-memory lock used for concurrency control. The scope of the in-memory lock is small, popping the target task from memory completes almost instantaneously, and no bottleneck exists.
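For illustration only, the following Java sketch models the queue to be run and the contention step described above; the class and method names (ToBeRunQueue, contendForTask) are hypothetical and not taken from the patent, and a standard blocking queue stands in for the in-memory lock.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: the queue to be run lives in server memory.
// Each scheduler contends by calling contendForTask(); the internal lock
// is held only for the instant the head element is removed, so popping
// completes almost immediately and is not a bottleneck.
public class ToBeRunQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // FIFO order

    public void push(String taskId) {
        queue.offer(taskId);
    }

    // Called concurrently by every scheduler that requests a task;
    // exactly one caller obtains each task, the others receive null.
    public String contendForTask() {
        return queue.poll();
    }
}
```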
When the server detects that a target task has been written into the temporary storage queue, it can immediately start a timer in the temporary storage queue and begin timing that target task. The temporary storage queue may be preset with a plurality of timers, each corresponding to one target task, and each timer may run for a certain duration, for example 1 minute by default.
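As a minimal sketch of this per-task timing, assuming a default timeout of one minute, one possible arrangement in Java is shown below; StagingQueue, write and cancelTimer are hypothetical names rather than an API defined by the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: when a task is written into the temporary storage
// queue, a one-minute timer is started for it; if the timer fires before a
// confirmation arrives, the timeout callback runs (for example, moving the
// task to the failed retry queue).
public class StagingQueue {
    private final Map<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timerPool = Executors.newScheduledThreadPool(4);

    public void write(String taskId, Runnable onTimeout) {
        ScheduledFuture<?> timer = timerPool.schedule(onTimeout, 1, TimeUnit.MINUTES);
        timers.put(taskId, timer);
    }

    // Cancels the timer when the confirmation result arrives in time.
    public void cancelTimer(String taskId) {
        ScheduledFuture<?> timer = timers.remove(taskId);
        if (timer != null) {
            timer.cancel(false);
        }
    }
}
```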
It should be noted that, the target scheduler that obtains the target task may perform scheduling allocation on the target task according to the existing task scheduling allocation logic, and the process may use other scheduling allocation logic such as weighted average or resource calculation, and the specific scheduling allocation logic may refer to the existing scheduling allocation logic. The present invention is not particularly limited thereto.
Step 120, determining a task scheduling allocation result of the target task based on the confirmation result of the target scheduler for the target task received by the temporary storage queue, under the condition that the timer has not timed out.
Here, the confirmation result may be one of confirmation success and confirmation failure; confirmation success may be identified as an acknowledgement (ACK), and confirmation failure may be identified as a negative acknowledgement (Negative Acknowledgement, NACK). The confirmation result may be fed back by the target scheduler to the temporary storage queue according to the scheduling allocation result of the target task; for example, confirmation success is fed back to the temporary storage queue when scheduling allocation succeeds, and confirmation failure is fed back when scheduling allocation fails.
Specifically, while the timer in the temporary storage queue is timing the recorded target task, the server can monitor whether the temporary storage queue receives the confirmation result fed back by the target scheduler before the timer times out; if the temporary storage queue receives the confirmation result in time, the allocation scheduling result of the target task can be determined. For example, when the confirmation result is confirmation success, the task scheduling allocation result may be that scheduling allocation of the target task succeeded; conversely, when the confirmation result is confirmation failure, the task scheduling allocation result may be determined to be that scheduling allocation of the target task failed.
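The mapping from the confirmation result to the task scheduling allocation result can be sketched as follows; the enum and method names are hypothetical, and the sketch covers only the non-timeout case discussed in this step.

```java
// Hypothetical sketch of how the server could translate the scheduler's
// confirmation result into a task scheduling allocation result while the
// timer has not yet timed out.
public class ConfirmationHandler {
    public enum Confirmation { ACK, NACK }
    public enum AllocationResult { ALLOCATION_SUCCEEDED, ALLOCATION_FAILED }

    public AllocationResult onConfirmation(String taskId, Confirmation confirmation) {
        // ACK: the target scheduler allocated the task successfully.
        // NACK: the target scheduler reports that allocation failed.
        return confirmation == Confirmation.ACK
                ? AllocationResult.ALLOCATION_SUCCEEDED
                : AllocationResult.ALLOCATION_FAILED;
    }
}
```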
Step 130, writing the target task in the temporary storage queue into a failed retry queue based on the scheduler exception result of the target scheduler received by the temporary storage queue under the condition that the timer has timed out, pushing the target task in the failed retry queue into the queue to be operated again under the condition that the temporary storage time of the target task in the failed retry queue reaches a first time threshold, and repeatedly executing the above steps until the task scheduling allocation result of the target task is determined.
Wherein the first time threshold may be a time of less than 1 minute, for example the first time threshold may be 100 milliseconds.
Specifically, in step 130, if the target scheduler encounters an extreme situation such as restarting, going down or crashing while scheduling and allocating the target task, which indicates that the target scheduler is abnormal, the temporary storage queue will not receive a confirmation result fed back by the target scheduler before the timer times out. The server may then trigger the timeout mechanism of the temporary storage queue and write the target task into the failed retry queue; when it determines that the temporary storage time of the target task in the failed retry queue has reached the first time threshold, it pushes the target task in the failed retry queue into the queue to be operated again, so that the target task is acquired again, either by the same target scheduler or by a different target scheduler, and scheduling allocation is performed again, until the task scheduling allocation result of the target task is determined.
It should be noted that monitoring logic is preset in the failed retry queue; it monitors each task pushed into the failed retry queue and, according to the unique identifier of each target task (for example, the ID of each target task), keeps a cumulative count of how many times each target task has been pushed into the failed retry queue, so that the task scheduling allocation result of the corresponding target task can be determined after a given number of retries; for example, a maximum retry count is imposed to force the task scheduling allocation result of the corresponding target task to be determined.
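A simplified, non-authoritative sketch of this retry bookkeeping is shown below; the 100 ms staging time and the retry limit of 5 follow the illustrative figures given elsewhere in the description, and all class and method names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical sketch: each task pushed into the failed retry queue is
// counted by its ID; after it has stayed here for the first time threshold
// (100 ms in the example), it is pushed back into the queue to be run,
// unless the retry count has already reached the first count threshold.
public class FailedRetryQueue {
    private static final long FIRST_TIME_THRESHOLD_MS = 100;
    private static final int FIRST_COUNT_THRESHOLD = 5;

    private final Map<String, Integer> retryCounts = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void push(String taskId,
                     Consumer<String> pushBackToRunQueue,
                     Consumer<String> writeToDelayQueue) {
        int retries = retryCounts.merge(taskId, 1, Integer::sum);
        if (retries >= FIRST_COUNT_THRESHOLD) {
            // Scheduling kept failing: hand the task over to the delay queue.
            writeToDelayQueue.accept(taskId);
            return;
        }
        // Otherwise re-push into the queue to be run once the staging time elapses.
        scheduler.schedule(() -> pushBackToRunQueue.accept(taskId),
                FIRST_TIME_THRESHOLD_MS, TimeUnit.MILLISECONDS);
    }

    // Called when the allocation result of the task has been determined.
    public void remove(String taskId) {
        retryCounts.remove(taskId);
    }
}
```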
When the server determines that the target task in the queue to be operated has been acquired by the outbound queue of the target scheduler, it instructs the target scheduler to schedule and allocate the target task, instructs the target scheduler to write the target task in the outbound queue into the temporary storage queue of the server, and starts a timer in the temporary storage queue to begin timing the target task. If the timer has not timed out, the server determines the task scheduling allocation result of the target task based on the confirmation result for the target task that the temporary storage queue receives from the target scheduler; if the timer has timed out, the server writes the target task in the temporary storage queue into the failed retry queue based on the scheduler exception result received by the temporary storage queue and, after a certain time, pushes it into the queue to be operated again for another round of scheduling allocation. Therefore, through the working mechanisms of the queue to be operated, the temporary storage queue, the failed retry queue and the outbound queue of the scheduler, the server needs no database, distributed coordinator or other distributed lock in the task scheduling and allocation process, which avoids resource waste and single points of failure, ensures that every task is executed, avoids missing a task or executing the same task repeatedly, makes scheduling bottlenecks unlikely even when the task amount and the amount of historical data are large, and greatly improves the rate and accuracy of task scheduling allocation.
Based on the task scheduling and distributing method shown in fig. 1, in an example embodiment, in step 130, re-pushing the target task in the failed retry queue into the queue to be operated and repeatedly executing the above steps until the task scheduling allocation result of the target task is determined may include:
re-pushing the target task in the failed retry queue into the queue to be operated, returning to repeatedly execute the above steps, and, if the task scheduling allocation result of the target task has not been determined when the accumulated number of repeated executions reaches a first count threshold, writing the target task in the temporary storage queue into a delay queue; and, under the condition that the delay time of the target task in the delay queue reaches a second time threshold, re-writing the target task in the delay queue into the queue to be operated, repeatedly executing the above steps, and determining the task scheduling allocation result of the target task under the condition that the accumulated number of repeated executions reaches a second count threshold.
Specifically, the scheduler feeds back the scheduling allocation result to the temporary storage queue of the server each time a task scheduling allocation completes, so when the target task in the failed retry queue has been pushed back into the queue to be operated, the server can return to step 110 and repeat the above steps. If scheduling is still unsuccessful when the accumulated number of repeated executions reaches the first count threshold, an evaluation whose state is failure-blocked can be created for the target task, and the target task corresponding to that evaluation is written into the delay queue. With the delay time of the delay queue set to the second time threshold, the target task is re-written into the queue to be operated after it has been delayed for the second time threshold, and the server returns to step 110 and repeats the above steps. If scheduling allocation is still unsuccessful when the accumulated number of repeated executions reaches the second count threshold, the target task is no longer pushed into the queue to be operated, and the task scheduling allocation result is determined to be that scheduling of the target task failed; at this point the target task can be judged to be a failed or abnormal task, and the user can be prompted to decide whether to perform exception handling and schedule and allocate it again.
Illustratively, the first count threshold may be 5, the second count threshold may be 10, and the second time threshold may be 1 minute.
It should be noted that if the task scheduling allocation result of the target task is determined before the accumulated number of repeated executions reaches the first count threshold, the repeated execution ends there; likewise, if the task scheduling allocation result is determined before the accumulated number of repeated executions reaches the second count threshold, the repeated execution ends.
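The delay step can be illustrated with the JDK's java.util.concurrent.DelayQueue; the sketch below is only one possible realisation (the DelayedTask class and the figures in main are illustrative assumptions), not the patent's concrete implementation.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a blocked task is parked in a delay queue for the
// second time threshold (one minute in the example above) before being
// written back into the queue to be run for another round of scheduling.
public class DelayedTask implements Delayed {
    private final String taskId;
    private final long releaseAtMillis;

    public DelayedTask(String taskId, long delayMillis) {
        this.taskId = taskId;
        this.releaseAtMillis = System.currentTimeMillis() + delayMillis;
    }

    public String getTaskId() {
        return taskId;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(releaseAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> delayQueue = new DelayQueue<>();
        // A short delay is used here only so the demo finishes quickly.
        delayQueue.put(new DelayedTask("task-42", 1000));
        // take() blocks until the delay has elapsed; the task would then be
        // written back into the queue to be run.
        DelayedTask ready = delayQueue.take();
        System.out.println("re-enqueue " + ready.getTaskId());
    }
}
```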
Based on the task scheduling and distributing method shown in fig. 1, in an example embodiment, when the accumulated number of repeated executions reaches the second count threshold, the server determines the task scheduling allocation result of the target task, and the specific implementation process may include:
under the condition that the accumulated number of repeated executions reaches the second count threshold, if the temporary storage queue receives a confirmation success result for the target task, determining that the task scheduling allocation result is that allocation and scheduling of the target task succeeded, and writing the target task into the allocated queue, where the allocated queue records the allocation information of the node corresponding to the target task; and, under the condition that the accumulated number of repeated executions reaches the second count threshold, if the temporary storage queue receives a confirmation failure result for the target task, determining that the task scheduling allocation result is that allocation and scheduling of the target task failed, and ending scheduling allocation for the target task.
The node corresponding to the target task recorded in the allocated queue may be specifically an executor executing the target task.
Specifically, if the temporary storage queue receives a confirmation failure result fed back by the target scheduler for the target task when the accumulated number of repeated executions has reached the second count threshold, the task scheduling allocation result can be determined to be that allocation and scheduling of the target task failed, and the target task is no longer allocated or scheduled. Conversely, if the temporary storage queue receives a confirmation success result fed back by the target scheduler for the target task, whether or not the accumulated number of repeated executions has reached the second count threshold, the task scheduling allocation result is determined to be that allocation and scheduling of the task succeeded, the repeated execution ends, and the successfully allocated target task is written into the allocated queue so that the corresponding executor can execute it.
Based on the task scheduling and distributing method shown in fig. 1, in an example embodiment, the task scheduling and distributing method provided by the embodiment of the present invention may further include:
and under the condition that the task scheduling and distributing result is determined, removing the target tasks with the same ID in the temporary storage queue, removing the timer corresponding to the target tasks, and removing the target tasks with the same ID in the failed retry queue.
Specifically, when the server determines the task scheduling allocation result of a target task, it can delete the target task with the same identity (ID) from the temporary storage queue and, at the same time, delete the target task with the same ID from the failed retry queue, which prevents the target task in the failed retry queue from being pushed into the queue to be operated again and run repeatedly, and also avoids unnecessary memory occupation.
Based on the task scheduling allocation method shown in fig. 1, in an example embodiment, the specific implementation process that the target task in the queue to be executed of the server is acquired by the dequeue of the target scheduler in step 110 may include:
firstly, acquiring task execution plans corresponding to different types of tasks respectively; further determining an execution triggering condition of the execution target task based on each task execution plan; then, under the condition that the execution triggering condition is met, writing the target task into a queue to be operated, and indicating the queue to be operated to distribute the target task to at least one scheduler so that each scheduler can acquire the target task through a distributed contention mechanism; until the target task in the queue to be run is acquired by the dequeue of the target scheduler in each scheduler.
The task execution plan may include, but is not limited to, an execution period of the corresponding type of task, that is, how often the corresponding type of task is executed; for example, the data import task is performed every 5 minutes.
Specifically, the server may obtain the task execution plans corresponding to the different types of tasks either from a user who inputs each task execution plan to the server through the API interface, or from other devices that have established a communication relationship with the server. The present invention is not particularly limited thereto.
For each task execution plan, the server may determine the execution trigger condition of the target task to be popped from the queue to be operated; the execution trigger condition is the condition that triggers execution of the target task. For example, when the task execution plan of the target task is to execute once every 3 minutes, the corresponding execution trigger condition may be that, if the target task has been executed before, 3 minutes have elapsed since the last execution, and, if the target task has not yet been executed, 3 minutes have elapsed since the target task was obtained.
At this time, when the server determines that the execution triggering condition of the target task is met, the target task may be written into the queue to be operated, and the queue to be operated is instructed to distribute the target task to at least one scheduler, so that each scheduler obtains the target task through a distributed contention mechanism; until the target task in the queue to be run is acquired by the dequeue of the target scheduler in each scheduler.
It should be noted that the whole task scheduling and allocation process ensures that a target task popped from the queue to be operated can always be obtained by at least one scheduler, and each scheduler contends for the target task according to its CPU resource situation until the outbound queue of the target scheduler wins the target task.
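A trigger condition of the kind described in the 3-minute example above can be sketched as follows; ExecutionTrigger and its fields are hypothetical names used only for illustration, not part of the patent.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch: a task execution plan with a fixed period (e.g.
// "run once every 3 minutes") yields a trigger condition of the form
// "the period has elapsed since the last execution, or, before the first
// execution, since the task was obtained".
public class ExecutionTrigger {
    private final Duration period;      // e.g. Duration.ofMinutes(3)
    private final Instant obtainedAt;   // when the task/plan was obtained
    private Instant lastExecutedAt;     // null until the first execution

    public ExecutionTrigger(Duration period, Instant obtainedAt) {
        this.period = period;
        this.obtainedAt = obtainedAt;
    }

    public boolean shouldTrigger(Instant now) {
        Instant reference = (lastExecutedAt != null) ? lastExecutedAt : obtainedAt;
        return !now.isBefore(reference.plus(period));
    }

    public void markExecuted(Instant when) {
        lastExecutedAt = when;
    }
}
```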
Based on the task scheduling allocation method shown in fig. 1, in an exemplary embodiment, to accelerate task scheduling allocation, the server may first determine, when acquiring each task execution plan, whether the number of currently deployed schedulers is sufficient to support the required execution task amount, so as to determine whether a horizontal expansion of the schedulers is required. Based on this, the server determines, based on each task execution plan, an execution trigger condition for executing the target task, and the specific implementation procedure may include:
determining a daily executable task amount based on each task execution plan; determining the execution trigger condition under the condition that the daily executable task amount is less than or equal to the total task execution amount of all currently deployed schedulers; and, under the condition that the daily executable task amount is larger than the total task execution amount, outputting indication information for horizontally scaling out the schedulers, and determining the execution trigger condition for executing the target task based on a successful scale-out result.
Specifically, the server analyzes each task execution plan to determine the daily executable task amount and determines whether it exceeds the total task execution amount of all currently deployed schedulers. When the daily executable task amount exceeds the total task execution amount of all schedulers, it can be determined that the currently deployed schedulers cannot complete every task execution plan; indication information for horizontally scaling out the schedulers can then be output, and the execution trigger condition for executing the target task can be determined once at least one additional scheduler has been scaled out. Conversely, when the daily executable task amount does not exceed the total task execution amount of all schedulers, it can be determined that the currently deployed schedulers are sufficient to complete every task execution plan, and the execution trigger condition for executing the target task can be determined directly.
It should be noted that testing shows that, with the scheme of the present invention, three schedulers are enough to support executing 100,000 tasks per day, and the task scheduling capacity can be further increased by adding schedulers.
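One plausible way to express the capacity check that decides whether to scale out, assuming each scheduler's daily capacity is known, is the following sketch; the class name, method signature and the figures in main are illustrative assumptions rather than values from the patent.

```java
// Hypothetical sketch: compare the daily executable task amount, summed
// over the task execution plans, with the total daily execution capacity
// of the currently deployed schedulers.
public class CapacityCheck {
    public static boolean needsHorizontalScaling(long dailyExecutableTasks,
                                                 int deployedSchedulers,
                                                 long capacityPerSchedulerPerDay) {
        long totalCapacity = (long) deployedSchedulers * capacityPerSchedulerPerDay;
        // true => output an indication to scale the schedulers out horizontally
        // before deriving the execution trigger conditions.
        return dailyExecutableTasks > totalCapacity;
    }

    public static void main(String[] args) {
        // Illustrative only: 120,000 planned tasks/day, 3 schedulers,
        // an assumed capacity of about 33,000 tasks per scheduler per day.
        System.out.println(needsHorizontalScaling(120_000, 3, 33_000)); // true
    }
}
```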
Based on the task scheduling allocation method shown in fig. 1, in an exemplary embodiment, besides deciding whether to scale the schedulers out horizontally according to whether all deployed schedulers are sufficient for the daily executable task amount, the server may also decide whether to scale out by checking whether the number of tasks in the queue to be run has reached a certain scale. Based on this, the task scheduling and distributing method provided by the invention may further include:
First, determining the total number of tasks to be executed in the queue to be run; then, when the total number of tasks exceeds a number threshold, outputting indication information for horizontally scaling out the schedulers, and determining the target task popped from the queue to be operated based on a successful scale-out result.
Specifically, the server may determine the total number of tasks to be executed in the queue to be run by counting the different tasks received within a preset time period or the tasks that simultaneously meet different execution trigger conditions, and then judge whether the total number exceeds the number threshold. When the server determines that the total number of tasks exceeds the number threshold, the tasks in the current queue to be run have reached a scale at which adding schedulers can speed up dequeueing; indication information for horizontally scaling out the schedulers can then be output, and at least one target task popped from the queue to be run can be determined once at least one additional scheduler has been scaled out.
When the server determines that the total number of tasks does not exceed the number threshold, the tasks in the current queue to be run are of a smaller scale and do not affect the dequeueing speed, and at least one target task popped from the queue to be run can be determined directly.
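The backlog-based check can be sketched in the same illustrative spirit; QueueBacklogMonitor and its methods are hypothetical names, and the threshold value would be configured by the operator.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: count the tasks currently waiting in the queue to be
// run; if the total exceeds a configured threshold, emit an indication to
// scale the schedulers out horizontally so that dequeueing speeds up.
public class QueueBacklogMonitor {
    private final int countThreshold;
    private final AtomicInteger pendingTasks = new AtomicInteger();

    public QueueBacklogMonitor(int countThreshold) {
        this.countThreshold = countThreshold;
    }

    public void onTaskEnqueued() {
        pendingTasks.incrementAndGet();
    }

    public void onTaskDequeued() {
        pendingTasks.decrementAndGet();
    }

    public boolean shouldScaleOut() {
        return pendingTasks.get() > countThreshold;
    }
}
```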
For example, referring to the schematic diagram of the multi-queue-based task allocation and scheduling process shown in fig. 2, the memory-based queues and sets can be understood as the queue to be run, the failed retry queue, the temporary storage queue and the allocated queue, all of which run in memory; a task evaluation object can be understood as the evaluation of a target task that meets its execution trigger condition according to the task execution plan; each task recorded in the allocated queue can be executed by the corresponding executor through an outbound queue; the step of judging, in the failed retry queue, whether the number of failed retries of a target task (i.e. the accumulated number of repeated executions) exceeds the second count threshold can be executed by a failed-task retry processor; and each target task popped from the queue to be run can be acquired by the corresponding target scheduler through its outbound queue and then scheduled and allocated. For the specific processes involved, reference is made to the foregoing embodiments, which are not repeated here.
Referring to the schematic diagram of the overall process of the task scheduling and distributing method shown in fig. 3, creating a plan can be understood as a user creating each task execution plan through the API interface and uploading it to the server; periodic triggering can be understood as each type of task being triggered periodically according to its task execution plan; the evaluation queue is specifically the queue to be run; the distribution queue is specifically the allocated queue; the scheduling algorithm can be understood as the scheduling allocation algorithm of an existing scheduler; and the generated allocation scheme may be the allocation information of the executor corresponding to the target task. The executor obtains the allocation information of the target tasks to be run on its node and executes the corresponding target tasks. For the specific processes involved, reference is made to the foregoing embodiments, which are not repeated here.
As can be seen by combining fig. 2 and fig. 3 with the foregoing embodiments, the whole task scheduling and allocation process of the present invention uses multiple queues instead of a distributed lock such as a database or a distributed coordinator, so each task is guaranteed to be executed exactly once, with neither missed execution nor repeated execution; horizontally scaling out multiple schedulers causes no resource contention and can accelerate task scheduling allocation; and even when the task amount is large and the amount of historical data is large, no scheduling performance bottleneck, resource waste or single point of failure occurs, so the rate and accuracy of task scheduling allocation are greatly improved.
The task scheduling and distributing device provided by the invention is described below, and the task scheduling and distributing device described below and the task scheduling and distributing method described above can be referred to correspondingly.
The task scheduling and distributing device provided by the invention is applied to a server. Referring to fig. 4, a schematic structural diagram of a task scheduling and distributing device provided by the present invention, as shown in fig. 4, the task scheduling and distributing device 400 includes: a task schedule allocation unit 410, an allocation schedule confirmation unit 420, and a repetition schedule allocation unit 430.
The task scheduling allocation unit 410 is configured to instruct the target scheduler to write the target task in the outgoing queue into the temporary storage queue of the server, start a timer in the temporary storage queue to start timing the target task based on the successful writing result, and instruct the target scheduler to schedule and allocate the target task when the target task in the queue to be run is acquired by the outgoing queue of the target scheduler.
The allocation scheduling confirmation unit 420 is configured to determine a task scheduling allocation result of the target task based on the confirmation result of the target scheduler for the target task received by the temporary storage queue, when the timer has not timed out.
The repeated scheduling allocation unit 430 is configured to, when the timer has timed out, write the target task in the temporary storage queue into the failed retry queue based on the scheduler exception result of the target scheduler received by the temporary storage queue, re-push the target task in the failed retry queue into the queue to be operated when the temporary storage time of the target task in the failed retry queue reaches the first time threshold, and repeatedly execute the above steps until the task scheduling allocation result of the target task is determined.
Optionally, the repeated scheduling allocation unit 430 is specifically configured to re-push the target task in the failed retry queue into the queue to be operated, repeatedly execute the above steps, and, if the task scheduling allocation result has not been determined when the accumulated number of repeated executions reaches the first count threshold, write the target task in the temporary storage queue into the delay queue; and re-write the target task in the delay queue into the queue to be operated when the delay time of the target task in the delay queue reaches the second time threshold, repeatedly execute the above steps, and determine the task scheduling allocation result of the target task when the accumulated number of repeated executions reaches the second count threshold.
Optionally, the repeated scheduling allocation unit 430 is specifically configured to, when the accumulated number of repeated executions reaches the second count threshold, determine, if the temporary storage queue receives a confirmation success result for the target task, that the task scheduling allocation result is that allocation and scheduling of the target task succeeded, and write the target task into the allocated queue, which records the allocation information of the node corresponding to the target task; and, when the accumulated number of repeated executions reaches the second count threshold, determine, if the temporary storage queue receives a confirmation failure result for the target task, that the task scheduling allocation result is that allocation and scheduling of the target task failed, and end scheduling allocation for the target task.
Optionally, the repeated scheduling allocation unit 430 is specifically configured to remove the target task with the same ID in the temporary storage queue and remove the timer corresponding to the target task, and remove the target task with the same ID in the failed retry queue, when determining the task scheduling allocation result.
Optionally, the task scheduling and distributing unit 410 is specifically configured to obtain task execution plans corresponding to different types of tasks; determining an execution triggering condition of an execution target task based on each task execution plan; under the condition that the execution triggering condition is met, writing the target task into a queue to be operated, and indicating the queue to be operated to distribute the target task to at least one scheduler so that each scheduler can acquire the target task through a distributed contention mechanism; until the target task in the queue to be run is acquired by the dequeue of the target scheduler in each scheduler.
Optionally, the task scheduling allocation unit 410 is specifically configured to determine the daily executable task amount based on each task execution plan; determine the execution trigger condition under the condition that the daily executable task amount is less than or equal to the total task execution amount of all currently deployed schedulers; and output indication information for horizontally scaling out the schedulers under the condition that the daily executable task amount is larger than the total task execution amount, and determine the execution trigger condition based on a successful scale-out result.
Optionally, the task scheduling allocation unit 410 is specifically configured to determine the total number of tasks to be executed in the queue to be run; and output indication information for horizontally scaling out the schedulers when the total number of tasks exceeds the number threshold, and determine the target task popped from the queue to be operated based on a successful scale-out result.
The task scheduling and distributing device 400 provided by the invention can execute the technical solution of the task scheduling and distributing method in any of the above embodiments; its implementation principle and beneficial effects are similar to those of the task scheduling and distributing method and are not described in detail here.
Fig. 5 illustrates a schematic diagram of the physical structure of an electronic device. As shown in Fig. 5, the electronic device may include: a processor 510, a communication interface 520, a memory 530, and a communication bus 540, where the processor 510, the communication interface 520, and the memory 530 communicate with each other via the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to perform a task scheduling allocation method comprising:
when a target task in a queue to be run is acquired by an outbound queue of a target scheduler, instructing the target scheduler to write the target task in the outbound queue into a temporary storage queue of the server, starting, based on a successful write result, a timer in the temporary storage queue to time the target task, and instructing the target scheduler to schedule and allocate the target task; when the timed duration has not timed out, determining a task scheduling allocation result of the target task based on a confirmation result for the target task received by the temporary storage queue from the target scheduler; and, when the timed duration has timed out, writing the target task in the temporary storage queue into a failed retry queue based on a scheduler-abnormal result of the target scheduler received by the temporary storage queue, re-pushing the target task in the failed retry queue into the queue to be run once the staging time of the target task in the failed retry queue reaches a first time threshold, and repeating the above steps until the task scheduling allocation result of the target task is determined.
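As a rough, self-contained sketch of this staging-and-timeout flow (the class name, timeout value, and dictionary fields are all assumptions for illustration, not part of the claimed method):

```python
import threading

TIMEOUT_SECONDS = 10.0             # illustrative scheduler-acknowledgement timeout

class StagingQueue:
    """Toy temporary storage queue: stage a task, time it, and resolve or retry it."""

    def __init__(self):
        self.tasks = {}            # task_id -> task dict
        self.timers = {}           # task_id -> threading.Timer
        self.failed_retry = []     # tasks whose scheduler never confirmed in time

    def stage(self, task):
        """Write the task in and start timing it (the 'successful write result')."""
        self.tasks[task["id"]] = task
        timer = threading.Timer(TIMEOUT_SECONDS, self._on_timeout, args=(task["id"],))
        self.timers[task["id"]] = timer
        timer.start()

    def on_confirmation(self, task_id, ok):
        """Scheduler confirmed before the timer fired: the result is now known."""
        timer = self.timers.pop(task_id, None)
        if timer is not None:
            timer.cancel()
        task = self.tasks.pop(task_id)
        task["result"] = "allocated" if ok else "failed"
        return task

    def _on_timeout(self, task_id):
        """No confirmation in time: treat the scheduler as abnormal and queue a retry."""
        task = self.tasks.pop(task_id, None)
        self.timers.pop(task_id, None)
        if task is not None:
            self.failed_retry.append(task)   # later re-pushed into the queue to be run
```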
Further, the logic instructions in the memory 530 described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In another aspect, the present invention also provides a computer program product. The computer program product includes a computer program, which can be stored on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can execute the task scheduling allocation method provided by the above methods, the method comprising:
when a target task in a queue to be run is acquired by an outbound queue of a target scheduler, instructing the target scheduler to write the target task in the outbound queue into a temporary storage queue of the server, starting, based on a successful write result, a timer in the temporary storage queue to time the target task, and instructing the target scheduler to schedule and allocate the target task; when the timed duration has not timed out, determining a task scheduling allocation result of the target task based on a confirmation result for the target task received by the temporary storage queue from the target scheduler; and, when the timed duration has timed out, writing the target task in the temporary storage queue into a failed retry queue based on a scheduler-abnormal result of the target scheduler received by the temporary storage queue, re-pushing the target task in the failed retry queue into the queue to be run once the staging time of the target task in the failed retry queue reaches a first time threshold, and repeating the above steps until the task scheduling allocation result of the target task is determined.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the task scheduling allocation method provided by the above methods, the method comprising:
when a target task in a queue to be run is acquired by an outbound queue of a target scheduler, instructing the target scheduler to write the target task in the outbound queue into a temporary storage queue of the server, starting, based on a successful write result, a timer in the temporary storage queue to time the target task, and instructing the target scheduler to schedule and allocate the target task; when the timed duration has not timed out, determining a task scheduling allocation result of the target task based on a confirmation result for the target task received by the temporary storage queue from the target scheduler; and, when the timed duration has timed out, writing the target task in the temporary storage queue into a failed retry queue based on a scheduler-abnormal result of the target scheduler received by the temporary storage queue, re-pushing the target task in the failed retry queue into the queue to be run once the staging time of the target task in the failed retry queue reaches a first time threshold, and repeating the above steps until the task scheduling allocation result of the target task is determined.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by hardware. Based on this understanding, the above technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A task scheduling allocation method, applied to a server, the method comprising:
when a target task in a queue to be run is acquired by an outbound queue of a target scheduler, instructing the target scheduler to write the target task in the outbound queue into a temporary storage queue of the server, starting, based on a successful write result, a timer in the temporary storage queue to time the target task, and instructing the target scheduler to schedule and allocate the target task;
when the timed duration has not timed out, determining a task scheduling allocation result of the target task based on a confirmation result for the target task received by the temporary storage queue from the target scheduler; and
when the timed duration has timed out, writing the target task in the temporary storage queue into a failed retry queue based on a scheduler-abnormal result of the target scheduler received by the temporary storage queue, re-pushing the target task in the failed retry queue into the queue to be run when the staging time of the target task in the failed retry queue reaches a first time threshold, and repeating the above steps until the task scheduling allocation result of the target task is determined.
2. The task scheduling allocation method according to claim 1, wherein the re-pushing the target task in the failed retry queue into the queue to be run and repeating the above steps until the task scheduling allocation result of the target task is determined comprises:
re-pushing the target task in the failed retry queue into the queue to be run and repeating the above steps, and, if the task scheduling allocation result of the target task has not been determined when the cumulative number of repeated executions reaches a first count threshold, writing the target task in the temporary storage queue into a delay queue; and
when the delay time of the target task in the delay queue reaches a second time threshold, re-writing the target task in the delay queue into the queue to be run and repeating the above steps, and determining the task scheduling allocation result of the target task when the cumulative number of repeated executions reaches a second count threshold.
3. The task scheduling allocation method according to claim 2, wherein the determining the task scheduling allocation result of the target task when the cumulative number of repeated executions reaches the second count threshold comprises:
when the cumulative number of repeated executions reaches the second count threshold, if the temporary storage queue receives a confirmation-success result for the target task, determining that the task scheduling allocation result is that allocation and scheduling of the target task succeeded, and writing the target task into an allocated queue, wherein the allocated queue records allocation information of the node corresponding to the target task; and
when the cumulative number of repeated executions reaches the second count threshold, if the temporary storage queue receives a confirmation-failure result for the target task, determining that the task scheduling allocation result is that allocation and scheduling of the target task failed, and ending the scheduling allocation for the target task.
4. The task scheduling allocation method according to claim 2, further comprising:
when the task scheduling allocation result has been determined, removing the target task with the same ID from the temporary storage queue, removing the timer, and removing the target task with the same ID from the failed retry queue.
5. The task scheduling allocation method according to any one of claims 1 to 4, wherein the acquisition of the target task in the queue to be run by the outbound queue of the target scheduler comprises:
acquiring task execution plans corresponding to different types of tasks respectively;
determining an execution trigger condition for executing the target task based on each task execution plan;
writing the target task into the queue to be run when the execution trigger condition is met, and instructing the queue to be run to distribute the target task to at least one scheduler so that each scheduler contends for the target task through a distributed contention mechanism, until the target task in the queue to be run is acquired by the outbound queue of the target scheduler among the schedulers.
6. The task scheduling allocation method according to claim 5, wherein the determining an execution trigger condition for executing the target task based on each of the task execution plans comprises:
determining a daily executable task amount based on each of the task execution plans;
determining the execution trigger condition when the daily executable task amount is less than or equal to the total task execution capacity of all currently deployed schedulers; and
outputting an indication to scale the schedulers out horizontally when the daily executable task amount is greater than the total task execution capacity, and determining the execution trigger condition based on a successful scale-out result.
7. The task scheduling allocation method according to any one of claims 1 to 3, further comprising:
determining the total number of tasks to be executed in the queue to be run; and
outputting an indication to scale the schedulers out horizontally when the total number of tasks exceeds a number threshold, and determining, based on a successful scale-out result, the target task popped from the queue to be run.
8. A task scheduling allocation device, applied to a server, the device comprising:
a task scheduling allocation unit, configured to, when a target task in a queue to be run is acquired by an outbound queue of a target scheduler, instruct the target scheduler to write the target task in the outbound queue into a temporary storage queue of the server, start, based on a successful write result, a timer in the temporary storage queue to time the target task, and instruct the target scheduler to schedule and allocate the target task;
an allocation scheduling confirmation unit, configured to determine a task scheduling allocation result of the target task based on a confirmation result for the target task received by the temporary storage queue from the target scheduler when the timed duration has not timed out; and
a repeated scheduling allocation unit, configured to, when the timed duration has timed out, write the target task in the temporary storage queue into a failed retry queue based on a scheduler-abnormal result of the target scheduler received by the temporary storage queue, re-push the target task in the failed retry queue into the queue to be run when the staging time of the target task in the failed retry queue reaches a first time threshold, and repeat the above steps until the task scheduling allocation result of the target task is determined.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the task scheduling allocation method according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the task scheduling allocation method according to any one of claims 1 to 7.
CN202410213357.1A 2024-02-27 2024-02-27 Task scheduling distribution method and device, electronic equipment and storage medium Active CN117785431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410213357.1A CN117785431B (en) 2024-02-27 2024-02-27 Task scheduling distribution method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117785431A true CN117785431A (en) 2024-03-29
CN117785431B CN117785431B (en) 2024-06-04

Family

ID=90402138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410213357.1A Active CN117785431B (en) 2024-02-27 2024-02-27 Task scheduling distribution method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117785431B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782360A (en) * 2020-06-28 2020-10-16 中国工商银行股份有限公司 Distributed task scheduling method and device
CN112540914A (en) * 2020-11-27 2021-03-23 北京百度网讯科技有限公司 Execution method, execution device, server and storage medium for unit test
US20230020324A1 (en) * 2021-09-28 2023-01-19 Beijing Baidu Netcom Science Technology Co., Ltd. Task Processing Method and Device, and Electronic Device
CN114090198A (en) * 2021-10-27 2022-02-25 青岛海尔科技有限公司 Distributed task scheduling method and device, electronic equipment and storage medium
CN117032987A (en) * 2023-08-25 2023-11-10 北京爱奇艺科技有限公司 Distributed task scheduling method, system, equipment and computer readable medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118093214A (en) * 2024-04-26 2024-05-28 华芯智上半导体设备(上海)有限公司 Handling task scheduling method and device, electronic equipment and storage medium
CN118093214B (en) * 2024-04-26 2024-07-16 华芯智上半导体设备(上海)有限公司 Handling task scheduling method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN117785431B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN117785431B (en) Task scheduling distribution method and device, electronic equipment and storage medium
CN111782360B (en) Distributed task scheduling method and device
US10838777B2 (en) Distributed resource allocation method, allocation node, and access node
CN105389209B (en) A kind of asynchronous batch tasks processing method and system
US9477521B2 (en) Method and system for scheduling repetitive tasks in O(1)
Bertogna et al. Limited preemption EDF scheduling of sporadic task systems
US8954968B1 (en) Measuring by the kernel the amount of time a monitored thread spends in a queue in order to monitor scheduler delays in a computing device
CN110018914B (en) Shared memory based message acquisition method and device
US20090320021A1 (en) Diagnosis of application performance problems via analysis of thread dependencies
EP2071453A1 (en) Dynamic code update
CN107566460B (en) Method and system for distributed deployment of planning tasks
CN114661449B (en) Task scheduling method, embedded system and computer readable storage medium
CN113157426B (en) Task scheduling method, system, equipment and storage medium
CN113391911B (en) Dynamic scheduling method, device and equipment for big data resources
Gu et al. Improving OCBP-based scheduling for mixed-criticality sporadic task systems
US20020010732A1 (en) Parallel processes run scheduling method and device and computer readable medium having a parallel processes run scheduling program recorded thereon
CN115422010A (en) Node management method and device in data cluster and storage medium
CN116302423A (en) Distributed task scheduling method and system for cloud management platform
Lima et al. Scheduling fixed-priority hard real-time tasks in the presence of faults
CN115509700A (en) Multi-type task management method and device
CN113032110A (en) High-availability task scheduling method based on distributed peer-to-peer architecture design
TW202029697A (en) Container control system for repeatedly executing serverless programs and method thereof
CN115297180B (en) Cluster scheduling method, device and storage medium
Matloff Advanced features of the SimPy language
US12045671B2 (en) Time-division multiplexing method and circuit for arbitrating concurrent access to a computer resource based on a processing slack associated with a critical program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant