WO2022257435A1 - Batch task processing method and apparatus, computing device, and storage medium - Google Patents


Info

Publication number: WO2022257435A1
Authority: WO (WIPO, PCT)
Prior art keywords: task, preset, running time, tasks, target
Application number: PCT/CN2021/142108
Other languages: English (en), French (fr)
Inventor: 曹威
Original Assignee: 深圳前海微众银行股份有限公司
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2022257435A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Definitions

  • the embodiments of the present application relate to the field of financial technology (Fintech), and in particular, to a batch task processing method, device, computing device, and storage medium.
  • Fintech: financial technology
  • in existing practice, a new execution time is set based on the experience of operation and maintenance personnel and then uploaded, thereby adjusting the task execution time.
  • the distribution of tasks across the preset queues is unbalanced, so some preset queues are too idle when running tasks while others are too busy; moreover, some tasks that need to run first are queued at the back of a preset queue, while some tasks that do not need to run first are queued at the front.
  • the embodiment of the present application provides a batch task processing method, device, computing device and storage medium, so as to reasonably adjust the execution time of the batch task.
  • the embodiment of the present application provides a batch task processing method, the method comprising:
  • for any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run the corresponding task based on the target starting running time.
  • since the first historical average running time of a task represents the length of the task's running time within the first period, it reflects, to a certain extent, the time required for subsequent runs of the task; therefore, assigning tasks to the preset queues according to the first historical average running time and the number of preset queues ensures that no preset queue is too idle or too busy when running tasks. Since the historical average starting running time of a task represents the order in which the task started running within the first period, it reflects, to a certain extent, the order in which the task will subsequently start running; therefore, the target starting running time of each task in a preset queue can be determined from the historical average starting running time and the first historical average running time of each task in that queue, so that the tasks in the queue run in a more reasonable order and the execution time of each task in the batch is reasonably adjusted.
  • assigning the tasks to each of the preset queues based on the first historical average running time of each task in the batch task within the first period of time and the number of preset queues includes:
  • the target running time is determined based on the sum of the first historical average running times of all tasks in the batch task and the number of preset queues.
  • in this way, tasks can be evenly distributed to the preset queues according to their number; when the sum of the first historical average running times of all tasks already in a preset queue becomes too large, no further tasks are assigned to that queue, which keeps the sums balanced across the preset queues: the sum for each queue is neither too large nor too small, further ensuring that no preset queue is too idle or too busy when running tasks.
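The allocation constraint above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the names (assign_tasks, preset_ratio) are assumptions. Each task goes to the currently least-loaded queue, and a queue whose runtime sum would exceed the cap derived from the target running time and the preset ratio is skipped.

```python
def assign_tasks(avg_runtimes, num_queues, preset_ratio=1.25):
    """avg_runtimes: {task_id: first historical average running time}.
    Returns the queues, each queue's runtime sum, and the cap used."""
    target = sum(avg_runtimes.values()) / num_queues  # target running time A0
    cap = preset_ratio * target
    queues = [[] for _ in range(num_queues)]
    sums = [0.0] * num_queues
    # Walk tasks from longest to shortest running time; prefer queues
    # that still have room under the cap, then the least loaded one.
    for task, rt in sorted(avg_runtimes.items(), key=lambda kv: -kv[1]):
        open_queues = [i for i in range(num_queues) if sums[i] + rt <= cap]
        candidates = open_queues or list(range(num_queues))
        idx = min(candidates, key=lambda i: sums[i])
        queues[idx].append(task)
        sums[idx] += rt
    return queues, sums, cap
```

With four equal tasks and two queues, the sums come out identical, and the cap is 1.25 times the per-queue target.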
  • selecting tasks equal in number to the preset queues from the batch task and assigning them to the preset queues includes:
  • the median value is the median of the first historical average running times of the tasks in the batch task
  • the selected tasks are assigned to the preset queues; by selecting, after the median value, the tasks corresponding to the first historical average running times adjacent to the median (as many tasks as there are preset queues), and assigning the selected tasks to the preset queues in order from the shortest first historical average running time to the longest, each batch of tasks is assigned to the preset queues in a relatively balanced manner according to the first historical average running time.
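One way to realize this median-adjacent selection is to enumerate tasks outward from the median of the running-time-sorted list, so each batch drawn alternates between the shorter and longer side. A hedged Python sketch; the function name and the exact tie handling are assumptions:

```python
def median_outward_order(tasks_sorted_by_runtime):
    """Return the tasks reordered to start at the median of the sorted
    list and alternate outward (just-before, just-after, next-before,
    next-after, ...), so successive batches stay balanced around the
    median first historical average running time."""
    n = len(tasks_sorted_by_runtime)
    lo, hi = n // 2 - 1, n // 2
    order = []
    while lo >= 0 or hi < n:
        if lo >= 0:
            order.append(tasks_sorted_by_runtime[lo])
            lo -= 1
        if hi < n:
            order.append(tasks_sorted_by_runtime[hi])
            hi += 1
    return order
```

Dealing the resulting order round-robin across the queues reproduces the balanced batches described above.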
  • determining the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the preset queue within the first period includes:
  • any target task is a task that other tasks depend on;
  • the tasks other than the target task in the preset queue are arranged after the target task, and the tasks other than the target task are sorted from front to back according to the historical average starting running time; or
  • the target starting running time of the next task is determined from the target starting running time and the first historical average running time of the previous task in the preset queue, where the target starting running time of the first task in the preset queue is its historical average starting running time, or a preset starting running time.
  • the number of times the target task is depended on represents how many other tasks depend on the target task.
  • running first the target tasks that are depended on many times ensures the normal operation of more other tasks; since tasks other than the target tasks do not affect the normal operation of other tasks, they are placed after the target tasks; in addition, the historical average starting running times of these tasks reflect, to a certain extent, the order in which they will subsequently start running, so they are sorted from front to back by historical average starting running time.
  • the historical average starting running time of each task represents the order in which the task started running within the first period and reflects, to a certain extent, the order in which tasks will subsequently start running; therefore, the tasks are sorted from front to back by historical average starting running time. Once the tasks in the preset queue are sorted, the order in which they run is determined. Since the first historical average running time of each task reflects, to a certain extent, the time required for subsequent runs, the target starting running time of the next task can be reasonably determined from the target starting running time and first historical average running time of the previous task in the queue. Using the historical average starting running time of the first task in the preset queue, or a preset starting running time, as its target starting running time suits the needs of different scenarios.
  • before running the corresponding task based on the target starting running time, the method further includes:
  • the second time period is a time period before the first time period
  • the second time period is a time period before the first time period; if the increase of the first historical average running time of a task executed by a single thread over the second historical average running time of that task in the second time period is greater than a preset increase, the running time of the task executed by the single thread has grown considerably.
  • the method also includes:
  • the task corresponding to the rerun instruction is executed.
  • the information of the corresponding preset characterization mechanism and the corresponding task can be determined, so the corresponding task can be executed based on that information without having to write rerun code each time a task is rerun.
  • running corresponding tasks includes:
  • determining the target dependency rule corresponding to the task based on a preset correspondence, where the correspondence associates preset dependency information with dependency rules;
  • determining, according to the target dependency rule and based on the information of the preset characterization mechanism, whether the corresponding target task has been executed and completed, where the corresponding target task is a task on which the execution of the task depends;
  • the data to be processed corresponding to the information of the preset characterization mechanism is processed according to the calculation information of the task.
  • the corresponding dependency rules can be determined directly from the task dependency information, so there is no need to write the dependency rules when writing the calculation information of the task, which facilitates the management of dependency rules.
  • the embodiment of the present application also provides a batch task processing device, including:
  • a task allocation unit configured to allocate the tasks to each of the preset queues based on the first historical average running time of each task in the batch task within the first period of time and the number of preset queues;
  • the operation processing unit is configured to, for any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run the corresponding task based on the target starting running time.
  • the embodiment of the present application provides a computing device, including at least one processor and at least one memory, where the memory stores a computer program that, when executed by the processor, causes the computing device to execute the batch task processing method described in any one of the above first aspects.
  • an embodiment of the present application provides a computer-readable storage medium storing a computer program executable by a computing device; when the program runs on the computing device, the computing device executes the batch task processing method described in any one of the above first aspects.
  • FIG. 1 is a schematic flowchart of the first batch task processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of the second batch task processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of the third batch task processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of the fourth batch task processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of the fifth batch task processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a batch task processing device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • "A and/or B" can mean three cases: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/" generally indicates that the contextual objects are an "or" relationship.
  • first and second are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, a feature defined as “first” and “second” may explicitly or implicitly include one or more of these features. In the description of the present application, unless otherwise specified, "plurality” means two or more.
  • connection should be understood in a broad sense, for example, it may be a direct connection, or an indirect connection through an intermediary, or an internal connection between two devices.
  • execution time refers to the start running time of the task, and in some embodiments, "start running time” is directly used to refer to the "execution time”.
  • an embodiment of the present application proposes a batch task processing method, device, computing device, and storage medium, the method including: assigning the tasks in a batch task to preset queues based on the first historical average running time of each task within a first period and the number of preset queues; and, for any preset queue, determining the target starting running time of each task in the queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and running the corresponding task based on the target starting running time.
  • since the first historical average running time of a task represents the length of the task's running time within the first period, it reflects, to a certain extent, the time the task will need to run in the future, so assigning tasks to the preset queues according to the first historical average running time and the number of preset queues ensures that no queue is too idle or too busy when running tasks; since the historical average starting running time of a task represents the order in which the task started running within the first period, it reflects, to a certain extent, the order in which the task will start running in the future, so the target starting running time of each task in a preset queue can be determined from the historical average starting running time and first historical average running time of each task in the queue, the tasks in the queue can then run in a more reasonable order, and the execution time of each task in the batch is reasonably adjusted.
  • the embodiment of the present application provides the first batch task processing method, as shown in Figure 1, including the following steps:
  • Step S101 Based on the first historical average running time of each task in the batch task within a first period and the number of preset queues, assign the tasks to each of the preset queues.
  • since the first historical average running time of a task represents the length of the task's running time within the first period, it reflects, to a certain extent, the time the task will need to run in the future, so determining the tasks in each preset queue according to the first historical average running time and the number of queues ensures that no preset queue is too idle or too busy when running tasks.
  • the above-mentioned first time period can be set according to the actual application scenario, such as a week before the current moment; correspondingly, the first historical average running time of any task is the average time of running the task every day in this week.
  • for example, the first historical average running time of task 1 is A1 = (A11 + A12 + A13 + A14 + A15 + A16 + A17) / 7, where A11 is the running time of task 1 on the first day of the week, A12 on the second day, A13 on the third day, A14 on the fourth day, A15 on the fifth day, A16 on the sixth day, and A17 on the seventh day.
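The weekly average in this example is a plain arithmetic mean; as a sketch (the function name is an assumption):

```python
def first_historical_average(daily_runtimes):
    """Mean of a task's daily running durations over the first period,
    e.g. A1 = (A11 + ... + A17) / 7 for a one-week period."""
    return sum(daily_runtimes) / len(daily_runtimes)

# task 1's running time (in minutes) on each of the seven days
a1 = first_historical_average([30, 28, 32, 31, 29, 30, 30])
```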
  • This embodiment does not limit the specific implementation manner of determining the tasks in each preset queue.
  • the tasks in the preset queues meet at least one of the following conditions:
  • the ratio of the sum of the first historical average running times of all tasks in any preset queue to the target running time is less than the preset ratio, where the target running time is determined based on the sum of the first historical average running times of all tasks in the batch task and the number of preset queues. Take a batch task including 100 tasks, 4 preset queues (denoted preset queue a, preset queue b, preset queue c, and preset queue d), and a preset ratio of 1.25 as an example:
  • the sum of the first historical average running times of these 100 tasks is recorded as A, and the target running time is A0 = A / 4.
  • the sum of the first historical average running times of all tasks in preset queue a is denoted Aa, in preset queue b is denoted Ab, in preset queue c is denoted Ac, and in preset queue d is denoted Ad; where Aa/A0 < 1.25, Ab/A0 < 1.25, Ac/A0 < 1.25, and Ad/A0 < 1.25.
  • when the ratio of the sum of the first historical average running times of all tasks in each preset queue to the target running time is less than the preset ratio, the sum for each queue is neither too large nor too small (for example, the differences among Aa, Ab, Ac, and Ad above will not be too large), which further ensures that no preset queue is too idle or too busy when running tasks.
  • the number of tasks in preset queue a is denoted Ba, in preset queue b is denoted Bb, in preset queue c is denoted Bc, and in preset queue d is denoted Bd; where the ratio of each of Ba, Bb, Bc, and Bd to the target task number B0 = 100 / 4 = 25 is less than the preset ratio. In this way, the number of tasks in each preset queue is neither too large nor too small (for example, the differences among Ba, Bb, Bc, and Bd will not be too large), which ensures the balance of tasks across the preset queues.
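Both conditions (the runtime-sum ratio and the task-count ratio against the preset ratio) can be checked together. A minimal sketch with hypothetical names, assuming the same preset ratio applies to both conditions:

```python
def queues_balanced(queues, avg_runtimes, preset_ratio=1.25):
    """True when, for every preset queue, the ratio of its runtime sum to
    the target running time A0 and the ratio of its task count to the
    target count B0 are both below the preset ratio."""
    all_tasks = [t for q in queues for t in q]
    a0 = sum(avg_runtimes[t] for t in all_tasks) / len(queues)  # target running time
    b0 = len(all_tasks) / len(queues)                           # target task count
    for q in queues:
        runtime_sum = sum(avg_runtimes[t] for t in q)
        if runtime_sum / a0 >= preset_ratio or len(q) / b0 >= preset_ratio:
            return False
    return True
```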
  • taking the above batch task of 100 tasks and 4 preset queues as an example, the application provides the following two example ways of determining the tasks in each preset queue:
  • Method 1: the tasks corresponding to the 4 first historical average running times adjacent to and before the median value (i.e., task 46, task 45, task 44, and task 43 above) and the tasks corresponding to the 4 first historical average running times adjacent to and after the median value (i.e., task 55, task 56, task 57, and task 58 above) are selected as tasks for the preset queues. Once the ratio of the sum of the first historical average running times of the tasks in a preset queue to the target running time reaches the preset ratio, that queue is no longer selected; for example, if the ratio for preset queue b reaches the preset ratio, then from the remaining tasks, the tasks corresponding to the 3 first historical average running times before and adjacent to the median value are selected as tasks for preset queue a, preset queue c, and preset queue d, respectively, and the tasks corresponding to the 3 first historical average running times after and adjacent to the median value are likewise selected for preset queue a, preset queue c, and preset queue d, until the preset queue corresponding to each of the 100 tasks is determined.
  • Method 2: four tasks (i.e., task 1, task 2, task 3, and task 4 above) are selected in turn as tasks for preset queue a, preset queue b, preset queue c, and preset queue d. Once the ratio of the sum of the first historical average running times of the tasks in a preset queue to the target running time reaches the preset ratio, that queue is no longer selected; for example, if the ratio for preset queue a reaches the preset ratio, then from the remaining tasks, three tasks are selected in turn for preset queue b, preset queue c, and preset queue d, until the preset queue corresponding to each of the 100 tasks is determined.
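Method 2 can be sketched as a round-robin deal that skips saturated queues; this is an illustrative Python sketch, not the patent's implementation, and the names are assumptions:

```python
def round_robin_assign(sorted_tasks, avg_runtimes, num_queues, preset_ratio=1.25):
    """Deal the running-time-sorted tasks out to the queues in turn,
    skipping a queue once its runtime sum has reached the cap
    (preset ratio times the target running time)."""
    target = sum(avg_runtimes[t] for t in sorted_tasks) / num_queues
    cap = preset_ratio * target
    queues = [[] for _ in range(num_queues)]
    sums = [0.0] * num_queues
    i = 0
    for task in sorted_tasks:
        for _ in range(num_queues):          # cycle past saturated queues
            if sums[i % num_queues] < cap:
                break
            i += 1
        idx = i % num_queues
        queues[idx].append(task)
        sums[idx] += avg_runtimes[task]
        i += 1
    return queues
```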
  • both method 1 and method 2 sort the batch tasks by first historical average running time from short to long; in some embodiments, the batch tasks can also be sorted from long to short, and the way of determining the tasks in each preset queue can refer to method 1 and method 2 above, which will not be repeated here.
  • this embodiment does not limit how the execution of S101 is triggered: it can be triggered by a user instruction, or at fixed intervals, that is, the batch task processing method is executed once per cycle, for example once every morning to determine the target starting running time of each task for that day.
  • Step S102: For any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run the corresponding task based on the target starting running time.
  • the historical average starting running time of a task represents the order in which the task started running within the first period and reflects, to a certain extent, the order in which it will subsequently start running, so the target starting running time of each task in the preset queue is determined from the historical average starting running time and first historical average running time of each task in the queue, and the tasks in the queue are run in a relatively reasonable order.
  • the first time period can be set according to the actual application scenario; again taking the week before the current moment as an example, the historical average starting running time of any task is the average of the times at which the task started running on each day of that week.
  • for example, the historical average starting running time of task 1 is C1 = (C11 + C12 + C13 + C14 + C15 + C16 + C17) / 7, where C11 is the time task 1 started running on the first day of the week, C12 on the second day, C13 on the third day, C14 on the fourth day, C15 on the fifth day, C16 on the sixth day, and C17 on the seventh day.
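Averaging clock times needs one extra step compared with averaging durations: convert each start time to seconds past midnight first. A sketch (the function name is an assumption; the simple mean is only safe when all starts fall on the same side of midnight, which a fixed nightly batch window normally guarantees):

```python
from datetime import time

def average_start_time(daily_starts):
    """Average a week of daily start times, e.g. C1 = (C11 + ... + C17) / 7,
    by converting each clock time to seconds past midnight."""
    secs = [t.hour * 3600 + t.minute * 60 + t.second for t in daily_starts]
    mean = sum(secs) // len(secs)
    return time(mean // 3600, (mean % 3600) // 60, mean % 60)

# task 1's start time on each of the seven days
c1 = average_start_time([time(2, 0), time(2, 10), time(2, 5), time(2, 0),
                         time(2, 15), time(2, 10), time(2, 2)])
```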
  • in some embodiments, the device executing the above method loads the batch tasks one by one into a job scheduling framework (such as Quartz) at startup and manages them through a container framework (such as Spring).
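As an illustration of this startup loading step: the patent names Quartz and Spring, but the sketch below substitutes Python's standard sched module purely to show the shape of loading tasks one by one into a scheduler at their target starting running times.

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def register_batch(tasks):
    """Load the batch one task at a time: each entry pairs a delay until
    the task's target starting running time with the task's callable."""
    for delay, fn in tasks:
        scheduler.enter(delay, 1, fn)

ran = []
register_batch([(0.01, lambda: ran.append("task1")),
                (0.02, lambda: ran.append("task2"))])
scheduler.run()  # executes the tasks in target-start-time order
```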
  • since the first historical average running time of a task represents the length of the task's running time within the first period, it reflects, to a certain extent, the time required for subsequent runs, so determining the tasks in each preset queue according to the first historical average running time and the number of queues ensures that no queue is too idle or too busy when running tasks; since the historical average starting running time of a task represents the order in which it started running within the first period, it reflects, to a certain extent, the order of its subsequent starts, so the target starting running time of each task in a preset queue is determined from the historical average starting running time and first historical average running time of each task in the queue, the tasks in the queue are run in a reasonable order, and the execution time (that is, the starting running time) of each task in the batch is reasonably adjusted.
  • the embodiment of the present application provides a second batch task processing method, as shown in Figure 2, including the following steps:
  • Step S201 Based on the first historical average running time of each task in the batch task within a first period and the number of preset queues, assign the tasks to each of the preset queues.
  • for the specific implementation of step S201, reference may be made to step S101 above, which will not be repeated here.
  • Step S202 For any preset queue, if there are target tasks in the preset queue, sort the target tasks in the preset queue according to the number of dependencies from most to least.
  • the number of times the target task is depended on indicates how many other tasks depend on the target task.
  • for example, there are 5 target tasks in preset queue a (recorded as target task 1, target task 2, target task 3, target task 4, and target task 5), where target task 1 is depended on 5 times, target task 2 is depended on 3 times, target task 3 is depended on 1 time, target task 4 is depended on 6 times, and target task 5 is depended on 2 times. The sorting result of these 5 target tasks is: target task 4, target task 1, target task 2, target task 5, target task 3.
  • some target tasks in the preset queue may be depended on the same number of times; target tasks with the same number of dependencies can be sorted randomly, or sorted from front to back by historical average starting running time.
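The two sorting keys (times depended on, descending; historical average starting running time as tiebreak) can be combined in one sort. A sketch with hypothetical names, reusing the example above:

```python
def sort_target_tasks(dep_counts, avg_start_times):
    """Order target tasks by the number of times each is depended on,
    most first; ties are broken by historical average starting running
    time, earliest first."""
    return sorted(dep_counts,
                  key=lambda t: (-dep_counts[t], avg_start_times[t]))

# the example above: target tasks 1..5 depended on 5, 3, 1, 6, 2 times
deps = {"target task 1": 5, "target task 2": 3, "target task 3": 1,
        "target task 4": 6, "target task 5": 2}
starts = dict.fromkeys(deps, 0)  # equal start times: no tiebreak needed
```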
  • Step S203: Arrange the tasks other than the target tasks in the preset queue after the target tasks, and sort the tasks other than the target tasks from front to back by historical average starting running time.
  • the historical average initial running time of these tasks reflects to a certain extent the order in which these tasks are subsequently started to run. Therefore, these tasks can be sorted according to the historical average initial running time from front to back.
  • for example, the sorting result of all tasks in preset queue a is: target task 4, target task 1, target task 2, target task 5, target task 3, common task 2, common task 7, common task 6, common task 1, common task 4, common task 5.
  • Step S204: Determine the target starting running time of the next task according to the target starting running time and the first historical average running time of the previous task in the preset queue, where the target starting running time of the first task in the preset queue is its historical average starting running time, or a preset starting running time.
  • once the order of running the tasks is determined, and since the first historical average running time of each task reflects, to a certain extent, the time required for subsequent runs, the target starting running time of the next task can be reasonably determined from the target starting running time and first historical average running time of the previous task in the preset queue.
  • This embodiment does not limit the specific implementation of determining the target starting running time of the next task from the target starting running time and first historical average running time of the previous task; for example, the time obtained by adding the previous task's first historical average running time to its target starting running time can be used as the target starting running time of the next task.
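That example rule (next start equals the previous task's target start plus its first historical average running time) can be sketched as follows; the function and parameter names are assumptions:

```python
def target_start_times(ordered_tasks, avg_runtimes, first_start):
    """The first task keeps first_start (its historical average starting
    running time, or a preset start); each later task is scheduled for
    when the previous task's first historical average running time has
    elapsed after the previous task's target start."""
    starts = {ordered_tasks[0]: first_start}
    t = first_start
    for prev, nxt in zip(ordered_tasks, ordered_tasks[1:]):
        t += avg_runtimes[prev]
        starts[nxt] = t
    return starts
```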
  • Step S205 Run the corresponding task based on the target start running time.
  • the number of times a target task is depended on represents how many other tasks depend on it; running first the target tasks that are depended on many times ensures the normal operation of more other tasks. Since tasks other than the target tasks do not affect the normal operation of other tasks, they are placed after the target tasks and sorted from front to back by historical average starting running time, which reflects, to a certain extent, the order of their subsequent starts. Once the tasks in the preset queue are sorted, the order of running them is determined, and the target starting running time of the next task can be reasonably determined. Using the historical average starting running time of the first task in the preset queue, or a preset starting running time, as its target starting running time suits the needs of different scenarios.
  • the embodiment of the present application provides a third batch task processing method, as shown in FIG. 3 , including the following steps:
  • Step S301 Based on the first historical average running time of each task in the batch task within a first period and the number of preset queues, assign the tasks to each of the preset queues.
  • Step S302 If there is no target task in the preset queue, sort each task in the preset queue according to the historical average starting running time from front to back.
  • Step S303: Determine the target starting running time of the next task according to the target starting running time and the first historical average running time of the previous task in the preset queue, where the target starting running time of the first task in the preset queue is its historical average starting running time, or a preset starting running time.
  • Step S304 Run the corresponding task based on the target start running time.
  • the historical average starting running time of a task represents the order in which the task started running within the first period and reflects, to a certain extent, the order of its subsequent starts, so the tasks are sorted from front to back by historical average starting running time. Once the tasks in the preset queue are sorted, the order of running them is determined. Since the first historical average running time of each task reflects, to a certain extent, the time required for subsequent runs, the target starting running time of the next task can be reasonably determined from the target starting running time and first historical average running time of the previous task in the queue. Using the historical average starting running time of the first task in the preset queue, or a preset starting running time, as its target starting running time suits the needs of different scenarios.
  • An embodiment of the present application provides a fourth batch task processing method, as shown in FIG. 4, which includes the following steps:
  • Step S401: Based on the first historical average running time of each task in the batch task within a first period and the number of preset queues, assign the tasks to each of the preset queues.
  • The implementation of step S401 may refer to the above embodiments and is not repeated here.
  • Step S402: Compare the first historical average running time of a task executed by a single thread with its second historical average running time within a second time period, the second time period being a time period before the first time period.
  • The duration of the second time period may be the same as or different from that of the first time period, and there may be one or more second time periods. If there are multiple second time periods, the first historical average running time of the single-threaded task must be compared with its second historical average running time in each second time period.
  • The second time period may include at least one of the following:
  • The week immediately before the first period. For example, if the current time is the early morning of the third Wednesday in May and the first period runs from the early morning of the second Wednesday in May to the early morning of the third Wednesday in May, then the second period runs from the early morning of the first Wednesday in May to the early morning of the second Wednesday in May.
  • The corresponding week one month earlier. With the same current time and first period as above, the second period runs from the early morning of the second Wednesday in April to the early morning of the third Wednesday in April.
  • The above second time periods are only illustrative; the number of second time periods and the specific second time periods can be set according to the actual application scenario.
  • Step S403: If the increase of the first historical average running time relative to the second historical average running time is greater than a preset increase, switch the single-threaded task to multi-threaded execution.
  • If there are multiple second time periods, the first historical average running time of the single-threaded task is compared against its second historical average running time in each of them; if the increase exceeds the preset increase in some or all of the second time periods, the corresponding single-threaded task is switched to multi-threaded execution.
  • The increase of a single-threaded task's first historical average running time relative to its second historical average running time in a second period reflects how much that task's running time has grown; an increase greater than the preset increase means the running time has grown considerably.
  • Provided the task can be executed by multiple threads, switching it from single-threaded to multi-threaded execution prevents subsequent runs from continuing to take too long and reduces the time needed for subsequent execution.
  • Step S404: For any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run each task at its target starting running time.
  • Since the second time period precedes the first time period, a first historical average running time whose increase over the second historical average running time exceeds the preset increase indicates that the running time of the single-threaded task has grown considerably.
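  • The comparison in steps S402–S403 can be sketched as follows (a sketch under assumptions: durations are plain seconds, and the "preset increase" is treated as a relative growth threshold, which the source does not specify):

```python
def should_switch_to_multithread(first_avg, second_avgs, preset_increase=0.5):
    """Return True if the task's first historical average running time has
    grown, relative to the average of any second time period, by more than
    the preset increase (here a relative threshold, e.g. 0.5 = 50% growth)."""
    return any(
        prev > 0 and (first_avg - prev) / prev > preset_increase
        for prev in second_avgs
    )
```

  • A task that averaged 50 s in a second period but 90 s in the first period grew by 80 % and would be switched to multi-threaded execution.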
  • An embodiment of the present application provides a fifth batch task processing method, as shown in FIG. 5, which includes the following steps:
  • Step S501: Based on the first historical average running time of each task in the batch task within the first period and the number of preset queues, assign the tasks to each of the preset queues.
  • Step S502: For any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period.
  • Step S503: For any task, after the task's target starting running time is reached, execute the task in turn on the basis of the information characterizing each preset organization.
  • To share development and maintenance costs, multi-tenant architecture is widely used in the financial field: the same system or program components are shared in a multi-tenant (multi-organization) environment while data isolation between organizations is ensured.
  • Ordinarily, the organization information corresponding to a task must be configured; when the task is executed, its organization is determined and the task is executed on behalf of that organization.
  • However, developers may forget to configure the organization information for some tasks in the batch, so data isolation between organizations cannot be guaranteed.
  • This embodiment therefore presets the information characterizing each organization in the device that executes the above method, so that developers need not write organization information when writing a task's calculation information, and no organization can be omitted.
  • Specifically, a task base class F is implemented in the device that executes the batch task processing method, and an abstract method M is provided for scheduling a task for a specific organization; all tasks inherit from the task base class F.
  • When any task is executed, it is dispatched to the unified entry of the task base class F. After entering this entry, the device first traverses the information characterizing all preset organizations, and then, for each organization's information, calls the N method of interface I within the M method, thereby executing the corresponding task (the to-be-processed data corresponding to that organization's information is computed with the task's calculation information to obtain the calculated data).
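  • The base-class mechanism can be sketched as follows (a sketch: the names `TaskBase`, `run`, `process_org`, `ORGS`, and `SettleTask` are illustrative stand-ins for the source's base class F, unified entry, abstract method M, preset organization information, and a concrete task):

```python
from abc import ABC, abstractmethod

ORGS = ["orgA", "orgB"]  # information characterizing each preset organization

class TaskBase(ABC):
    """Stand-in for task base class F: every task inherits from it, and the
    unified entry traverses all preset organizations, so individual tasks
    never have to carry organization information themselves."""

    def run(self):
        results = {}
        for org in ORGS:                # traverse every preset organization
            results[org] = self.process_org(org)
        return results

    @abstractmethod
    def process_org(self, org):
        """Stand-in for abstract method M: compute this task's data for one
        organization (where the source would call interface I's N method)."""

class SettleTask(TaskBase):
    def process_org(self, org):
        return f"settled:{org}"         # illustrative per-organization work
```

  • Calling `SettleTask().run()` executes the task once per organization, so data isolation holds even though `SettleTask` itself mentions no organization.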
  • Since the execution of some tasks depends on a target task, for the information characterizing any preset organization, a task can be executed as follows:
  • For any task that has corresponding dependency information, after the task's target starting running time is reached, determine the target dependency rule corresponding to the task based on a preset correspondence, where the correspondence comprises associations between preset dependency information and dependency rules;
  • According to the target dependency rule, determine whether the corresponding target task has been executed and completed on the basis of the organization's information, where the corresponding target task is a task on which execution of this task depends;
  • After determining that the corresponding target task has completed, process the to-be-processed data corresponding to the organization's information.
  • For example, when task 1 is dispatched for a single organization (take organization A as an example) through the variable attributes of interface I defined in the task (that is, when the M method is reached via the N method of interface I on the basis of each organization's information), a base class S first determines whether task 1 has corresponding dependency information. If it does, the target dependency rule corresponding to task 1 is determined from the preset correspondence (the association between preset dependency information and dependency rules); in implementation, multiple dependency rules can be preset, such as batch-task dependencies, inter-organization file-transfer task dependencies, and internal data-synchronization task dependencies.
  • The initial running time of the target task is then obtained, and it is determined whether the corresponding target task has completed for organization A. If it has, the to-be-processed data corresponding to organization A (which includes the calculation results of organization A's execution of the target task) is processed according to the calculation information of task 1, yielding the calculation result of organization A's execution of task 1.
  • By presetting the dependency rules and the correspondence between dependency information and dependency rules in advance, the dependency rule can be determined directly from a task's dependency information, so developers need not write a task's dependency rules when writing its calculation information, which makes dependency rules easier to manage.
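  • The rule lookup can be sketched as follows (a sketch: the rule names and the shape of the completion records are assumptions; the source only requires that dependency information maps to a preset rule):

```python
# Preset correspondence: dependency information -> dependency rule.
# Each rule decides, for one organization, whether a depended-on target
# task has completed; 'done' is an assumed set of completion records.
RULES = {
    "batch": lambda dep, org, done: (dep, org) in done,
    "file_transfer": lambda dep, org, done: (dep, org, "file_received") in done,
}

def ready_to_run(task, org, done):
    """True when every target task this task depends on has completed for
    the given organization, according to the task's mapped rule."""
    dep_info = task.get("dep_info")
    if dep_info is None:           # no dependency information: run at once
        return True
    rule = RULES[dep_info]         # direct lookup; no per-task rule code
    return all(rule(dep, org, done) for dep in task["deps"])
```

  • A task declaring `dep_info` gets its rule by lookup alone, which is the management benefit described above.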
  • A multi-tenant architecture must isolate the data of different organizations.
  • With the above design, developers need not write the organization information corresponding to a task when writing the task's calculation information.
  • The tasks are executed in turn on the basis of the information characterizing each preset organization, realizing data isolation between organizations and avoiding the situation where a developer omits a task's organization information, the task cannot be performed per organization, and data isolation between organizations is compromised.
  • When a task is executed on the basis of an organization's information, an error may occur; the task then needs to be executed again on that basis (that is, the task is rerun).
  • In the related art, developers must write rerun code each time a task is rerun, so that the device executing the above method re-executes the task, on the basis of the organization's information, according to the rerun code.
  • Writing rerun code for every rerun is cumbersome, and it is easy to write it incorrectly.
  • To address this, the above method further includes:
  • In response to a rerun instruction, executing the task corresponding to the rerun instruction on the basis of the organization information corresponding to the rerun instruction.
  • The specific implementation is similar to step S503 above, the only difference being that for a rerun instruction the corresponding task is executed neither at the target starting running time nor for every preset organization in turn, but on the basis of the organization information corresponding to the rerun instruction; this is not repeated here.
  • Upon receiving a rerun instruction, the corresponding organization information and the corresponding task can be determined, so the task can be executed on the basis of that organization information without rerun code having to be written for every rerun.
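  • Rerun handling can then be as small as the following sketch (the field names `task_id` and `org` are assumptions about what a rerun instruction carries):

```python
def handle_rerun(instruction, task_registry):
    """Re-execute one task for one organization: the rerun instruction
    itself identifies both, so no per-rerun code has to be written."""
    run_task = task_registry[instruction["task_id"]]
    return run_task(instruction["org"])
```
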
  • Based on the same technical concept, an embodiment of the present application provides a batch task processing device.
  • As shown in FIG. 6, the batch task processing device 600 includes:
  • a task allocation unit 601 configured to assign the tasks in a batch task to each of a number of preset queues based on the first historical average running time of each task within a first period and the number of preset queues; and
  • an operation processing unit 602 configured to, for any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run each task at its target starting running time.
  • The task allocation unit 601 is specifically configured to:
  • sort the tasks in the batch task by first historical average running time from shortest to longest; based on the sorting result, select from the batch task, in multiple rounds, a number of tasks equal to the number of preset queues and assign them to the preset queues; and, if the ratio of the sum of the first historical average running times of all tasks in any preset queue to a target running time exceeds a preset ratio, no longer assign tasks to that preset queue, the target running time being determined from the sum of the first historical average running times of all tasks in the batch task and the number of preset queues.
  • The task allocation unit 601 is further specifically configured to:
  • for any round, select the tasks corresponding to the first historical average running times that precede and are adjacent to a median value, as many as the number of preset queues, and assign them to the preset queues in turn; and select the tasks corresponding to the first historical average running times that follow and are adjacent to the median value, as many as the number of preset queues, and assign them to the preset queues in turn, the median value being the median of the first historical average running times of the batch task.
  • The operation processing unit 602 is specifically configured to:
  • if there are target tasks in the preset queue, sort the target tasks in the preset queue by the number of times they are depended upon, from most to least, where a target task is a task on which the execution of other tasks depends;
  • arrange the tasks other than the target tasks after the target tasks, and sort those other tasks by historical average starting running time from earliest to latest; or, if there is no target task in the preset queue, sort the tasks in the preset queue by historical average starting running time from earliest to latest; and
  • determine the target starting running time of the next task from the target starting running time and the first historical average running time of the previous task in the preset queue, where the target starting running time of the first task in the preset queue is its historical average starting running time, or a preset starting running time.
  • The operation processing unit 602 is further configured to, before the corresponding tasks are run at their target starting running times:
  • compare the first historical average running time of a task executed by a single thread with its second historical average running time within a second time period, the second time period being a time period before the first time period; and, if the increase of the first historical average running time relative to the second historical average running time is greater than a preset increase, switch the single-threaded task to multi-threaded execution.
  • The operation processing unit 602 is further configured to:
  • in response to a rerun instruction, execute the task corresponding to the rerun instruction on the basis of the organization information corresponding to the rerun instruction.
  • The operation processing unit 602 is specifically configured to:
  • for any task that has corresponding dependency information, after the task's target starting running time is reached, determine the target dependency rule corresponding to the task based on a preset correspondence, where the correspondence comprises associations between preset dependency information and dependency rules;
  • for the information characterizing any preset organization, determine, according to the target dependency rule, whether the corresponding target task has been executed and completed on the basis of that information, where the corresponding target task is a task on which execution of this task depends; and
  • after determining that the corresponding target task has completed, process the to-be-processed data corresponding to that organization's information according to the calculation information of the task.
  • Since the device solves problems on the same principle as the above method, its implementation may refer to the implementation of the method, and repeated description is omitted.
  • Based on the same technical concept, an embodiment of the present application further provides a computing device 700 which, as shown in FIG. 7, includes at least one processor 701 and a memory 702; the connection of the processor 701 and the memory 702 through the bus 703 in FIG. 7 is taken as an example.
  • The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 7, but this does not mean that there is only one bus or only one type of bus.
  • The processor 701 is the control center of the computing device; it can use various interfaces and lines to connect the parts of the computing device and, by running or executing instructions stored in the memory 702 and calling data stored in the memory 702, realize data processing.
  • The processor 701 may include one or more processing units and may integrate an application processor and a modem processor, the latter mainly handling issued instructions. It can be understood that the modem processor may alternatively not be integrated into the processor 701.
  • The processor 701 and the memory 702 may be implemented on the same chip; in some embodiments, they may also be implemented on separate chips.
  • The processor 701 may be a general-purpose processor, such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array, another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the batch task processing method disclosed in the embodiments may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
  • The memory 702 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules.
  • The memory 702 may include at least one type of storage medium, for example flash memory, a hard disk, a multimedia card, card-type memory, random access memory (RAM), static random access memory (SRAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), magnetic memory, a magnetic disk, an optical disc, and so on.
  • The memory 702 may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • The memory 702 in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, used to store program instructions and/or data.
  • The memory 702 stores a computer program which, when executed by the processor 701, causes the processor 701 to:
  • assign the tasks in a batch task to each of a number of preset queues based on the first historical average running time of each task within a first period and the number of preset queues; and, for any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run each task at its target starting running time.
  • The processor 701 specifically executes:
  • sorting the tasks in the batch task by first historical average running time from shortest to longest; based on the sorting result, selecting from the batch task, in multiple rounds, a number of tasks equal to the number of preset queues and assigning them to the preset queues; and, if the ratio of the sum of the first historical average running times of all tasks in any preset queue to a target running time exceeds a preset ratio, no longer assigning tasks to that preset queue, the target running time being determined from the sum of the first historical average running times of all tasks in the batch task and the number of preset queues.
  • The processor 701 specifically executes:
  • for any round, selecting the tasks corresponding to the first historical average running times that precede and are adjacent to a median value, as many as the number of preset queues, and assigning them to the preset queues in turn; and selecting the tasks corresponding to the first historical average running times that follow and are adjacent to the median value, as many as the number of preset queues, and assigning them to the preset queues in turn, the median value being the median of the first historical average running times of the batch task.
  • The processor 701 specifically executes:
  • if there are target tasks in the preset queue, sorting the target tasks in the preset queue by the number of times they are depended upon, from most to least, where a target task is a task on which the execution of other tasks depends;
  • arranging the tasks other than the target tasks after the target tasks, and sorting those other tasks by historical average starting running time from earliest to latest; or, if there is no target task in the preset queue, sorting the tasks in the preset queue by historical average starting running time from earliest to latest; and
  • determining the target starting running time of the next task from the target starting running time and the first historical average running time of the previous task in the preset queue, where the target starting running time of the first task in the preset queue is its historical average starting running time, or a preset starting running time.
  • Before the corresponding tasks are run at their target starting running times, the processor 701 further executes:
  • comparing the first historical average running time of a task executed by a single thread with its second historical average running time within a second time period, the second time period being a time period before the first time period; and, if the increase of the first historical average running time relative to the second historical average running time is greater than a preset increase, switching the single-threaded task to multi-threaded execution.
  • The processor 701 further executes:
  • in response to a rerun instruction, executing the task corresponding to the rerun instruction on the basis of the organization information corresponding to the rerun instruction.
  • The processor 701 specifically executes:
  • for any task that has corresponding dependency information, after the task's target starting running time is reached, determining the target dependency rule corresponding to the task based on a preset correspondence, where the correspondence comprises associations between preset dependency information and dependency rules;
  • for the information characterizing any preset organization, determining, according to the target dependency rule, whether the corresponding target task has been executed and completed on the basis of that information, where the corresponding target task is a task on which execution of this task depends; and
  • after determining that the corresponding target task has completed, processing the to-be-processed data corresponding to that organization's information according to the calculation information of the task.
  • Since the computing device is the computing device of the method in the embodiments of this application and solves problems on the same principle as the method, its implementation may refer to the implementation of the method, and repeated description is omitted.
  • Based on the same technical concept, an embodiment of the present application further provides a computer-readable storage medium storing a computer program executable by a computing device; when the program runs on the computing device, it causes the computing device to execute the steps of the batch task processing method above.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the instruction means implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

Embodiments of the present application provide a batch task processing method, device, computing device, and storage medium. The batch task processing method includes: based on the first historical average running time of each task in a batch task within a first period and the number of preset queues, assigning the tasks to each of the preset queues, so that no preset queue is too idle or too busy when running tasks; and, for any preset queue, determining the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and, based on the target starting running times, running the tasks in the preset queue in a reasonable order, thereby reasonably adjusting the execution time of every task in the batch task.

Description

A Batch Task Processing Method, Device, Computing Device, and Storage Medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202110651629.2, entitled "A Batch Task Processing Method and Device", filed with the Chinese Patent Office on June 11, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of financial technology (Fintech), and in particular to a batch task processing method, device, computing device, and storage medium.
Background
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology; however, the security and real-time requirements of the financial industry also place higher demands on technology. In the financial field, when a batch task is added, the execution time of each task must be configured, and each task is executed according to its configured execution time (that is, the task's starting running time).
In the related art, during task execution a new execution time is set based on the experience of operation and maintenance personnel and uploaded, thereby adjusting the task execution time. However, because different operation and maintenance personnel have different experience, the new execution time may be unreasonable. For example, tasks may be unevenly distributed across the preset queues, so that some preset queues are too idle and others too busy when running tasks; or the starting running times of the tasks in a preset queue may be set unreasonably, so that tasks that should run first are placed at the back of the queue while tasks that need not run first are placed at the front.
In summary, a batch task processing method is urgently needed to reasonably adjust the execution time of batch tasks.
Summary
Embodiments of the present application provide a batch task processing method, device, computing device, and storage medium, for reasonably adjusting the execution time of batch tasks.
In a first aspect, an embodiment of the present application provides a batch task processing method, including:
assigning the tasks in a batch task to each of a number of preset queues based on the first historical average running time of each task within a first period and the number of preset queues; and
for any preset queue, determining the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and running the corresponding tasks based on the target starting running times.
In the above technical solution, since a task's first historical average running time characterizes how long the task ran during the first period and to some extent reflects the time a subsequent run of the task will need, assigning the tasks to the preset queues according to the first historical average running times and the number of preset queues ensures that no preset queue is too idle or too busy when running tasks. Since a task's historical average starting running time characterizes the order in which the task started running during the first period and to some extent reflects the order in which it should start running subsequently, determining the target starting running time of each task in a preset queue according to the historical average starting running times and the first historical average running times lets the tasks in the queue run in a reasonable order, so that the execution time of every task in the batch task is reasonably adjusted.
Optionally, assigning the tasks to each of the preset queues based on the first historical average running time of each task in the batch task within the first period and the number of preset queues includes:
sorting the tasks in the batch task by first historical average running time from shortest to longest;
based on the sorting result, selecting from the batch task, in multiple rounds, a number of tasks equal to the number of preset queues and assigning them to the preset queues; and
if the ratio of the sum of the first historical average running times of all tasks in any preset queue to a target running time exceeds a preset ratio, no longer assigning tasks to that preset queue, the target running time being determined from the sum of the first historical average running times of all tasks in the batch task and the number of preset queues.
In the above technical solution, selecting a number of tasks equal to the number of preset queues in multiple rounds distributes the tasks evenly, by count, across the preset queues; and because a preset queue stops receiving tasks once the sum of the first historical average running times of its tasks becomes large, the sums across the preset queues stay balanced, neither too large nor too small, further ensuring that no preset queue is too idle or too busy when running tasks.
Optionally, for any round, selecting from the batch task a number of tasks equal to the number of preset queues and assigning them to the preset queues includes:
selecting the tasks corresponding to the first historical average running times that precede and are adjacent to a median value, as many as the number of preset queues, and assigning them to the preset queues in turn, the median value being the median of the first historical average running times of the batch task; and
selecting the tasks corresponding to the first historical average running times that follow and are adjacent to the median value, as many as the number of preset queues, and assigning them to the preset queues in turn.
In the above technical solution, the tasks just below the median are assigned to the preset queues in order of first historical average running time from longest to shortest, and the tasks just above the median in order from shortest to longest, so that in every round the tasks are distributed across the preset queues in a fairly balanced way with respect to first historical average running time.
Optionally, determining the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period includes:
if there are target tasks in the preset queue, sorting the target tasks in the preset queue by the number of times they are depended upon, from most to least, where a target task is a task on which the execution of other tasks depends; arranging the tasks other than the target tasks after the target tasks and sorting them by historical average starting running time from earliest to latest; or
if there is no target task in the preset queue, sorting the tasks in the preset queue by historical average starting running time from earliest to latest; and
determining the target starting running time of the next task from the target starting running time and the first historical average running time of the previous task in the preset queue, where the target starting running time of the first task in the preset queue is its historical average starting running time, or a preset starting running time.
In the above technical solution, if there are target tasks in the preset queue, the number of times a target task is depended upon characterizes how many other tasks depend on it; placing the most-depended-upon target tasks first allows them to run first, which keeps more of the other tasks running normally. Since tasks other than target tasks do not affect the normal running of other tasks, they are placed after the target tasks; moreover, their historical average starting running times to some extent reflect the order in which they should start running subsequently, so they are sorted by historical average starting running time from earliest to latest. If there is no target task in the queue, each task's historical average starting running time characterizes the order in which it started running during the first period and to some extent reflects the order in which it should start subsequently, so the tasks are sorted by it from earliest to latest. Once the tasks in the preset queue are sorted, the order in which they run is determined; since each task's first historical average running time to some extent reflects the time a subsequent run will need, the target starting running time of the next task can be reasonably determined from the target starting running time and the first historical average running time of the previous task. Using the historical average starting running time of the first task in the preset queue, or a preset starting running time, as that task's target starting running time makes the method adaptable to different scenarios.
Optionally, before running the corresponding tasks based on the target starting running times, the method further includes:
comparing the first historical average running time of a task executed by a single thread with its second historical average running time within a second time period, the second time period being a time period before the first time period; and
if the increase of the first historical average running time relative to the second historical average running time is greater than a preset increase, switching the single-threaded task to multi-threaded execution.
In the above technical solution, the second time period precedes the first time period; if the increase of the single-threaded task's first historical average running time over its second historical average running time in the second time period exceeds the preset increase, the task's running time has grown considerably, and switching it to multi-threaded execution reduces the time needed for subsequent runs.
Optionally, the method further includes:
in response to a rerun instruction, executing the task corresponding to the rerun instruction based on the information characterizing the preset organization that corresponds to the rerun instruction.
In the above technical solution, upon receipt of a rerun instruction, the corresponding organization information and the corresponding task can be determined, so the task can be executed based on that organization information without rerun code having to be written for every rerun.
Optionally, running the corresponding tasks based on the target starting running times includes:
for any task that has corresponding dependency information, after the task's target starting running time is reached, determining the target dependency rule corresponding to the task based on a preset correspondence, where the correspondence comprises associations between preset dependency information and dependency rules;
for the information characterizing any preset organization, determining, according to the target dependency rule, whether the corresponding target task has been executed and completed based on that information, where the corresponding target task is a task on which execution of this task depends; and
after determining that the corresponding target task has completed, processing the to-be-processed data corresponding to that organization's information according to the calculation information of the task.
In the above technical solution, by presetting the dependency rules and the correspondence between dependency information and dependency rules in advance, the dependency rule can be determined directly from a task's dependency information, so developers need not write a task's dependency rules when writing its calculation information, which makes dependency rules easier to manage.
In a second aspect, an embodiment of the present application further provides a batch task processing device, including:
a task allocation unit configured to assign the tasks in a batch task to each of a number of preset queues based on the first historical average running time of each task within a first period and the number of preset queues; and
an operation processing unit configured to, for any preset queue, determine the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and run the corresponding tasks based on the target starting running times.
In a third aspect, an embodiment of the present application provides a computing device including at least one processor and at least one memory, where the memory stores a computer program which, when executed by the processor, causes the processor to execute the batch task processing method of any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program executable by a computing device; when the program runs on the computing device, it causes the computing device to execute the batch task processing method of any implementation of the first aspect.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a first batch task processing method provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a second batch task processing method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a third batch task processing method provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of a fourth batch task processing method provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of a fifth batch task processing method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a batch task processing device provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the embodiments of the present application, the term "and/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
The terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, unless otherwise stated, "multiple" means two or more.
In the description of the present application, unless otherwise explicitly specified and limited, the term "connected" should be understood broadly; for example, it may be a direct connection, an indirect connection through an intermediate medium, or internal communication between two devices. Those of ordinary skill in the art can understand the specific meaning of the above term in the present application according to the specific situation.
The term "execution time" refers to a task's starting running time; in some embodiments, "starting running time" is used directly in place of "execution time".
When a batch task is added, the execution time of each task must be configured, and each task is executed according to its configured execution time. In the related art, during task execution a new execution time is set based on the experience of operation and maintenance personnel and uploaded, thereby adjusting the task execution time.
However, because different operation and maintenance personnel have different experience, the new execution time may be unreasonable. For example, tasks may be unevenly distributed across the preset queues, so that some preset queues are too idle and others too busy when running tasks; or the starting running times of the tasks in a preset queue may be set unreasonably, so that tasks that should run first are placed at the back of the queue while tasks that need not run first are placed at the front.
In view of this, embodiments of the present application propose a batch task processing method, device, computing device, and storage medium. The method includes: assigning the tasks in a batch task to each of a number of preset queues based on the first historical average running time of each task within a first period and the number of preset queues; and, for any preset queue, determining the target starting running time of each task in the preset queue according to the historical average starting running time and the first historical average running time of each task in the queue within the first period, and running the corresponding tasks based on the target starting running times.
Since a task's first historical average running time characterizes how long the task ran during the first period and to some extent reflects the time a subsequent run will need, assigning the tasks to the preset queues according to the first historical average running times and the number of preset queues ensures that no preset queue is too idle or too busy; since a task's historical average starting running time characterizes the order in which the task started running during the first period and to some extent reflects the order in which it should start subsequently, determining the target starting running times accordingly lets the tasks in each queue run in a reasonable order, so that the execution time of every task in the batch task is reasonably adjusted.
The technical solutions of the present application, and how they solve the above technical problems, are described in detail below with reference to the drawings and specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments.
An embodiment of the present application provides a first batch task processing method which, as shown in FIG. 1, includes the following steps:
Step S101: Based on the first historical average running time of each task in the batch task within a first period and the number of preset queues, assign the tasks to each of the preset queues.
Since a task's first historical average running time characterizes how long the task ran during the first period and to some extent reflects the time a subsequent run will need, the tasks in each preset queue are determined according to the first historical average running times and the number of preset queues, so that no preset queue is too idle or too busy when running tasks.
The first period may be set according to the actual application scenario, for example the week before the current moment; correspondingly, a task's first historical average running time is the average of the task's daily running times during that week. For example, the first historical average running time of task 1 is

A_1 = (A_11 + A_12 + A_13 + A_14 + A_15 + A_16 + A_17) / 7

where A_11 is the running time of task 1 on the first day of the week, A_12 its running time on the second day, A_13 on the third day, A_14 on the fourth day, A_15 on the fifth day, A_16 on the sixth day, and A_17 on the seventh day.
This embodiment does not limit the specific way the tasks in each preset queue are determined. In some optional implementations, the tasks in the preset queues satisfy at least one of the following conditions:
1) For any preset queue, the ratio of the sum of the first historical average running times of all its tasks to a target running time is less than a preset ratio, the target running time being determined from the sum of the first historical average running times of all tasks in the batch task and the number of preset queues. Take a batch task of 100 tasks, 4 preset queues (denoted preset queues a, b, c, and d), and a preset ratio of 1.25 as an example:
The sum of the first historical average running times of the 100 tasks is

A_sum = A_1 + A_2 + ... + A_100

and the target running time is

A_0 = A_sum / 4.

Denote the sums of the first historical average running times of all tasks in preset queues a, b, c, and d as A_a, A_b, A_c, and A_d respectively; then A_a/A_0 < 1.25, A_b/A_0 < 1.25, A_c/A_0 < 1.25, and A_d/A_0 < 1.25.
Since the ratio of each preset queue's sum to the target running time is below the preset ratio, no queue's sum is too large or too small (A_a, A_b, A_c, and A_d do not differ too much from one another), which further ensures that no preset queue is too idle or too busy when running tasks.
2) The difference between the task counts of any two preset queues does not exceed a preset count difference. Again taking the 4 preset queues above as an example, with a preset count difference of 10:
Denote the task counts of preset queues a, b, c, and d as B_a, B_b, B_c, and B_d respectively; then |B_a - B_b| < 10, |B_a - B_c| < 10, |B_a - B_d| < 10, |B_b - B_c| < 10, |B_b - B_d| < 10, and |B_c - B_d| < 10.
Since the difference between the task counts of any two preset queues does not exceed the preset count difference, no queue holds too many or too few tasks (B_a, B_b, B_c, and B_d do not differ too much from one another), which ensures the balance of tasks across the preset queues.
It can be understood that the number of tasks in the batch task, the number of preset queues, the preset ratio, and the preset count difference above are all only illustrative; the specific values of these parameters can be set according to the actual application scenario, and the present application is not limited thereto.
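The two balance conditions can be checked as in the following sketch, using the example figures above (ratio cap 1.25, count-difference cap 10); the queue representation as a dict of duration lists is an assumption:

```python
def queues_balanced(queues, preset_ratio=1.25, max_count_diff=10):
    """Check both balance conditions over the preset queues, where `queues`
    maps a queue name to the list of its tasks' first historical average
    running times."""
    total = sum(sum(q) for q in queues.values())
    target = total / len(queues)        # A_0 = A_sum / number of queues
    ratio_ok = all(sum(q) / target < preset_ratio for q in queues.values())
    counts = [len(q) for q in queues.values()]
    count_ok = max(counts) - min(counts) < max_count_diff
    return ratio_ok and count_ok
```
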
To make the tasks in the preset queues satisfy the above conditions, still taking the batch task of 100 tasks and 4 preset queues as an example, the present application provides the following two illustrative ways of determining the tasks in each preset queue:

Way one:
Sort the 100 tasks by first historical average running time from shortest to longest, and after sorting denote them task 1, task 2, task 3, ..., task 99, task 100, with first historical average running times A_1 ≤ A_2 ≤ A_3 ≤ ... ≤ A_99 ≤ A_100.
Determine the median A_mid of the 100 first historical average running times (the value lying between A_50 and A_51).
From the 100 tasks, select the tasks corresponding to the 4 first historical average running times that precede and are adjacent to the median (namely task 50, task 49, task 48, and task 47) as tasks of preset queues a, b, c, and d respectively; then select the tasks corresponding to the 4 first historical average running times that follow and are adjacent to the median (namely task 51, task 52, task 53, and task 54) as tasks of preset queues a, b, c, and d respectively.
From the 92 remaining tasks, select the tasks corresponding to the 4 first historical average running times that precede and are adjacent to the median (namely task 46, task 45, task 44, and task 43) as tasks of preset queues a, b, c, and d respectively; then select the tasks corresponding to the 4 first historical average running times that follow and are adjacent to the median (namely task 55, task 56, task 57, and task 58) as tasks of preset queues a, b, c, and d respectively.
After several rounds of selection, once the ratio of the sum of the first historical running times of the tasks in some preset queue to the target running time reaches the preset ratio, tasks are no longer selected for that queue. For example, if the ratio for preset queue b reaches the preset ratio, then from the remaining tasks the 3 tasks corresponding to the first historical average running times that precede and are adjacent to the median are selected as tasks of preset queues a, c, and d respectively, and the 3 tasks corresponding to the first historical average running times that follow and are adjacent to the median are selected as tasks of preset queues a, c, and d respectively, until the preset queue of each of the 100 tasks has been determined.
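The median-centered allocation just illustrated can be sketched as follows (a simplified sketch: the ratio-based stop condition is omitted, and ties in duration are broken by sort stability):

```python
def allocate_median(tasks, n_queues):
    """Assign (task_id, duration) pairs to n_queues queues: sort by first
    historical average running time, then in each round hand the n_queues
    tasks just below the median and the n_queues tasks just above it to
    the queues in turn."""
    ordered = sorted(tasks, key=lambda t: t[1])
    lower = ordered[: len(ordered) // 2]    # at and below the median
    upper = ordered[len(ordered) // 2 :]    # above the median
    queues = [[] for _ in range(n_queues)]
    while lower or upper:
        for i in range(n_queues):
            if lower:
                queues[i].append(lower.pop())   # task 50, 49, 48, 47, ...
        for i in range(n_queues):
            if upper:
                queues[i].append(upper.pop(0))  # task 51, 52, 53, 54, ...
    return queues
```

With 8 tasks of durations 1–8 and 2 queues, both queues end up with a duration sum of 18, illustrating the balance the method aims at.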
方式二:
将这100个任务按照第一历史平均运行时长从短到长进行排序,排序后分别记作任务1、任务2、任务3、……任务99、任务100;其中,任务1的第一历史平均运行时长为
Figure PCTCN2021142108-appb-000011
任务2的第一历史平均运行时长为
Figure PCTCN2021142108-appb-000012
任务3的第一历史平均运行时长为
Figure PCTCN2021142108-appb-000013
……,任务99的第一历史平均运行时长为
Figure PCTCN2021142108-appb-000014
任务100的第一历史平均运行时长为
Figure PCTCN2021142108-appb-000015
按照排序结果,依次选择4个任务(即上述任务1、任务2、任务3以及任务4),分别作为预设队列a、预设队列b、预设队列c以及预设队列d中的任务;
从第一次选择后剩余的96个任务中,依次选择4个任务(即上述任务5、任务6、任务7以及任务8),分别作为预设队列d、预设队列c、预设队列b以及预设队列a中的任务;
从第二次选择后剩余的92个任务中,依次选择4个任务(即上述任务9、任务10、任务11以及任务12),分别作为预设队列a、预设队列b、预设队列c以及预设队列d中的任务;
经过多次选择后，如果有预设队列中的任务的第一历史平均运行时长之和与目标运行时长的比例达到预设比例，就不再选择该预设队列中的任务；如上述预设队列a中的任务的第一历史平均运行时长之和与目标运行时长的比例达到预设比例，则从剩余的任务中，依次选择3个任务，分别作为预设队列b、预设队列c以及预设队列d中的任务，直到确定出这100个任务各自对应的预设队列。
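方式二相当于按排序结果进行“蛇形”往返分配：奇数轮按队列a到d的顺序取任务，偶数轮按d到a的顺序取任务。以下为一个示意性草稿（同样假设达到预设比例的队列不再参与分配，兜底处理为本文补充）：

```python
def allocate_snake(durations, num_queues, preset_ratio):
    """durations: 已按从短到长排序的各任务第一历史平均运行时长。
    返回 assignment[i] = 任务 i 分配到的队列编号。"""
    target = sum(durations) / num_queues      # 目标运行时长（假设不为0）
    sums = [0.0] * num_queues
    assignment = []
    forward = True                            # 正序/倒序交替
    i = 0
    while i < len(durations):
        active = [q for q in range(num_queues) if sums[q] / target < preset_ratio]
        if not active:                        # 兜底：全部达到预设比例时继续轮流分配
            active = list(range(num_queues))
        order = active if forward else list(reversed(active))
        for q in order:
            if i >= len(durations):
                break
            assignment.append(q)
            sums[q] += durations[i]
            i += 1
        forward = not forward
    return assignment
```

与方式一类似，蛇形往返可以抵消排序带来的时长差，使各队列的时长之和接近。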
上述方式一以及方式二都是将批量任务按照第一历史平均运行时长从短到长进行排序,在一些实施例中,也可将批量任务按照第一历史平均运行时长从长到短进行排序,确定各预设队列中的任务的方式可参照上述方式一以及方式二,此处不再赘述。
另外,本实施例对触发执行S101的方式不做限定,可以基于用户指令触发;或者间隔固定时长触发,即每周期定时执行一次批量任务处理方法,如每天凌晨执行一次批量任务处理方法,确定这一天中各任务的目标起始运行时间。
步骤S102:针对任一预设队列,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间,并基于所述目标起始运行时间,运行对应任务。
由于任务的历史平均起始运行时间，表征了该任务在第一时段的起始运行的先后，在一定程度反映了后续起始运行该任务的先后，因此根据预设队列中各任务的历史平均起始运行时间以及第一历史平均运行时长，确定预设队列中各任务的目标起始运行时间，通过较为合理的先后顺序运行预设队列中的任务。
如上所述，第一时段可以根据实际应用场景进行设定，还是以当前时刻之前的一个星期为例，对应的，任一任务的历史平均起始运行时间，为这一个星期中每天起始运行该任务的时间的平均值。如任务1的历史平均起始运行时间C_1=(C_11+C_12+C_13+C_14+C_15+C_16+C_17)/7；其中，C_11为这一个星期中第一天起始运行任务1的时间，C_12为这一个星期中第二天起始运行任务1的时间，C_13为这一个星期中第三天起始运行任务1的时间，C_14为这一个星期中第四天起始运行任务1的时间，C_15为这一个星期中第五天起始运行任务1的时间，C_16为这一个星期中第六天起始运行任务1的时间，C_17为这一个星期中第七天起始运行任务1的时间。
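按上式，历史平均起始运行时间即7天起始时间的算术平均。下面用一组假设的示例数据（以当日零点起的秒数表示起始时间，并非真实值）演示该计算：

```python
# C11..C17：一周内每天起始运行任务1的时间（以当日零点起的秒数表示，示例数据）
starts = [3600, 3660, 3540, 3600, 3720, 3480, 3600]
avg_start = sum(starts) / len(starts)   # 任务1的历史平均起始运行时间
```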
示例性的,执行上述方法的装置,在启动时会将批量任务一一加载到作业调度框架(如Quartz)内,并通过容器框架(如Spring)进行管理。在确定上述预设队列中各任务的目标起始运行时间后,更新到任务配置表;通过单独配置一个监测任务每隔一定时间去读取一次任务配置表中未加载生效的任务配置信息(目标起始运行时间),读取到对应任务配置信息后先去检查该任务是否已执行(已按照历史起始运行时间执行该任务)或正在执行(正在按照历史起始运行时间执行该任务);如果已执行或正在执行,就无需按照目标起始运行时间再次执行该任务;如果未执行,按照目标起始运行时间更新该任务的触发器,并重新加载到装置中,从而及时动态调整批量任务的执行时间。
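上述“监测任务定期读取任务配置表并更新触发器”的流程，可以用如下Python草稿示意。这里以字典模拟任务配置表与Quartz触发器，字段名与状态取值均为本文假设：

```python
def apply_config_updates(config_table, task_states, triggers):
    """config_table: {任务名: {"target_start": 目标起始运行时间, "loaded": 是否已加载生效}}
    task_states: {任务名: "done" / "running" / "pending"}
    triggers: {任务名: 触发时间}，以字典模拟Quartz触发器表。"""
    for name, cfg in config_table.items():
        if cfg["loaded"]:                      # 只处理未加载生效的任务配置信息
            continue
        if task_states.get(name) in ("done", "running"):
            cfg["loaded"] = True               # 已执行或正在执行：无需按目标时间再次执行
            continue
        triggers[name] = cfg["target_start"]   # 未执行：按目标起始运行时间更新触发器
        cfg["loaded"] = True
    return triggers
```

实际实现中，更新触发器对应于重新加载Quartz的Trigger，从而及时动态调整批量任务的执行时间。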
上述技术方案中,由于任务的第一历史平均运行时长,表征了该任务在第一时段的运行时长的长短,在一定程度反映了后续运行该任务的所需的时长,因此根据第一历史平均运行时长以及预设队列的数量,确定各预设队列中的任务,使各预设队列运行任务时不会过于空闲或者过于繁忙;由于任务的历史平均起始运行时间,表征了该任务在第一时段的起始运行的先后,在一定程度反映了后续起始运行该任务的先后,因此根据预设队列中各任务的历史平均起始运行时间以及第一历史平均运行时长,确定预设队列中各任务的目标起始运行时间,通过较为合理的先后顺序运行预设队列中的任务,从而合理调整批量任务中每个任务的执行时间(即起始运行时间)。
针对预设队列中有目标任务的情况(目标任务为执行其他任务需要依赖的任务),本申请实施例提供第二种批量任务处理方法,如图2所示,包括以下步骤:
步骤S201:基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中。
该步骤S201的具体实现方式可参照上述步骤S101,此处不再赘述。
步骤S202:针对任一预设队列,若所述预设队列中有目标任务,则将所述预设队列中目标任务按照被依赖的次数从多到少进行排序。
本实施例，如果预设队列中有目标任务，目标任务被依赖的次数表征了有多少其他任务依赖该目标任务，通过将被依赖的次数多的目标任务排到前面，后续可以先运行被依赖的次数多的目标任务，进而保证较多的其他任务的正常运行。
示例性的,预设队列a中有5个目标任务(分别记作目标任务1、目标任务2、目标任务3、目标任务4以及目标任务5);其中,目标任务1被依赖的次数为5,目标任务2被依赖的次数为3,目标任务3被依赖的次数为1,目标任务4被依赖的次数为6,目标任务5被依赖的次数为2,这5个目标任务的排序结果为:目标任务4、目标任务1、目标任务2、目标任务5、目标任务3。
上述预设队列a中的目标任务,以及各目标任务被依赖的次数只是示例性说明,本申请并不以此为限。
一些具体的实施例中,预设队列中有些目标任务被依赖的次数相同,这时可以将被依赖的次数相同的目标任务随机排序;或者将被依赖的次数相同的目标任务,按照历史平均起始运行时间从前到后进行排序。
步骤S203:将所述预设队列中除所述目标任务之外的任务排在所述目标任务之后,且将除所述目标任务之外的任务按照历史平均起始运行时间从前到后进行排序。
由于目标任务之外的任务不影响其他任务的正常运行,因此,将这些任务排在目标任务之后;
另外,这些任务的历史平均起始运行时间,在一定程度反映了后续起始运行这些任务的先后,因此,可将这些任务按照历史平均起始运行时间从前到后进行排序。
还是以上述预设队列a为例:
预设队列a中除了上述5个目标任务之外,还有7个其他的任务(分别记作普通任务1、普通任务2、普通任务3、普通任务4、普通任务5、普通任务6以及普通任务7),其中,这7个任务按照历史平均起始运行时间从前到后分别为:普通任务2、普通任务7、普通任务6、普通任务1、普通任务4、普通任务5、普通任务3。
预设队列a中所有任务的排序结果为:目标任务4、目标任务1、目标任务2、目标任务5、目标任务3、普通任务2、普通任务7、普通任务6、普通任务1、普通任务4、普通任务5、普通任务3。
上述预设队列a中的任务、目标任务被依赖的次数以及历史平均起始运行时间的前后都只是示例性说明,本申请并不以此为限。
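步骤S202与S203所述的排序规则（目标任务按被依赖次数从多到少，其余任务排在其后并按历史平均起始运行时间从前到后）可以概括为如下草稿；其中被依赖次数相同的目标任务按历史平均起始运行时间先后排序，对应上文两种并列处理方式之一：

```python
def order_queue(target_tasks, normal_tasks):
    """target_tasks: [(任务名, 被依赖次数, 历史平均起始运行时间)]
    normal_tasks: [(任务名, 历史平均起始运行时间)]"""
    # 目标任务：被依赖次数从多到少；次数相同时按历史平均起始运行时间从前到后
    ordered_targets = sorted(target_tasks, key=lambda t: (-t[1], t[2]))
    # 其余任务排在目标任务之后，按历史平均起始运行时间从前到后
    ordered_normals = sorted(normal_tasks, key=lambda t: t[1])
    return [t[0] for t in ordered_targets] + [t[0] for t in ordered_normals]
```

以上述预设队列a的示例数据代入，即可复现正文给出的排序结果。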
步骤S204:根据所述预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长,确定下一任务的目标起始运行时间,其中所述预设队列中第一个任务的目标起始运行时间是对应的历史平均起始运行时间,或者预设的起始运行时间。
在将预设队列中的任务排序完成后，就确定了运行各任务的先后顺序，由于各任务的第一历史平均运行时长，在一定程度反映了后续运行该任务的所需的时长，因此根据预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长，较为合理地确定出下一任务的目标起始运行时间。
本实施例对根据预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长,确定下一任务的目标起始运行时间的具体实现方式不做限定,示例性的,可将前一任务的目标起始运行时间之后的第一历史平均运行时长对应的时间,作为确定下一任务的目标起始运行时间。
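示例性实现中，“前一任务的目标起始运行时间之后的第一历史平均运行时长对应的时间”即简单的时间累加，草稿如下（以分钟为单位，数值仅为假设示例）：

```python
def schedule_queue(first_start, durations):
    """first_start: 队列中第一个任务的目标起始运行时间（分钟），
    可为对应的历史平均起始运行时间或预设的起始运行时间；
    durations: 按运行顺序排列的各任务第一历史平均运行时长（分钟）。"""
    starts = [first_start]
    for d in durations[:-1]:
        # 下一任务的目标起始运行时间 = 前一任务的目标起始运行时间 + 其第一历史平均运行时长
        starts.append(starts[-1] + d)
    return starts
```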
步骤S205:基于所述目标起始运行时间,运行对应任务。
上述技术方案中,如果预设队列中有目标任务,目标任务被依赖的次数表征了有多少其他任务依赖该目标任务,通过将被依赖的次数多的目标任务排到前面,后续可以先运行被依赖的次数多的目标任务,进而保证较多的其他任务的正常运行;由于目标任务之外的任务不影响其他任务的正常运行,因此将这些任务排在目标任务之后;另外,这些任务的历史平均起始运行时间,在一定程度反映了后续起始运行这些任务的先后,因此,将这些任务按照历史平均起始运行时间从前到后进行排序。在将预设队列中的任务排序完成后,就确定了运行各任务的先后顺序,由于任务的第一历史平均运行时长,在一定程度反映了后续运行该任务的所需的时长,因此根据预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长,较为合理地确定出下一任务的目标起始运行时间。通过将预设队列中第一个任务的历史平均起始运行时间,或者预设的起始运行时间,作为其目标起始运行时间,从而适用于不同场景的需求。
针对预设队列中没有目标任务的情况,本申请实施例提供第三种批量任务处理方法,如图3所示,包括以下步骤:
步骤S301:基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中。
步骤S302:若所述预设队列中没有目标任务,则将所述预设队列中各任务按照历史平均起始运行时间从前到后进行排序。
步骤S303:根据所述预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长,确定下一任务的目标起始运行时间,其中所述预设队列中第一个任务的目标起始运行时间是对应的历史平均起始运行时间,或者预设的起始运行时间。
步骤S304:基于所述目标起始运行时间,运行对应任务。
该步骤S301-S304的具体实现方式可参照上述实施例,此处不再赘述。
上述技术方案中,如果预设队列中没有目标任务,任务的历史平均起始运行时间,表征了该任务在第一时段的起始运行的先后,在一定程度反映了后续起始运行该任务的先后,因此,将各任务按照历史平均起始运行时间从前到后进行排序。在将预设队列中的任务排序完成后,就确定了运行各任务的先后顺序,由于各任务的第一历史平均运行时长,在一定程度反映了后续运行该任务的所需的时长,因此根据预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长,较为合理地确定出下一任务的目标起始运行时间。通过将预设队列中第一个任务的历史平均起始运行时间,或者预设的起始运行时间,作为其目标起始运行时间,从而适用于不同场景的需求。
本申请实施例提供第四种批量任务处理方法,如图4所示,包括以下步骤:
步骤S401:基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中。
该步骤S401的具体实现方式可参照上述实施例,此处不再赘述。
步骤S402:将单线程执行的任务的第一历史平均运行时长,以及在第二时段内的第二历史平均运行时长进行比对,所述第二时段为所述第一时段之前的时段。
实施中,上述第二时段为第一时段之前的时段,第二时段与第一时段的时长可以相同可以不同,第二时段可以为一个,也可以为多个。如果有多个第二时段,需要将单线程执行的任务的第一历史平均运行时长,分别与该单线程执行的任务在各第二时段内的第二历史平均运行时长进行比对。
还是以第一时段为当前时刻之前的一个星期为例,第二时段可以包括以下至少一项:
1)当前时刻之前一个星期再往前算起的一个星期,如当前时刻是5月第3个星期三的凌晨;第一时段为5月第2个星期三的凌晨到5月第3个星期三的凌晨;第二时段为5月第1个星期三的凌晨到5月第2个星期三的凌晨。
2)当前时刻之前一个月再往前算起的一个星期,如当前时刻是5月第3个星期三的凌晨;第一时段为5月第2个星期三的凌晨到5月第3个星期三的凌晨;第二时段为4月第2个星期三的凌晨到4月第3个星期三的凌晨。
上述第二时段只是示例性说明,第二时段的数量以及具体的第二时段可以根据实际应用场景设定。
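以上两种第二时段可以用日期运算来示意。以下草稿中的具体日期，以及“一个月按28天计、使星期几对齐”均为本文假设的简化，并非本申请的限定实现：

```python
from datetime import datetime, timedelta

now = datetime(2021, 5, 19)        # 2021年5月第3个星期三（示例取凌晨零点）
first_period = (now - timedelta(days=7), now)                    # 第一时段
second_1 = (now - timedelta(days=14), now - timedelta(days=7))   # 第二时段1）
month_ago = now - timedelta(days=28)   # 以28天近似“一个月”，使星期几对齐
second_2 = (month_ago - timedelta(days=7), month_ago)            # 第二时段2）
```

按该简化，第二时段2）恰为4月第2个星期三的凌晨到4月第3个星期三的凌晨，与正文示例一致。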
步骤S403:若所述第一历史平均运行时长相比于所述第二历史平均运行时长的增幅大于预设增幅,则将所述单线程执行的任务切换为多线程执行。
如上所述,如果有多个第二时段,需要将单线程执行的任务的第一历史平均运行时长,分别与该单线程执行的任务在各第二时段内的第二历史平均运行时长进行比对。实施中,如果满足第一历史平均运行时长,相比于在部分或者全部第二时段内的第二历史平均运行时长的增幅大于预设增幅,将对应的单线程执行的任务切换为多线程执行。
单线程执行的任务的第一历史平均运行时长,相比于该单线程执行的任务在第二时段内的第二历史平均运行时长的增幅,反映了该单线程执行的任务运行时长增长的多少。如果单线程执行的任务的第一历史平均运行时长,相比于该单线程执行的任务在第二时段内的第二历史平均运行时长的增幅大于预设增幅,说明该单线程执行的任务运行时长增长较多。
如果上述单线程执行的任务可以通过多线程执行,为了避免后续运行该任务的所需的时长继续过长,可将该单线程执行的任务切换为多线程执行,减少后续运行该任务的所需的时长。
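步骤S402与S403的判断逻辑可以示意如下。其中预设增幅取50%仅为假设取值；用any对应“部分第二时段满足即切换”，是对上文“部分或者全部”的一种实现选择：

```python
def should_switch_to_multithread(first_avg, second_avgs, preset_growth=0.5):
    """first_avg: 第一历史平均运行时长；
    second_avgs: 该任务在各第二时段内的第二历史平均运行时长；
    preset_growth: 预设增幅（0.5即50%，假设取值）。"""
    # 相比任一第二时段的增幅大于预设增幅，即认为运行时长增长较多，切换为多线程执行
    return any((first_avg - s) / s > preset_growth for s in second_avgs)
```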
步骤S404：针对任一预设队列，根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长，确定所述预设队列中各任务的目标起始运行时间，并基于所述目标起始运行时间，运行对应任务。
上述技术方案中,第二时段为所述第一时段之前的时段,如果单线程执行的任务的第一历史平均运行时长,相比于该单线程执行的任务在第二时段内的第二历史平均运行时长的增幅大于预设增幅,说明该单线程执行的任务运行时长增长较多,通过将单线程执行的任务切换为多线程执行,可以减少后续运行该任务的所需的时长。
本申请实施例提供第五种批量任务处理方法,如图5所示,包括以下步骤:
步骤S501:基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中。
步骤S502:针对任一预设队列,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间。
该步骤S501-S502的具体实现方式可参照上述实施例,此处不再赘述。
步骤S503:针对任一任务,在所述任务的目标起始运行时间达到后,依次基于各预设表征机构的信息,执行所述任务。
为了共享开发和维护成本，多租户架构被广泛应用在金融领域中，多租户架构是在多租户（机构）环境下共用相同的系统或程序组件，并且可确保各机构间数据的隔离性。一些实施例中，新增批量任务时，需要配置任务对应的机构信息，执行任务时确定任务对应的机构，基于对应的机构来执行该任务。然而，开发人员可能会忘记配置批量任务中的某些任务对应的机构信息，这样就无法保证不同机构的数据隔离。
基于此,本实施例通过在执行上述方法的装置中提前预设表征机构的相关信息,这样,开发人员在编写任务的计算信息时,就无需编写任务对应的机构信息,也就不会存在漏配某些任务对应的机构信息的问题。
示例性的,在执行批量任务处理方法的装置中实现一个任务基类F,并提供一个抽象方法M进行具体机构的任务调度,所有任务都继承任务基类F。执行任一任务时,均调度到上述任务基类F的统一入口;进入这个入口后,首先遍历所有的预设表征机构的相关信息,依次基于各预设表征机构的信息,在M方法内调用接口I的N方法,从而执行对应的任务(将预设表征机构的信息对应的待处理数据通过该任务的计算信息进行计算,得到计算数据)。
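任务基类F与抽象方法M的结构可以用如下Python草稿示意。类名、方法名与机构列表均为本文假设的示例；实际实现中抽象方法内会调用接口I的N方法处理对应机构的数据：

```python
from abc import ABC, abstractmethod

PRESET_ORGS = ["机构A", "机构B"]        # 提前预设的表征机构的信息（示例）

class TaskBase(ABC):                    # 对应文中的任务基类F
    def run(self):
        """统一入口：依次基于各预设表征机构的信息执行本任务"""
        return [self.run_for_org(org) for org in PRESET_ORGS]

    @abstractmethod
    def run_for_org(self, org):         # 对应进行具体机构任务调度的抽象方法M
        ...

class DemoTask(TaskBase):               # 所有任务都继承任务基类F
    def run_for_org(self, org):
        return f"{org}:done"            # 实际实现中此处调用接口I的N方法处理该机构数据
```

这样开发人员编写任务的计算信息时无需关心机构信息，机构维度的遍历由基类统一完成。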
如上所述,执行一些任务需要依赖目标任务,针对任一预设表征机构的信息,可通过以下方式执行所述任务:
针对任一任务,若所述任务对应有依赖信息,则在所述任务的目标起始运行时间达到后,基于预设的对应关系确定所述任务对应的目标依赖规则;其中所述对应关系包括预设的依赖信息与依赖规则的关联;
针对任一预设表征机构的信息,根据所述目标依赖规则,确定是否基于所述预设表征机构的信息执行完成对应的目标任务,其中,所述对应的目标任务为执行所述任务所依赖的任务;
在确定基于所述预设表征机构的信息执行完成对应的目标任务后，根据所述任务的计算信息，对所述预设表征机构的信息对应的待处理数据进行处理。
示例性的，任务内定义接口I的变量属性（即任务的计算信息，以任务1为例），处理单个机构（以机构A为例）时会调度到基类S内（即依次基于各预设表征机构的信息，在M方法内调用接口I的N方法），通过基类S，先确定任务1是否对应有依赖信息，如果有依赖信息，根据预设的对应关系（预设的依赖信息与依赖规则的关联），确定任务1对应的目标依赖规则（实施中，可以预设多个依赖规则，如同类型的批量任务依赖、机构之间文件传输任务的依赖、内部数据同步任务的依赖等）；根据目标依赖规则得到目标任务的起始运行时间，进而确定是否基于上述机构A执行完成对应的目标任务，如果基于上述机构A执行完成对应的目标任务，根据任务1的计算信息，对机构A对应的待处理数据（待处理数据包括基于机构A执行完成上述目标任务所得到的计算结果）进行处理，得到基于机构A执行任务1所得到的计算结果。
通过提前预设依赖规则,并预设依赖信息以及依赖规则的对应关系,根据任务的依赖信息可直接确定对应的依赖规则,这样开发人员在编写任务的计算信息时,就无需编写任务对应的依赖规则,方便依赖规则的管理。
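预设的“依赖信息与依赖规则的对应关系”可以示意为一个规则表加统一的执行入口。以下草稿中的规则名、字段名与判定方式均为本文假设的示例：

```python
# 预设的依赖规则表：依赖信息 -> 判定“目标任务是否已基于该机构执行完成”的规则
DEP_RULES = {
    "同类型批量任务依赖": lambda org, done: done.get((org, "同类型批量任务"), False),
    "文件传输任务依赖": lambda org, done: done.get((org, "文件传输任务"), False),
}

def run_task(task, org, done, compute):
    """task: {"name": 任务名, "dep": 依赖信息或None}；
    done: {(机构, 目标任务): 是否执行完成}；
    compute: 以可调用对象示意任务的计算信息。"""
    if task["dep"] is not None:
        rule = DEP_RULES[task["dep"]]     # 基于预设的对应关系确定目标依赖规则
        if not rule(org, done):           # 目标任务尚未基于该机构执行完成
            return None                   # 示意：此处可等待或跳过
    return compute(org)                   # 对该机构对应的待处理数据进行处理
```

开发人员编写任务的计算信息时只需填写依赖信息，依赖规则由规则表统一管理。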
上述技术方案中,多租户架构需要进行不同机构的数据隔离,通过提前预设表征机构的相关信息,开发人员在编写任务的计算信息时,就无需编写任务对应的机构信息,在任务的目标起始运行时间达到后,依次基于各预设表征机构的信息,执行任务,实现了不同机构的数据隔离。避免了因开发人员漏配任务对应的机构信息,而导致不能基于机构维度执行该任务,影响不同机构的数据隔离。
另外,在基于预设表征机构的信息执行任务时,可能会发生错误,这时需要重新基于该预设表征机构的信息执行该任务(即任务重跑)。一些实施例中,需要开发人员在每次进行任务重跑时编写重跑代码,从而执行上述方法的装置根据重跑代码,再次基于该预设表征机构的信息执行该任务。但是每次进行任务重跑时都要编写重跑代码较为繁琐,且容易编写错误的重跑代码。
基于此,一些可选的实施方式中,上述方法还包括:
响应重跑指令,基于所述重跑指令对应的预设表征机构的信息,执行所述重跑指令对应的任务。
执行重跑指令对应的任务的具体实现方式与上述步骤S503类似,区别仅在于针对重跑指令,不用基于目标起始运行时间执行对应任务,也不用依次基于各预设表征机构的信息,而是基于重跑指令对应的预设表征机构的信息,执行重跑指令对应的任务,此处不再赘述。
上述技术方案中,收到重跑指令,即可确定对应的预设表征机构的信息,以及对应的任务;从而基于对应的预设表征机构的信息执行对应的任务,无需在每次进行任务重跑时编写重跑代码。
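响应重跑指令的处理可以示意如下：指令中已携带对应的任务与预设表征机构的信息，因而无需每次编写重跑代码（字段名为本文假设的示例）：

```python
def handle_rerun(instruction, tasks):
    """instruction: {"task": 任务名, "org": 重跑指令对应的预设表征机构的信息}
    tasks: {任务名: 可调用对象}。直接基于指令携带的机构信息重跑对应任务。"""
    return tasks[instruction["task"]](instruction["org"])
```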
基于相同的发明构思,本申请实施例提供一种批量任务处理装置,参阅图6所示,批量任务处理装置600包括:
任务分配单元601，用于基于批量任务中各任务在第一时段内的第一历史平均运行时长，以及预设队列的数量，将所述任务分配至各个所述预设队列中；
运行处理单元602,用于针对任一预设队列,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间,并基于所述目标起始运行时间,运行对应任务。
可选地,任务分配单元601具体用于:
将所述批量任务中各任务按照所述第一历史平均运行时长从短到长进行排序;
基于排序结果,多批次从所述批量任务中选择预设队列的数量的任务,分配至各个所述预设队列中;
若任一预设队列中所有任务的第一历史平均运行时长之和与目标运行时长的比例超过预设比例,则不再将任务分配至所述预设队列中;所述目标运行时长为根据所述批量任务中所有任务的第一历史平均运行时长总和以及所述预设队列的数量确定的。
可选地,任务分配单元601具体用于:
选择在中位值之前,且与所述中位值相邻的预设队列的数量的第一历史平均运行时长对应的任务,依次分配至各个所述预设队列中;其中,所述中位值为所述批量任务的第一历史平均运行时长中的中位值;
选择在所述中位值之后,且与中位值相邻的预设队列的数量的第一历史平均运行时长对应的任务,依次分配至各个所述预设队列中。
可选地,运行处理单元602具体用于:
若所述预设队列中有目标任务,则将所述预设队列中目标任务按照被依赖的次数从多到少进行排序,其中,任一目标任务为执行其他任务所依赖的任务;将所述预设队列中除所述目标任务之外的任务排在所述目标任务之后,且将除所述目标任务之外的任务按照历史平均起始运行时间从前到后进行排序;或者
若所述预设队列中没有目标任务,则将所述预设队列中各任务按照历史平均起始运行时间从前到后进行排序;
根据所述预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长,确定下一任务的目标起始运行时间,其中所述预设队列中第一个任务的目标起始运行时间是对应的历史平均起始运行时间,或者预设的起始运行时间。
可选地,运行处理单元602在基于所述目标起始运行时间,运行对应任务之前,还用于:
将单线程执行的任务的第一历史平均运行时长,以及在第二时段内的第二历史平均运行时长进行比对,所述第二时段为所述第一时段之前的时段;
若所述第一历史平均运行时长相比于所述第二历史平均运行时长的增幅大于预设增幅,则将所述单线程执行的任务切换为多线程执行。
可选地,运行处理单元602还用于:
响应重跑指令,基于所述重跑指令对应的预设表征机构的信息,执行所述重跑指令对应的任务。
可选地,运行处理单元602具体用于:
针对任一任务,若所述任务对应有依赖信息,则在所述任务的目标起始运行时间达到后,基于预设的对应关系确定所述任务对应的目标依赖规则;其中所述对应关系包括预设的依赖信息与依赖规则的关联;
针对任一预设表征机构的信息,根据所述目标依赖规则,确定是否基于所述预设表征机构的信息执行完成对应的目标任务,其中,所述对应的目标任务为执行所述任务所依赖的任务;
在确定基于所述预设表征机构的信息执行完成对应的目标任务后,根据所述任务的计算信息,对所述预设表征机构的信息对应的待处理数据进行处理。
由于该装置即是本申请实施例中的方法中的装置,并且该装置解决问题的原理与该方法相似,因此该装置的实施可以参见方法的实施,重复之处不再赘述。
基于相同的技术构思，本申请实施例还提供了一种计算设备700，如图7所示，包括至少一个处理器701，以及与至少一个处理器连接的存储器702，本申请实施例中不限定处理器701与存储器702之间的具体连接介质，图7中以处理器701和存储器702之间通过总线703连接为例。总线可以分为地址总线、数据总线、控制总线等。为便于表示，图7中仅用一条粗线表示，但并不表示仅有一根总线或一种类型的总线。
其中,处理器701是计算设备的控制中心,可以利用各种接口和线路连接计算设备的各个部分,通过运行或执行存储在存储器702内的指令以及调用存储在存储器702内的数据,从而实现数据处理。可选的,处理器701可包括一个或多个处理单元,处理器701可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作***、用户界面和应用程序等,调制解调处理器主要处理下发指令。可以理解的是,上述调制解调处理器也可以不集成到处理器701中。在一些实施例中,处理器701和存储器702可以在同一芯片上实现,在一些实施例中,它们也可以在独立的芯片上分别实现。
处理器701可以是通用处理器,例如中央处理器(CPU)、数字信号处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合批量任务处理方法实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
存储器702作为一种非易失性计算机可读存储介质，可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块。存储器702可以包括至少一种类型的存储介质，例如可以包括闪存、硬盘、多媒体卡、卡型存储器、随机访问存储器（Random Access Memory,RAM）、静态随机访问存储器（Static Random Access Memory,SRAM）、可编程只读存储器（Programmable Read Only Memory,PROM）、只读存储器（Read Only Memory,ROM）、带电可擦除可编程只读存储器（Electrically Erasable Programmable Read-Only Memory,EEPROM）、磁性存储器、磁盘、光盘等等。存储器702是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质，但不限于此。本申请实施例中的存储器702还可以是电路或者其它任意能够实现存储功能的装置，用于存储程序指令和/或数据。
在本申请实施例中,存储器702存储有计算机程序,当该程序被处理器701执行时,使得处理器701执行:
基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中;
针对任一预设队列,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间,并基于所述目标起始运行时间,运行对应任务。
可选地,处理器701具体执行:
将所述批量任务中各任务按照所述第一历史平均运行时长从短到长进行排序;
基于排序结果,多批次从所述批量任务中选择预设队列的数量的任务,分配至各个所述预设队列中;
若任一预设队列中所有任务的第一历史平均运行时长之和与目标运行时长的比例超过预设比例,则不再将任务分配至所述预设队列中;所述目标运行时长为根据所述批量任务中所有任务的第一历史平均运行时长总和以及所述预设队列的数量确定的。
可选地,处理器701具体执行:
选择在中位值之前,且与所述中位值相邻的预设队列的数量的第一历史平均运行时长对应的任务,依次分配至各个所述预设队列中;其中,所述中位值为所述批量任务的第一历史平均运行时长中的中位值;
选择在所述中位值之后,且与中位值相邻的预设队列的数量的第一历史平均运行时长对应的任务,依次分配至各个所述预设队列中。
可选地,处理器701具体执行:
若所述预设队列中有目标任务,则将所述预设队列中目标任务按照被依赖的次数从多到少进行排序,其中,任一目标任务为执行其他任务所依赖的任务;将所述预设队列中除所述目标任务之外的任务排在所述目标任务之后,且将除所述目标任务之外的任务按照历史平均起始运行时间从前到后进行排序;或者
若所述预设队列中没有目标任务,则将所述预设队列中各任务按照历史平均起始运行时间从前到后进行排序;
根据所述预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长，确定下一任务的目标起始运行时间，其中所述预设队列中第一个任务的目标起始运行时间是对应的历史平均起始运行时间，或者预设的起始运行时间。
可选地,处理器701在基于所述目标起始运行时间,运行对应任务之前,还执行:
将单线程执行的任务的第一历史平均运行时长,以及在第二时段内的第二历史平均运行时长进行比对,所述第二时段为所述第一时段之前的时段;
若所述第一历史平均运行时长相比于所述第二历史平均运行时长的增幅大于预设增幅,则将所述单线程执行的任务切换为多线程执行。
可选地,处理器701还执行:
响应重跑指令,基于所述重跑指令对应的预设表征机构的信息,执行所述重跑指令对应的任务。
可选地,处理器701具体执行:
针对任一任务,若所述任务对应有依赖信息,则在所述任务的目标起始运行时间达到后,基于预设的对应关系确定所述任务对应的目标依赖规则;其中所述对应关系包括预设的依赖信息与依赖规则的关联;
针对任一预设表征机构的信息,根据所述目标依赖规则,确定是否基于所述预设表征机构的信息执行完成对应的目标任务,其中,所述对应的目标任务为执行所述任务所依赖的任务;
在确定基于所述预设表征机构的信息执行完成对应的目标任务后,根据所述任务的计算信息,对所述预设表征机构的信息对应的待处理数据进行处理。
由于该计算设备即是本申请实施例中的方法中的计算设备,并且该计算设备解决问题的原理与该方法相似,因此该计算设备的实施可以参见方法的实施,重复之处不再赘述。
基于相同的技术构思,本申请实施例还提供了一种计算机可读存储介质,其存储有可由计算设备执行的计算机程序,当所述程序在所述计算设备上运行时,使得所述计算设备执行上述批量任务处理方法的步骤。
本领域内的技术人员应明白,本申请的实施例可提供为方法、***、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请的方法、设备(***)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (10)

  1. 一种批量任务处理方法,其特征在于,该方法包括:
    基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中;
    针对任一预设队列,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间,并基于所述目标起始运行时间,运行对应任务。
  2. 如权利要求1所述的方法,其特征在于,基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中,包括:
    将所述批量任务中各任务按照所述第一历史平均运行时长从短到长进行排序;
    基于排序结果,多批次从所述批量任务中选择预设队列的数量的任务,分配至各个所述预设队列中;
    若任一预设队列中所有任务的第一历史平均运行时长之和与目标运行时长的比例超过预设比例,则不再将任务分配至所述预设队列中;所述目标运行时长为根据所述批量任务中所有任务的第一历史平均运行时长总和以及所述预设队列的数量确定的。
  3. 如权利要求2所述的方法,其特征在于,针对任一批次,从所述批量任务中选择预设队列的数量的任务,分配至各个所述预设队列中,包括:
    选择在中位值之前,且与所述中位值相邻的预设队列的数量的第一历史平均运行时长对应的任务,依次分配至各个所述预设队列中;其中,所述中位值为所述批量任务的第一历史平均运行时长中的中位值;
    选择在所述中位值之后,且与中位值相邻的预设队列的数量的第一历史平均运行时长对应的任务,依次分配至各个所述预设队列中。
  4. 如权利要求1所述的方法,其特征在于,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间,包括:
    若所述预设队列中有目标任务,则将所述预设队列中目标任务按照被依赖的次数从多到少进行排序,其中,任一目标任务为执行其他任务所依赖的任务;将所述预设队列中除所述目标任务之外的任务排在所述目标任务之后,且将除所述目标任务之外的任务按照历史平均起始运行时间从前到后进行排序;或者
    若所述预设队列中没有目标任务,则将所述预设队列中各任务按照历史平均起始运行时间从前到后进行排序;
    根据所述预设队列中前一任务的目标起始运行时间以及第一历史平均运行时长，确定下一任务的目标起始运行时间，其中所述预设队列中第一个任务的目标起始运行时间是对应的历史平均起始运行时间，或者预设的起始运行时间。
  5. 如权利要求1所述的方法,其特征在于,在基于所述目标起始运行时间,运行对应任务之前,还包括:
    将单线程执行的任务的第一历史平均运行时长,以及在第二时段内的第二历史平均运行时长进行比对,所述第二时段为所述第一时段之前的时段;
    若所述第一历史平均运行时长相比于所述第二历史平均运行时长的增幅大于预设增幅,则将所述单线程执行的任务切换为多线程执行。
  6. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    响应重跑指令,基于所述重跑指令对应的预设表征机构的信息,执行所述重跑指令对应的任务。
  7. 如权利要求1所述的方法,其特征在于,基于所述目标起始运行时间,运行对应任务,包括:
    针对任一任务,若所述任务对应有依赖信息,则在所述任务的目标起始运行时间达到后,基于预设的对应关系确定所述任务对应的目标依赖规则;其中所述对应关系包括预设的依赖信息与依赖规则的关联;
    针对任一预设表征机构的信息,根据所述目标依赖规则,确定是否基于所述预设表征机构的信息执行完成对应的目标任务,其中,所述对应的目标任务为执行所述任务所依赖的任务;
    在确定基于所述预设表征机构的信息执行完成对应的目标任务后,根据所述任务的计算信息,对所述预设表征机构的信息对应的待处理数据进行处理。
  8. 一种批量任务处理装置,其特征在于,包括:
    任务分配单元,用于基于批量任务中各任务在第一时段内的第一历史平均运行时长,以及预设队列的数量,将所述任务分配至各个所述预设队列中;
    运行处理单元,用于针对任一预设队列,根据所述预设队列中各任务在第一时段内的历史平均起始运行时间以及第一历史平均运行时长,确定所述预设队列中各任务的目标起始运行时间,并基于所述目标起始运行时间,运行对应任务。
  9. 一种计算设备,其特征在于,包括至少一个处理器以及至少一个存储器,其中,所述存储器存储有计算机程序,当所述程序被所述处理器执行时,使得所述处理器执行权利要求1至7任一所述的方法。
  10. 一种计算机可读存储介质,其特征在于,其存储有可由计算设备执行的计算机程序,当所述程序在所述计算设备上运行时,使得所述计算设备执行权利要求1至7任一所述的方法。