CN112199201B - Delayed task processing method, device and equipment - Google Patents


Info

Publication number
CN112199201B
Authority
CN
China
Prior art keywords
task, delay, tasks, time, executed
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202011426478.2A
Other languages
Chinese (zh)
Other versions
CN112199201A (en)
Inventor
曾熙
刘强
Current Assignee (the listed assignee may be inaccurate)
Shenzhen Fangduoduo Network Technologies Co ltd
Original Assignee
Shenzhen Fangduoduo Network Technologies Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Fangduoduo Network Technologies Co ltd
Priority to CN202011426478.2A
Publication of CN112199201A
Application granted
Publication of CN112199201B
Legal status: Active

Classifications

    • G06F9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F9/485: Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5038: Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/546: Message passing systems or structures, e.g. queues
    • G06F2209/548: Queue (indexing scheme relating to G06F9/54)


Abstract

Embodiments of the invention relate to the technical field of task processing and disclose a method, apparatus, and device for processing delayed tasks. The method comprises: setting up a task loading thread and a task execution thread; inserting delayed tasks into a database that stores each task's content and execution time; in the task loading thread, loading the N delayed tasks due within a future window t1 into a memory ordered queue, and updating the next execution time of each loaded delayed task to a second execution time, where second execution time = execution time + t2 and t2 > t1; and, in the task execution thread, fetching the delayed tasks in the order they appear in the memory ordered queue, executing each delayed task at its execution time, and deleting each delayed task from the database after it executes successfully. In this way, embodiments of the invention reduce memory consumption and facilitate horizontal scaling.

Description

Delayed task processing method, device and equipment
Technical Field
Embodiments of the invention relate to the technical field of task processing, and in particular to a method, apparatus, and device for processing delayed tasks.
Background
At present, electronic devices such as computers and mobile terminals must process large numbers of tasks, both real-time tasks and delayed tasks. A delayed task arises when business logic must run only after a delay, for example closing a live-stream room on timeout, automatically cancelling an unpaid order on timeout, or re-sending an SMS on timeout; the task set up for such logic is a delayed task.
In the prior art, delayed tasks are usually implemented with a Redis sorted set or a local timing-wheel algorithm. Both approaches consume a large amount of memory when there are many tasks.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention provide a delayed-task processing method, apparatus, device, and computer-readable storage medium that address the prior art's high memory consumption.
According to an aspect of an embodiment of the present invention, there is provided a method for processing a delayed task, the method including:
setting a task loading thread and a task execution thread;
inserting delayed tasks into a database, wherein the database stores the task content and execution time of each delayed task;
in the task loading thread, loading N delayed tasks due within a future window t1 into a memory ordered queue, and updating the next execution time of each delayed task loaded into the memory ordered queue to a second execution time, wherein second execution time = execution time + t2, and t2 > t1; wherein the number of delayed tasks loaded each time is greater than or equal to the number of all delayed tasks due within the future window t1;
in the task execution thread, fetching the delayed tasks in the order they appear in the memory ordered queue, executing each delayed task at its execution time, and deleting each delayed task from the database after it executes successfully.
Loading the N delayed tasks due within the future window t1 into the memory ordered queue includes: if the database holds N or more delayed tasks due within t1, loading N of them into the memory ordered queue;
and if the database holds fewer than N delayed tasks due within t1, loading all of them into the memory ordered queue.
After the next execution time of each delayed task loaded into the memory ordered queue is updated to the second execution time, the method further includes:
determining whether the number of delayed tasks loaded this time equals N;
if it equals N, returning to the step of loading N delayed tasks due within the future window t1 into the memory ordered queue;
if not, determining whether the database still holds delayed tasks awaiting execution;
if delayed tasks awaiting execution remain in the database, returning to the step of loading N delayed tasks due within t1 into the memory ordered queue;
if none remain, sleeping for a first preset time, the first preset time being t1, and after waking determining whether to end the task loading thread;
if the task loading thread is to end, ending and exiting it;
and if not, returning to the step of inserting delayed tasks into the database.
Executing the delayed task at its execution time includes:
every interval t3, determining whether the execution time of the task at the head of the memory ordered queue equals the current time;
and if so, executing that delayed task.
Executing the delayed task at its execution time further includes:
if the execution time of the head task of the memory ordered queue is greater than the current time, sleeping for a second preset time, the second preset time being 1 s, and after waking determining whether to end the task execution thread;
if the task execution thread is to end, ending and exiting it;
and if not, returning to the step of fetching the delayed tasks in the order they appear in the memory ordered queue.
The method further comprises:
in the task execution thread, if a delayed task fails to execute, leaving it undeleted in the database;
in the task loading thread, when the updated next execution time of the failed delayed task falls within the future window t1, loading that task again.
The task loading thread comprises a plurality of threads deployed in a distributed manner;
in each task loading thread, before the N delayed tasks due within the future window t1 are loaded into the memory ordered queue, the method further includes:
acquiring a distributed lock;
if the distributed lock is acquired, executing the step of loading the N delayed tasks due within t1 into the memory ordered queue; after the delayed tasks due within t1 are loaded, the method further includes: releasing the distributed lock;
and if the distributed lock is not acquired, returning to the acquisition step after a time t4.
An embodiment of the invention also provides a delayed-task processing apparatus, comprising:
a setting module for setting a task loading thread and a task execution thread;
an inserting module for inserting delayed tasks into a database that stores the task content and execution time of each delayed task;
a loading module for loading, in the task loading thread, the N delayed tasks due within a future window t1 into a memory ordered queue, and updating the next execution time of each loaded delayed task to a second execution time, where second execution time = execution time + t2 and t2 > t1, the number of tasks loaded each time being greater than or equal to the number of all delayed tasks due within t1;
and an execution module for fetching the delayed tasks in the order they appear in the memory ordered queue in the task execution thread, executing each delayed task at its execution time, and deleting each from the database after it executes successfully.
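The four modules can be sketched as a single object. The following is a minimal Python sketch, not the patent's implementation: the class, method, and field names are illustrative, and the loader/executor thread targets are stubbed out.

```python
import heapq
import threading

class DelayedTaskApparatus:
    """The four modules above as one object (names are illustrative).
    A real apparatus would wire these into the two running threads."""

    def __init__(self, database):
        self.database = database          # list of (execute_time, content) rows
        self.queue = []                   # memory ordered queue, kept as a heap

    def setting_module(self):
        # Create the two independent threads (thread bodies omitted in this sketch).
        self.loader = threading.Thread(name="task-loading-thread", target=lambda: None)
        self.executor = threading.Thread(name="task-execution-thread", target=lambda: None)

    def inserting_module(self, execute_time, content):
        # Store task content and execution time in the "database".
        self.database.append((execute_time, content))

    def loading_module(self, now, t1, n, t2):
        # Take at most n tasks due within t1 and push them onto the ordered queue,
        # postponing each task's next execution time by t2 (second execution time).
        due = sorted(r for r in self.database if r[0] <= now + t1)[:n]
        for exec_time, content in due:
            heapq.heappush(self.queue, (exec_time, content))
            self.database.remove((exec_time, content))
            self.database.append((exec_time + t2, content))
        return len(due)

    def execution_module(self, now):
        # Execute every head task whose execution time equals the current time.
        done = []
        while self.queue and self.queue[0][0] == now:
            done.append(heapq.heappop(self.queue)[1])
            # on success the task would also be deleted from the database
        return done
```

In use, the loading module is called from the loader thread and the execution module from the executor thread; here they are shown as plain methods so the data flow is visible.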
An embodiment of the present invention further provides an electronic device, including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation of the delayed task processing method.
The embodiment of the present invention further provides a computer-readable storage medium, where at least one executable instruction is stored in the storage medium, and when the executable instruction runs on an electronic device, the electronic device is enabled to execute the operation of the above-mentioned delay task processing method.
In embodiments of the invention, loading and execution of delayed tasks are handled by separate threads. When preloading tasks, the task loading thread places only the N delayed tasks due within the future window t1 into the memory ordered queue rather than loading every task, which reduces memory consumption. After loading, the next execution time of each loaded task is updated to its current execution time + t2; that is, the next attempt is pushed out to a later moment, so a task whose current execution fails is not abandoned but is executed again when its next execution time arrives.
In addition, splitting loading and execution into two threads allows the task loading threads to be deployed independently in a distributed manner, which facilitates horizontal scaling.
The foregoing is only an overview of the technical solutions of the embodiments of the invention. To make those solutions clearer, and to make the above and other objects, features, and advantages of the embodiments more readily understandable, a detailed description follows.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating a method for processing a delayed task according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram illustrating a delayed task processing apparatus according to an embodiment of the present invention;
fig. 3 shows a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Fig. 1 is a flowchart illustrating a delayed task processing method according to an embodiment of the present invention. The method is performed by a device that needs to process delayed tasks, such as a server, computer, mobile phone, or tablet. As shown in fig. 1, the method comprises the following steps:
step 110: setting a task loading thread and a task execution thread;
in the step, two processes are respectively set for the loading and the execution of the delay task: the task loading thread and the task execution thread are independent, so that the task loading and the task execution are not affected by each other and can be performed independently. The delay tasks can include tasks of closing a live broadcast room overtime, automatically canceling orders overtime, sending short messages overtime and the like, the tasks are generally provided with overtime time, and if no input is obtained after the overtime time, the delay tasks are executed.
Step 120: inserting a delay task into a database, wherein the database stores task content and execution time of the delay task;
This step may insert every delayed task that needs to be performed into the database. Specifically, a new table entity may be created in the database to store the delayed tasks. The table stores two main pieces of information: the task content and the execution time of each delayed task.
The task content describes what the task does. For example, the content of the close-live-room-on-timeout task is closing the live room, the content of the auto-cancel-order-on-timeout task is cancelling the order, and the content of the send-SMS-on-timeout task is sending the SMS. The task content must also carry the event or object identifier the task applies to, such as the live-room number, the order number, or the SMS recipient's number.
The execution time reflects the specific timeout, also called the delay time. For example, closing a live room times out after 5 minutes: if the host has not entered the room within 5 minutes, the room is closed. Automatically cancelling an order times out after 15 minutes: if the order is unpaid within 15 minutes, it is cancelled. Re-sending an SMS times out after 1 minute: if the verification code is not entered within 1 minute, the code is sent again; and so on. The stored execution time is the task start time plus the delay time; for example, if the task starts at 10:18:00 and the delay time is 15 minutes, the execution time is 10:33:00.
The table entity may be built as follows:
TABLE 1
Serial number | Task content       | Execution time
1             | Cancel order 0001# | 10:33:00
2             | Cancel order 0002# | 10:35:15
3             | Cancel order 0003# | 10:36:12
…             | …                  | …
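The table entity can be sketched in code. A minimal sqlite3 version follows; the table and column names (delay_task, task_content, execute_time) are illustrative, not from the patent, and times are stored as seconds since midnight for simplicity.

```python
import sqlite3

# In-memory database standing in for the delayed-task store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE delay_task (
        id           INTEGER PRIMARY KEY AUTOINCREMENT,
        task_content TEXT    NOT NULL,   -- what to do, e.g. "cancel order 0001#"
        execute_time INTEGER NOT NULL    -- absolute time: task start + delay
    )
""")

# The three sample rows from Table 1, with times as seconds since midnight.
rows = [
    ("Cancel order 0001#", 10 * 3600 + 33 * 60),
    ("Cancel order 0002#", 10 * 3600 + 35 * 60 + 15),
    ("Cancel order 0003#", 10 * 3600 + 36 * 60 + 12),
]
conn.executemany(
    "INSERT INTO delay_task (task_content, execute_time) VALUES (?, ?)", rows
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM delay_task").fetchone()[0])
```

A production system would use a persistent RDBMS and index execute_time, since every load queries on that column.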
Step 130: in the task loading thread, loading N delayed tasks due within a future window t1 into a memory ordered queue, and updating the next execution time of each delayed task loaded into the memory ordered queue to a second execution time, wherein second execution time = execution time + t2, and t2 > t1;
This step belongs to the task loading thread. The number of delayed tasks loaded each time is greater than or equal to the number of all delayed tasks due within the future window t1.
In the task loading thread, the delayed tasks due within a certain future period must be loaded into the memory ordered queue. N may be set according to memory capacity: for example, if holding 100 delayed tasks in memory does not affect the processing of other tasks there, N may be set to 100. N may also be set according to the future window t1: if A is the maximum number of delayed tasks ever due within t1 per load, setting N = A guarantees that each load covers every delayed task due within t1, so no task fails to load and execution proceeds smoothly. N is a positive integer.
Conversely, t1 may be set according to N. For example, with N = 100, t1 can be set to the longest time-to-execution among the next 100 delayed tasks. (The execution time stored by the program is an absolute point, e.g. 17:50:00; the duration in question is measured from the current time, e.g. 17:49:55, giving 5 s.) If that longest duration is 5 s, t1 is set to 5 s; each load then again covers every delayed task due within t1, so none fails to load. t1 may also be chosen from experience: if the delayed tasks due in the next 5 s fit in memory, set t1 = 5 s.
In addition, delayed tasks may be loaded at a fixed interval, and that interval may equal the future window t1.
The number of delayed tasks stored in the database is not necessarily an integer multiple of N, so the number of delayed tasks brought in by the final load is not necessarily N. Accordingly, loading the N delayed tasks due within the future window t1 into the memory ordered queue includes:
Step a1: if the database holds N or more delayed tasks due within t1, loading N of them into the memory ordered queue;
For example, suppose the database stores 480 delayed tasks in total, N = 100, and t1 = 5 s. On each of the first through fourth loads, exactly 100 delayed tasks are due within the next 5 s, so each of those loads brings in 100 delayed tasks, i.e. exactly N.
Step a2: if the database holds fewer than N delayed tasks due within t1, loading all of them into the memory ordered queue.
On the fifth load only 80 delayed tasks remain in the database; if all 80 are due within the next 5 s, all 80 are loaded into the memory ordered queue. Here the number loaded is 80, which is less than N = 100.
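Steps a1 and a2 collapse into a single LIMIT query: if fewer than N tasks are due, the query simply returns all of them. A sketch with sqlite3 and a heap as the memory ordered queue (schema and column names are illustrative; the window is simplified to execute_time <= now + t1 so overdue tasks are also picked up):

```python
import heapq
import sqlite3

def load_batch(conn, now, t1, n):
    """Fetch at most n tasks due within the next t1, ordered by execution time.
    If fewer than n are due, all of them are returned (steps a1/a2 in one query)."""
    return conn.execute(
        "SELECT id, task_content, execute_time FROM delay_task "
        "WHERE execute_time <= ? ORDER BY execute_time LIMIT ?",
        (now + t1, n),
    ).fetchall()

# Demo with an in-memory store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE delay_task "
             "(id INTEGER PRIMARY KEY, task_content TEXT, execute_time INTEGER)")
conn.executemany("INSERT INTO delay_task VALUES (?,?,?)",
                 [(1, "cancel order 0001#", 103),
                  (2, "cancel order 0002#", 104),
                  (3, "send sms 0003#", 200)])  # task 3 lies beyond the window

queue = []  # the memory ordered queue, kept as a heap keyed on execute_time
for task_id, content, exec_time in load_batch(conn, now=100, t1=5, n=100):
    heapq.heappush(queue, (exec_time, task_id, content))
print(len(queue))  # only the two tasks due within t1 = 5 are loaded
```

In the full method the same pass would also rewrite each loaded row's execute_time to execute_time + t2, which is omitted here for brevity.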
In some embodiments, if the last load brought in fewer than N delayed tasks, that load has exhausted every delayed task in the database, and the loader may sleep until new delayed tasks are inserted. Therefore, after the next execution time of the delayed tasks loaded into the memory ordered queue is updated to the second execution time, the method further includes the following steps:
Step b1: determining whether the number of delayed tasks loaded this time equals N;
Step b2: if the number of delayed tasks loaded this time equals N, returning to the step of loading N delayed tasks due within the future window t1 into the memory ordered queue;
If N tasks were just loaded, delayed tasks awaiting execution may remain in the database, so loading must continue. (The total number of delayed tasks could be an exact multiple of N, in which case the next load finds nothing to load; executing this step anyway ensures that no delayed task in the database is missed.)
Step b3: if not, determining whether the database still holds delayed tasks awaiting execution;
If the number of delayed tasks brought in by a load is not N (i.e. is less than N), there are two possibilities:
1. fewer than N delayed tasks were due within the future window t1, but the database still holds unloaded delayed tasks whose execution times lie beyond t1;
2. fewer than N delayed tasks were due within t1, and no unloaded delayed tasks remain in the database.
In case 1 the determination is that delayed tasks awaiting execution remain in the database; in case 2, that none remain.
Step b4: if delayed tasks awaiting execution remain in the database, returning to the step of loading N delayed tasks due within t1 into the memory ordered queue, since those tasks still must be loaded;
Step b5: if no delayed tasks awaiting execution remain in the database, sleeping for a first preset time, the first preset time being t1, and after waking determining whether to end the task loading thread. A load of fewer than N tasks combined with an empty backlog means every delayed task in the database has been loaded, so the task loading thread may sleep.
Step b6: if the task loading thread is to end, ending and exiting it. The thread must end in situations such as a power failure or a server upgrade.
Step b7: if the task loading thread is not to end, returning to the step of inserting delayed tasks into the database. Absent such situations the thread continues: new delayed tasks keep being inserted into the database, and the loading loop repeats.
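Steps b1 to b7 reduce to a small decision function. A minimal sketch follows; the function name and return strings are illustrative, and the shutdown check of b6/b7 is abbreviated into a single string.

```python
def loader_step(due_in_window, db_has_more, n):
    """One iteration of the b1-b7 decision flow.
    due_in_window: number of tasks whose execution time falls within the next t1.
    db_has_more:   whether any unloaded tasks remain in the database beyond t1.
    Returns the loader thread's next action."""
    loaded = min(due_in_window, n)
    if loaded == n:
        return "load again"                        # b2: full batch, keep loading
    if db_has_more:
        return "load again"                        # b4: partial batch, more queued
    return "sleep t1, then re-check shutdown"      # b5-b7: store drained

print(loader_step(100, True, 100))   # full batch
print(loader_step(80, True, 100))    # partial batch, later tasks remain
print(loader_step(80, False, 100))   # store drained
```

The real loader would loop on this decision, performing the actual database query and queue insertion between iterations.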
The next execution time of a delayed task is the time at which it will be retried if the current execution does not succeed. Second execution time = execution time + t2, where the execution time is the task's execution time as of this load. t2 must be greater than t1 so that a task that fails after being loaded is guaranteed to be picked up again by a subsequent load. For example, with t1 = 5 s and t2 = 15 s, the first load places 100 delayed tasks due within the next 5 s into the memory ordered queue and pushes each task's next execution time out by 15 s. If 3 of the 100 fail, those 3 remain in the database undeleted. The second and third loads each take the 100 tasks due within 5 s of their respective current times. Because the 3 failed tasks' execution times were pushed out by 15 s, by the fourth load they fall within 5 s of the current time and are loaded again.
If t2 were less than or equal to t1, a failed task could fall between load windows and never be reloaded. For example, take a task whose execution time is 10 min 31 s, with t1 = 5 s and t2 = 3 s. At 10 min 30 s the loader takes the tasks due in the next 5 s, loads this task, and moves its next execution time to 10 min 34 s. If the execution fails and the next load runs at 10 min 35 s, covering tasks due in the following 5 s, the task's time of 10 min 34 s falls outside that window, so it is never loaded and cannot execute.
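The t2 > t1 requirement can be checked with a small simulation. In the sketch below, the forward-looking window (L, L + t1] per load and the assumption that every execution fails are modeling choices, not the patent's literal algorithm.

```python
def reload_times(exec_time, t1, t2, load_times):
    """Return the load instants at which a task is (re)loaded, given that each
    load at time L takes tasks due in (L, L + t1] and that every load pushes
    the task's next execution time out by t2 (the task always fails here)."""
    picked = []
    t = exec_time
    for load in load_times:
        if load < t <= load + t1:
            picked.append(load)
            t = t + t2   # failed execution: next attempt pushed out by t2
    return picked

# The counter-example from the text: execution at 10 min 31 s, t1 = 5 s,
# t2 = 3 s, loads every 5 s (times in seconds).
base = 10 * 60
print(reload_times(base + 31, 5, 3, [base + 30, base + 35, base + 40]))
# With t2 = 3 <= t1 = 5 the pushed-out time 10 min 34 s falls between windows,
# so the task is picked up once and never retried.
```

Re-running with t2 = 15 shows the task reloaded at a later window, which is exactly the guarantee t2 > t1 provides.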
Because embodiments of the invention split delayed-task loading and execution into two independent threads, distributed deployment is straightforward, chiefly for the task loading thread. In some embodiments, the task loading thread comprises a plurality of distributedly deployed threads; in each task loading thread, before the N delayed tasks due within the future window t1 are loaded into the memory ordered queue, the method further includes the following steps:
step c 1: acquiring a distributed lock;
to prevent interference between multiple threads in a distributed system, a distributed coordination technique may be employed to schedule the threads. Distributed locks may allow a method to be executed by only one thread of one machine at a time in a distributed system environment. That is, although there are multiple task loading threads, only the task loading thread that acquires the distributed lock can load the task at the same time. For example, the setnx command and the key are unique identifiers of the locks. When a thread executes setnx to return to 1, the key originally does not exist, and the thread successfully obtains the lock; when a thread executing setnx returns 0, it indicates that the key already exists, and the thread fails to rob the lock.
Step c2: if the distributed lock is successfully acquired, executing the step of loading the N delayed tasks to be executed within the future time t1 into the memory ordered queue; after the N delayed tasks to be executed within the future time t1 are loaded into the memory ordered queue, the method further includes: releasing the distributed lock;
If a task loading thread successfully acquires the distributed lock, it may proceed with loading. When the thread holding the lock finishes its work, it must release the lock so that other threads can enter; for example, a del command may be executed to release it. After the lock is released, other threads may again execute setnx to acquire it.
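The setnx/del pattern described above can be sketched as follows. The in-memory `FakeRedis` class and the key name `task_loader_lock` are illustrative stand-ins for a real Redis deployment; a production version would also set an expiry on the key so that a crashed lock holder cannot deadlock the other loaders.

```python
class FakeRedis:
    """In-memory stand-in for the two Redis commands the text mentions."""
    def __init__(self):
        self.store = {}

    def setnx(self, key, value):
        # SET if Not eXists: returns 1 on success (lock acquired), 0 otherwise.
        if key in self.store:
            return 0
        self.store[key] = value
        return 1

    def delete(self, key):
        # DEL: releases the lock so another thread can acquire it.
        self.store.pop(key, None)

r = FakeRedis()
assert r.setnx("task_loader_lock", "thread-1") == 1   # first thread wins
assert r.setnx("task_loader_lock", "thread-2") == 0   # second thread fails
r.delete("task_loader_lock")                          # holder releases
assert r.setnx("task_loader_lock", "thread-2") == 1   # now it succeeds
```

The thread whose setnx returns 0 corresponds to step c3 below: it waits t4 and retries.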
Step c3: if the distributed lock is not acquired, returning to the step of acquiring the distributed lock after time t4.
If a task loading thread fails to acquire the distributed lock, another task loading thread currently holds it; the thread waits a set time t4 and then attempts to acquire the lock again.
Step 140: in the task execution thread, sequentially obtaining the delayed tasks in their order in the memory ordered queue, executing each delayed task according to its execution time, and deleting the delayed task from the database after it executes successfully.
Delayed tasks are executed asynchronously, so processing is fast.
In some embodiments, executing the delayed task according to its execution time includes:
Step d1: at intervals of t3, judging whether the execution time of the first task in the memory ordered queue equals the current time;
Here, t3 may be the minimum unit of time, e.g., 1 s, so that a task's execution time is not missed.
Step d2: if so, executing the delayed task.
When the execution time of the first task in the memory ordered queue equals the current time, the delayed task is due and can be executed immediately. If it does not equal the current time, it is generally greater than the current time, meaning the delayed task is not yet due and cannot be executed.
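The check on the head of the memory ordered queue can be sketched with a heap. This is an illustrative sketch, not the patent's implementation: it uses <= rather than the strict equality of step d1 so that a task is not skipped if a polling tick is missed, and `pop_due_tasks` is a hypothetical helper name.

```python
import heapq

def pop_due_tasks(queue, now):
    """Pop and return every task at the head of the ordered queue whose
    execution time has arrived; stop at the first not-yet-due task."""
    executed = []
    while queue and queue[0][0] <= now:
        exec_time, name = heapq.heappop(queue)
        executed.append(name)          # a real system would run the task here
    return executed

queue = []
for exec_time, name in [(12, "b"), (10, "a"), (15, "c")]:
    heapq.heappush(queue, (exec_time, name))   # heap keeps the earliest first

assert pop_due_tasks(queue, now=9) == []            # nothing due yet
assert pop_due_tasks(queue, now=12) == ["a", "b"]   # head tasks, in order
assert pop_due_tasks(queue, now=20) == ["c"]
```

Because the queue is ordered by execution time, only the head ever needs to be inspected, which is what makes the t3-interval polling cheap.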
In some embodiments, when the execution time of the first task in the memory ordered queue is not equal to the current time, executing the delayed task according to its execution time further includes:
Step e1: if the execution time of the first task in the memory ordered queue is greater than the current time, sleeping for a second preset time of 1 s, and after the sleep ends, judging whether to end the task execution thread;
Since the delayed task is not yet due, the task execution thread may sleep for the second preset time, which may be the minimum unit of time, 1 s, so that the task's execution time is not missed.
Step e2: if it is judged that the task execution thread should end, ending and exiting the task execution thread;
The task execution thread needs to end when, for example, the server loses power or is upgraded.
Step e3: if it is judged that the task execution thread should not end, returning to the step of executing the delayed tasks sequentially obtained in their order in the memory ordered queue.
If no power failure, server upgrade, or similar event occurs, the task execution thread need not end; it continues executing tasks, looping through the execution flow.
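The poll/sleep/exit cycle of steps d1 through e3 can be sketched as one loop. The callback-based structure, the names, and the shutdown harness in the usage code are illustrative choices, not the patent's; a `threading.Event` stands in for the "should the thread end" judgment, and its `wait` doubles as the interruptible 1 s sleep.

```python
import threading

def execution_loop(peek_exec_time, run_head, stop_event, now_fn,
                   second_preset=1.0):
    """Sketch of steps d1-e3: run the queue head when due, otherwise
    sleep the second preset time; exit when shutdown is requested."""
    while True:
        head = peek_exec_time()
        if head is not None and head <= now_fn():
            run_head()                        # d2: head task is due
        else:
            stop_event.wait(second_preset)    # e1: sleep the second preset time
        if stop_event.is_set():               # e2: exit on shutdown request
            return
        # e3: otherwise loop back and re-check the queue head

queue = [(10, "a"), (11, "b")]
done = []
stop = threading.Event()

def peek():
    return queue[0][0] if queue else None

def run():
    done.append(queue.pop(0)[1])
    if not queue:          # test harness: request shutdown once drained
        stop.set()

execution_loop(peek, run, stop, now_fn=lambda: 11)
assert done == ["a", "b"]
```

Waiting on the event instead of a bare `time.sleep` means a shutdown request (power-down, upgrade) interrupts the sleep immediately rather than after up to a full second.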
After a delayed task executes successfully, it is deleted from the database. If it fails, the task in the database must not be deleted, because the task must wait for its next execution, whose time is the updated second execution time. Thus, in some embodiments, the method further comprises:
Step f1: in the task execution thread, if a delayed task fails to execute, not deleting it from the database;
Step f2: in the task loading thread, when the updated next execution time of the failed delayed task falls within the future time t1, loading the failed delayed task again.
For example, suppose a delayed task's execution time is 17:30:30, and its next execution time after loading into memory is updated to 17:30:45. If the task executes successfully, it is deleted from the database and is not executed again; if it fails and t1 is 5 s, then at 17:30:40 the delayed task will be loaded again.
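Steps f1-f2 can be sketched with a dict standing in for the database table; the task id `"tidy_up"` and the field names are hypothetical, and t2 = 15 s is inferred from the 17:30:30 → 17:30:45 update in the example.

```python
def handle_result(db, task_id, success):
    """f1: on success delete the task row; on failure leave it so the
    loader picks it up again at its updated next execution time."""
    if success:
        db.pop(task_id, None)

# Example from the text: task due at 17:30:30, t2 = 15 s, t1 = 5 s.
db = {"tidy_up": {"next_exec": 17*3600 + 30*60 + 30}}
db["tidy_up"]["next_exec"] += 15          # loader bumps it to 17:30:45

handle_result(db, "tidy_up", success=False)
assert "tidy_up" in db                    # failed: row survives for retry

# f2: at 17:30:40 the loader's 5-second lookahead reaches 17:30:45 again.
now = 17*3600 + 30*60 + 40
assert db["tidy_up"]["next_exec"] <= now + 5

handle_result(db, "tidy_up", success=True)
assert "tidy_up" not in db                # success: row deleted
```

The database row thus acts as the durable retry record: nothing extra needs to be persisted for a failed task beyond not deleting it.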
According to the embodiment of the invention, separate task loading and task execution threads handle the loading and execution of delayed tasks. When tasks are preloaded in the task loading thread, only the N delayed tasks due within the future time t1 are loaded into the memory ordered queue, rather than all delayed tasks, reducing memory consumption. After a delayed task is loaded, its next execution time is updated to the current execution time + t2, i.e., pushed back, so that a failed execution does not strand the task: it will be executed again when the next execution time arrives. In addition, splitting loading and execution into two threads lets the task loading thread be deployed distributively on its own, facilitating horizontal scaling.
Fig. 2 is a schematic structural diagram illustrating a delayed task processing apparatus according to an embodiment of the present invention. As shown in fig. 2, the apparatus 300 includes:
a setting module 310, configured to set a task loading thread and a task executing thread;
an inserting module 320, configured to insert the delay task into a database, where the database stores task content and execution time of the delay task;
a loading module 330, configured to load, in the task loading thread, N delayed tasks that need to be executed within a future time t1 into a memory ordered queue, and update the next execution time of the delayed tasks loaded into the memory ordered queue to a second execution time, where the second execution time = execution time + t2, and t2 > t1; wherein the number of delayed tasks loaded each time is greater than or equal to the number of all delayed tasks required to be executed within the future time t1;
the execution module 340 is configured to, in the task execution thread, sequentially obtain the delay tasks according to the arrangement order of the delay tasks in the memory ordered queue, execute the delay tasks according to the execution time of the delay tasks, and delete the delay tasks in the database after the delay tasks are successfully executed.
In an optional manner, the loading module 330 is further configured to:
if the number of the delay tasks needing to be executed in the future time t1 in the database is greater than or equal to N, loading the N delay tasks needing to be executed in the future time t1 into a memory ordered queue;
and if the number of the delayed tasks needing to be executed in the future time t1 in the database is less than N, loading all the delayed tasks needing to be executed in the future time t1 into the memory ordered queue.
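The loading module's batch selection can be sketched as follows. The list-of-dicts `db_tasks` and the field names are illustrative stand-ins for the database query; a real system would issue an ordered, limited SELECT instead.

```python
def load_batch(db_tasks, now, t1, t2, n):
    """Select up to n tasks due within (now, now + t1], ordered by
    execution time, and push each one's next execution time back by t2
    so a crashed execution does not strand the task (sketch of the
    loading module; requires t2 > t1)."""
    due = sorted([t for t in db_tasks if now < t["exec_time"] <= now + t1],
                 key=lambda t: t["exec_time"])
    batch = due[:n]                                  # at most N per load
    for task in batch:
        task["next_exec"] = task["exec_time"] + t2   # second execution time
    return batch

tasks = [{"id": 1, "exec_time": 103},
         {"id": 2, "exec_time": 101},
         {"id": 3, "exec_time": 120}]                # 120 is outside the window
batch = load_batch(tasks, now=100, t1=5, t2=6, n=2)
assert [t["id"] for t in batch] == [2, 1]            # ordered by execution time
assert batch[0]["next_exec"] == 107                  # 101 + t2
```

When fewer than n tasks are due, the whole set is returned, matching the "less than N, load all" branch above.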
In an optional manner, after the next execution time of the delayed tasks loaded into the memory ordered queue is updated to the second execution time, the loading module 330 is further configured to:
judging whether the number of the delay tasks loaded at this time is N;
if yes, returning to the step of loading the N delay tasks to be executed in the future time t1 into the memory ordered queue;
if not, sleeping for a first preset time, wherein the first preset time is t1, and judging whether to finish the task loading thread after the sleeping is finished;
if the task loading thread is judged to be finished, finishing and exiting the task loading thread;
and if the task loading thread is judged not to be finished, returning to the step of inserting the delay task into the database.
In an optional manner, the execution module 340 is further configured to:
at intervals of t3, judging whether the execution time of the first task of the memory ordered queue is equal to the current time;
and if so, executing the delay task.
In an optional manner, the execution module 340 is further configured to:
if the execution time of the first task of the memory ordered queue is judged to be greater than the current time, sleeping for a second preset time, wherein the second preset time is 1s, and judging whether to finish the task execution thread after the sleeping is finished;
if the task execution thread is judged to be ended, ending and quitting the task execution thread;
and if the task execution thread is judged not to be finished, returning to the step of executing the delayed tasks which are sequentially obtained according to the arrangement sequence of the delayed tasks in the memory ordered queue.
In an optional manner, the execution module 340 is further configured to:
in the task execution thread, if the delayed task fails to be executed, the delayed task in the database is not deleted;
in the task loading thread, when the updated next execution time of the delayed task with the failed execution is within the future time t1, the delayed task with the failed execution is loaded again.
In an optional manner, the task loading thread includes a plurality of threads deployed in a distributed manner;
the loading module 330 is further configured to:
in each task loading thread, before the N delay tasks to be executed within the future time t1 are loaded into the memory ordered queue, a distributed lock is acquired;
if the distributed lock is successfully acquired, the step of loading the N delay tasks to be executed in the future time t1 into the memory ordered queue is executed; after the N delay tasks needing to be executed in the future time t1 are loaded into the memory ordered queue, releasing the distributed lock;
if the distributed lock is not acquired, returning to the step of acquiring the distributed lock after time t4.
According to the embodiment of the invention, separate task loading and task execution threads handle the loading and execution of delayed tasks. When tasks are preloaded in the task loading thread, only the N delayed tasks due within the future time t1 are loaded into the memory ordered queue, rather than all delayed tasks, reducing memory consumption. After a delayed task is loaded, its next execution time is updated to the current execution time + t2, i.e., pushed back, so that a failed execution does not strand the task: it will be executed again when the next execution time arrives. In addition, splitting loading and execution into two threads lets the task loading thread be deployed distributively on its own, facilitating horizontal scaling.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 3, the electronic device may include: a processor (processor)402, a Communications Interface 404, a memory 406, and a Communications bus 408.
Wherein: the processor 402, communication interface 404, and memory 406 communicate with each other via a communication bus 408. A communication interface 404 for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically perform the relevant steps in the above-described embodiment of the method for processing the delayed task.
In particular, program 410 may include program code comprising computer-executable instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 406 for storing a program 410. Memory 406 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
An embodiment of the present invention provides a computer-readable storage medium, where the storage medium stores at least one executable instruction, and when the executable instruction is executed on an electronic device, the electronic device is enabled to execute a delay task processing method in any method embodiment described above.
The embodiment of the invention provides a delay task processing device, which is used for executing the delay task processing method.
Embodiments of the present invention provide a computer program, where the computer program can be called by a processor to enable an electronic device to execute a method for processing a delayed task in any of the above method embodiments.
Embodiments of the present invention provide a computer program product, which includes a computer program stored on a computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are run on a computer, the computer is caused to execute the method for processing a delayed task in any of the above-mentioned method embodiments.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (8)

1. A method for processing a delayed task, the method comprising:
setting a task loading thread and a task execution thread;
inserting a delay task into a database, wherein the database stores task content and execution time of the delay task;
in the task loading thread, loading N delayed tasks to be executed within a future time t1 into a memory ordered queue, and updating the next execution time of the delayed tasks loaded into the memory ordered queue to a second execution time, wherein the second execution time = execution time + t2, and t2 > t 1; wherein the number of the delayed tasks loaded each time is greater than or equal to the number of all the delayed tasks required to be executed in the future time t 1; t1 is the maximum execution time in the execution times of each batch of N delayed tasks;
in the task execution thread, sequentially acquiring the delay tasks according to the arrangement sequence of the delay tasks in the memory ordered queue, executing the delay tasks according to the execution time of the delay tasks, and deleting the delay tasks in the database after the delay tasks are successfully executed;
the loading N delay tasks to be executed within the future time t1 into the memory ordered queue includes: if the number of the delay tasks needing to be executed in the future time t1 in the database is equal to N, loading the N delay tasks needing to be executed in the future time t1 into a memory ordered queue;
if the number of the delay tasks needing to be executed in the future time t1 in the database is less than N, loading all the delay tasks needing to be executed in the future time t1 into a memory ordered queue;
after the next execution time of the delay task loaded into the memory ordered queue is updated to the second execution time, the method further includes:
judging whether the number of the delay tasks loaded at this time is N;
if the number of the delay tasks loaded at this time is N, returning to the step of loading the N delay tasks to be executed in the future time t1 into the memory ordered queue;
if not, judging whether the database has a delay task to be executed;
if the delay tasks to be executed still exist in the database, returning to execute the step of loading the N delay tasks to be executed in the future time t1 into the memory ordered queue;
if the to-be-executed delay task does not exist in the database, sleeping for a first preset time, wherein the first preset time is t1, and judging whether to finish the task loading thread or not after the sleeping is finished;
if the task loading thread is judged to be finished, finishing and exiting the task loading thread;
and if the task loading thread is judged not to be finished, returning to the step of inserting the delay task into the database.
2. The method of claim 1, wherein the executing the delayed task according to the execution time of the delayed task comprises:
at intervals of t3, judging whether the execution time of the first task of the memory ordered queue is equal to the current time;
and if so, executing the delay task.
3. The method of claim 1, wherein executing the delayed task according to the execution time of the delayed task further comprises:
if the execution time of the first task of the memory ordered queue is judged to be greater than the current time, sleeping for a second preset time, wherein the second preset time is 1s, and judging whether to finish the task execution thread after the sleeping is finished;
if the task execution thread is judged to be ended, ending and quitting the task execution thread;
and if the task execution thread is judged not to be finished, returning to the step of executing the delayed tasks which are sequentially obtained according to the arrangement sequence of the delayed tasks in the memory ordered queue.
4. The method of claim 1, further comprising:
in the task execution thread, if the delayed task fails to be executed, the delayed task in the database is not deleted;
in the task loading thread, when the updated next execution time of the delayed task with the failed execution is within the future time t1, the delayed task with the failed execution is loaded again.
5. The method of claim 1, wherein the task loading threads comprise a plurality of distributively deployed threads;
in each task loading thread, before the N delayed tasks that need to be executed within the future time t1 are loaded into the memory-ordered queue, the method further includes:
acquiring a distributed lock;
if the distributed lock is successfully acquired, the step of loading the N delay tasks to be executed in the future time t1 into the memory ordered queue is executed; after the N delayed tasks to be executed within the future time t1 are loaded into the memory-ordered queue, the method further includes: releasing the distributed lock;
if the distributed lock is not acquired, returning to the step of acquiring the distributed lock after time t 4.
6. A delayed task processing apparatus, characterized in that the apparatus comprises:
the setting module is used for setting a task loading thread and a task execution thread;
the inserting module is used for inserting the delay task into a database, and the database stores the task content and the execution time of the delay task;
a loading module, configured to load, in the task loading thread, N delay tasks that need to be executed within a future time t1 into a memory ordered queue, and update a next execution time of the delay tasks loaded into the memory ordered queue to a second execution time, where the second execution time = execution time + t2, and t2 > t 1; wherein the number of the delayed tasks loaded each time is greater than or equal to the number of all the delayed tasks required to be executed in the future time t 1; t1 is the maximum execution time in the execution times of each batch of N delayed tasks;
the execution module is used for sequentially acquiring the delay tasks according to the arrangement sequence of the delay tasks in the memory ordered queue in the task execution thread, executing the delay tasks according to the execution time of the delay tasks, and deleting the delay tasks in the database after the delay tasks are successfully executed;
the loading module is further configured to:
if the number of the delay tasks needing to be executed in the future time t1 in the database is greater than or equal to N, loading the N delay tasks needing to be executed in the future time t1 into a memory ordered queue;
if the number of the delay tasks needing to be executed in the future time t1 in the database is less than N, loading all the delay tasks needing to be executed in the future time t1 into a memory ordered queue;
after the loading module executes and updates the next execution time of the delay task loaded into the memory ordered queue to the second execution time, the loading module is further configured to:
judging whether the number of the delay tasks loaded at this time is N;
if yes, returning to the step of loading the N delay tasks to be executed in the future time t1 into the memory ordered queue;
if not, sleeping for a first preset time, wherein the first preset time is t1, and judging whether to finish the task loading thread after the sleeping is finished;
if the task loading thread is judged to be finished, finishing and exiting the task loading thread;
and if the task loading thread is judged not to be finished, returning to the step of inserting the delay task into the database.
7. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation of the delayed task processing method according to any one of claims 1-5.
8. A computer-readable storage medium having stored therein at least one executable instruction, which when executed on an electronic device, causes the electronic device to perform the operations of the delayed task processing method according to any one of claims 1 to 5.
CN202011426478.2A 2020-12-09 2020-12-09 Delayed task processing method, device and equipment Active CN112199201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011426478.2A CN112199201B (en) 2020-12-09 2020-12-09 Delayed task processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011426478.2A CN112199201B (en) 2020-12-09 2020-12-09 Delayed task processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN112199201A CN112199201A (en) 2021-01-08
CN112199201B (en) 2021-03-16

Family

ID=74033861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011426478.2A Active CN112199201B (en) 2020-12-09 2020-12-09 Delayed task processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN112199201B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325002A (en) * 2018-09-03 2019-02-12 北京京东金融科技控股有限公司 Text file processing method, device, system, electronic equipment, storage medium
CN110515709A (en) * 2019-07-25 2019-11-29 北京达佳互联信息技术有限公司 Task scheduling system, method, apparatus, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011076304A (en) * 2009-09-30 2011-04-14 Hitachi Ltd Device, method and program for allocating job
CN102629220A (en) * 2012-03-08 2012-08-08 北京神州数码思特奇信息技术股份有限公司 Dynamic task allocation and management method
CN109766194B (en) * 2018-11-29 2021-02-05 南瑞集团有限公司 Method and system for realizing low-coupling plan task component based on message

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325002A (en) * 2018-09-03 2019-02-12 北京京东金融科技控股有限公司 Text file processing method, device, system, electronic equipment, storage medium
CN110515709A (en) * 2019-07-25 2019-11-29 北京达佳互联信息技术有限公司 Task scheduling system, method, apparatus, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112199201A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN107450971B (en) Task processing method and device
CN107153643B (en) Data table connection method and device
CN111381972A (en) Distributed task scheduling method, device and system
CN109918187B (en) Task scheduling method, device, equipment and storage medium
CN112865992B (en) Method and device for switching master nodes in distributed master-slave system and computer equipment
CN111708586A (en) Application starting configuration item loading method and device, computer equipment and storage medium
CN110471774A (en) A kind of data processing method and device based on unified task schedule
CN112199201B (en) Delayed task processing method, device and equipment
CN109542922B (en) Processing method for real-time service data and related system
CN108536541B (en) Process engine object processing method and device
US9864771B2 (en) Method and server for synchronizing a plurality of clients accessing a database
CN111880910A (en) Data processing method and device, server and storage medium
US20230053933A1 (en) Techniques for improving resource utilization in a microservices architecture via priority queues
CN112217849B (en) Task scheduling method, system and computer equipment in SD-WAN system
CN111258728A (en) Task execution method and device, storage medium and electronic device
CN113535338A (en) Interaction method, system, storage medium and electronic device for data access
CN112395057A (en) Data processing method and device based on timing task and computer equipment
CN112541041A (en) Data processing method, device, server and storage medium
CN111159236A (en) Data processing method and device, electronic equipment and storage medium
CN113703874B (en) Data stream processing method, device, equipment and readable storage medium
CN118093214B (en) Handling task scheduling method and device, electronic equipment and storage medium
CN113238862B (en) Distributed task scheduling method and device
CN113032131B (en) Redis-based distributed timing scheduling system and method
CN111541623B (en) Data processing method and device
CN113806388A (en) Service processing method and device based on distributed lock

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant