CN107391243B - Thread task processing equipment, device and method - Google Patents


Info

Publication number
CN107391243B
CN107391243B (application CN201710522332.XA)
Authority
CN
China
Prior art keywords
task
thread pool
waiting queue
thread
basic
Prior art date
Legal status
Active
Application number
CN201710522332.XA
Other languages
Chinese (zh)
Other versions
CN107391243A
Inventor
唐琳
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN201710522332.XA
Publication of CN107391243A
Application granted
Publication of CN107391243B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a thread task processing device, apparatus and method. The thread task processing device comprises: a processor, by which a worker thread of a basic thread pool node acquires tasks from its corresponding waiting queue, where each task is added, according to its priority, to the waiting queue of the basic thread pool node in the corresponding task priority interval, different basic thread pool nodes execute tasks in parallel, and the worker thread executes the tasks acquired from the waiting queue; and a memory, which stores the worker threads and waiting queues of the basic thread pool nodes. The scheme of the invention can reduce the delay time of low-priority tasks and prevent them from starving.

Description

Thread task processing equipment, device and method
Technical Field
The invention relates to the technical field of the mobile internet, and in particular to a thread task processing device, apparatus and method.
Background
At present, a thread pool can be set up in related application program services. By pre-creating a certain number of worker threads and reusing them to execute tasks, the thread pool reduces the overhead of creating and destroying threads. A thread pool can also effectively limit the number of executing worker threads according to the system and the application scenario, preventing excessive consumption of system resources.
In a system with priority scheduling, each task is generally assigned a priority: tasks with high priority are executed by worker threads earlier, while tasks with low priority must wait until the higher-priority tasks have finished. As a result, under priority scheduling in the related art, low-priority tasks often fail to be executed at all. Some solutions prevent starvation by raising the priority of tasks that cannot be executed for a long time, but this approach has the drawback that every low-priority task still has to wait for a period before being executed, so in practical applications the delay time (latency) of the tasks may be high.
Disclosure of Invention
To solve the above technical problems, the present invention provides a thread task processing device, apparatus and method, which can reduce the delay time of low priority tasks and prevent the low priority tasks from starving.
According to an aspect of the present invention, there is provided a thread task processing apparatus including:
a processor, by which a worker thread of a basic thread pool node acquires tasks from its corresponding waiting queue, where each task is added, according to its priority, to the waiting queue of the basic thread pool node in the corresponding task priority interval, different basic thread pool nodes execute tasks in parallel, and the worker thread executes the tasks acquired from the waiting queue;
and a memory, which stores the worker threads and waiting queues of the basic thread pool nodes.
According to another aspect of the present invention, there is provided a computer apparatus comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the following method:
a worker thread of a basic thread pool node acquires a task from its corresponding waiting queue, where the task is added to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority, and different basic thread pool nodes execute tasks in parallel;
and the task acquired from the waiting queue is executed.
According to another aspect of the invention, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the following method:
a worker thread of a basic thread pool node acquires a task from its corresponding waiting queue, where the task is added to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority, and different basic thread pool nodes execute tasks in parallel;
and the task acquired from the waiting queue is executed.
According to another aspect of the present invention, there is provided a thread task processing apparatus including:
a task acquisition module, used by a worker thread of a basic thread pool node to acquire tasks from its corresponding waiting queue, where each task is added, according to its priority, to the waiting queue of the basic thread pool node in the corresponding task priority interval, and different basic thread pool nodes execute tasks in parallel;
and a task execution module, used to execute the tasks that the task acquisition module acquires from the waiting queue.
Optionally, the apparatus further comprises:
and a task application module, used to apply to the basic thread pool node in a higher task priority interval for a task from that node's waiting queue, when the task the task acquisition module's worker thread acquires from its corresponding waiting queue is empty.
Optionally, the apparatus further comprises:
a thread pool configuration module, used to create hierarchical thread pool nodes, which as non-leaf nodes manage task priority intervals, and basic thread pool nodes, which as leaf nodes manage waiting queues and worker threads, so as to form a binary tree structure; and to configure a task priority interval, worker threads and a waiting queue for each basic thread pool node, where the worker threads correspond to the waiting queue.
Optionally, the apparatus further comprises:
a task adding module, configured to have a hierarchical thread pool node find, in the binary tree structure and according to the task's priority, the basic thread pool node of the corresponding task priority interval, and to add the task to that node's waiting queue;
the task acquisition module then acquires the tasks that the task adding module has added to the waiting queue of the basic thread pool node.
Optionally, the apparatus further comprises:
and a configuration adjustment module, used to adjust the number of worker threads configured for a basic thread pool node and the size of its waiting queue.
According to another aspect of the present invention, there is provided a thread task processing method, including:
a worker thread of a basic thread pool node acquires a task from its corresponding waiting queue, where the task, after being created, is added to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority, and different basic thread pool nodes execute tasks in parallel;
and the task acquired from the waiting queue is executed.
Optionally, the method further includes:
when the task a worker thread of the basic thread pool node acquires from the corresponding waiting queue is empty, applying to the basic thread pool node in a higher task priority interval for a task from that node's waiting queue.
Optionally, before the worker thread of the basic thread pool node acquires the task from the corresponding waiting queue, the method further includes:
creating hierarchical thread pool nodes, which as non-leaf nodes manage task priority intervals, and basic thread pool nodes, which as leaf nodes manage waiting queues and worker threads, to form a binary tree structure;
and configuring a task priority interval, worker threads and a waiting queue for each basic thread pool node, where the worker threads correspond to the waiting queue.
Optionally, adding the task, after it is created, to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority includes:
the hierarchical thread pool nodes finding, in the binary tree structure and according to the task's priority, the basic thread pool node of the corresponding task priority interval;
and adding the task to the waiting queue of that basic thread pool node.
Optionally, the method further includes:
and adjusting the number of worker threads configured for the basic thread pool node and the size of its waiting queue.
It can be found that, in the technical solution of the embodiments of the present invention, a task, after being created, is added according to its priority to the waiting queue of the basic thread pool node in the corresponding task priority interval. Different basic thread pool nodes have independent waiting queues and worker threads and can execute tasks in parallel, and a worker thread of a basic thread pool node executes a task after acquiring it from the corresponding waiting queue. Thus even a low-priority task is placed in the waiting queue of its own thread pool node and executed by that node's worker threads, rather than waiting indefinitely in a single shared queue behind higher-priority tasks, which reduces task delay time and prevents low-priority tasks from starving.
Furthermore, in the embodiments of the present invention, when the task a worker thread of a basic thread pool node acquires from its corresponding waiting queue is empty, the node applies to the basic thread pool node in a higher task priority interval for a task from that node's waiting queue; that is, a low-priority worker thread can apply for tasks belonging to a higher priority interval, which increases both worker thread utilization and resource utilization.
Furthermore, the embodiments of the present invention create hierarchical thread pool nodes, which as non-leaf nodes manage task priority intervals, and basic thread pool nodes, which as leaf nodes manage waiting queues and worker threads, to form a binary tree structure. Following this structure, the hierarchical thread pool nodes can find the basic thread pool node of the corresponding task priority interval according to the task's priority and add the task to that node's waiting queue; that is, the tree structure improves the efficiency of locating the right basic thread pool.
Furthermore, the embodiments of the present invention can adjust the number of worker threads and the size of the waiting queue configured for a basic thread pool node, for example according to the service model and the system load, which makes the configuration more efficient and more generally applicable.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
FIG. 1 is a schematic block diagram of a thread task processing device according to one embodiment of the present invention;
FIG. 2 is a schematic block diagram of a computer device in accordance with one embodiment of the present invention;
FIG. 3 is a schematic block diagram of a thread task processing apparatus according to one embodiment of the present invention;
FIG. 4 is another schematic block diagram of a thread task processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart diagram of a method of processing thread tasks in accordance with one embodiment of the present invention;
FIG. 6 is another schematic flow chart diagram of a method for processing thread tasks in accordance with an embodiment of the present invention;
FIG. 7 is a schematic diagram of the application of a binary tree structure in a thread task processing method according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention provides a thread task processing device, a thread task processing device and a thread task processing method, which can reduce the delay time of low-priority tasks and prevent the low-priority tasks from starving.
The technical solutions of the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of a thread task processing apparatus according to an embodiment of the present invention.
Referring to fig. 1, in a thread task processing apparatus 10, there is included: a processor 11 and a memory 12.
The processor 11 is used to have a worker thread of a basic thread pool node acquire tasks from its corresponding waiting queue, where each task is added, according to its priority, to the waiting queue of the basic thread pool node in the corresponding task priority interval, different basic thread pool nodes execute tasks in parallel, and the worker thread executes the tasks acquired from the waiting queue;
and the memory 12 stores the worker threads and waiting queues of the basic thread pool nodes.
FIG. 2 is a schematic block diagram of a computer device according to one embodiment of the present invention.
Referring to fig. 2, in a computer apparatus 20, comprising: a processor 21 and a memory 22.
The memory 22 has executable code stored thereon which, when executed by the processor 21, causes the processor 21 to perform the following method:
a worker thread of a basic thread pool node acquires a task from its corresponding waiting queue, where the task is added to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority, and different basic thread pool nodes execute tasks in parallel;
and the task acquired from the waiting queue is executed.
Embodiments of the present invention also provide a non-transitory machine-readable storage medium having executable code stored thereon, which when executed by a processor of an electronic device, causes the processor to perform the following method:
a worker thread of a basic thread pool node acquires a task from its corresponding waiting queue, where the task is added to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority, and different basic thread pool nodes execute tasks in parallel;
and the task acquired from the waiting queue is executed.
It should be noted that the processor in the above apparatus may also be configured as an independent device, which may be referred to as a thread task processing apparatus; such an apparatus may comprise a plurality of sub-modules, whose structure is described in detail below with reference to fig. 3 to 4.
Fig. 3 is a schematic block diagram of a thread task processing apparatus according to an embodiment of the present invention.
Referring to fig. 3, a thread task processing apparatus 30 includes: a task obtaining module 31 and a task executing module 32.
The task acquisition module 31 is configured to have a worker thread of a basic thread pool node acquire a task from its corresponding waiting queue, where the task is added, according to its priority, to the waiting queue of the basic thread pool node in the corresponding task priority interval, and different basic thread pool nodes execute tasks in parallel.
Here, adding the task, after it is created, to the waiting queue of the basic thread pool node in the corresponding task priority interval according to its priority comprises: the hierarchical thread pool nodes finding, in the binary tree structure and according to the task's priority, the basic thread pool node of the corresponding task priority interval; and adding the task to the waiting queue of that basic thread pool node.
And the task execution module 32 is configured to execute the tasks that the task acquisition module 31 acquires from the waiting queue.
It can be seen from this embodiment that a task, after being created, is added according to its priority to the waiting queue of the basic thread pool node in the corresponding task priority interval. Different basic thread pool nodes have independent waiting queues and worker threads and can execute tasks in parallel, and a worker thread of a basic thread pool node executes a task after acquiring it from the corresponding waiting queue. Thus even a low-priority task is placed in its own node's waiting queue and executed by that node's worker threads, rather than waiting indefinitely in a single shared queue, which reduces task delay time and prevents low-priority tasks from starving.
Fig. 4 is another schematic block diagram of a thread task processing apparatus according to an embodiment of the present invention.
As shown in fig. 4, a thread task processing apparatus 40 includes: the task adding system comprises a task obtaining module 31, a task executing module 32, a task applying module 33, a thread pool configuration module 34, a task adding module 35 and a configuration adjusting module 36.
The functions of the task obtaining module 31 and the task executing module 32 can be seen in fig. 3.
The task application module 33 is configured, when the task that a worker thread of the basic thread pool node acquires through the task acquisition module from the corresponding waiting queue is empty, to apply to the basic thread pool node in a higher task priority interval for a task from that node's waiting queue.
The thread pool configuration module 34 is configured to create hierarchical thread pool nodes, which as non-leaf nodes manage task priority intervals, and basic thread pool nodes, which as leaf nodes manage waiting queues and worker threads, so as to form a binary tree structure; and to configure a task priority interval, worker threads and a waiting queue for each basic thread pool node, where the worker threads correspond to the waiting queue.
The task adding module 35 is configured to have a hierarchical thread pool node find, in the binary tree structure and according to the task's priority, the basic thread pool node of the corresponding task priority interval, and to add the task to that node's waiting queue; the task acquisition module 31 then acquires the tasks that the task adding module 35 has added to the waiting queue of the basic thread pool node.
And the configuration adjustment module 36 is configured to adjust the number of worker threads configured for a basic thread pool node and the size of its waiting queue.
It can be found that the embodiments of the present invention not only reduce the delay time of low-priority tasks and prevent them from starving; in addition, when the task a worker thread of a basic thread pool node acquires from its corresponding waiting queue is empty, the node can apply to the basic thread pool node in a higher task priority interval for a task from that node's waiting queue. That is, a low-priority worker thread can apply for higher-priority tasks, improving worker thread utilization and resource utilization. Moreover, the tree structure improves the efficiency of locating the right basic thread pool, and the size of the waiting queue and the number of worker threads of a basic thread pool node can be adjusted, making the configuration more efficient and more generally applicable.
The thread task processing device and apparatus of the present invention are described above in detail, and the thread task processing method corresponding to the present invention is described below.
FIG. 5 is a schematic flow chart diagram of a method for processing thread tasks according to one embodiment of the invention.
The method can be applied to a thread task processing apparatus, which may be located in a server or in a client device.
Referring to fig. 5, the method includes:
In step 501, a worker thread of a basic thread pool node acquires a task from its corresponding waiting queue, where the task, after being created, is added to the waiting queue of the basic thread pool node in the corresponding task priority interval according to the task's priority, and where different basic thread pool nodes execute tasks in parallel.
In this step, adding the task, after it is created, to the waiting queue of the basic thread pool node in the corresponding task priority interval according to its priority comprises: the hierarchical thread pool nodes finding, in the binary tree structure and according to the task's priority, the basic thread pool node of the corresponding task priority interval; and adding the task to that node's waiting queue.
In step 502, the task acquired from the waiting queue is executed.
In this step, after the worker thread acquires the task from the waiting queue, it executes the acquired task.
It can be seen from this embodiment that a task, after being created, is added according to its priority to the waiting queue of the basic thread pool node in the corresponding task priority interval. Different basic thread pool nodes have independent waiting queues and worker threads and can execute tasks in parallel, and a worker thread of a basic thread pool node executes a task after acquiring it from the corresponding waiting queue. Thus even a low-priority task is placed in its own node's waiting queue and executed by that node's worker threads, rather than waiting indefinitely in a single shared queue, which reduces task delay time and prevents low-priority tasks from starving.
FIG. 6 is another schematic flow chart diagram of a method for processing thread tasks in accordance with an embodiment of the present invention. Fig. 6 describes the solution according to an embodiment of the invention in more detail with respect to fig. 5.
The embodiment of the invention designs a multi-priority thread pool based on a binary tree structure, to overcome the high delay time of low-priority tasks and to prevent them from starving. The scheme introduces multiple priority waiting queues and manages the waiting queues of multiple worker-thread groups through a binary tree structure, where each waiting queue corresponds to one worker thread or one group of worker threads. The binary tree contains two types of nodes: HP (Hierarchical Pool) nodes and BP (Base Pool) nodes. An HP node serves as a non-leaf node and manages the priority interval of its subtree of the binary tree; a BP node serves as a leaf node, manages a waiting queue and a worker-thread group, and also maintains a pointer to the BP node of a higher priority interval.
In the scheme of the embodiment of the invention, for a newly submitted task, the BP node matching the task's priority is found through the binary tree; the waiting queue inside a BP node has no priority ordering, so the new task is simply appended to the end of the queue. After a worker thread of a BP node finishes a task, it extracts a new task from the waiting queue. If the BP node's waiting queue holds no waiting task, the node can apply to a higher-priority BP node for a task from that node's waiting queue. That is, a lower-priority worker thread that has exhausted the tasks of its own interval can execute tasks from a higher-priority interval. Through this sharing of worker threads among waiting queues, higher-priority tasks can be executed by the threads of lower-priority queues, which improves resource utilization while avoiding high delay times for low-priority tasks.
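The two node types described above can be sketched roughly as follows. This is an illustrative Python sketch, not the patent's implementation; the field names (`lo`, `hi`, `capacity`, `higher`) and the `find_bp` helper are assumptions made for illustration:

```python
from collections import deque

class BPNode:
    """Leaf (Base Pool) node: a priority interval [lo, hi), a bounded
    FIFO waiting queue, a worker-thread count, and a pointer to the
    BP node of the next-higher priority interval (used when applying
    for higher-priority tasks)."""
    def __init__(self, lo, hi, queue_size, n_workers):
        self.lo, self.hi = lo, hi
        self.queue = deque()
        self.capacity = queue_size
        self.n_workers = n_workers
        self.higher = None  # BP node of the next-higher priority interval

class HPNode:
    """Non-leaf (Hierarchical Pool) node: splits its subtree's priority
    range at `boundary`; smaller values mean higher priority and go left."""
    def __init__(self, boundary, left, right):
        self.boundary = boundary
        self.left, self.right = left, right

def find_bp(node, priority):
    """Descend from the root to the BP node whose interval contains
    `priority`."""
    while isinstance(node, HPNode):
        node = node.left if priority < node.boundary else node.right
    return node
```

A two-leaf tree with boundary 20, for instance, routes priority 5 to the left (higher-priority) BP node and priority 20 to the right one.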
Described below in conjunction with fig. 6, with reference to fig. 6, the method includes:
In step 601, hierarchical thread pool nodes (HP nodes) and basic thread pool nodes (BP nodes) are created to form a binary tree structure, and task priority intervals, worker threads and waiting queues are configured.
In this step, the model is initialized: HP nodes and BP nodes are created, each BP node is configured with a priority interval, and the size of the waiting queue and the number of worker threads of the corresponding interval are configured, thereby establishing the binary tree structure; the corresponding worker threads are then initialized. It should be noted that the priority intervals of the BP nodes and the number of worker threads of each BP node may be set or adjusted manually according to the service scenario.
With reference to the binary tree application diagram of fig. 7, the priority interval of BP1 may be set to less than 20; the interval of BP2 to greater than or equal to 20 and less than 50; the interval of BP3 to greater than or equal to 50 and less than 70; and the interval of BP4 to greater than or equal to 70. Here a smaller priority value means a higher priority, so the priority levels of BP1, BP2, BP3 and BP4 in fig. 7 decrease in that order. BP1, BP2, BP3 and BP4 are each configured with a number of worker threads and a waiting queue size; for example, BP1 is configured with one worker thread and a corresponding waiting queue. Nodes of adjacent priority intervals are then combined pairwise into a node of the layer above: BP1 and BP2 combine into HP2, BP3 and BP4 combine into HP3, and HP2 and HP3 combine into HP1. Each HP node, as a non-leaf node, manages the priority interval of its subtree; for example, HP1 uses priority 50 as its boundary point, with priorities less than 50 managed by HP2 and priorities greater than or equal to 50 managed by HP3.
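The Fig. 7 configuration described above can be sketched with nested `(boundary, left, right)` tuples. This is a hedged illustration; the interval bounds follow the text, and the `route` helper name is an assumption:

```python
# HP nodes as (boundary, left, right); BP nodes as string labels.
# Priorities below the boundary are higher priority and go left.
fig7 = (50,                      # HP1: boundary 50
        (20, "BP1", "BP2"),      # HP2: BP1 < 20, BP2 in [20, 50)
        (70, "BP3", "BP4"))      # HP3: BP3 in [50, 70), BP4 >= 70

def route(node, priority):
    """Descend from the top HP node to the matching BP leaf."""
    while isinstance(node, tuple):
        boundary, left, right = node
        node = left if priority < boundary else right
    return node

print(route(fig7, 60))   # prints BP3: 60 >= 50 goes right, 60 < 70 goes left
```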
In step 602, a task is created, and the task is added to the waiting queue of the basic thread pool node corresponding to the task priority interval according to the task priority.
In this step, for a newly created task, the corresponding BP node is found in the binary tree according to the task's priority. For example, for the task with priority 60 in fig. 7, HP1 routes it to HP3 because its priority is greater than or equal to 50, and HP3 routes it to BP3 because its priority is less than 70 (the priority interval of BP3 being greater than or equal to 50 and less than 70); the task is then added to the waiting queue of that BP node. If the waiting queue is full, an error message is returned.
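The enqueue with the full-queue error might look like the following sketch. The dictionary representation of a BP node, the `capacity` key, and the returned error string are all illustrative assumptions:

```python
from collections import deque

def submit(bp, task):
    """Append a new task to the end of a BP node's waiting queue; the
    queue keeps no internal priority ordering. Returns an error message
    instead of enqueuing when the queue is full."""
    if len(bp["queue"]) >= bp["capacity"]:
        return "error: wait queue full"
    bp["queue"].append(task)
    return "ok"

bp3 = {"queue": deque(), "capacity": 2}
print(submit(bp3, "task-a"), submit(bp3, "task-b"), submit(bp3, "task-c"))
# prints: ok ok error: wait queue full
```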
In step 603, a worker thread of the basic thread pool node acquires a task from the corresponding waiting queue.
In this step, the worker thread of the BP node extracts a new task from the wait queue. And through the priority interval, matching and searching from the topmost node to the lower layer in sequence, wherein each branch line has corresponding priority comparison. For example, in fig. 7, for a task with priority 60, the HP3 of the next layer is found by the HP1 of the topmost node according to the priority of the new task being greater than 50, and the HP3 of the next layer is found to be BP3 according to the priority being greater than 50 (the priority interval of BP3 is greater than or equal to 50 and less than 70).
In step 604, the task retrieved from the wait queue is executed.
In this step, the worker thread of the BP node executes a new task extracted from the wait queue.
In step 605, when the task acquired by the work thread of the basic thread pool node from its corresponding waiting queue is empty, the node applies to a basic thread pool node in a higher task priority interval to acquire a task from that node's waiting queue.
In this step, if the waiting queue of the current BP node is empty, it applies for a task to a BP node in a higher priority interval, and keeps doing so until no higher-priority waiting task remains. That is, a low-priority work thread can apply for tasks from a higher-priority node, which improves the utilization of work threads and of resources while avoiding long delays for low-priority tasks.
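The application logic of step 605 can be sketched as below. This is a hedged illustration under assumptions: the pool names follow fig. 7, the queue contents are hypothetical, and `next_task` is an invented helper showing only the "own queue first, then strictly higher-priority queues, highest first" order of inspection.

```python
from collections import deque

# Hypothetical snapshot of the BP waiting queues, listed from the highest
# priority interval to the lowest (names follow fig. 7).
pools = {
    "BP1": deque(["urgent-task"]),  # high-priority work is pending
    "BP2": deque(),
    "BP3": deque(),                 # this worker's own queue is empty
}
order = ["BP1", "BP2", "BP3"]       # highest-priority interval first

def next_task(own):
    """Take from our own queue; if it is empty, apply to the BP nodes in
    strictly higher priority intervals, highest first, until one has work."""
    if pools[own]:
        return pools[own].popleft()
    for name in order[:order.index(own)]:
        if pools[name]:
            return pools[name].popleft()
    return None  # no waiting task of higher priority either

print(next_task("BP3"))  # the idle BP3 worker picks up 'urgent-task' from BP1
```

An idle low-priority worker therefore never sits unused while higher-priority work is queued, which is the utilization gain the description claims.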
In step 606, the task applied for from the waiting queue of the basic thread pool node in the higher task priority interval is executed.
In step 607, the number of work threads and the waiting queue size configured for the basic thread pool node are adjusted.
Note that step 607 has no fixed order relationship with the preceding steps; the adjustment of step 607 may be performed periodically, as necessary.
According to the embodiment of the invention, the waiting queue size and the number of work threads of a BP node can be adjusted according to the service model and the system load. Because services of different priorities have tasks of different scales, the number of work threads in the corresponding BP node can be adjusted per service.
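A minimal sketch of this per-node adjustment, assuming a simplified BP node that holds only the two tunable parameters (the `reconfigure` method and its behavior are illustrative assumptions, not the patent's implementation):

```python
from collections import deque

class BPNode:
    """Minimal BP node holding only the two tunable parameters."""
    def __init__(self, workers, queue_size):
        self.workers = workers
        self.queue = deque(maxlen=queue_size)

    def reconfigure(self, workers=None, queue_size=None):
        """Adjust the thread count and/or queue size for a new service model;
        tasks already waiting are carried over into the resized queue."""
        if workers is not None:
            self.workers = workers
        if queue_size is not None:
            self.queue = deque(self.queue, maxlen=queue_size)

bp = BPNode(workers=1, queue_size=8)
bp.queue.extend(["t1", "t2"])
bp.reconfigure(workers=4, queue_size=32)  # scale up for a heavier service
print(bp.workers, bp.queue.maxlen)        # 4 32
```

Since each BP node owns its queue and thread count independently, one node can be scaled up for a heavy service without touching the configuration of the other priority intervals.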
It can be seen that the embodiment of the invention improves BP lookup efficiency through the tree structure. Because each BP node has independent waiting queues and work threads, the delay of low-priority tasks can be reduced and their starvation prevented. Furthermore, when the task acquired by the work thread of a basic thread pool node from its corresponding waiting queue is empty, the node applies to a basic thread pool node in a higher task priority interval to acquire a task from its waiting queue; that is, a low-priority work thread can apply for tasks from a higher-priority node, which avoids long delays for low-priority tasks while improving the utilization of work threads and of resources. In addition, the waiting queue size and the number of work threads of a basic thread pool node can be adjusted, making the configuration more efficient and broadly applicable.
The method of the embodiment of the invention can be applied to a priority scheduling system; it prevents starvation of low-priority tasks while reducing task delay. According to the service model, different priority intervals can be configured for the BP nodes, and different BP nodes can be allocated different waiting queue sizes and numbers of work threads, giving the method strong generality.
The technical solution according to the present invention has been described in detail above with reference to the accompanying drawings.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A thread task processing apparatus, comprising:
the processor is configured to create hierarchical thread pool nodes serving as non-leaf nodes that manage task priority intervals, and basic thread pool nodes serving as leaf nodes that manage waiting queues and work threads, so as to form a binary tree structure; to configure a task priority interval, work threads and a waiting queue for the basic thread pool node, the work threads corresponding to the waiting queue; and to acquire, by a work thread of the basic thread pool node, a task from the waiting queue of that basic thread pool node, wherein after a task is created, the basic thread pool node corresponding to its task priority interval is found according to the priority of the task and the task is added to the waiting queue of that basic thread pool node, different basic thread pool nodes execute tasks in parallel, and the task acquired from the waiting queue is executed;
and the memory stores the working threads and the waiting queues of the basic thread pool nodes.
2. A computer device for thread task processing, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of:
creating hierarchical thread pool nodes serving as non-leaf nodes that manage task priority intervals, and basic thread pool nodes serving as leaf nodes that manage waiting queues and work threads, to form a binary tree structure;
configuring a task priority interval, a working thread and a waiting queue for the basic thread pool node, wherein the working thread corresponds to the waiting queue;
acquiring, by a work thread of the basic thread pool node, a task from a waiting queue of the basic thread pool node, wherein after a task is created, the basic thread pool node corresponding to its task priority interval is found according to the priority of the task and the task is added to the waiting queue of that basic thread pool node, and wherein different basic thread pool nodes execute tasks in parallel;
and executing the tasks acquired from the waiting queue.
3. A non-transitory machine-readable storage medium for thread task processing having executable code stored thereon, which when executed by a processor of an electronic device, causes the processor to perform a method of:
creating hierarchical thread pool nodes serving as non-leaf nodes that manage task priority intervals, and basic thread pool nodes serving as leaf nodes that manage waiting queues and work threads, to form a binary tree structure;
configuring a task priority interval, a working thread and a waiting queue for the basic thread pool node, wherein the working thread corresponds to the waiting queue;
acquiring, by a work thread of the basic thread pool node, a task from a waiting queue of the basic thread pool node, wherein after a task is created, the basic thread pool node corresponding to its task priority interval is found according to the priority of the task and the task is added to the waiting queue of that basic thread pool node, and wherein different basic thread pool nodes execute tasks in parallel;
and executing the tasks acquired from the waiting queue.
4. A thread task processing apparatus, comprising:
the thread pool configuration module is used for creating hierarchical thread pool nodes serving as non-leaf nodes that manage task priority intervals, and basic thread pool nodes serving as leaf nodes that manage waiting queues and work threads, to form a binary tree structure; and for configuring a task priority interval, work threads and a waiting queue for the basic thread pool node, wherein the work threads correspond to the waiting queue;
a task adding module, configured to find, according to the binary tree structure, a basic thread pool node corresponding to a task priority interval by a hierarchical thread pool node according to a priority of the task, and add the task to a waiting queue of the basic thread pool node;
the task acquisition module is used for acquiring tasks from the waiting queues of the basic thread pool nodes by the working threads of the basic thread pool nodes, wherein different basic thread pool nodes execute the tasks in parallel;
and the task execution module is used for executing the tasks acquired from the waiting queue by the task acquisition module.
5. The apparatus of claim 4, further comprising:
and the task application module is used for applying, when the task acquired by the work thread of the basic thread pool node from the corresponding waiting queue is empty, to a basic thread pool node in a higher task priority interval to acquire a task from its waiting queue.
6. The apparatus of claim 4 or 5, further comprising:
and the configuration adjusting module is used for adjusting the number of the working threads configured for the basic thread pool node and the size of the waiting queue.
7. A thread task processing method is characterized by comprising the following steps:
creating hierarchical thread pool nodes serving as non-leaf nodes that manage task priority intervals, and basic thread pool nodes serving as leaf nodes that manage waiting queues and work threads, to form a binary tree structure;
configuring a task priority interval, a working thread and a waiting queue for the basic thread pool node, wherein the working thread corresponds to the waiting queue;
the method for acquiring tasks from a waiting queue of a basic thread pool node by a working thread of the basic thread pool node, wherein the tasks are added into the waiting queue of the basic thread pool node corresponding to a task priority interval according to task priorities after being created, wherein different basic thread pool nodes execute the tasks in parallel, and the tasks are added into the waiting queue of the basic thread pool node corresponding to the task priority interval according to the task priorities after being created comprises the following steps: finding out basic thread pool nodes corresponding to task priority intervals by the hierarchical thread pool nodes according to the priority of the task in the binary tree structure; adding the task into a waiting queue of the basic thread pool node;
and executing the tasks acquired from the waiting queue.
8. The method of claim 7, further comprising:
and when the task acquired from the corresponding waiting queue by the work thread of the basic thread pool node is empty, applying to a basic thread pool node in a higher task priority interval to acquire a task from its waiting queue.
9. The method of claim 7, further comprising:
and adjusting the quantity of the working threads configured for the basic thread pool node and the size of the waiting queue.
CN201710522332.XA 2017-06-30 2017-06-30 Thread task processing equipment, device and method Active CN107391243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710522332.XA CN107391243B (en) 2017-06-30 2017-06-30 Thread task processing equipment, device and method


Publications (2)

Publication Number Publication Date
CN107391243A CN107391243A (en) 2017-11-24
CN107391243B true CN107391243B (en) 2020-10-16

Family

ID=60334812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710522332.XA Active CN107391243B (en) 2017-06-30 2017-06-30 Thread task processing equipment, device and method

Country Status (1)

Country Link
CN (1) CN107391243B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572865A (en) * 2018-04-04 2018-09-25 国家计算机网络与信息安全管理中心 A kind of task queue treating method and apparatus
CN108595249A (en) * 2018-05-02 2018-09-28 联想(北京)有限公司 A kind of virtual machine method for scheduling task and electronic equipment
CN108984283A (en) * 2018-06-25 2018-12-11 复旦大学 A kind of adaptive dynamic pipeline parallel method
CN109144694A (en) * 2018-08-09 2019-01-04 北京城市网邻信息技术有限公司 Information system configuration method, device, equipment and computer readable storage medium
CN109345443A (en) * 2018-10-19 2019-02-15 珠海金山网络游戏科技有限公司 Data processing method and device calculate equipment and storage medium
CN110046038A (en) * 2019-03-12 2019-07-23 平安普惠企业管理有限公司 A kind of task processing method and device based on thread pool
CN110413390A (en) * 2019-07-24 2019-11-05 深圳市盟天科技有限公司 Thread task processing method, device, server and storage medium
CN110716797A (en) * 2019-09-10 2020-01-21 无锡江南计算技术研究所 DDR4 performance balance scheduling structure and method for multiple request sources
CN111400010A (en) * 2020-03-18 2020-07-10 中国建设银行股份有限公司 Task scheduling method and device
CN111782295B (en) * 2020-06-29 2023-08-29 珠海豹趣科技有限公司 Application program running method and device, electronic equipment and storage medium
CN112445614B (en) * 2020-11-03 2024-06-28 华帝股份有限公司 Thread data storage management method, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101252455A (en) * 2008-03-25 2008-08-27 上海文广科技发展有限公司 Grouping broadcast control method based on broadcast
CN101739293B (en) * 2009-12-24 2012-09-26 航天恒星科技有限公司 Method for scheduling satellite data product production tasks in parallel based on multithread
CN102541653B (en) * 2010-12-24 2013-12-25 新奥特(北京)视频技术有限公司 Method and system for scheduling multitasking thread pools
US20130322275A1 (en) * 2012-05-31 2013-12-05 Telefonaktiebolaget L M Ericsson (Publ) Monitoring and allocation of interface resources in a wireless communication system
CN104159316B (en) * 2013-05-14 2018-12-25 北京化工大学 A kind of dispatching method of LTE base station upper layer multi-user
CN103716256B (en) * 2013-12-30 2017-06-16 湖南网数科技有限公司 A kind of method and apparatus that infrastructure is chosen for Web content service
CN106020954A (en) * 2016-05-13 2016-10-12 深圳市永兴元科技有限公司 Thread management method and device

Also Published As

Publication number Publication date
CN107391243A (en) 2017-11-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200813

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 510627 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping square B radio tower 13 layer self unit 01

Applicant before: Guangdong Shenma Search Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant