CN113961338A - Management system and management method of dynamic thread pool and thread task processing method - Google Patents

Management system and management method of dynamic thread pool and thread task processing method

Info

Publication number
CN113961338A
Authority
CN
China
Prior art keywords
thread
thread pool
size
pool
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111092624.7A
Other languages
Chinese (zh)
Inventor
赵华
马涛
王元强
何迎利
葛红舞
聂云杰
缪巍巍
韦小刚
曾锃
张翔
蔡国龙
张宇新
徐春晓
梁伟
王佳
陈民
周飞飞
刘路
卢岸
杨晓林
李宇航
安立源
赵振非
曾强
翁春华
樊卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
State Grid Jiangsu Electric Power Co Ltd
Nari Information and Communication Technology Co
State Grid Electric Power Research Institute
Original Assignee
State Grid Corp of China SGCC
State Grid Jiangsu Electric Power Co Ltd
Nari Information and Communication Technology Co
State Grid Electric Power Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Jiangsu Electric Power Co Ltd, Nari Information and Communication Technology Co, State Grid Electric Power Research Institute filed Critical State Grid Corp of China SGCC
Priority to CN202111092624.7A priority Critical patent/CN113961338A/en
Publication of CN113961338A publication Critical patent/CN113961338A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a management system and a management method for a dynamic thread pool, and a thread task processing method. The system comprises a trusted checking module and a dynamic thread pool module: the trusted checking module checks each task to be added to the thread pool, and the dynamic thread pool module stores the checked tasks and dynamically adjusts the size of the thread pool according to the thread requests. The invention realizes adaptive adjustment of the thread pool, allows the system resources obtained by each application program to be distributed evenly, makes effective use of processor and memory resources, and improves the security and reliability of the system.

Description

Management system and management method of dynamic thread pool and thread task processing method
Technical Field
The present invention relates to the technical field of thread pool security, and more particularly, to a management system, a management method, and a thread task processing method for a dynamic thread pool.
Background
With the advance of intelligent Internet of Things construction, the perception range of the Internet of Things keeps extending, and the edge IoT agent, as the convergence access node for IoT terminals, is increasingly used in a variety of service scenarios. Access by service terminals using many different protocols, and especially access by massive numbers of IoT terminals, poses new challenges for the edge IoT agent. A thread pool is a form of multi-threaded processing in which tasks are added to a queue and started automatically once a thread is created for them. Thread pool threads are background threads; each thread uses a default stack size, runs at a default priority, and lives in a multithreaded apartment. If a thread becomes idle in managed code, the thread pool inserts another helper thread to keep all processors busy. If all thread pool threads remain busy while pending work accumulates in the queue, the thread pool creates another helper thread after a period of time, but the number of threads never exceeds the maximum. Requests that exceed the maximum are queued and must wait until other threads complete before they can start.
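As a concrete illustration of the queuing behaviour described above, the following is a minimal Java sketch using the standard java.util.concurrent.ThreadPoolExecutor; the pool sizes, queue capacity and task count are arbitrary example values and not parameters taken from the invention.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolQueueingDemo {
    public static void main(String[] args) throws InterruptedException {
        // 2 core threads, at most 4 threads; tasks beyond that wait in a bounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(8));

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> {
                System.out.println("task " + id + " on " + Thread.currentThread().getName());
                try {
                    Thread.sleep(200);                    // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```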
While thread pools are a powerful mechanism for building multi-threaded applications, using them is not without risk. When using a thread pool, attention must be paid to the relationship between pool size and performance, and to problems such as concurrency risk, deadlock, resource exhaustion and thread safety.
Disclosure of Invention
The invention aims to provide a management system, a management method and a thread task processing method for a dynamic thread pool that can prevent problems such as thread concurrency risk, deadlock, insufficient resources and data security issues.
To achieve this purpose, the invention adopts the following technical solution:
in a first aspect, the present invention provides a management system for a dynamic thread pool, including a trusted checking module and a dynamic thread pool module;
the trusted inspection module is used for inspecting the task added into the thread pool;
and the dynamic thread pool module is used for acquiring the task which passes the inspection and is added into the thread pool and dynamically adjusting the size of the thread pool.
Preferably, the trusted verification module is specifically configured to,
verifying the application program of the task before starting the task added into the thread pool, wherein the verified reference value is an application signature issued by an application program developer or a verified reference value provided by a trusted verification module;
and,
matching the application behaviors added into the thread pool task with a pre-constructed behavior rule base, adding the thread pool task into the dynamic thread pool module if the matching is passed, otherwise, checking the label of the thread pool task, and blocking or deleting if the label is not passed; and if the verification passes, adding the information into the dynamic thread pool module.
Preferably,
and pre-constructing a white list of the trusted application program, and establishing a behavior rule base by collecting historical behavior data of the application program in the white list.
Preferably, the trusted verification module is further configured to,
putting the checked task which is added into the thread pool into a task list, and marking the state of the request task;
each time the task list is scanned, checking whether the preceding task has finished executing; if it has, executing the current task and updating its state;
and deducing the logic sequence of task execution according to the task execution record, and establishing the dependency relationship of the tasks.
Preferably, the trusted verification module adopts a trusted chip TPM.
Preferably, the dynamic thread pool module is specifically configured to,
receiving a task request for joining a thread pool;
calling a thread to execute the requested task according to the number of the task requests, the running state of the current thread pool and the number of the running threads;
if the number of the task requests is smaller than the size of the thread pool, directly calling the corresponding threads to execute the tasks;
if the number of the task requests is larger than the size of the thread pool, the size of the thread pool is adjusted, and then a corresponding thread is called to execute the task;
and if the task request number is larger than the thread pool size and larger than the thread number upper limit Tmax, the request tasks exceeding the thread number upper limit Tmax are transferred into the self-learning dynamic thread buffer module.
Preferably, the dynamic thread pool module is further configured to generate a thread identifier for each thread in the thread pool.
Preferably, the dynamic thread pool module is specifically configured to,
initializing a thread pool upper limit Tmax and a thread pool lower limit Tmin based on the historical data amount of the task request verified by the white list;
if the current thread request number is in the rising stage, the size of the thread pool is adjusted upwards by using an adjustment factor, which specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the increment of the last request is used as the increment adjusting factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the number which is larger than the average value and smaller than the number of the thread requests as a reference value, taking the last request increment as an increment adjustment factor, and adjusting the size of the thread pool; the current thread request number is in a rising stage, namely the thread request numbers of the last three times are all larger than a threshold value of the size of a set thread pool;
if the number of the current thread requests is in a stable stage, when the number of the thread requests is smaller than the number of the working threads in the thread pool, the size of the current thread pool is not adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the current number of the thread requests as a reference; the thread request number in the stable stage means that the thread request number of the last three times is within a threshold value of the size of a set thread pool;
if the current thread request number is in a descending stage, the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as an adjusting factor, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the average value of the current thread pool as a reference, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; the thread request number in the descending stage means that the thread request numbers in the last three times are all smaller than a threshold value of the size of a set thread pool;
respectively comparing the adjusted thread pool size with an upper limit Tmax and a lower limit Tmin, and if the thread pool size is larger than the Tmax, using the Tmax as the thread pool size; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
In a second aspect, the present invention provides a method for managing a dynamic thread pool, including:
acquiring a task request for joining a thread pool;
if the number of the task requests is smaller than the size of the thread pool, the thread pool is not adjusted;
and if the task request number is larger than the thread pool size, adjusting the thread pool size.
Preferably, the adjusting the size of the thread pool includes:
initializing a thread pool upper limit Tmax and a thread pool lower limit Tmin based on the historical data amount of the task request verified by the white list;
if the current thread request number is in the rising stage, the size of the thread pool is adjusted upwards by using an adjustment factor, which specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the increment of the last request is used as the increment adjusting factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the number which is larger than the average value and smaller than the number of the thread requests as a reference value, taking the last request increment as an increment adjustment factor, and adjusting the size of the thread pool; the current thread request number is in a rising stage, namely the thread request numbers of the last three times are all larger than a threshold value of the size of a set thread pool;
if the number of the current thread requests is in a stable stage, when the number of the thread requests is smaller than the number of the working threads in the thread pool, the size of the current thread pool is not adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the current number of the thread requests as a reference; the thread request number in the stable stage means that the thread request number of the last three times is within a threshold value of the size of a set thread pool;
if the current thread request number is in a descending stage, the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as an adjusting factor, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the average value of the current thread pool as a reference, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; the thread request number in the descending stage means that the thread request numbers in the last three times are all smaller than a threshold value of the size of a set thread pool;
respectively comparing the adjusted thread pool size with an upper limit Tmax and a lower limit Tmin, and if the thread pool size is larger than the Tmax, using the Tmax as the thread pool size; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
In a third aspect, the present invention provides a method for processing a thread task, including:
and judging according to the number of the running threads of the current thread pool as follows:
if the number of the task requests is smaller than the size of the thread pool, directly calling the corresponding threads to execute the tasks;
if the number of the task requests is larger than the size of the thread pool, adjusting the size of the thread pool, and then calling the corresponding thread to execute the task;
and if the task request number is larger than the thread pool size and larger than the thread number upper limit Tmax, the request tasks exceeding the thread number upper limit Tmax are transferred into the self-learning dynamic thread buffer module.
Preferably, the adjusting the size of the thread pool includes:
if the current thread request number is in the rising stage, the size of the thread pool is adjusted upwards by using an adjustment factor, which specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the increment of the last request is used as the increment adjusting factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the number which is larger than the average value and smaller than the number of the thread requests as a reference value, taking the last request increment as an increment adjustment factor, and adjusting the size of the thread pool; the current thread request number is in a rising stage, namely the thread request numbers of the last three times are all larger than a threshold value of the size of a set thread pool;
if the number of the current thread requests is in a stable stage, when the number of the thread requests is smaller than the number of the working threads in the thread pool, the size of the current thread pool is not adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the current number of the thread requests as a reference; the thread request number in the stable stage means that the thread request number of the last three times is within a threshold value of the size of a set thread pool;
if the current thread request number is in a descending stage, the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as an adjusting factor, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the average value of the current thread pool as a reference, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; the thread request number in the descending stage means that the thread request numbers in the last three times are all smaller than a threshold value of the size of a set thread pool;
respectively comparing the adjusted thread pool size with an upper limit Tmax and a lower limit Tmin, and if the thread pool size is larger than the Tmax, using the Tmax as the thread pool size; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
The invention achieves the following beneficial effects:
the invention changes the size of the thread pool by analyzing the task request, can evenly distribute the system resources obtained by each application program, effectively utilizes the memory resources of the processor and improves the safety and the reliability of the system.
Drawings
FIG. 1 is a flow chart of a self-learning dynamic thread pool management method provided by an embodiment of the invention;
FIG. 2 is an example of a dependency relationship between processing tasks of a trusted checking module in an embodiment of the present invention;
fig. 3 is a flowchart of a work flow of a trusted checking module according to an embodiment of the present invention.
Detailed Description
The invention is further described below. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
One embodiment of the present invention provides a management system for a dynamic thread pool, the management system comprising: a trusted verification module and a dynamic thread pool module.
In the embodiment of the invention, the trusted checking module is implemented with a trusted chip (TPM). The TPM serves as the root of trust of the self-learning dynamic thread pool module, and bottom-up transmission of the chain of trust is achieved through the most basic hardware security functions, such as key storage and cryptographic algorithms.
In the embodiment of the invention, the trusted checking module is used to check tasks added to the thread pool, which includes performing both static measurement and verification and dynamic measurement and verification of those tasks.
Static measurement and verification of a task to be added to the thread pool is performed by verifying the task before it is started, and allowing the task's application program to start and run only after the verification succeeds. The reference value used for verification is an application signature issued by the application developer, or a verification reference value provided by the trusted checking module.
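A minimal sketch of the kind of signature check the static measurement could perform is shown below, written in Java against the standard java.security API. The file layout, key format (X.509-encoded RSA public key) and the choice of SHA256withRSA are illustrative assumptions, not details taken from the patent.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class StaticMeasurement {
    /**
     * Checks an application binary against the developer-issued signature before the
     * task is allowed to start; returns true only if the signature matches.
     */
    public static boolean verify(Path binary, Path signatureFile, Path publicKeyFile)
            throws Exception {
        byte[] keyBytes = Files.readAllBytes(publicKeyFile);
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(keyBytes));

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(Files.readAllBytes(binary));
        return verifier.verify(Files.readAllBytes(signatureFile));
    }
}
```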
Dynamic measurement and verification of tasks joining the thread pool is achieved through an application behavior white list. The application behavior that is measured and verified is software call behavior, including process launch, thread access and so on. By monitoring system calls, application anomalies can be discovered effectively, at which point the application is no longer considered trusted. Concretely, the normal behavior of white-listed applications is collected by analyzing those applications, a behavior rule base is built from this normal behavior, and application behavior data collected in real time is then compared against the rule base. If an application behavior cannot be matched to any rule, the behavior is judged abnormal, and corresponding measures are taken according to the security hazard level of the abnormal behavior, such as blocking the application, deleting the application, or restarting the system. If the behavior matches the white-list rules, the application request is added to the dynamic thread pool module. Limiting the applications that can run in the environment reduces the possibility of the cloud platform being attacked by unsafe applications, and monitoring the applications makes it possible to discover attacks and respond in time.
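The comparison against the behavior rule base can be pictured with the hedged Java sketch below; the representation of a behavior as an application name plus an operation string, and the two-level hazard handling, are simplifying assumptions made only for illustration.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class BehaviorRuleBase {
    public enum Verdict { ALLOW, BLOCK, DELETE }

    // Rules learned from the historical behavior of white-listed applications.
    private final Map<String, Set<String>> rules = new HashMap<>();

    /** Record one observed normal behavior of a white-listed application. */
    public void learn(String application, String operation) {
        rules.computeIfAbsent(application, k -> new HashSet<>()).add(operation);
    }

    /** Compare a behavior collected at run time against the rule base. */
    public Verdict check(String application, String operation, int hazardLevel) {
        Set<String> allowed = rules.get(application);
        if (allowed != null && allowed.contains(operation)) {
            return Verdict.ALLOW;                 // matches a rule: hand to the thread pool
        }
        // Abnormal behavior: escalate according to its security hazard level.
        return hazardLevel > 1 ? Verdict.DELETE : Verdict.BLOCK;
    }
}
```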
The application white list can be constructed in two ways: first, the task of building the trusted application database is given to an administrator, who is responsible for identifying, verifying and adding applications to the white list; second, a verification database of trusted applications maintained by a third party is relied upon, and the administrator only needs to select and approve it.
In the embodiment of the invention, the trusted checking module also uses a dynamic association perception technique to judge application behavior characteristics, which can detect that an application is abnormal even when it makes no calls outside the white list. Dynamic association perception generates an application behavior baseline through machine learning: the behavior of the application is collected over a period of time while it runs, application behavior characteristics are formed through big data analysis and machine learning, and abnormalities are then judged against these characteristics.
In the embodiment of the invention, the trusted checking module is also used to check the data carried by a thread; if the check fails, the thread data is blocked or deleted to ensure the security of the thread pool, and if the check passes, the task is added to the dynamic thread pool module.
In the embodiment of the invention, the trusted checking module also analyzes whether thread tasks depend on one another, i.e. whether a task depends on the completion of another task before it can be executed, so as to avoid the resource waste caused by interdependent tasks being executed in the thread pool simultaneously or in an interleaved manner.
For tasks verified by the white list, the states of all tasks are kept in the trusted checking module. Each time the task list is scanned, the module checks whether the previous task has finished executing before executing the next one, and the task state in the trusted checking module is updated when a thread completes, recording whether the task is finished. From these records the logical order of task execution can be judged and deduced, and the local dependency relationships of the task set established.
The trusted check can also determine a global execution order by combining the inferred local dependency relationships with a topological sorting algorithm such as Kahn's algorithm or a DFS-based algorithm, so that any complex set of dependent tasks is abstracted into a directed acyclic graph, the topological sorting algorithm automatically combs out the dependency relationships, and concurrent computation and processing of the dependencies among tasks is completed automatically. For example, a task set containing 9 logical tasks (N, A1, A2, A3, A4, B1, B2, B3, M), as shown in fig. 2, has the topological order (N -> Ax -> Bx -> M). When the tasks are executed, the trusted check determines the topological order and the dependency relationships of the program, and the system automatically completes the concurrent execution of the A and B tasks, improving the effective computing speed of the system.
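The topological ordering step can be sketched with Kahn's algorithm in Java as follows. The exact edges of FIG. 2 are not reproduced in the text, so the dependency map in main() is an assumption that is merely consistent with the stated order N -> Ax -> Bx -> M; tasks released in the same round (all the A's, then all the B's) can be executed concurrently.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KahnScheduler {
    /** Kahn's algorithm, returning rounds of tasks whose prerequisites are all finished. */
    public static List<List<String>> schedule(Map<String, List<String>> prerequisites) {
        Map<String, Integer> indegree = new HashMap<>();
        Map<String, List<String>> successors = new HashMap<>();
        for (Map.Entry<String, List<String>> e : prerequisites.entrySet()) {
            indegree.putIfAbsent(e.getKey(), 0);
            for (String p : e.getValue()) {
                indegree.merge(e.getKey(), 1, Integer::sum);
                indegree.putIfAbsent(p, 0);
                successors.computeIfAbsent(p, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        List<List<String>> rounds = new ArrayList<>();
        List<String> ready = new ArrayList<>();
        for (Map.Entry<String, Integer> e : indegree.entrySet()) {
            if (e.getValue() == 0) ready.add(e.getKey());
        }
        while (!ready.isEmpty()) {
            rounds.add(ready);
            List<String> next = new ArrayList<>();
            for (String done : ready) {
                for (String s : successors.getOrDefault(done, List.of())) {
                    if (indegree.merge(s, -1, Integer::sum) == 0) next.add(s);
                }
            }
            ready = next;
        }
        return rounds;
    }

    public static void main(String[] args) {
        // Hypothetical edge set consistent with the order N -> Ax -> Bx -> M.
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("N", List.of());
        for (String a : List.of("A1", "A2", "A3", "A4")) deps.put(a, List.of("N"));
        for (String b : List.of("B1", "B2", "B3")) deps.put(b, List.of("A1", "A2", "A3", "A4"));
        deps.put("M", List.of("B1", "B2", "B3"));
        System.out.println(schedule(deps));   // rounds: N, then A1..A4, then B1..B3, then M
    }
}
```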
The workflow of the trusted checking module is shown in fig. 3.
In an embodiment of the invention, the dynamic thread pool module is configured to,
when a task request carrying a thread identifier is received, check the running state, the number of running threads and the running policy of the current thread pool, and determine whether the next step is to apply directly for a thread to execute the task, to change the values of the thread-number upper limit Tmax and lower limit Tmin first and then execute it, or to reject the task outright.
If the number of threads in the current thread pool has reached the set upper limit Tmax, a self-learning dynamic thread buffer is invoked to handle the task, and the buffer's strategy is used to protect the integrity and reliability of the thread pool and its data. The self-learning dynamic thread buffer is a common interface and can be extended according to service requirements. If the number of threads in the current thread pool has not reached the upper limit Tmax, the task request identified by the thread identifier is submitted for execution.
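The dispatch decision just described can be summarised with the hedged Java sketch below; DynamicPool and the in-memory selfLearningBuffer are hypothetical names standing in for the patent's modules, and the interface is an assumption rather than a prescribed API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TaskDispatcher {
    /** Minimal view of the dynamic thread pool used by the dispatcher (hypothetical). */
    interface DynamicPool {
        int runningThreads();
        int size();
        int upperLimitTmax();
        void resizeFor(int pendingRequests);
        void execute(Runnable task);
    }

    private final DynamicPool pool;
    // Requests beyond Tmax are parked here until the pool has capacity again.
    private final BlockingQueue<Runnable> selfLearningBuffer = new LinkedBlockingQueue<>();

    public TaskDispatcher(DynamicPool pool) {
        this.pool = pool;
    }

    public void dispatch(int pendingRequests, Runnable task) {
        if (pendingRequests < pool.size()) {
            pool.execute(task);                       // idle capacity: run directly
        } else if (pendingRequests <= pool.upperLimitTmax()) {
            pool.resizeFor(pendingRequests);          // grow the pool first, then run
            pool.execute(task);
        } else {
            selfLearningBuffer.offer(task);           // beyond Tmax: hand to the buffer
        }
    }

    /** Called periodically (e.g. by a heartbeat thread) to drain the buffer back to the pool. */
    public void drainBuffer() {
        Runnable task;
        while (pool.runningThreads() < pool.upperLimitTmax()
                && (task = selfLearningBuffer.poll()) != null) {
            pool.execute(task);
        }
    }
}
```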
In the embodiment of the invention, the dynamic thread pool module is further configured to generate a thread identifier for each thread, i.e. to mark each thread's thread ID, and to apply for a semaphore for each thread. A semaphore is a facility used in a multi-threaded environment to ensure that two or more critical code sections are not invoked concurrently, and it can also be used for flow control. The module dynamically adjusts the size of the thread pool according to thread execution efficiency, specifically as follows:
I. analyzing the historical data volume of task requests verified by the white list, and setting the initial upper limit Tmax and lower limit Tmin of the thread pool according to the historical number of running threads for these tasks;
II. if the number of thread requests is in a rising stage (the last three request counts are all greater than a threshold around the thread pool size set by the system), adjusting the thread pool size upwards with an adjustment factor, specifically: when the request count is smaller than the number of threads in the pool, the average of the request count and the current pool size is used as the reference value, the previous request increment is used as this adjustment's increment factor, and the pool size is adjusted; when the request count is larger than the number of threads in the pool, a value greater than that average but smaller than the request count is used as the reference value, the previous request increment is used as the adjustment factor, and the pool size is adjusted; if the number of thread requests exceeds the maximum load the system can bear, the requests are placed in the self-learning dynamic thread buffer, and the data in the buffer is later transferred back to the thread pool through the heartbeat thread of the self-learning dynamic thread buffer so as to guarantee task integrity;
III. if the number of thread requests is in a stable stage (the last three request counts are all within a threshold of the thread pool size set by the system), then when the request count is smaller than the number of threads in the pool, the pool size already meets the demand of processing user requests and is not adjusted; when the request count is larger than the number of worker threads in the pool, the pool size is adjusted to accommodate the current request volume;
IV. if the number of thread requests is in a descending stage (the last three request counts are all smaller than a threshold around the thread pool size set by the system), adjusting the thread pool size downwards with an adjustment factor, specifically: when the request count is smaller than the number of worker threads in the pool, the average of the request count and the current pool size is used as the reference value and the latest request increment as the adjustment factor; when the request count is larger than the number of worker threads in the pool, a pool size above the average level is used as the reference and the latest request increment as the adjustment factor;
V. comparing the adjusted thread pool size with the upper limit Tmax and the lower limit Tmin: if it is larger than Tmax, Tmax is used as the pool size and the excess tasks are placed in the self-learning dynamic thread buffer; if it is smaller than Tmin, Tmin is used as the pool size; if it lies between Tmin and Tmax, the adjusted size is used.
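One possible reading of steps I-V is sketched below in Java. The text does not pin down the exact arithmetic for the reference value or the adjustment factor, so the formulas, the interpretation of the threshold, and the use of the previous request increment are assumptions made only for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolSizer {
    private final int tMin;        // lower limit Tmin (step I)
    private final int tMax;        // upper limit Tmax (step I)
    private final int threshold;   // band around the pool size used to classify the stage
    private int poolSize;
    private final Deque<Integer> lastThree = new ArrayDeque<>();  // last three request counts
    private int previousRequests = -1;

    public PoolSizer(int tMin, int tMax, int threshold, int initialSize) {
        this.tMin = tMin;
        this.tMax = tMax;
        this.threshold = threshold;
        this.poolSize = clamp(initialSize);
    }

    /** Adjust and return the pool size for the current request count and worker count. */
    public int adjust(int requests, int workingThreads) {
        if (lastThree.size() == 3) lastThree.removeFirst();
        lastThree.addLast(requests);
        int increment = previousRequests < 0 ? 0 : Math.abs(requests - previousRequests);
        previousRequests = requests;
        if (lastThree.size() < 3) return poolSize;   // not enough history to classify the stage

        boolean rising  = lastThree.stream().allMatch(r -> r > poolSize + threshold);
        boolean falling = lastThree.stream().allMatch(r -> r < poolSize - threshold);
        int average = (requests + poolSize) / 2;

        if (rising) {          // step II: scale up from a reference value plus the increment
            int reference = requests < workingThreads ? average : (average + requests) / 2;
            poolSize = clamp(reference + increment);
        } else if (falling) {  // step IV: scale down from a reference value minus the increment
            int reference = requests < workingThreads ? average : poolSize;
            poolSize = clamp(reference - increment);
        } else if (requests > workingThreads) {
            poolSize = clamp(requests);              // step III: stable stage, follow demand
        }
        return poolSize;       // step V: already clamped to [Tmin, Tmax]
    }

    private int clamp(int n) {
        return Math.max(tMin, Math.min(tMax, n));
    }
}
```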
Another embodiment of the present invention provides a method for dynamically adjusting a thread pool, which generates a thread identifier for each thread, applies for a semaphore for each thread, and dynamically adjusts the size of the thread pool according to the efficiency of thread execution. The specific process is as follows:
acquiring a current task request;
if the number of the task requests is smaller than the size of the thread pool, the thread pool is not adjusted;
and if the task request number is larger than the thread pool size, adjusting the thread pool size.
The specific implementation process for adjusting the size of the thread pool comprises the following steps:
if the number of the requests of the thread is in a rising stage (the number of the requests of the last three times is larger than a threshold value of the size of the thread pool set by the system), the size of the thread pool is adjusted upwards by using an adjustment factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as the increment adjustment factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of threads in the thread pool, using the number which is larger than the average value and smaller than the thread request value as a reference value, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; and if the number of the thread requests is larger than the maximum bearing capacity of the system, putting the thread requests into a self-learning dynamic thread register. And transferring the data of the self-learning dynamic thread buffer to the thread pool through the heartbeat wire of the self-learning dynamic thread so as to ensure the integrity of the task.
If the thread request number is in a stable stage (the last three times of request number is within the threshold value of the thread pool size set by the system), when the thread request number is smaller than the thread number in the thread pool, the size of the thread pool meets the requirement of processing the user request, and the size of the current thread pool is not adjusted; when the number of thread requests is greater than the number of worker threads in the thread pool, the thread pool size is adjusted to accommodate the current thread request amount.
If the number of the thread requests is in a descending stage (the number of the last three requests is smaller than a threshold value of the size of the thread pool set by the system), the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the latest request increment is used as an adjusting factor, and the size of the thread pool is adjusted; and when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the thread pool size above the average level as a reference and taking the latest request increment as an adjusting factor.
After the size of the thread pool is adjusted, the thread pool is respectively compared with an upper limit Tmax and a lower limit Tmin, if the thread pool is larger than the Tmax, the Tmax is used as the size of the thread pool, and the task is put into a self-learning dynamic thread buffer; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
When adjusting the size of the dynamic thread pool, the adjustment is first of all user-oriented: user requests should be answered as far as possible within a limited time, i.e. the average response time of the system should be short. Secondly it is system-oriented: the number of worker threads in the dynamic thread pool within a given time slice is fixed, but the number of requests received by the system is an unknown random quantity, so the processing capability of the system needs a certain flexibility in order to reduce resource consumption while still guaranteeing high throughput. The algorithm for dynamically adjusting the thread pool size therefore has to satisfy the following requirements:
1) when user requests arrive at the system, the thread pool size should fluctuate in a stable manner within a given period of time;
2) because creating and destroying worker threads consumes system resources and degrades performance, the lower limit Tmin of the dynamic thread pool is set slightly larger than the number of threads needed to run the tasks in the current working state;
3) the upper limit Tmax of the thread pool is adjusted according to the load the system can bear in different periods; although access from the gateway front-end processor is random, Tmax can easily be predicted from the number of users accessing the intranet and scientific statistics of the access volume in different past periods. The scheduling of all tasks is triggered and driven by the trusted checking module, whose part of the work is: checking the running state, the number of running threads and the running policy of the current thread pool, and deciding whether the next step is to apply directly for thread execution, to buffer the task in a queue for later execution, or to reject it outright. The operating logic is simple: if the required number of threads exceeds the upper limit Tmax, the excess is handed to the self-learning dynamic thread buffer for processing and kept in memory; when the number of threads accepted by the thread pool falls below Tmax, the task is handed back to the thread pool to run.
And if the number of threads in the current thread pool reaches the set upper limit Tmax of the number of threads, calling a self-learning dynamic thread buffer to process the task, and adopting a strategy of a self-learning dynamic thread buffer module to protect the integrity and reliability of the thread pool and data. The self-learning dynamic thread buffer module is a public interface and can be expanded according to service customization. And if the number of the threads in the current thread pool does not reach the set thread number upper limit Tmax, adopting the thread for calling the submission task to process.
A third embodiment of the present invention provides a method for processing a thread task, which includes:
and judging according to the number of the running threads of the current thread pool as follows:
if the number of the task requests is smaller than the size of the thread pool, directly calling the corresponding threads to execute the tasks;
if the number of the task requests is larger than the size of the thread pool, adjusting the size of the thread pool, and then calling the corresponding thread to execute the task;
if the task request number is larger than the thread pool size and larger than the thread number upper limit Tmax, the request tasks exceeding the thread number upper limit Tmax are transferred to the self-learning dynamic thread buffer module.
The task request carries a thread identifier, and the corresponding thread can be called to execute the task according to the thread identifier.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention and not to limit them. Although the invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalents may be made to the embodiments without departing from the spirit and scope of the invention, which are intended to be covered by the claims.

Claims (12)

1. The management system of the dynamic thread pool is characterized by comprising a credible checking module and a dynamic thread pool module;
the trusted inspection module is used for inspecting the task added into the thread pool;
and the dynamic thread pool module is used for acquiring the task which passes the inspection and is added into the thread pool and dynamically adjusting the size of the thread pool.
2. The system for managing a dynamic thread pool as claimed in claim 1, wherein the trusted verification module is specifically configured to,
verifying the application program of the task before starting the task added into the thread pool, wherein the verified reference value is an application signature issued by an application program developer or a verified reference value provided by a trusted verification module;
and,
matching the application behaviors added into the thread pool task with a pre-constructed behavior rule base, adding the thread pool task into the dynamic thread pool module if the matching is passed, otherwise, checking the label of the thread pool task, and blocking or deleting if the label is not passed; and if the verification passes, adding the information into the dynamic thread pool module.
3. The system for managing a dynamic thread pool as claimed in claim 2,
and pre-constructing a white list of the trusted application program, and establishing a behavior rule base by collecting historical behavior data of the application program in the white list.
4. The system for managing a dynamic thread pool of claim 2, wherein the trusted verification module is further configured to,
putting the checked task which is added into the thread pool into a task list, and marking the state of the request task;
searching whether the previous task is executed or not when scanning the task list every time, executing the task again if the previous task is executed, and modifying the task state;
and deducing the logic sequence of task execution according to the task execution record, and establishing the dependency relationship of the tasks.
5. The system according to claim 2, wherein the trusted verification module employs a trusted chip TPM.
6. The system for managing a dynamic thread pool as claimed in claim 1, wherein the dynamic thread pool module is specifically configured to,
receiving a task request for joining a thread pool;
calling a thread to execute the requested task according to the number of the task requests, the running state of the current thread pool and the number of the running threads;
if the number of the task requests is smaller than the size of the thread pool, directly calling the corresponding threads to execute the tasks;
if the number of the task requests is larger than the size of the thread pool, the size of the thread pool is adjusted, and then a corresponding thread is called to execute the task;
and if the task request number is larger than the thread pool size and larger than the thread number upper limit Tmax, the request tasks exceeding the thread number upper limit Tmax are transferred into the self-learning dynamic thread buffer module.
7. The system for managing a dynamic thread pool of claim 6, wherein the dynamic thread pool module is further configured to generate a thread identifier for each thread in the thread pool.
8. The system for managing a dynamic thread pool as claimed in claim 6, wherein the dynamic thread pool module is specifically configured to,
initializing a thread pool upper limit Tmax and a thread pool lower limit Tmin based on the historical data amount of the task request verified by the white list;
if the current thread request number is in the rising stage, the size of the thread pool is adjusted upwards by using an adjustment factor, which specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the increment of the last request is used as the increment adjusting factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the number which is larger than the average value and smaller than the number of the thread requests as a reference value, taking the last request increment as an increment adjustment factor, and adjusting the size of the thread pool; the current thread request number is in a rising stage, namely the thread request numbers of the last three times are all larger than a threshold value of the size of a set thread pool;
if the number of the current thread requests is in a stable stage, when the number of the thread requests is smaller than the number of the working threads in the thread pool, the size of the current thread pool is not adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the current number of the thread requests as a reference; the thread request number in the stable stage means that the thread request number of the last three times is within a threshold value of the size of a set thread pool;
if the current thread request number is in a descending stage, the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as an adjusting factor, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the average value of the current thread pool as a reference, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; the thread request number in the descending stage means that the thread request numbers in the last three times are all smaller than a threshold value of the size of a set thread pool;
respectively comparing the adjusted thread pool size with an upper limit Tmax and a lower limit Tmin, and if the thread pool size is larger than the Tmax, using the Tmax as the thread pool size; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
9. A method for managing a dynamic thread pool, comprising:
acquiring a task request for joining a thread pool;
if the number of the task requests is smaller than the size of the thread pool, the thread pool is not adjusted;
and if the task request number is larger than the thread pool size, adjusting the thread pool size.
10. The method according to claim 9, wherein said adjusting the size of the thread pool comprises:
initializing a thread pool upper limit Tmax and a thread pool lower limit Tmin based on the historical data amount of the task request verified by the white list;
if the current thread request number is in the rising stage, the size of the thread pool is adjusted upwards by using an adjustment factor, which specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the increment of the last request is used as the increment adjusting factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the number which is larger than the average value and smaller than the number of the thread requests as a reference value, taking the last request increment as an increment adjustment factor, and adjusting the size of the thread pool; the current thread request number is in a rising stage, namely the thread request numbers of the last three times are all larger than a threshold value of the size of a set thread pool;
if the number of the current thread requests is in a stable stage, when the number of the thread requests is smaller than the number of the working threads in the thread pool, the size of the current thread pool is not adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the current number of the thread requests as a reference; the thread request number in the stable stage means that the thread request number of the last three times is within a threshold value of the size of a set thread pool;
if the current thread request number is in a descending stage, the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as an adjusting factor, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the average value of the current thread pool as a reference, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; the thread request number in the descending stage means that the thread request numbers in the last three times are all smaller than a threshold value of the size of a set thread pool;
respectively comparing the adjusted thread pool size with an upper limit Tmax and a lower limit Tmin, and if the thread pool size is larger than the Tmax, using the Tmax as the thread pool size; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
11. A method for processing a thread task, comprising:
and judging according to the number of the running threads of the current thread pool as follows:
if the number of the task requests is smaller than the size of the thread pool, directly calling the corresponding threads to execute the tasks;
if the number of the task requests is larger than the size of the thread pool, adjusting the size of the thread pool, and then calling the corresponding thread to execute the task;
and if the task request number is larger than the thread pool size and larger than the thread number upper limit Tmax, the request tasks exceeding the thread number upper limit Tmax are transferred into the self-learning dynamic thread buffer module.
12. The method of claim 11, wherein the adjusting the size of the thread pool comprises:
if the current thread request number is in the rising stage, the size of the thread pool is adjusted upwards by using an adjustment factor, which specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the increment of the last request is used as the increment adjusting factor of the time, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the number which is larger than the average value and smaller than the number of the thread requests as a reference value, taking the last request increment as an increment adjustment factor, and adjusting the size of the thread pool; the current thread request number is in a rising stage, namely the thread request numbers of the last three times are all larger than a threshold value of the size of a set thread pool;
if the number of the current thread requests is in a stable stage, when the number of the thread requests is smaller than the number of the working threads in the thread pool, the size of the current thread pool is not adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, adjusting the size of the thread pool by taking the current number of the thread requests as a reference; the thread request number in the stable stage means that the thread request number of the last three times is within a threshold value of the size of a set thread pool;
if the current thread request number is in a descending stage, the size of the thread pool is adjusted downwards by using an adjusting factor, and the method specifically comprises the following steps: when the number of the thread requests is smaller than the number of the working threads in the thread pool, the average value of the number of the thread requests and the size of the current thread pool is used as a reference value, the last request increment is used as an adjusting factor, and the size of the thread pool is adjusted; when the number of the thread requests is larger than the number of the working threads in the thread pool, using the average value of the current thread pool as a reference, taking the last request increment as an adjusting factor, and adjusting the size of the thread pool; the thread request number in the descending stage means that the thread request numbers in the last three times are all smaller than a threshold value of the size of a set thread pool;
respectively comparing the adjusted thread pool size with an upper limit Tmax and a lower limit Tmin, and if the thread pool size is larger than the Tmax, using the Tmax as the thread pool size; if the thread pool size is smaller than Tmin, using Tmin as the thread pool size; if between Tmax and Tmin, then the adjusted thread pool size is used.
CN202111092624.7A 2021-09-17 2021-09-17 Management system and management method of dynamic thread pool and thread task processing method Pending CN113961338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092624.7A CN113961338A (en) 2021-09-17 2021-09-17 Management system and management method of dynamic thread pool and thread task processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111092624.7A CN113961338A (en) 2021-09-17 2021-09-17 Management system and management method of dynamic thread pool and thread task processing method

Publications (1)

Publication Number Publication Date
CN113961338A true CN113961338A (en) 2022-01-21

Family

ID=79461888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092624.7A Pending CN113961338A (en) 2021-09-17 2021-09-17 Management system and management method of dynamic thread pool and thread task processing method

Country Status (1)

Country Link
CN (1) CN113961338A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726798A (en) * 2022-02-28 2022-07-08 福建星云电子股份有限公司 Lithium battery test channel current limiting method and system
CN114726798B (en) * 2022-02-28 2023-07-18 福建星云电子股份有限公司 Lithium battery test channel current limiting method and system
CN118051346A (en) * 2024-04-11 2024-05-17 恒生电子股份有限公司 Method, device, electronic equipment and readable storage medium for requesting dynamic current limiting


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination