CN111694669A - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN111694669A
Authority
CN
China
Prior art keywords
task
memory
concurrency
utilization rate
concurrency number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010534285.2A
Other languages
Chinese (zh)
Inventor
吴聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010534285.2A priority Critical patent/CN111694669A/en
Publication of CN111694669A publication Critical patent/CN111694669A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4418 Suspend and resume; Hibernate and awake
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being the memory
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/54 Indexing scheme relating to G06F9/54
    • G06F2209/548 Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A task processing method and apparatus are used to dynamically adjust the concurrency of task processing requests, so that task concurrency is strictly controlled and resource exhaustion is avoided. The method comprises the following steps: the server obtains a first reference task concurrency number from a downstream task processing node for a first time period, determines the CPU utilization and the memory utilization over that period, and from these determines a second reference task concurrency number. The server then adjusts the number of tokens to a first value according to the first and second reference task concurrency numbers. After the server receives a first task processing request at a first time, it judges whether the number of tokens in use at that time is smaller than the first value; if so, it allocates a first token to the request and submits the request to a thread pool for processing, and if not, it refuses to allocate a token, thereby strictly controlling task concurrency.

Description

Task processing method and device
Technical Field
The present application relates to the field of concurrent processing control, and in particular, to a task processing method and apparatus.
Background
At present, thread pool technology can be adopted to process multiple tasks simultaneously, with different threads handling different tasks at the same time. With this technique, a number of threads and a task queue are created in the thread pool at initialization. When tasks need to be executed, they are pushed into the task queue and idle threads are selected from the pool to execute them; after a task completes, its thread is set back to the idle state and returned to the pool, where it can be dispatched to execute a new task.
In the prior art, the number of threads in a thread pool is a fixed value. When too many task requests arrive, the excess requests are executed directly by the task's caller, so the actual concurrency grows with the number of requests, resources are heavily occupied, and resource exhaustion results.
Therefore, a task processing method is needed that strictly controls the number of task processing requests and avoids resource exhaustion.
Disclosure of Invention
The embodiments of the application provide a task processing method and apparatus, which are used to dynamically control the number of task requests entering a thread pool, avoiding excessive task concurrency and the heavy resource occupation it causes.
In a first aspect, an embodiment of the present application provides a task processing method, where the method includes:
the client obtains a first reference task concurrency number from a downstream task processing node for a first time period, determines the CPU utilization and the memory utilization over that period, and from these determines a second reference task concurrency number. The client adjusts the number of tokens to a first value according to the first and second reference task concurrency numbers. When the client receives a first task processing request at a first time after the first time period, and the number of tokens in use at that time is judged to be smaller than the first value, a first token is allocated to the request and the task is submitted to a thread pool for processing.
In this embodiment, the appropriate number of tokens for the coming time period is determined from the first reference task concurrency number reported by the downstream task processing node for the past period, together with the CPU and memory utilization over that period. Task concurrency is thereby controlled dynamically, avoiding the heavy resource occupation and system congestion caused by excessive concurrency.
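The token mechanism described in this aspect can be sketched as a small resizable counter. All class and method names below are illustrative assumptions, not taken from the patent:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the token pool: "limit" plays the role of the
// first value, and tryAcquire() is the check "used tokens < first value".
class TokenPool {
    private volatile int limit;                      // the first value
    private final AtomicInteger used = new AtomicInteger(0);

    TokenPool(int initialLimit) { this.limit = initialLimit; }

    // Called periodically with the value derived from the two
    // reference task concurrency numbers.
    void resize(int newLimit) { this.limit = newLimit; }

    // Allocates a token when fewer than "limit" are in use;
    // refuses the request otherwise.
    boolean tryAcquire() {
        while (true) {
            int u = used.get();
            if (u >= limit) return false;            // refuse: no token free
            if (used.compareAndSet(u, u + 1)) return true;
        }
    }

    // Returns a token to the available state after the task completes.
    void release() { used.decrementAndGet(); }
}
```

A request is submitted to the thread pool only when `tryAcquire()` returns true, which is the admission check of the first aspect.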
In one possible embodiment, the client receives a second task processing request at a second time, the second time occurring after the first time; when the number of tokens in use at the second time is not smaller than the first value, the thread carrying the second task processing request is put to sleep and placed in a waiting queue.
In the above embodiment, when the client receives the second task processing request at a second time after the first time, it compares the number of tokens currently in use with the current first value; if the number in use is not smaller than the first value, the thread carrying the second request is put to sleep and placed in the waiting queue until it is woken. Determining the number of tokens for a coming period from the CPU and memory utilization of the past period, and controlling the number of tasks processed by the number of tokens, dynamically and reasonably controls task concurrency and prevents the system from going down through overuse of system resources.
In one possible embodiment, after the first task processing request has been submitted to the thread pool and processed, the client updates the first token from the used state to the available state and decreases the number of tokens in use by 1; when the number of tokens in use at a third time, occurring after the second time, is smaller than the first value, the thread carrying the third task processing request in the waiting queue is woken, a token is allocated to it, and the third task processing request is submitted to the thread pool for processing.
In the above embodiment, after the first task processing request has been processed, the client releases the first token and subtracts 1 from the number of tokens in use. Because a token is allocated to each task processing request when it runs and released when it completes, the number of requests being processed tracks the number of tokens, dynamically controlling task concurrency and preventing the system from going down through overuse of system resources.
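One way to realize the sleep-and-wake behaviour described above is a counting semaphore, where `acquire()` parks the requesting thread while no token is free and `release()` after task completion wakes a waiter. The sketch below rests on that assumption; note that a plain `Semaphore` has a fixed permit count, unlike the patent's periodically adjusted token number:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative gate: requests block while all tokens are in use and are
// woken when a finished task releases its token.
class BlockingGate {
    private final Semaphore tokens;
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final AtomicInteger completed = new AtomicInteger();

    BlockingGate(int permits) { this.tokens = new Semaphore(permits); }

    void submit(Runnable task) {
        try {
            tokens.acquire();                  // sleeps if no token is free
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        pool.execute(() -> {
            try { task.run(); }
            finally {
                completed.incrementAndGet();
                tokens.release();              // wakes a waiting request
            }
        });
    }

    int completedCount() { return completed.get(); }

    void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```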
In one possible embodiment, the client determines a minimum reference task concurrency number from the first and second reference task concurrency numbers, and adjusts the number of tokens controlling the task concurrency to the first value according to that minimum.
In the above embodiment, the client takes the minimum of the first and second reference task concurrency numbers and from it determines the corresponding token count, i.e. the first value. The optimal concurrency for the current system is thus determined from the downstream task processing node and from the system's current CPU and memory utilization, so that task concurrency can be adjusted flexibly against several system indicators, the risk of overusing the resources of the system or of the downstream node is averted in advance, and processing resources are not wasted while task requests are handled normally, realizing efficient task processing.
In one possible embodiment, the client determining the CPU utilization over the first time period comprises: obtaining the time-slice occupation counts of the different task instance types, the task instances at least including idle-type task instances; summing the time-slice occupation counts of each task instance type over the first time period to obtain the total time-slice count; and calculating the CPU utilization over the first time period from the time-slice occupation count of the idle-type task instances and the total time-slice count, where the CPU utilization satisfies Formula 1:
A = (C - B) / C    (Formula 1)

where A is the CPU utilization, B is the time-slice occupation count of the idle-type task instances, and C is the total time-slice count.
In this embodiment, the client first obtains the time-slice occupation counts of the system's different task instance types over the first time period and determines the total time-slice count for that period. It then determines the CPU utilization over the period from the count corresponding to the idle-type task instances. The current total number of tokens is adjusted according to the system's state in the current period, controlling the amount of concurrent task processing; task concurrency is thus adjusted flexibly against several system indicators, and the risk of overusing the resources of the system or downstream task processing nodes is averted in advance.
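Formula 1, expressed as code. The counts would come from the platform's time-slice accounting; the method below is only a sketch:

```java
// CPU utilization over a window: B idle time slices out of C total,
// so A = (C - B) / C.
class CpuUsage {
    static double utilization(long idleSlices, long totalSlices) {
        if (totalSlices == 0) return 0.0;    // no samples: treat as idle
        return (double) (totalSlices - idleSlices) / totalSlices;
    }
}
```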
In a possible embodiment, the client determining the second reference task concurrency number according to at least one of the CPU utilization and the memory utilization includes: when the CPU utilization is greater than a first threshold, determining the second reference task concurrency number to be n1 times the original task concurrency number, where 0 < n1 ≤ 1; and when the CPU utilization is not greater than the first threshold, determining it to be m1 times the original task concurrency number, where m1 > 1.
In this embodiment, the client adjusts the current total number of tokens according to the system's state in the current time period, controlling the amount of concurrent task processing; task concurrency is thus adjusted flexibly to the system's current state, and the risk of overusing system resources is averted in advance.
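The scaling rule above reads as a simple piecewise function. The threshold and the factors n1/m1 in the test values below are illustrative configuration choices, not prescribed by the patent:

```java
// Shrink the reference concurrency when usage exceeds the threshold,
// grow it otherwise; the result is floored at 1 so processing never stops.
class ConcurrencyScaler {
    static int scale(int original, double usage, double threshold,
                     double shrink /* n1, 0 < n1 <= 1 */,
                     double grow   /* m1, m1 > 1 */) {
        double factor = usage > threshold ? shrink : grow;
        return Math.max(1, (int) Math.floor(original * factor));
    }
}
```

The same shape applies to the memory-based rule with factors n2/m2.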
In one possible embodiment, the client determining the memory utilization over the first time period includes: obtaining the total Java virtual machine (JVM) memory and the current free memory; and calculating the memory utilization over the first time period from them, where the memory utilization satisfies Formula 2:
D = (F - E) / F    (Formula 2)

where D is the memory utilization, E is the current free memory, and F is the total JVM memory.
In this embodiment, the client determines the memory utilization over the first time period through Formula 2, which helps determine the first value of the token count in the subsequent steps; that is, it helps adjust the current total number of tokens according to the system's state in the current period, controlling the amount of concurrent task processing, flexibly adjusting task concurrency to the system's current state, and averting the risk of overusing system resources in advance.
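Formula 2 maps directly onto the JVM's own counters: `Runtime.totalMemory()` gives F and `Runtime.freeMemory()` gives E. A minimal sketch:

```java
// Memory utilization D = (F - E) / F, with F the total JVM memory and
// E the current free memory.
class MemUsage {
    static double utilization(long freeBytes, long totalBytes) {
        return (double) (totalBytes - freeBytes) / totalBytes;
    }

    // Reads the live JVM counters.
    static double currentJvmUtilization() {
        Runtime rt = Runtime.getRuntime();
        return utilization(rt.freeMemory(), rt.totalMemory());
    }
}
```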
In a possible embodiment, the client determining the second reference task concurrency number according to at least one of the CPU utilization and the memory utilization includes: when the memory utilization is greater than a third threshold, determining the second reference task concurrency number to be n2 times the original task concurrency number, where 0 < n2 ≤ 1; and when the memory utilization is not greater than a fourth threshold, determining it to be m2 times the original task concurrency number, where m2 > 1.
In this embodiment, the client adjusts the current total number of tokens according to the system's state in the current time period, controlling the amount of concurrent task processing; task concurrency is thus adjusted flexibly to the system's current state, and the risk of overusing system resources is averted in advance.
In a second aspect, an embodiment of the present application provides a task processing apparatus, which includes units or means for performing each step of the above first aspect, and specifically includes:
the receiving unit is used for acquiring a first reference task concurrency number from a downstream task processing node in a first period;
a processing unit for determining at least one of a CPU usage rate and a memory usage rate in a first period of time;
the processing unit is further used for determining a second reference task concurrency number according to at least one of the CPU utilization rate and the memory utilization rate;
the processing unit is further used for adjusting the number of tokens for controlling the task concurrency number to be a first numerical value according to the first reference task concurrency number and the second reference task concurrency number;
a receiving unit, further configured to receive a first task processing request at a first time, the first time occurring after a first period of time;
and the processing unit is further used for distributing a first token for the first task processing request and submitting the first task processing request to the thread pool for processing when the number of the used tokens at the first moment is smaller than a first numerical value.
In a possible embodiment, the receiving unit is further configured to receive a second task processing request at a second time, the second time occurring after the first time;
and the processing unit is also used for sleeping the thread where the second task processing request is positioned and submitting the thread to a waiting queue for processing when the number of the used tokens at the second moment is not less than the first numerical value.
In a possible embodiment, the processing unit is further configured to update the first token from the used state to the available state and reduce the number of currently used tokens by 1 after the first task processing request is submitted to the thread pool and is processed;
when the number of the used tokens at the third moment is smaller than the first numerical value, awakening the thread where the third task processing request in the waiting queue is located, wherein the third moment occurs after the second moment;
and distributing a token for the thread where the third task processing request is located, and submitting the third task processing request to the thread pool for processing.
In a possible embodiment, the processing unit is specifically configured to:
determining a minimum reference task concurrency number from the first reference task concurrency number and the second reference task concurrency number;
and adjusting the number of the tokens for controlling the task concurrency number to be a first value according to the minimum reference task concurrency number.
In a possible embodiment, the processing unit is specifically configured to:
acquire the time-slice occupation counts of the different task instance types, the task instances at least including idle-type task instances;

sum the time-slice occupation counts of each task instance type over the first time period to obtain the total time-slice count, and calculate the CPU utilization over the first time period from the time-slice occupation count of the idle-type task instances and the total time-slice count, where the CPU utilization satisfies Formula 1:

A = (C - B) / C    (Formula 1)

where A is the CPU utilization, B is the time-slice occupation count of the idle-type task instances, and C is the total time-slice count.
In a possible embodiment, the processing unit is specifically configured to:
when the CPU utilization is greater than a first threshold, determine the second reference task concurrency number to be n1 times the original task concurrency number, where 0 < n1 ≤ 1;

and when the CPU utilization is not greater than the first threshold, determine it to be m1 times the original task concurrency number, where m1 > 1.
In a possible embodiment, the receiving unit is further configured to obtain the total Java virtual machine (JVM) memory and the current free memory;
a processing unit, specifically configured to:
calculate the memory utilization over the first time period from the total JVM memory and the current free memory, where the memory utilization satisfies Formula 2:

D = (F - E) / F    (Formula 2)

where D is the memory utilization, E is the current free memory, and F is the total JVM memory.
In a possible embodiment, the processing unit is specifically configured to:
when the memory utilization is greater than a third threshold, determine the second reference task concurrency number to be n2 times the original task concurrency number, where 0 < n2 ≤ 1;

and when the memory utilization is not greater than a fourth threshold, determine it to be m2 times the original task concurrency number, where m2 > 1.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a program is stored; when invoked by a computer, the program causes the computer to perform the method of any one of the possible embodiments of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when called by a computer, causes the method according to any possible design of the first aspect to be carried out.
Drawings
Fig. 1 is a schematic diagram of a thread pool task processing technique according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a system architecture for controlling task request throughput according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a task processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a task processing method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a method for operating a token facility according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of an index collection method according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a concurrency control flow provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
(1) Thread pool technology refers to processing different tasks with different threads at the same time. Fig. 1 shows a thread pool implementation: at initialization, n threads and a task queue are created in the thread pool, where n is a fixed value set by the user according to the actual application (in fig. 1, n is an integer greater than or equal to 6). When several tasks need to be executed, they are pushed into the task queue; in fig. 1, tasks 1, 2 and 3 are pushed in, and 3 idle threads, for example threads 1, 2 and 3, are selected from the pool to execute them. After the 3 tasks finish, threads 1, 2 and 3 are set to idle and returned to the pool, where they can be dispatched to execute new tasks. Note that fig. 1 shows one thread processing one task; in practical applications, one thread may also process several tasks, for example two or three.
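The fixed-size pool of fig. 1 corresponds to Java's `Executors.newFixedThreadPool`, which creates n worker threads draining a shared task queue. A minimal sketch, with placeholder task bodies:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// n threads share one task queue; each finished thread goes back to
// waiting on the queue for the next task.
class ThreadPoolDemo {
    static int runTasks(int nThreads, int nTasks) {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        CountDownLatch latch = new CountDownLatch(nTasks);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < nTasks; i++) {
            pool.execute(() -> {
                completed.incrementAndGet();   // the "task" itself
                latch.countDown();
            });
        }
        try {
            latch.await();                     // all tasks drained
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return completed.get();
    }
}
```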
(2) A thread is the smallest unit of program execution and is the basic unit that is independently scheduled by the system. The threads in the thread pool may include two states, an idle state and a working state. The idle state is a state in which a thread has a condition for processing a task, and logically can process the task but does not currently process the task. The working state refers to the state in which a thread is processing a task.
(3) "plurality" means two or more, and other terms are analogous. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The terms "first," "second," "third," and "fourth," etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, nor order.
With the development of computer technology, the demand for simultaneous multitask processing has grown. Currently, thread pool techniques may be employed for multitasking; however, with the existing technique, the number of threads created after the thread pool is initialized is a fixed value.
In the prior art, for example, when the pending task processing requests in the task queue of fig. 1 reach the thread pool's carrying threshold, requests from the upstream node still keep entering the pool. The requests exceeding the threshold are then executed directly, so the actual concurrency grows with the number of actual task processing requests, eventually occupying a large amount of resources and affecting both the system and the downstream task processing node.
To solve the above problem, an embodiment of the present application provides a system architecture for controlling task request throughput, shown in fig. 2. The architecture includes a client 10 and a server 20, with the server 20 connected to the client 10. The client 10 sends a task processing request (e.g. a transaction request) to the server 20; after the server 20 receives it, a thread pool in the server 20 processes the request concurrently. The specific processing method is described in the flow of fig. 3.
Fig. 3 is a schematic flowchart of a task processing method according to an embodiment of the present application. The method dynamically adjusts the task request throughput according to parameters of the task request processing system (including memory utilization, CPU utilization, and the like) and the task processing quantity suggested by the downstream task processing node, and specifically includes the following steps.
In step 301, the server 20 obtains a first reference task concurrency number from a downstream task processing node within a first period of time.
Illustratively, between 1:00 and 2:00 on 20 May 2020, server 20 obtains the first reference task concurrency number of a downstream task processing node of the task request processing system, for example 20. Periodically adjusting according to the downstream node's first reference task concurrency number, as in the above technical scheme, avoids the risk of overusing the downstream node's resources.
At step 302, server 20 determines at least one of CPU usage and memory usage during a first time period.
Illustratively, between 1:00 and 2:00 on 20 May 2020, server 20 obtains the memory utilization, CPU utilization, and so on, of the task request processing system. By periodically obtaining these parameters, the number of task processing requests is adjusted and controlled according to the system's own condition, strictly controlling task concurrency and avoiding the risk of overusing the resources of the system or downstream task processing nodes.
In step 303, the server 20 determines a second reference task concurrency number according to at least one of the CPU utilization rate and the memory utilization rate.
As an example, the server 20 determines the corresponding second reference task concurrency number, for example 10, from the obtained CPU utilization and memory utilization. Determining the second reference task concurrency number in this way lets the number of task processing requests be further adjusted to the system's own condition, strictly controlling task concurrency and avoiding the risk of overusing system resources.
In step 304, the server 20 adjusts the number of tokens for controlling the task concurrency number to a first value according to the first reference task concurrency number and the second reference task concurrency number.
In one possible embodiment, the server 20 determines a minimum reference task concurrency number from the first reference task concurrency number and the second reference task concurrency number, and further adjusts the number of tokens for controlling the task concurrency number to a first value according to the minimum reference task concurrency number.
Illustratively, the server 20 determines the minimum reference task concurrency number to be 10, and further determines the number of corresponding tokens, and the first value may be 10. Through the steps, the number of the tokens is periodically adjusted according to the preset rule, and then the number of the tokens is used for effectively controlling the concurrency of the tasks, so that the risk of resource overuse is avoided.
Server 20 receives a first task processing request at a first time, step 305, the first time occurring after a first period of time.
Illustratively, the server 20 receives a first task processing request at a first time, 2:01 on May 20, 2020, which occurs after the first time period.
In step 306, when the number of used tokens at the first time is smaller than the first value, the server 20 allocates a first token for the first task processing request, and submits the first task processing request to the thread pool for processing.
Illustratively, the server 20 compares the first value determined in the first period with the number of currently used tokens, and when the number of used tokens is smaller than the first value, allocates a token to the first task processing request and submits the task processing request to the thread pool for processing. The method and the device realize the control of the number of task processing requests entering the thread pool according to the number of the tokens, strictly control the concurrency of the tasks and avoid the risk of excessive use of system resources.
In the steps, the dynamic collection of the index data of the task request processing system is realized, the concurrency of the task processing is flexibly adjusted according to the preset rule, the risk of overuse of system resources of the task request processing system and the downstream processing nodes thereof is avoided, and the task request processing efficiency is improved.
Further, after step 306, that is, after the first task processing request is submitted to the thread pool and is processed, the server 20 updates the first token from the used state to the available state, and decreases the number of the currently used tokens by 1. Further, when the number of used tokens at a third time after the second time is smaller than the first numerical value, waking up the thread where the third task processing request is located in the waiting queue, allocating tokens for the thread where the third task processing request is located, and submitting the third task processing request to the thread pool for processing.
Illustratively, after each task processing request is processed, the server 20 releases and returns the corresponding token and decrements the number of currently used tokens by 1. Further, at a third time after the second time, the server 20 may compare the number of currently used tokens with the first value; if the number of used tokens is smaller than the first value, it may wake up the third task processing request in the waiting queue, allocate a token to it, and submit it to the thread pool for processing; otherwise, the task processing requests in the waiting queue are not awakened. Through these steps, the task concurrency is strictly controlled, and the risk of a large amount of system resources being occupied is avoided.
Further, after step 306, the server 20 receives the second task processing request at a second time after the first time, and sleeps the thread where the second task processing request is located and submits the thread to the waiting queue for processing when the number of the used tokens at the second time is not less than the first value.
Illustratively, the server 20 receives the second task processing request at a second time after the first time, compares the number of the currently used tokens with the first value, and puts the thread of the second task processing request into a waiting queue to wait for waking up when the number of the used tokens is not less than the first value. The method and the device realize the flexible adjustment of the number of the task processing requests entering the thread pool according to the used number of the tokens at the current moment and the first value determined in the previous period, strictly control the concurrency of the tasks and avoid the risk of overuse of resources of the system.
In one possible embodiment, in the step 302, the server 20 determines the usage rate of the CPU in the first period, including:
and acquiring the time slice occupation times of different types of task instances in the task processing system in a first time period, and adding to obtain the total time slice times in the first time period. Further, calculating the utilization rate of the CPU in the first time period according to the time slice occupation times and the total time slice times of the idle type task instances, wherein the utilization rate of the CPU meets a formula I;
A = (1 - B/C) × 100%    (formula I)
wherein A is the utilization rate of the CPU, B is the time slice occupation times of the idle type task instance, and C is the total time slice times.
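The calculation in formula I can be sketched in Java as follows; the class and method names are illustrative assumptions, not part of the patent.

```java
// Sketch of formula I: A = (1 - B/C) * 100%, where B is the time slice
// occupation count of the idle-type task instance and C is the total count.
// Class and method names are illustrative assumptions.
public class CpuUsage {
    public static double usagePercent(long idleSlices, long totalSlices) {
        if (totalSlices <= 0) {
            throw new IllegalArgumentException("total time slice count must be positive");
        }
        return (1.0 - (double) idleSlices / totalSlices) * 100.0;
    }
}
```

With B = 15 and C = 50, as in the worked example in the text, the result is a CPU utilization rate of 70%.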
Illustratively, the CPU usage of the server 20 is viewed using the cross-platform server information query tool OSHI, whereby the time slice occupation times of task instances of different types, such as user (user), kernel (sys), idle (idle), and waiting (iowait), are first obtained. Further, the server 20 adds the time slice occupation times of the different types of task instances to obtain the total time slice count.
For example, in the server 20, for the task processing system during the first time period from 1:00 to 2:00 on May 20, 2020, the number of time slice occupations of the user-type task instance is 5, of the kernel-type task instance is 10, of the idle-type task instance is 15, and of the waiting-type task instance is 20. Further, the total number of time slices in the first time period is found to be C = 5 + 10 + 15 + 20 = 50.
Still further, the server 20 calculates the CPU utilization rate according to formula I: A = (1 - 15/50) × 100% = 70%.
In a possible embodiment, the step 303 further includes:
when the usage rate of the CPU is greater than the first threshold, the server 20 determines that the second reference task concurrency number is n1 times the original task concurrency number, where 0 < n1 ≤ 1; and when the utilization rate of the CPU is not greater than a second threshold, it determines that the second reference task concurrency number is m1 times the original task concurrency number, where m1 > 1.
Illustratively, when the CPU utilization rate exceeds the preset first threshold of 60%, the server 20 determines 1/2 of the original task concurrency number 20, i.e., 10, as the new second reference task concurrency number. This prepares for determining the first numerical value in subsequent steps, helps control the task concurrency according to the first numerical value of the tokens, and avoids system resources being heavily occupied.
In a possible embodiment, in the step 302, the server 20 determines the usage rate of the memory in the first time period, including:
acquiring a total Java Virtual Machine (JVM) memory and a current idle memory, and further calculating the utilization rate of the memory in a first time period by the server 20 according to the total JVM memory and the current idle memory, wherein the utilization rate of the memory meets a formula II;
D = (1 - E/F) × 100%    (formula II)
wherein D is the utilization rate of the memory, E is the current idle memory, and F is the total JVM memory.
Illustratively, the server 20 determines the total JVM memory of 50 and the currently free memory of 10 through the Runtime object provided by the JDK (Java Development Kit), and calculates the memory utilization rate according to formula II: D = (1 - 10/50) × 100% = 80%.
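Formula II can likewise be sketched in Java; the class and method names are illustrative assumptions, and the `Runtime`-based helper mirrors the JDK mechanism the text describes.

```java
// Sketch of formula II: D = (1 - E/F) * 100%, where E is the current free JVM
// memory and F is the total JVM memory. Names are illustrative assumptions.
public class MemoryUsage {
    public static double usagePercent(long freeMemory, long totalMemory) {
        if (totalMemory <= 0) {
            throw new IllegalArgumentException("total memory must be positive");
        }
        return (1.0 - (double) freeMemory / totalMemory) * 100.0;
    }

    // Reads the current figures from the Runtime object provided by the JDK,
    // as the text describes.
    public static double currentJvmUsagePercent() {
        Runtime rt = Runtime.getRuntime();
        return usagePercent(rt.freeMemory(), rt.totalMemory());
    }
}
```

With E = 10 and F = 50, as in the example above, the memory utilization rate is 80%.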
Further, in step 303, the determining, by the server 20, the second reference task concurrency number according to at least one of a CPU utilization rate and a memory utilization rate, further includes:
when the utilization rate of the memory is greater than a third threshold value, determining that the concurrency number of the second reference task is n2 times of the concurrency number of the original task, wherein n2 is greater than 0 and less than or equal to 1; and when the utilization rate of the memory is not greater than a fourth threshold value, determining that the concurrency number of the second reference task is m2 times of the concurrency number of the original task, wherein m2 is greater than 1.
Illustratively, the memory utilization rate of 80% is greater than the third threshold of 60%, so 4/5 of the original task concurrency number 20, i.e., 16, is determined as the second reference task concurrency number.
Optionally, the smaller value of the two second reference task concurrency numbers 10 and 16 is determined to be the final second reference task concurrency number, and exemplarily, the final second reference task concurrency number is 10.
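The step-303 adjustment rules above can be sketched as follows. The factors 0.5 and 0.8 and the 60% high thresholds follow the text's examples; the low threshold of 30% and the growth factor m = 1.5 are illustrative assumptions, since the text only specifies that m > 1.

```java
// Sketch of the step-303 rules: scale the original concurrency number down by a
// factor n (0 < n <= 1) when usage exceeds its high threshold, or up by a factor
// m (m > 1) when usage is at or below a low threshold, then take the smaller of
// the CPU and memory results as the final second reference task concurrency number.
// Low threshold 30% and m = 1.5 are assumptions; 0.5, 0.8, and 60% follow the text.
public class SecondReference {
    public static int adjust(int original, double usagePercent,
                             double highThreshold, double lowThreshold,
                             double n, double m) {
        if (usagePercent > highThreshold) {
            return (int) Math.floor(original * n);   // shrink under pressure
        }
        if (usagePercent <= lowThreshold) {
            return (int) Math.floor(original * m);   // grow when resources are idle
        }
        return original;                             // otherwise keep unchanged
    }

    public static int combined(int original, double cpuUsage, double memUsage) {
        int byCpu = adjust(original, cpuUsage, 60.0, 30.0, 0.5, 1.5);
        int byMem = adjust(original, memUsage, 60.0, 30.0, 0.8, 1.5);
        return Math.min(byCpu, byMem);               // take the smaller suggestion
    }
}
```

With the text's example values (original 20, CPU usage 70%, memory usage 80%) this yields min(10, 16) = 10 as the final second reference task concurrency number.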
In one possible embodiment, server 20 may set the total number of tokens as the sum of the number of core threads, the number of non-core threads, and the length of the fixed-length queue in the thread pool, so as to strictly control the concurrency number of task processing by the number of tokens. For example, each task processing request must acquire a token before entering the thread pool; and if one task processing request does not acquire the token, the task processing request is put into a waiting queue at the moment, and a thread corresponding to the task processing request sleeps and waits to be awakened.
Based on the above description, an embodiment of the present invention further provides a flowchart of a task processing method as shown in fig. 4, where the flowchart includes:
in step 401, the server 20 receives a task processing request, and optionally, the server 20 applies for obtaining a token for the task processing request.
Step 402, server 20 determines whether the number of tokens currently used is less than a first value, and if so, proceeds to step 403; otherwise, go to step 404.
In step 404, server 20 puts the thread corresponding to the task request to sleep and submits the request to the wait queue.
Server 20 assigns a token to the task processing request and submits the request to a thread pool for processing, step 403.
After the task processing request is processed, server 20 returns the token, that is, updates the token from the used state to the available state, and decrements the number of currently used tokens by 1, step 405.
In step 406, the server 20 further determines whether the number of the currently used tokens is smaller than a first value, and if so, step 408 is performed, otherwise, step 407 is performed.
At step 407, the server 20 does not wake up the thread waiting for the task processing request in the queue.
In step 408, server 20 wakes up the thread of the task processing request in the wait queue, specifically, server 20 wakes up one thread in the wait queue in step 404 according to the bottom thread schedule, and then proceeds to step 409.
In step 409, server 20 applies for obtaining a token for the awakened task processing request and returns to step 403.
Alternatively, when all task processing requests in the waiting queue are processed, the process returns to step 401.
Based on the above description, the embodiment of the present invention further provides a working method of a token tool, as shown in fig. 5, where the token tool at least includes a token obtaining module 50, a token returning module 51, and a total token resetting module 52.
In the first part, the working method of the token acquisition module 50 includes:
step 501, a lock is obtained, where the lock is used to control that only one thread of a task processing request can enter a logic of token maintenance and verification at the same time, so as to ensure that there is no concurrency problem in subsequent steps.
Step 502, determine whether the number of used tokens is less than a first value, where the first value is the current total number of tokens. If yes, go to step 503 to add 1 to the number of used tokens, then to step 505 to release the lock, and to step 506 to return success; otherwise, all current tokens are in use and the maximum concurrency number has been reached, and step 504 is entered.
Step 504, the thread corresponding to the task processing request is put to sleep.
Step 507, submitting the task processing request to a waiting queue.
Alternatively, when a thread in the wait queue wakes up, step 508 is entered to fetch the thread from the wait queue.
Second, the operation method of the return token module 51 includes:
Step 511, a lock is obtained, where the lock is used to control that only one thread of a task processing request can enter the logic of token maintenance and verification at the same time, so as to ensure that there is no concurrency problem in the subsequent steps.
Step 512, the number of used tokens is decremented by 1.
Step 513, determine whether the number of used tokens is less than the first value. If yes, go to step 514; otherwise, it indicates that the first value corresponding to the total number of tokens was decreased in the last period, and step 515 is entered.
At step 514, a dormant thread for processing a request by a task in the wait queue is awakened.
Step 515, release the lock, not wake up the thread in the wait queue, to achieve the effect of gradually reducing the concurrency number (applicable to the scenario of reducing the total number of tokens).
Step 517, removing the task processing request corresponding to the awakened thread from the waiting queue, optionally, allocating a token for the task processing request and submitting the token to the thread pool for processing.
In the third part, the working method of the total token resetting module 52 includes:
Step 521, a lock is acquired, where the lock is used to control that only one thread of a task processing request can enter the logic of token maintenance and verification at the same time, so as to ensure that there is no concurrency problem in the subsequent steps.
Step 522, modify the first value of the total number of tokens, and then enter step 523 to determine the following two situations:
Case one, the first value is increased: wake up a correspondingly increased number of dormant threads in the waiting queue, so that the number of task processing request threads that acquire tokens increases synchronously. In this case, step 525 is entered to send a wake-up signal; further, step 509 is entered to wake up the thread; further, step 524 may be entered to release the lock.
Case two, the first value is decreased: the effect of gradually reducing the number of task processing request threads that obtain tokens is achieved through step 513 and the related steps in the logic of the return token module described above; the process then proceeds to step 524 to release the lock.
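The three modules of the token tool in Fig. 5 can be sketched as one lock-guarded class: acquisition sleeps the thread while no token is free, return only wakes a sleeper when usage is back under the (possibly shrunken) limit, and resetting the total wakes extra sleepers when the limit grows. This is an illustrative sketch, not the patent's implementation; all names are assumptions.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the Fig. 5 token tool. One lock guards the token
// bookkeeping so only one thread at a time enters the maintenance/verification
// logic, matching steps 501, 511, and 521.
public class TokenTool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition tokenFreed = lock.newCondition();
    private int limit;  // the "first value": current total number of tokens
    private int used;   // tokens currently in use

    public TokenTool(int initialLimit) {
        this.limit = initialLimit;
    }

    // Token acquisition module: sleep in the wait queue until a token is free.
    public void acquire() {
        lock.lock();
        try {
            while (used >= limit) {
                tokenFreed.awaitUninterruptibly();  // thread sleeps until awakened
            }
            used++;
        } finally {
            lock.unlock();
        }
    }

    // Return token module: free a token, and wake a sleeper only if usage is
    // under the limit (so a shrunken limit drains concurrency gradually).
    public void release() {
        lock.lock();
        try {
            used--;
            if (used < limit) {
                tokenFreed.signal();
            }
        } finally {
            lock.unlock();
        }
    }

    // Total token resetting module: growing the limit wakes the corresponding
    // number of sleepers; shrinking it relies on the release path above.
    public void resetLimit(int newLimit) {
        lock.lock();
        try {
            int delta = newLimit - limit;
            limit = newLimit;
            for (int i = 0; i < delta; i++) {
                tokenFreed.signal();
            }
        } finally {
            lock.unlock();
        }
    }

    public int usedCount() {
        lock.lock();
        try {
            return used;
        } finally {
            lock.unlock();
        }
    }
}
```

A caller would wrap each task as `tool.acquire(); try { process(); } finally { tool.release(); }`, while the periodic index-collection job calls `resetLimit` with the newly determined first value.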
Based on the above description, an embodiment of the present invention further provides an index collection method as shown in fig. 6, which is configured to determine the first numerical value corresponding to the total number of tokens according to the first reference task concurrency number of the downstream node of the task processing system, the CPU utilization rate of the task request processing system, and the memory utilization rate. The method comprises the following steps:
step 601, initializing parameters, wherein the server 20 configures system initial parameters, including configuring task concurrency number, setting memory usage threshold, and setting CPU usage threshold. For example, the number of concurrent tasks is 20, the set memory usage threshold is 60%, and the set CPU usage threshold is 60%.
In step 602, the server 20 obtains the index reported by the downstream task processing node: the task processing system obtains, through an HTTP interface, the first reference task concurrency number reported by the downstream task processing node, and records it. For example, the first reference task concurrency number is 10.
Step 603, the server 20 acquires the CPU utilization rate through the cross-platform server information query tool OSHI; further, the CPU suggested concurrency number is determined according to the set CPU utilization rate threshold.
Illustratively, the server 20 collects the time slice occupation times of different types of task instances in the first time period, and adds them to determine the total time slice number C. Still further, the CPU utilization rate A is calculated according to formula I: A = (1 - B/C) × 100%, where B is the number of time slice occupations of the idle-type task instance. Further, the server 20 determines whether the utilization rate of the CPU is less than the set CPU utilization rate threshold; if not, it determines 3/4 of the task concurrency number as the CPU suggested concurrency number; otherwise, the CPU suggested concurrency number is not set.
For example, the server 20 collects that in the first time period the number of time slice occupations of the idle-type task instance is 15, of the user-type task instance is 10, of the system-type task instance is 20, and of the waiting-type task instance is 5. Adding the time slice occupation times of the above types of task instances gives a total time slice occupation count totalCpu of 50. According to formula I, A = (1 - 15/50) × 100% = 70%. The server 20 thereby determines that the CPU utilization rate in the first time period is 70%, which is greater than the set CPU utilization rate threshold of 60%, and so determines 3/4 of the task concurrency number, i.e., 15, as the CPU suggested concurrency number.
Step 604, the server 20 collects the usage rate of the memory through the Runtime object provided by the JDK; further, the memory suggested concurrency number is determined according to the memory usage rate threshold.
Illustratively, the server 20 obtains the total memory totalMemory and the free memory freeMemory through the Runtime object provided by the JDK. Further, the server 20 calculates the memory utilization rate according to formula II: D = (1 - freeMemory/totalMemory) × 100%. Still further, the server 20 determines whether the utilization rate of the memory exceeds the set memory usage rate threshold; if so, it determines 3/4 of the task concurrency number as the memory suggested concurrency number; otherwise, the memory suggested concurrency number is not set.
For example, the server 20 determines that the usage rate of the memory in the first time period is 30% and does not exceed the set memory usage threshold value of 60%, and then does not set the suggested concurrency number of the memory.
In step 605, the server 20 compares the first reference task concurrency number, the memory suggested concurrency number, and the CPU suggested concurrency number, and takes the minimum value as the first numerical value. Illustratively, since no memory suggested concurrency number is set, the server 20 compares the first reference task concurrency number 10 with the CPU suggested concurrency number 15 and determines that the first value is 10.
Server 20 resets 606 the total number of tokens based on the first value. Illustratively, the total number of tokens is reset to 10.
Server 20 resets the thread pool parameter, illustratively, the number of task concurrencies for the thread pool is 10, step 607.
Optionally, the above method is periodically executed, so that the number of tokens is dynamically adjusted in real time according to parameters of the system and the downstream task processing node, the task concurrency is strictly controlled, and the resource overuse is avoided.
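Steps 601–607 can be condensed into a single decision function: each period, the first numerical value is the minimum of the downstream-reported first reference concurrency number and any CPU/memory suggested concurrency numbers, where a suggestion is only set when the corresponding usage reaches its threshold. The 3/4 factor and 60% thresholds follow the example in the text; the class and parameter names are assumptions.

```java
// Sketch of the Fig. 6 index-collection decision. A suggested concurrency
// number is 3/4 of the configured task concurrency, set only when the
// corresponding usage reaches its threshold; the first value is the minimum
// of the downstream reference and any suggestions that were set.
public class ConcurrencyPlanner {
    public static int firstValue(int downstreamRef, int taskConcurrency,
                                 double cpuUsage, double cpuThreshold,
                                 double memUsage, double memThreshold) {
        int value = downstreamRef;
        if (cpuUsage >= cpuThreshold) {      // CPU suggested concurrency is set
            value = Math.min(value, taskConcurrency * 3 / 4);
        }
        if (memUsage > memThreshold) {       // memory suggested concurrency is set
            value = Math.min(value, taskConcurrency * 3 / 4);
        }
        return value;
    }
}
```

With the text's example (downstream reference 10, task concurrency 20, CPU usage 70% against a 60% threshold, memory usage 30% against a 60% threshold) the CPU suggestion is 15, the memory suggestion is unset, and the first value is min(10, 15) = 10.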
In a possible embodiment, a Semaphore (for example, java.util.concurrent.Semaphore) may also be used as the token tool, in which case the concurrency control flow is shown in fig. 7 and includes:
in step 701, the server 20 obtains a task processing request.
Step 702, server 20 determines whether the number of tokens remaining at present is greater than 0, if yes, step 703 is entered; otherwise, go to step 704.
Server 20 puts the thread that the task processed the request to sleep and into a wait queue, step 704.
In step 703, the server 20 allocates a token for the task processing request and submits the task processing request to the thread pool for processing.
Server 20 releases the token and increments the remaining number of tokens by 1, step 705.
Step 706, the server 20 determines whether the number of remaining tokens is greater than 0, if yes, step 707 is entered; otherwise step 708 is entered.
In step 707, the server 20 wakes up, through underlying thread scheduling, a thread of a dormant task processing request in the waiting queue, then proceeds to step 709 to allocate a token for the task processing request, and then to step 703, where the token is acquired and the request is submitted to the thread pool for processing.
Optionally, after all the dormant task processing requests in the waiting queue are processed, the process returns to step 701 to obtain a new task processing request.
Optionally, step 708, the task processing request in the wait queue is not awakened.
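The Semaphore-based variant of Fig. 7 can be sketched as follows: a permit plays the role of a token, acquired before a request enters the thread pool (steps 702–704) and released when processing finishes (step 705), at which point a waiting thread is scheduled in. The pool size and names are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Illustrative sketch of the Fig. 7 flow using java.util.concurrent.Semaphore
// as the token tool. acquireUninterruptibly() sleeps the calling thread when no
// token remains, which is the wait-queue behavior of step 704.
public class SemaphoreGate {
    private final Semaphore tokens;
    private final ExecutorService pool;

    public SemaphoreGate(int permits) {
        this.tokens = new Semaphore(permits);
        this.pool = Executors.newFixedThreadPool(permits);
    }

    public void submit(Runnable task) {
        tokens.acquireUninterruptibly();  // step 703: allocate a token, or sleep
        pool.execute(() -> {
            try {
                task.run();
            } finally {
                tokens.release();         // step 705: return the token
            }
        });
    }

    public int availableTokens() {
        return tokens.availablePermits();
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Note that a plain Semaphore fixes the permit count at construction; supporting the periodic reset of the total token number (Fig. 6) would require releasing or draining permits when the first value changes, which the custom token tool above handles explicitly.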
Based on the same inventive concept as the method embodiments, an embodiment of the present application also provides a task processing device. It can be understood that, in order to realize the above functions, the task processing device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative algorithm steps described in connection with the embodiments disclosed herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 8 is a schematic diagram illustrating a possible structure of a task processing device according to an embodiment of the present application. As shown in fig. 8, the task processing device 800 includes a receiving unit 801 and a processing unit 802.
The receiving unit is used for acquiring a first reference task concurrency number from a downstream task processing node in a first period.
A processing unit to determine at least one of a CPU usage rate and a memory usage rate within a first time period. And further, the method is also used for determining the second reference task concurrency number according to at least one of the utilization rate of the CPU and the utilization rate of the memory.
And the processing unit is also used for adjusting the number of the tokens for controlling the task concurrency number to be a first numerical value according to the first reference task concurrency number and the second reference task concurrency number.
The apparatus further includes a receiving unit to receive a first task processing request at a first time, the first time occurring after a first period of time.
And the processing unit is further used for distributing a first token for the first task processing request and submitting the first task processing request to the thread pool for processing when the number of the used tokens at the first moment is smaller than a first numerical value.
In a possible embodiment, the receiving unit is further configured to receive a second task processing request within a second time, the second time occurring after the first time.
And the processing unit is also used for sleeping the thread where the second task processing request is positioned and submitting the thread to a waiting queue for processing when the number of the used tokens at the second moment is not less than the first numerical value.
In a possible embodiment, the processing unit is further configured to update the first token from the used state to the available state and reduce the number of currently used tokens by 1 after the first task processing request is submitted to the thread pool and is processed.
And when the number of the used tokens at the third moment is less than the first numerical value, awakening the thread where the third task processing request in the waiting queue is located, wherein the third moment occurs after the second moment.
And distributing a token for the thread where the third task processing request is located, and submitting the third task processing request to the thread pool for processing.
In a possible embodiment, the processing unit is specifically configured to:
and determining the minimum reference task concurrency number from the first reference task concurrency number and the second reference task concurrency number. And adjusting the number of tokens for controlling the task concurrency number to be a first numerical value according to the minimum reference task concurrency number.
In a possible embodiment, the processing unit is specifically configured to:
acquiring the time slice occupation times of different task instance types; the task instances include at least task instances of the idle type. Adding the time slice occupation times of each task instance type in the first time period to obtain the total time slice times in the first time period;
calculating the utilization rate of the CPU in the first time period according to the time slice occupation times and the total time slice times of the idle type task instance, wherein the utilization rate of the CPU meets a formula I;
A = (1 - B/C) × 100%    (formula I)
wherein, A is the utilization rate of the CPU, B is the time slice occupation times of the idle type task instance, and C is the total time slice times.
In a possible embodiment, the processing unit is specifically configured to:
when the utilization rate of the CPU is greater than a first threshold value, determining that the concurrency number of the second reference task is n1 times of the concurrency number of the original task, wherein n1 is greater than 0 and less than or equal to 1;
and when the utilization rate of the CPU is not greater than the first threshold value, determining that the concurrency number of the second reference task is m1 times of the concurrency number of the original task, wherein m1 is greater than 1.
In a possible embodiment, the receiving unit is further configured to obtain a total Java virtual machine JVM memory and a current free memory;
a processing unit, specifically configured to:
calculating the utilization rate of the memory in the first time period according to the total JVM memory and the current idle memory, wherein the utilization rate of the memory meets a second formula;
D = (1 - E/F) × 100%    (formula II)
wherein D is the utilization rate of the memory, E is the current idle memory, and F is the total JVM memory.
In a possible embodiment, the processing unit is specifically configured to:
when the memory utilization rate is greater than a third threshold value, determining that the concurrency number of the second reference task is n2 times of the concurrency number of the original task, wherein n2 is greater than 0 and less than or equal to 1;
and when the memory utilization rate is not greater than a fourth threshold value, determining that the concurrency number of the second reference task is m2 times of the concurrency number of the original task, wherein m2 is greater than 1.
The apparatus configuration shown in fig. 8 does not constitute a limitation of the task processing device, and may include more or less components than those shown, or combine some components, or a different arrangement of components.
Based on the same technical concept, an embodiment of the present invention provides a computing device, as shown in fig. 9, including at least one processor 901 and a memory 902 connected to the at least one processor, where a specific connection medium between the processor 901 and the memory 902 is not limited in the embodiment of the present invention, and the processor 901 and the memory 902 are connected through a bus in fig. 9 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present invention, the memory 902 stores instructions executable by the at least one processor 901, and by executing the instructions stored in the memory 902, the at least one processor 901 may perform the steps included in the foregoing task processing method.
The processor 901 is a control center of the terminal device, and can connect various parts of the terminal device by using various interfaces and lines, and process data by executing or executing instructions stored in the memory 902 and calling data stored in the memory 902. Optionally, the processor 901 may include one or more processing units, and the processor 901 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, and the like, and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 901. In some embodiments, the processor 901 and the memory 902 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 901 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
The memory 902, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 902 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 902 may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 902 in the embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same inventive concept, an embodiment of the present invention further provides a computer-readable non-volatile storage medium, which includes a computer-readable program. When a computer reads and executes the computer-readable program, the computer is caused to perform the task processing method in the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer programs. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing apparatus to produce a machine, such that the programs, executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer programs may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the programs stored in the computer-readable memory produce an article of manufacture including program means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer programs may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the programs that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (11)

1. A task processing method, comprising:
acquiring a first reference task concurrency number from a downstream task processing node in a first time period;
determining at least one of the utilization rate of a Central Processing Unit (CPU) and the utilization rate of a memory in the first time period;
determining a second reference task concurrency number according to at least one of the CPU utilization rate and the memory utilization rate;
adjusting the number of tokens for controlling the task concurrency number to be a first numerical value according to the first reference task concurrency number and the second reference task concurrency number;
receiving a first task processing request at a first moment, the first moment occurring after the first time period;
and when the number of the used tokens at the first moment is less than the first numerical value, allocating a first token for the first task processing request, and submitting the first task processing request to a thread pool for processing.
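The admission check in claim 1 can be illustrated with a minimal sketch. This is not the patent's actual implementation; the class and method names are hypothetical. A token is allocated only while the number of used tokens is below the current limit (the "first numerical value"):

```python
import threading

class TokenAdmission:
    """Illustrative sketch of the token check in claim 1 (names are
    hypothetical, not from the patent)."""

    def __init__(self, limit):
        self.limit = limit      # the "first numerical value"
        self.used = 0           # tokens currently in use
        self.lock = threading.Lock()

    def try_acquire(self):
        # Allocate a token only if used tokens < limit at this moment.
        with self.lock:
            if self.used < self.limit:
                self.used += 1
                return True
            return False

adm = TokenAdmission(limit=2)
print(adm.try_acquire())  # True  - first request gets a token
print(adm.try_acquire())  # True  - second request gets a token
print(adm.try_acquire())  # False - limit reached, request cannot be admitted
```

A request that receives `True` would then be submitted to the thread pool; claims 2 and 3 describe what happens when the check fails.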
2. The method of claim 1, further comprising:
receiving a second task processing request at a second moment, the second moment occurring after the first moment;
and when the number of used tokens at the second moment is not less than the first numerical value, putting the thread where the second task processing request is located to sleep, and submitting the thread to a waiting queue for processing.
3. The method of claim 2, further comprising:
after the first task processing request is submitted to the thread pool and processed, updating the first token from a used state to an available state, and reducing the number of currently used tokens by 1;
when the number of the used tokens at a third moment is smaller than the first numerical value, awakening a thread where a third task processing request in the waiting queue is located, wherein the third moment occurs after the second moment;
and distributing a token for the thread where the third task processing request is located, and submitting the third task processing request to a thread pool for processing.
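The sleep/wake cycle of claims 2 and 3 resembles a condition-variable pattern. The following sketch is an assumption about one way to realize it, not the patent's code: a request with no free token waits (the thread sleeps, conceptually joining the waiting queue), and releasing a token wakes one waiting request.

```python
import threading

class TokenPool:
    """Illustrative sketch of claims 2-3 (hypothetical names): waiting
    requests sleep on a condition; a released token wakes one of them."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self.cond = threading.Condition()

    def acquire(self):
        with self.cond:
            while self.used >= self.limit:   # no free token: sleep the thread
                self.cond.wait()             # i.e. wait in the queue
            self.used += 1                   # token moves to the used state

    def release(self):
        with self.cond:
            self.used -= 1                   # token back to available; count minus 1
            self.cond.notify()               # wake one waiting request

pool = TokenPool(limit=1)
pool.acquire()      # first request takes the only token
pool.release()      # task done: token returned, next waiter would be woken
```

Python's `threading.Condition` handles the queueing of sleeping threads internally, which corresponds to the waiting queue described in claim 2.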
4. The method of claim 1, wherein adjusting the number of tokens used to control the concurrency of tasks to a first value based on the first reference concurrency of tasks and the second reference concurrency of tasks comprises:
determining a minimum reference task concurrency number from the first reference task concurrency number and the second reference task concurrency number;
and adjusting the number of tokens for controlling the task concurrency number to be a first numerical value according to the minimum reference task concurrency number.
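The adjustment rule of claim 4 reduces to taking the smaller of the two reference concurrency numbers, so the node never exceeds what either the downstream node or local resources permit. A one-line sketch (function name is hypothetical):

```python
def adjust_token_count(first_ref, second_ref):
    """Claim 4 sketch: the token count (the 'first numerical value')
    follows the minimum of the two reference concurrency numbers."""
    return min(first_ref, second_ref)

# e.g. downstream allows 50 concurrent tasks, local CPU/memory suggest 30:
print(adjust_token_count(50, 30))  # 30
```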
5. The method of claim 1, wherein determining the usage rate of the CPU during the first time period comprises:
acquiring the time slice occupation times of different task instance types; the task instances at least comprise task instances of an idle type;
adding the time slice occupation times of each task instance type in the first time period to obtain the total time slice times in the first time period;
calculating the utilization rate of the CPU in the first time period according to the time slice occupation times and the total time slice times of the idle type task instance, wherein the utilization rate of the CPU meets a formula I;
A = 1 - B/C (formula I)
wherein A is the utilization rate of the CPU, B is the time slice occupation times of the idle type task instance, and C is the total time slice times.
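Given the variable definitions in claim 5 (B idle time slices, C total time slices), the CPU utilization computation can be sketched as follows; the dictionary keys for the task-instance types are illustrative assumptions:

```python
def cpu_usage(slice_counts):
    """Claim 5 sketch: CPU usage A = 1 - B / C, where B is the idle
    type's time-slice count and C is the total over all instance types.
    The keys ('user', 'system', 'idle') are hypothetical examples."""
    total = sum(slice_counts.values())   # C: total time slices in the period
    idle = slice_counts.get("idle", 0)   # B: idle-type time slices
    return 1 - idle / total              # A: utilization rate of the CPU

print(cpu_usage({"user": 60, "system": 20, "idle": 20}))  # 0.8
```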
6. The method of claim 5, wherein determining a second reference task concurrency number according to at least one of a CPU usage rate and a memory usage rate comprises:
when the utilization rate of the CPU is greater than a first threshold value, determining that the concurrency number of a second reference task is n1 times of the concurrency number of the original task, wherein n1 is greater than 0 and less than or equal to 1;
and when the utilization rate of the CPU is not greater than a second threshold value, determining that the concurrency number of the second reference task is m1 times of the concurrency number of the original task, wherein m1 is greater than 1.
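The threshold rule of claim 6 (and its memory counterpart in claim 8) can be sketched as a small function. All parameter names and the example threshold values are assumptions for illustration; the patent only fixes the constraints n1 ∈ (0, 1] and m1 > 1:

```python
def second_reference_concurrency(original, usage, high, low, shrink, grow):
    """Claims 6/8 sketch (hypothetical names): shrink concurrency by a
    factor in (0, 1] when usage exceeds the high threshold, grow it by a
    factor > 1 when usage is at or below the low threshold."""
    if usage > high:
        return int(original * shrink)   # n1 times the original concurrency
    if usage <= low:
        return int(original * grow)     # m1 times the original concurrency
    return original                     # between thresholds: unchanged

# 90% CPU usage with an 80% high threshold halves the concurrency:
print(second_reference_concurrency(100, 0.9, high=0.8, low=0.5,
                                   shrink=0.5, grow=1.5))  # 50
```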
7. The method of claim 1, wherein determining the usage of memory during the first time period comprises:
acquiring a total Java virtual machine JVM memory and a current idle memory;
calculating the utilization rate of the memory in the first time period according to the total JVM memory and the current idle memory, wherein the utilization rate of the memory meets a second formula;
D = 1 - E/F (formula II)
wherein D is the utilization rate of the memory, E is the current free memory, and F is the total JVM memory.
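Following the variable definitions in claim 7 (E free memory, F total JVM memory), the memory utilization computation can be sketched as below. In a Java implementation the two inputs would plausibly come from `Runtime.getRuntime().totalMemory()` and `Runtime.getRuntime().freeMemory()`; the function name here is hypothetical:

```python
def jvm_memory_usage(total_bytes, free_bytes):
    """Claim 7 sketch: memory usage D = 1 - E / F, where E is the
    current free JVM memory and F is the total JVM memory."""
    return 1 - free_bytes / total_bytes

print(jvm_memory_usage(total_bytes=1024, free_bytes=256))  # 0.75
```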
8. The method of claim 1, wherein determining a second reference task concurrency number according to at least one of a CPU usage rate and a memory usage rate comprises:
when the utilization rate of the memory is greater than a third threshold value, determining that the concurrency number of the second reference task is n2 times of the concurrency number of the original task, wherein n2 is greater than 0 and less than or equal to 1;
and when the utilization rate of the memory is not greater than a fourth threshold value, determining that the concurrency number of the second reference task is m2 times of the concurrency number of the original task, wherein m2 is greater than 1.
9. A task processing apparatus, comprising:
the receiving unit is used for acquiring a first reference task concurrency number from a downstream task processing node in a first period;
a processing unit for determining at least one of a CPU usage rate and a memory usage rate in the first period;
the processing unit is further configured to determine a second reference task concurrency number according to at least one of a utilization rate of the CPU and a memory utilization rate;
the processing unit is further configured to adjust the number of tokens used for controlling the task concurrency number to be a first numerical value according to the first reference task concurrency number and the second reference task concurrency number;
the receiving unit is further configured to receive a first task processing request at a first time, the first time occurring after the first period of time;
the processing unit is further configured to allocate a first token for the first task processing request and submit the first task processing request to a thread pool for processing when the number of used tokens at the first time is smaller than the first numerical value.
10. A computing device, comprising:
a memory for storing a computer program;
a processor for calling a computer program stored in said memory and executing the method of any one of claims 1 to 8 in accordance with the obtained program.
11. A computer-readable non-transitory storage medium including a computer-readable program which, when read and executed by a computer, causes the computer to perform the method of any one of claims 1 to 8.
CN202010534285.2A 2020-06-12 2020-06-12 Task processing method and device Pending CN111694669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010534285.2A CN111694669A (en) 2020-06-12 2020-06-12 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010534285.2A CN111694669A (en) 2020-06-12 2020-06-12 Task processing method and device

Publications (1)

Publication Number Publication Date
CN111694669A true CN111694669A (en) 2020-09-22

Family

ID=72480728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010534285.2A Pending CN111694669A (en) 2020-06-12 2020-06-12 Task processing method and device

Country Status (1)

Country Link
CN (1) CN111694669A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256432A (en) * 2020-10-29 2021-01-22 北京达佳互联信息技术有限公司 Service overload processing method and device, electronic equipment and storage medium
CN112543420A (en) * 2020-11-03 2021-03-23 深圳前海微众银行股份有限公司 Task processing method and device and server
CN112543420B (en) * 2020-11-03 2024-04-16 深圳前海微众银行股份有限公司 Task processing method, device and server
CN112882818A (en) * 2021-03-30 2021-06-01 中信银行股份有限公司 Task dynamic adjustment method, device and equipment
CN113159957A (en) * 2021-05-17 2021-07-23 深圳前海微众银行股份有限公司 Transaction processing method and device
CN114338816A (en) * 2021-12-22 2022-04-12 阿里巴巴(中国)有限公司 Concurrency control method, device, equipment and storage medium under server-free architecture

Similar Documents

Publication Publication Date Title
CN111694669A (en) Task processing method and device
US9535736B2 (en) Providing service quality levels through CPU scheduling
US8510741B2 (en) Computing the processor desires of jobs in an adaptively parallel scheduling environment
CN111767134A (en) Multitask dynamic resource scheduling method
CN107851039A (en) System and method for resource management
CN110413412B (en) GPU (graphics processing Unit) cluster resource allocation method and device
CN110308982B (en) Shared memory multiplexing method and device
US10467054B2 (en) Resource management method and system, and computer storage medium
CN109117279B (en) Electronic device, method for limiting inter-process communication thereof and storage medium
CN114265679A (en) Data processing method and device and server
CN111506398A (en) Task scheduling method and device, storage medium and electronic device
CN116048721A (en) Task allocation method and device for GPU cluster, electronic equipment and medium
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN115586961A (en) AI platform computing resource task scheduling method, device and medium
CN114936086A (en) Task scheduler, task scheduling method and task scheduling device under multi-computing center scene
CN112860387A (en) Distributed task scheduling method and device, computer equipment and storage medium
CN112817722A (en) Time-sharing scheduling method based on priority, terminal and storage medium
CN113326114A (en) Batch task processing method and device
CN115695330B (en) Scheduling system, method, terminal and storage medium for shreds in embedded system
CN109634812A (en) Process CPU usage control method, terminal device and the storage medium of linux system
US11743200B2 (en) Techniques for improving resource utilization in a microservices architecture via priority queues
CN111176848B (en) Cluster task processing method, device, equipment and storage medium
CN114661415A (en) Scheduling method and computer system
CN109062707B (en) Electronic device, method for limiting inter-process communication thereof and storage medium
CN112612583A (en) Data synchronization method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination