CN112363834A - Task processing method, device, terminal and storage medium - Google Patents

Task processing method, device, terminal and storage medium

Info

Publication number
CN112363834A
CN112363834A (application number CN202011249736.4A)
Authority
CN
China
Prior art keywords
task
processing
thread
abnormal
working thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011249736.4A
Other languages
Chinese (zh)
Inventor
侯腾飞
何冠宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202011249736.4A priority Critical patent/CN112363834A/en
Publication of CN112363834A publication Critical patent/CN112363834A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/505 Allocation of resources to service a request, the resource being a machine, considering the load
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 Techniques for rebalancing the load in a distributed system involving task migration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiments of the invention disclose a task processing method, a task processing device, a terminal and a storage medium. The method includes: receiving at least one task and generating a scheduling instance according to the at least one task; allocating a corresponding working thread to each of the at least one task based on the scheduling instance; monitoring, based on the scheduling instance, the processing progress of each working thread on its task to obtain the processing progress of each task; if it is determined from the processing progress of each task that an abnormal task exists among the at least one task, determining the load value of each working thread based on the scheduling instance and determining a target processing strategy for the abnormal task according to those load values; and processing the abnormal task according to the target processing strategy to obtain a processing result for the abnormal task. Because tasks are allocated and monitored through the generated scheduling instance and a strategy for abnormal tasks is formulated in time, task processing efficiency and fault tolerance are improved.

Description

Task processing method, device, terminal and storage medium
Technical Field
The present invention relates to the field of load scheduling technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for task processing.
Background
A working thread is the smallest unit on which an operating system can perform computational scheduling. A working thread is a single sequential flow of control, and a terminal (a mobile phone, a computer, a host and the like) can contain multiple working threads. When the terminal receives a task, it can allocate the task to a working thread so that the working thread processes the task and produces a task processing result.
At present, when the terminal receives tasks, it allocates one working thread to each task, so N tasks require N threads. Because the number of threads in the terminal is limited, when there are many tasks the terminal cannot allocate enough threads, and some tasks are not processed in time. Running many threads for a long time also increases the load on the terminal and easily causes thread congestion when an abnormal task appears, so task processing efficiency is low.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, a terminal, and a storage medium for task processing, which can implement allocation and monitoring of tasks based on a generated scheduling instance, and improve efficiency and fault tolerance of task processing.
In one aspect, an embodiment of the present invention provides a task processing method, where the method includes:
receiving at least one task and generating a scheduling instance according to the at least one task;
distributing a corresponding working thread for each task in the at least one task based on the scheduling instance, so that each working thread processes the corresponding task in the at least one task;
monitoring the processing progress of each working thread to the tasks based on the scheduling instance to obtain the processing progress aiming at each task;
if the abnormal task exists in the at least one task based on the processing progress of each task, determining the load value of each working thread based on the scheduling instance, and determining a target processing strategy aiming at the abnormal task according to the load value of each working thread;
and processing the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task.
In one aspect, an embodiment of the present invention provides a task processing device, where the task processing device includes:
a receiving module for receiving at least one task;
the generating module is used for generating a scheduling instance according to the at least one task;
the allocation module is used for allocating a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes the corresponding task in the at least one task;
the monitoring module is used for monitoring the processing progress of each working thread on the tasks based on the scheduling instance to obtain the processing progress aiming at each task;
a determining module, configured to determine, if it is determined that an abnormal task exists in the at least one task based on the processing progress of each task, a load value of each worker thread based on the scheduling instance, and determine a target processing policy for the abnormal task according to the load value of each worker thread;
and the processing module is used for processing the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task.
In one aspect, an embodiment of the present invention provides a terminal, including a processor, an input interface, an output interface, and a memory, where the processor, the input interface, the output interface, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the task processing method.
In one aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the task processing method.
In the embodiment of the invention, a terminal receives at least one task and generates a scheduling instance according to the at least one task, the terminal allocates a corresponding working thread for each task in the at least one task based on the scheduling instance and monitors the processing progress of each working thread on the task based on the scheduling instance to obtain the processing progress aiming at each task; if the abnormal task exists in at least one task based on the processing progress of each task, the terminal determines the load value of each working thread based on the scheduling instance and determines a target processing strategy for the abnormal task according to the load value of each working thread; and the terminal processes the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task. The tasks are distributed and monitored through the generated scheduling examples, and the abnormal tasks are processed by making a strategy in time, so that the task processing efficiency and the fault tolerance are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a task processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another task processing method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The task processing method provided by the embodiment of the invention is realized on a terminal, and the terminal comprises electronic equipment such as a smart phone, a tablet computer, a digital audio and video player, an electronic reader, a handheld game machine or vehicle-mounted electronic equipment.
Fig. 1 is a schematic flowchart of a task processing method in an embodiment of the present invention, and as shown in fig. 1, a flowchart of the task processing method in this embodiment may include:
s101, the terminal receives at least one task and generates a scheduling instance according to the at least one task.
In the embodiment of the invention, the terminal can receive at least one task, and each task can be generated based on the operation input by the user in the terminal, or can be sent by other equipment, and the terminal receives the task.
Further, the terminal can store the at least one task in a database, where the at least one task may be stored in the form of a queue, for example a Kafka queue. It should be noted that each of the at least one task may correspond to a different type; after receiving each task, the terminal may determine the type corresponding to the task, determine a storage area in the database for the task based on its type, and store the task in that storage area, so that the scheduling instance acquires the at least one task from the database and completes its allocation. The type of a task may be a calculation type, a query type or a detection type. The terminal may determine the type of a task based on keywords in the task (for example, if the keyword "calculation" appears in the task, the task is determined to be of the calculation type), based on the format of the data in the task, or based on a type flag carried in the received task. Optionally, the type of a task may also be determined based on the number of detachable nodes in the task, where splitting the task at one detachable node yields two sub-tasks of the task. Storing tasks in separate areas by type allows the corresponding task to be fetched directly from the corresponding area once its processing priority and processing mode have been determined, which improves task acquisition efficiency.
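As a concrete illustration of the keyword-based type determination and the per-type storage areas described above, the following Python sketch is not taken from the patent; the keyword table, the type names and the in-memory queues standing in for database storage areas are assumptions.

```python
from collections import defaultdict
from queue import Queue
from typing import Optional

# Hypothetical keyword-to-type mapping; the text only names calculation,
# query and detection types, so the exact keywords are assumptions.
TYPE_KEYWORDS = {"calculate": "calculation", "retrieve": "query", "detect": "detection"}

# One in-memory queue per storage area, standing in for the per-type areas
# of the database (the text mentions a Kafka-style queue).
storage_areas = defaultdict(Queue)

def classify_task(task_text: str, type_flag: Optional[str] = None) -> str:
    """Determine the task type from an explicit flag or from keywords in the task."""
    if type_flag:                      # the received task already carries a type flag
        return type_flag
    lowered = task_text.lower()
    for keyword, task_type in TYPE_KEYWORDS.items():
        if keyword in lowered:
            return task_type
    return "default"

def store_task(task_text: str, type_flag: Optional[str] = None) -> str:
    """Route the task to the storage area that matches its type."""
    task_type = classify_task(task_text, type_flag)
    storage_areas[task_type].put(task_text)
    return task_type

print(store_task("calculate monthly totals"))   # -> calculation
print(store_task("retrieve policy record 42"))  # -> query
```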
Further, the terminal may generate a scheduling instance based on the at least one task. The scheduling instance may occupy one working thread in the terminal and is specifically configured to allocate the at least one task, so that each working thread in the terminal receives a corresponding task and processes it. Each working thread may have a different load capacity: a working thread with a high load capacity may process a task with a large task amount, or simultaneously process several tasks with small task amounts.
The terminal may generate the scheduling instance as follows: the terminal obtains the number of tasks stored in the database, selects from the at least one working thread, based on a correspondence between task counts and working threads, the working thread corresponding to that number, and establishes the scheduling instance in that working thread. The larger the number of tasks, the higher the load capacity of the selected working thread; that is, when there are many tasks the scheduling instance is generated in a working thread with a larger load capacity to complete the allocation, and when there are few tasks it is generated in a working thread with a smaller load capacity. Selecting the working thread in this way allows the performance of the working threads in the terminal to be fully exploited.
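The count-to-capacity selection rule described above might be sketched as follows; the capacity figures and the thresholds that map task counts to worker threads are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class WorkerThread:
    name: str
    load_capacity: int   # assumed relative capacity units

# Assumed pool of worker threads ordered by capacity.
WORKERS = [
    WorkerThread("worker-small", 10),
    WorkerThread("worker-medium", 50),
    WorkerThread("worker-large", 200),
]

def select_scheduling_thread(task_count: int) -> WorkerThread:
    """Pick the worker thread that will host the scheduling instance:
    the more queued tasks there are, the higher-capacity thread is chosen."""
    if task_count <= 10:
        return WORKERS[0]
    if task_count <= 100:
        return WORKERS[1]
    return WORKERS[2]

print(select_scheduling_thread(3).name)     # worker-small
print(select_scheduling_thread(150).name)   # worker-large
```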
S102, the terminal allocates a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes its corresponding task in the at least one task.
In the embodiment of the invention, after the terminal generates the scheduling instance according to the at least one task, it can allocate a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes its corresponding task.
In one implementation, different types of tasks have different storage areas in the database, and each storage area may correspond to one or more worker threads. The specific way that the terminal allocates the corresponding work thread to each task in the at least one task based on the scheduling instance may be that the terminal acquires a target task stored in a target storage area in the database, and allocates the target task to a target work thread corresponding to the target storage area, so that the target work thread processes the target task stored in the target storage area, for example, if a storage area 1 in the database corresponds to a work thread 1 and a storage area 1 stores a task 1, the scheduling instance may acquire the task 1 from the storage area 1 and allocate the task 1 to the work thread 1, so that the work thread 1 processes the task 1.
In one implementation manner, the terminal may allocate a corresponding working thread to each task in the at least one task based on the scheduling instance as follows. The terminal obtains the residual load value of each of the at least one working thread and preprocesses the at least one task according to those residual load values to obtain a plurality of preprocessing tasks, where the task amount of each preprocessing task matches the residual load value of a working thread and the preprocessing includes splitting processing or merging processing. The terminal then determines the working thread corresponding to each preprocessing task on the principle that the task amount matches the residual load value, and allocates each of the plurality of preprocessing tasks to its corresponding working thread based on the scheduling instance. For example, if the residual load value of working thread 1 is 100, the task amount of task 1 is 50 and the task amount of task 2 is 40, the terminal may merge task 1 and task 2 and allocate the merged task to working thread 1. When the task amount is smaller than the residual load value and the difference between the residual load value and the task amount is smaller than a preset difference, the residual load value and the task amount can be considered to match.
It should be noted that the terminal may preprocess the at least one task according to the residual load values of the working threads as follows. The terminal obtains a target working thread and its target residual load value and detects whether any task has a task amount that matches the target residual load value. If such a target task amount exists, the corresponding task is determined to be the task for the target working thread. If not, the terminal either screens out a plurality of tasks from the at least one task and merges them so that the merged task amount matches the target residual load value, determining the merged tasks as the preprocessing task for the target working thread; or the terminal screens out one task from the at least one task and splits it so that the task amount of a target sub-task among the split sub-tasks matches the target residual load value, determining the target sub-task as the preprocessing task for the target working thread. The target working thread may be any working thread in the at least one working thread.
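The matching, merging and splitting logic of the two preceding paragraphs could look roughly like the sketch below; the tolerance used to decide that a task amount "matches" a residual load value, and the rule of splitting the largest task, are assumptions.

```python
def matches(task_amount: int, remaining_load: int, tolerance: int = 10) -> bool:
    """A task amount matches a residual load value when it fits and the
    leftover capacity is below a preset difference (here: tolerance)."""
    return task_amount <= remaining_load and remaining_load - task_amount < tolerance

def preprocess_for_worker(remaining_load: int, task_amounts: list):
    """Return ('single' | 'merge' | 'split', payload) for one target worker thread."""
    # 1) Look for a single task whose amount matches the remaining load.
    for amount in task_amounts:
        if matches(amount, remaining_load):
            return "single", [amount]
    # 2) Try to merge several small tasks up to the remaining load.
    merged, total = [], 0
    for amount in sorted(task_amounts):
        if total + amount <= remaining_load:
            merged.append(amount)
            total += amount
    if matches(total, remaining_load) and len(merged) > 1:
        return "merge", merged
    # 3) Otherwise split one large task so that a sub-task matches the load.
    largest = max(task_amounts)
    return "split", [remaining_load, largest - remaining_load]

print(preprocess_for_worker(100, [50, 40, 95]))   # ('single', [95])
print(preprocess_for_worker(100, [50, 45, 300]))  # ('merge', [45, 50])
```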
In one implementation manner, the terminal may allocate a corresponding working thread to each task in the at least one task based on the scheduling instance as follows: the terminal determines a first number of idle working threads and a second number of the at least one task based on the scheduling instance. If the first number is larger than the second number, M tasks are screened from the at least one task and split to obtain the first number of split tasks, which are allocated to the corresponding idle working threads based on the scheduling instance. If the first number is smaller than the second number, K tasks are screened from the at least one task and merged to obtain the first number of merged tasks, which are allocated to the corresponding idle working threads based on the scheduling instance. K and M are positive integers, and the load value of an idle working thread is smaller than a preset load value.
Specifically, the terminal determines the number of idle working threads based on the scheduling instance, determines a preprocessing strategy for the at least one task according to that number, and preprocesses the at least one task according to the preprocessing strategy to obtain a plurality of preprocessing tasks; the terminal then allocates each of the plurality of preprocessing tasks to the corresponding working thread based on the scheduling instance. The preprocessing strategy includes merging processing or splitting processing, and the load value of an idle working thread is smaller than the preset load value. The terminal may determine the preprocessing strategy by comparing the number of idle working threads with the number of the at least one task. If the number of idle working threads is larger than the number of tasks, the preprocessing strategy is to split the M tasks with the largest task amounts so that the number of preprocessed tasks obtained by splitting equals the number of idle working threads, where M may be the difference between the number of idle working threads and the number of tasks. If the number of idle working threads is smaller than the number of tasks, the preprocessing strategy is to merge the K tasks with the smallest task amounts so that the number of preprocessed tasks obtained by merging equals the number of idle working threads, where K may be the difference between the number of tasks and the number of idle working threads. Further, the scheduling instance may allocate each of the plurality of preprocessing tasks to a corresponding working thread based on a load balancing principle, that is, a principle that keeps the load values of the working threads relatively balanced. Here, the load value may represent the ratio between the current load and the maximum load, which may also be called the load rate. Load balancing principles may include, but are not limited to: keeping the ratio of the current load value to the load capacity of each working thread below a preset threshold, which can be set according to actual conditions, such as 50% or 60%; or keeping the ratios of the current load value to the load capacity of the working threads approximately equal; and so on.
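A compact sketch of the first-number/second-number comparison described above; representing tasks by their task amounts, splitting the largest task in half and merging the two smallest tasks are simplifications made only for illustration.

```python
def rebalance_tasks(idle_workers: int, tasks: list) -> list:
    """Split or merge tasks until their number equals the number of idle workers.

    If there are more idle workers than tasks, the largest tasks are split;
    if there are fewer, the smallest tasks are merged.
    """
    tasks = sorted(tasks)
    while len(tasks) < idle_workers:          # first number > second number: split
        largest = tasks.pop()
        half = largest // 2
        tasks += [half, largest - half]
        tasks.sort()
    while len(tasks) > idle_workers:          # first number < second number: merge
        a, b = tasks.pop(0), tasks.pop(0)
        tasks.append(a + b)
        tasks.sort()
    return tasks

def assign(tasks: list, workers: list) -> dict:
    """Give one rebalanced task to each idle worker (a simple stand-in for
    the load-balancing principle described in the text)."""
    return dict(zip(workers, sorted(tasks, reverse=True)))

workers = ["worker-1", "worker-2", "worker-3"]
print(assign(rebalance_tasks(len(workers), [120, 30, 25, 20]), workers))
# {'worker-1': 120, 'worker-2': 45, 'worker-3': 30}
```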
S103, the terminal monitors the processing progress of each working thread on the at least one task based on the scheduling instance to obtain the processing progress of each task.
In the embodiment of the invention, after the terminal allocates a corresponding working thread to each task in the at least one task based on the scheduling instance, it can monitor the processing progress of each working thread on the at least one task based on the scheduling instance, so as to obtain the processing progress of each task.
In a specific implementation, when a monitoring condition is met the terminal calls the scheduling instance to send a progress query instruction to each working thread; the progress query instruction is used to acquire the processing progress of each working thread on the at least one task. The terminal then calls the scheduling instance to receive the progress information returned by each working thread, updates the working thread state table recorded in the scheduling instance based on the progress information (the state table records the current processing progress of each working thread on its task), and queries the processing progress of each working thread on the at least one task from the updated working thread state table. The monitoring condition can be considered met when a monitoring period arrives, that is, the terminal periodically calls the scheduling instance to send the progress query instruction to each working thread. The monitoring period may be preset by developers, or determined by the number of working threads currently processing tasks: the two may be in an inverse relationship, so that the more working threads are currently processing tasks, the shorter the monitoring period, and the fewer working threads are processing tasks, the longer the monitoring period. In this way the processing progress of tasks can be monitored in time while avoiding a waste of monitoring resources. Alternatively, the terminal may determine that the monitoring condition is met when it receives a monitoring instruction. The progress information returned by each working thread specifically indicates the processing time, the amount of tasks processed, the amount of tasks remaining, the processing progress, and so on. The working thread state table may specifically record the current processing progress of each task in the at least one task, where the processing progress includes the task completion rate and the processing duration. For example, if the at least one task includes task 1, task 2 and task 3, task 1 and task 2 are merged into preprocessing task 1, preprocessing task 1 is processed by working thread 1 and task 3 is processed by working thread 2, then working thread 1 may return a processing progress of 45% and 20 seconds for task 1 and 50% and 15 seconds for task 2, and working thread 2 may return a processing progress of 80% and 15 seconds for task 3. The scheduling instance updates the working thread state table based on this information, and the updated working thread state table is shown in Table 1.
TABLE 1
Work thread | Task | Task completion rate | Processing duration
Work thread 1 | Task 1 | 45% | 20 seconds
Work thread 1 | Task 2 | 50% | 15 seconds
Work thread 2 | Task 3 | 80% | 15 seconds
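The progress-query and state-table update cycle, including a monitoring period that shrinks as more working threads are busy, might be sketched as follows; the period bounds and the shape of the progress records are assumptions.

```python
# The worker thread state table kept by the scheduling instance:
# task name -> (completion rate, processing duration in seconds).
state_table = {}

def monitoring_period(busy_workers: int, base: float = 10.0, floor: float = 1.0) -> float:
    """More busy worker threads -> shorter monitoring period (inverse relation)."""
    return max(floor, base / max(1, busy_workers))

def monitor_once(workers: dict) -> None:
    """Send a progress query to every worker thread and update the state table."""
    for worker in workers.values():
        # A real worker would report processing time, processed amount and
        # remaining amount; here a worker is a callable returning its progress.
        for task, completion, duration in worker():
            state_table[task] = (completion, duration)

# Simulated worker threads returning (task, completion rate, duration) tuples.
workers = {
    "worker-1": lambda: [("task 1", 0.45, 20), ("task 2", 0.50, 15)],
    "worker-2": lambda: [("task 3", 0.80, 15)],
}
monitor_once(workers)
print(state_table)
print("next poll in", monitoring_period(busy_workers=len(workers)), "s")
```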
S104, if the abnormal task exists in at least one task based on the processing progress of each task, the terminal determines the load value of each working thread based on the scheduling example, and determines a target processing strategy aiming at the abnormal task according to the load value of each working thread.
In the embodiment of the invention, after the terminal obtains the processing progress aiming at each task, whether an abnormal task exists in at least one task can be detected based on the processing progress of each task, if the abnormal task exists, the load value of each working thread is determined based on the scheduling example, and the target processing strategy aiming at the abnormal task is determined according to the load value of each working thread.
Specifically, an abnormal task may be determined as follows: the terminal obtains the task type of a target task and determines the average processing duration corresponding to that task type, where the target task is any one of the at least one task; the terminal calculates the difference between the target processing duration for the target task indicated in the processing progress and the average processing duration, and if the difference is greater than a preset difference, the target task is determined to be an abnormal task. Alternatively, the terminal acquires the historically recorded working thread state tables at different time nodes and determines that the target task is abnormal when the processing progress for the target task recorded multiple times in the working thread state table is the same, the abnormality specifically being that processing of the target task has stalled. Or the terminal acquires from the history the average processing duration of other tasks of the same type as the target task, detects the processing progress of the target task when that average processing duration has elapsed, and determines that the target task is abnormal if the progress is smaller than a preset progress. The target task may be any one or more of the at least one task.
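The two abnormality checks described above (a processing duration far beyond the average for the task type, and progress that has not advanced over several monitoring rounds) could be sketched like this; the average durations, the preset difference and the record layout are assumptions.

```python
AVERAGE_DURATION = {"calculation": 30.0, "query": 5.0}   # assumed per-type averages (seconds)
PRESET_DIFFERENCE = 20.0                                  # assumed threshold (seconds)

def abnormal_by_duration(task_type: str, processing_duration: float) -> bool:
    """Abnormal if the task has run far longer than the average for its type."""
    average = AVERAGE_DURATION.get(task_type, 30.0)
    return processing_duration - average > PRESET_DIFFERENCE

def abnormal_by_stalled_progress(progress_history: list, rounds: int = 3) -> bool:
    """Abnormal if the recorded completion rate has not changed for several rounds."""
    recent = progress_history[-rounds:]
    return len(recent) == rounds and len(set(recent)) == 1

print(abnormal_by_duration("query", processing_duration=40.0))   # True
print(abnormal_by_stalled_progress([0.42, 0.45, 0.45, 0.45]))    # True (stalled)
print(abnormal_by_stalled_progress([0.42, 0.45, 0.47, 0.50]))    # False
```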
Further, the terminal determines the load value of each working thread based on the scheduling instance, in a specific implementation, the terminal may obtain the load value of each working thread from a dynamic load table recorded in the scheduling instance, and the scheduling instance may manage and maintain a dynamic load table, which is shown in table 2.
TABLE 2
Work thread | Load value
Work thread 1 | 45%
Work thread 2 | 50%
Work thread N | 60%
As shown in Table 2 above, the dynamic load table records the load value of each working thread, and the data in Table 2 change in real time. In one embodiment, each working thread reports its load value to the scheduling instance at regular intervals (every 2 seconds, every 5 seconds, and so on), and the scheduling instance determines whether the load value of each working thread has changed and, if so, updates Table 2 in real time according to the reported data. In another embodiment, when a working thread's self-check finds that its load value has changed, it reports the changed load value to the scheduling instance, and the scheduling instance updates Table 2 in real time according to the reported data.
Further, a target processing policy for the abnormal task is determined according to the load value of each working thread, where the target processing policy may specifically indicate the target working thread(s) used to process the abnormal task. In one implementation the number of target working threads is 1: the terminal selects the working thread with the smallest load value as the target working thread, and the target processing policy is to allocate the abnormal task to that target working thread for processing. Optionally, there may be multiple target working threads, namely the working threads whose load values are smaller than a preset load value; in this case the target processing policy may be to split the abnormal task into N abnormal sub-tasks and allocate each abnormal sub-task to a corresponding target working thread.
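A sketch of the two policy shapes named above: reassign the abnormal task to the least-loaded working thread, or split it across every working thread whose load value is below a preset value. The preset load value and the returned policy structure are assumptions.

```python
def choose_policy(loads: dict, preset_load: float = 0.6) -> dict:
    """Return a target processing policy for an abnormal task.

    loads maps worker thread name -> current load value (0.0 - 1.0).
    """
    lightly_loaded = [w for w, load in loads.items() if load < preset_load]
    if len(lightly_loaded) <= 1:
        # Single target: the worker thread with the smallest load value.
        target = min(loads, key=loads.get)
        return {"type": "reassign", "targets": [target]}
    # Multiple targets: split the abnormal task into one sub-task per thread.
    return {"type": "split", "targets": lightly_loaded}

loads = {"worker-1": 0.45, "worker-2": 0.50, "worker-3": 0.90}
print(choose_policy(loads))
# {'type': 'split', 'targets': ['worker-1', 'worker-2']}
```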
And S105, the terminal processes the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task.
In the embodiment of the invention, after the terminal determines the target processing strategy for the abnormal task, the abnormal task can be processed according to the target processing strategy to obtain the processing result for the abnormal task.
Specifically, when the target processing strategy is to split the abnormal task into N abnormal sub-tasks and allocate them to N corresponding target working threads, the terminal executes the strategy by allocating each abnormal sub-task to its corresponding target working thread, so that each working thread processes the allocated abnormal sub-task and returns the processing result for that sub-task to the scheduling instance. The terminal obtains the processing result for each abnormal sub-task from the scheduling instance and detects whether every sub-task has a corresponding result. If so, the processing results of the abnormal sub-tasks are merged to obtain the processing result for the abnormal task. If not, the terminal determines the target abnormal sub-task that returned no result and determines that this sub-task is the cause of the task processing abnormality; when the target abnormal sub-task can be further split, it is split again and dispatched to working threads for processing. This provides a feedback mechanism for the abnormal task, obtains a processing result for the abnormal task as far as possible, and allows the cause of the processing abnormality to be located at the finer granularity of the abnormal sub-tasks, so that the terminal can correct errors intelligently and process other tasks in time after the abnormal task is handled, improving task processing efficiency.
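The execution path described above, splitting the abnormal task, collecting per-sub-task results and either merging them or pinpointing the failing sub-task, might look like the sketch below; a thread pool stands in for the working threads and the result shapes are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def process_subtask(subtask: str):
    """Stand-in worker: 'fails' (returns None) for a sub-task marked bad."""
    return None if "bad" in subtask else f"result({subtask})"

def handle_abnormal_task(subtasks: list) -> dict:
    """Dispatch sub-tasks to worker threads, then merge results or report the
    sub-task that did not return a result (the cause of the abnormality)."""
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = dict(zip(subtasks, pool.map(process_subtask, subtasks)))
    failed = [s for s, r in results.items() if r is None]
    if not failed:
        return {"status": "ok", "result": [results[s] for s in subtasks]}
    return {"status": "abnormal", "cause": failed}

print(handle_abnormal_task(["sub-1", "sub-2"]))
print(handle_abnormal_task(["sub-1", "bad-sub-2"]))
```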
In the embodiment of the invention, a terminal receives at least one task and generates a scheduling instance according to the at least one task, the terminal allocates a corresponding working thread for each task in the at least one task based on the scheduling instance and monitors the processing progress of each working thread on the task based on the scheduling instance to obtain the processing progress aiming at each task; if the abnormal task exists in at least one task based on the processing progress of each task, the terminal determines the load value of each working thread based on the scheduling instance and determines a target processing strategy for the abnormal task according to the load value of each working thread; and the terminal processes the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task. The tasks are distributed and monitored through the generated scheduling examples, and the abnormal tasks are processed by making a strategy in time, so that the task processing efficiency and the fault tolerance are improved.
Fig. 2 is a schematic flowchart of a task processing method in an embodiment of the present invention, and as shown in fig. 2, the flowchart of the task processing method in this embodiment may include:
s201, the terminal receives at least one task and generates a scheduling instance according to the at least one task.
In the embodiment of the present invention, a terminal may receive at least one task, where each task may be generated based on an operation input by a user in the terminal, or may also be sent by another terminal, and the terminal receives the task.
S202, the terminal allocates a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes its corresponding task in the at least one task.
In the embodiment of the invention, after the terminal generates the scheduling instance according to the at least one task, it can allocate a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes its corresponding task.
In an implementation scenario, the terminal may start a work thread pool in the scheduling instance, extract a task to be processed from the database, place the task into the work thread pool, and further allocate a corresponding work thread to the task in the work thread pool for processing. The terminal can screen out a corresponding number of working threads according to the number of tasks to be processed, and the rest of the working threads are placed in the working thread pool in a waiting state without occupying CPU resources.
In an implementation scenario, the scheduling instance determines the allocation manner for the tasks in the database based on the current running state of the terminal, where the running state may include the number of currently running working threads. Specifically, when the number of working threads running in the terminal is greater than a first preset number, that is, when the terminal is busy, the scheduling instance may detect whether there are tasks in the database that can be merged, merge them, and allocate the merged task to an idle working thread when one becomes available, so that one working thread can process multiple tasks and the utilization rate of the working threads is improved; tasks may be merged, for example, when they have the same task type or the same task source. When the number of working threads running in the terminal is smaller than a second preset number, that is, when the terminal is idle, the scheduling instance may detect whether there is a task in the database that can be split, split it into a plurality of sub-tasks, and allocate one working thread to each sub-task, so that multiple working threads process the task jointly and task processing efficiency is improved. When the number of running working threads is between the first preset number and the second preset number, that is, in the normal state, a running time can be configured for each task in the database individually, and the scheduling instance allocates a working thread to a task when its running time arrives so as to process it.
In an implementation scenario, different types of tasks occupy a working thread for different durations (i.e., have different task processing durations). Therefore, when the scheduling instance allocates working threads to the tasks stored in the database, it may also allocate them based on task type. For example, if the average working-thread occupation duration of a query-type task is 1 s and that of a computation-type task is 10 s, then when the scheduling instance detects that both query-type and computation-type tasks exist in the database, it preferentially allocates working threads to the query-type tasks, so that query-type tasks receive feedback first and their waiting time is reduced. The task type can be determined from a special field in the task instruction: if the instruction contains "retrieve", the task is determined to be of the query type, and if it contains "calculate", the task is determined to be of the calculation type. The working-thread occupation duration of each type can be determined from the historically recorded occupation durations of tasks of that type. Optionally, in order to keep the occupation duration of each working thread relatively balanced, the scheduling instance may detect the task type of each task in the database, split the tasks whose type corresponds to a long working-thread occupation duration into a plurality of sub-tasks, allocate a corresponding working thread to each sub-task, and merge the processing results of the working threads as the processing result of the task. This method can reduce the time for which the terminal keeps working threads running.
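The type-aware ordering described here, dispatching task types with shorter average working-thread occupation first, could be sketched as follows; the duration figures and the "retrieve"/"calculate" keywords come from the text, everything else is an assumption.

```python
import heapq

# Assumed average worker-thread occupation duration per task type, in seconds.
AVG_OCCUPATION = {"query": 1.0, "calculation": 10.0}

def task_type(instruction: str) -> str:
    if "retrieve" in instruction:
        return "query"
    if "calculate" in instruction:
        return "calculation"
    return "other"

def dispatch_order(instructions: list) -> list:
    """Order pending tasks so that short-occupation types are allocated first."""
    heap = [(AVG_OCCUPATION.get(task_type(i), 5.0), n, i) for n, i in enumerate(instructions)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

pending = ["calculate premiums", "retrieve record 7", "retrieve record 9"]
print(dispatch_order(pending))
# ['retrieve record 7', 'retrieve record 9', 'calculate premiums']
```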
S203, the terminal monitors the processing progress of each working thread on the at least one task based on the scheduling instance to obtain the processing progress of each task.
In the embodiment of the invention, after the terminal allocates a corresponding working thread to each task in the at least one task based on the scheduling instance, it can monitor the processing progress of each working thread on the at least one task based on the scheduling instance, so as to obtain the processing progress of each task. When a monitoring condition is met, the terminal can call the scheduling instance to send a progress query instruction to each working thread; the progress query instruction is used to acquire the processing progress of each working thread on the at least one task. The terminal then calls the scheduling instance to receive the progress information returned by each working thread, updates the working thread state table recorded in the scheduling instance based on the progress information (the state table records the current processing progress of each working thread on its task), and queries the processing progress of each working thread on the at least one task from the updated working thread state table.
S204, if the abnormal task exists in at least one task based on the processing progress of each task, the terminal determines the load value of each working thread based on the scheduling example, and determines a target processing strategy aiming at the abnormal task according to the load value of each working thread.
In an implementation manner, an abnormal task may be determined as follows: the terminal obtains the task type of a target task and determines the average processing duration corresponding to that task type, where the target task is any one of the at least one task; the terminal calculates the difference between the target processing duration for the target task indicated in the processing progress and the average processing duration, and if the difference is greater than a preset difference, the target task is determined to be an abnormal task. Alternatively, the terminal acquires the historically recorded working thread state tables at different time nodes and determines that the target task is abnormal when the progress recorded for it at multiple time nodes has not advanced.
In one implementation, the task may also be determined to be abnormal based on the feedback of the worker threads, for example, when one worker thread fails to process the task, the task is determined to be abnormal, or when a plurality of different worker threads fail to process one task (i.e., the number of failures is greater than N times), the task is determined to be abnormal. In other implementation manners, the CPU occupancy rate, the hard disk index, the memory variation range, and the like of the working thread during processing the task may also be monitored to determine whether the task is an abnormal task.
Further, the terminal can determine a target processing strategy for the abnormal task according to the load value of each working thread. In one implementation, the historically recorded load values of the working threads and the corresponding historical processing strategies are stored in the block-chain network; the terminal can find a matching historical load value from the block chain based on the load value of each working thread and determine the historical processing strategy corresponding to that historical load value as the target processing strategy for the abnormal task. The target processing strategy may include calling a working thread again to process the abnormal task, splitting the abnormal task and then calling working threads to process it, merging the abnormal task with other tasks and then calling a working thread to process it, and so on. Optionally, if the terminal does not find a matching historical load value in the block chain, the terminal may obtain the working threads whose load values are smaller than a preset load value and set the target processing policy to splitting the abnormal task into a plurality of abnormal sub-tasks and allocating them to those working threads, so that each working thread whose load value is smaller than the preset load value processes an abnormal sub-task.
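The lookup of a historical processing strategy by load profile could be sketched as follows; the in-memory records and the distance-based matching stand in for the block-chain query the text describes and are assumptions.

```python
# Assumed records of (historical load profile, strategy) as they might be read
# back from the block chain; real entries would come from stored blocks.
HISTORY = [
    ({"worker-1": 0.45, "worker-2": 0.50}, "reassign-to-least-loaded"),
    ({"worker-1": 0.90, "worker-2": 0.85}, "split-into-subtasks"),
]

def find_policy(current: dict, max_distance: float = 0.1):
    """Return the historical strategy whose load profile is closest to the
    current one, or None if nothing matches within max_distance."""
    best, best_dist = None, None
    for profile, strategy in HISTORY:
        shared = set(profile) & set(current)
        if not shared:
            continue
        dist = sum(abs(profile[w] - current[w]) for w in shared) / len(shared)
        if best_dist is None or dist < best_dist:
            best, best_dist = strategy, dist
    return best if best_dist is not None and best_dist <= max_distance else None

print(find_policy({"worker-1": 0.47, "worker-2": 0.52}))   # reassign-to-least-loaded
print(find_policy({"worker-1": 0.10, "worker-2": 0.10}))   # None (fall back to splitting)
```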
And S205, the terminal processes the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task.
In one implementation mode, the terminal detects whether the abnormal task can be split. If it cannot be split, a result indicating that the processing is abnormal is fed back. If it can be split, the abnormal task is split into at least one abnormal sub-task and a working thread is allocated to each abnormal sub-task for processing. The scheduling instance receives the processing result of each working thread for its abnormal sub-task: if the result of every abnormal sub-task indicates success, the results of the abnormal sub-tasks are merged to obtain the processing result for the abnormal task; if some abnormal sub-task was not processed successfully, the terminal determines the target abnormal sub-task that has no processing result and determines that it is the cause of the task processing abnormality. This provides a feedback mechanism for the abnormal task, obtains a processing result for the abnormal task as far as possible, and allows the cause of the processing abnormality to be located at the finer granularity of the abnormal sub-tasks, so that the terminal can correct errors intelligently and process other tasks in time after the abnormal task is handled, improving task processing efficiency.
S206, the terminal broadcasts the load value and the target processing strategy of each working thread in the block chain, so that each node in the block chain performs consensus check on the load value and the target processing strategy of each working thread.
In the embodiment of the present invention, the terminal may be a node in the block chain, and has a capability of broadcasting a message in the block chain, and after the terminal acquires the load value and the target processing policy of each working thread, the terminal may establish a correspondence between the load value and the target processing policy of each working thread, and further broadcast the load value and the target processing policy of each working thread in the block chain, so that each node in the block chain performs consensus check on the load value and the target processing policy of each working thread.
And S207, if the result that the consensus check passes is received, packaging the load value and the target processing strategy of each working thread into a block by the terminal.
In the embodiment of the invention, after the terminal broadcasts the load value of each working thread and the target processing strategy in the block chain, each node in the block chain can perform a consensus check on the received broadcast information and return a consensus check result to the terminal. If the terminal receives a result indicating that the consensus check has passed, the terminal packs the load value of each working thread and the target processing strategy into a block. The terminal determines that the consensus check has passed when the number of nodes whose returned consensus check results indicate that the check passed is greater than a preset number.
And S208, the terminal stores the blocks into the block chain, so that the nodes in the block chain refer to the load values of the working threads and the target processing strategies.
In the embodiment of the present invention, after the terminal packs the load value and the target processing policy of each worker thread into a block, the block may be stored in a block chain, so that a node in the block chain refers to the load value and the target processing policy of each worker thread. By the method, when other terminals meet the same load condition, the corresponding abnormal task processing strategy can be inquired from the block chain, the abnormal task is processed in time, and the processing efficiency of the abnormal task is improved.
In the embodiment of the invention, a terminal receives at least one task and generates a scheduling instance according to the at least one task, the terminal allocates a corresponding working thread for each task in the at least one task based on the scheduling instance and monitors the processing progress of each working thread on the task based on the scheduling instance to obtain the processing progress aiming at each task; if the abnormal task exists in at least one task based on the processing progress of each task, the terminal determines the load value of each working thread based on the scheduling instance and determines a target processing strategy for the abnormal task according to the load value of each working thread; and the terminal processes the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task. The tasks are distributed and monitored through the generated scheduling examples, and the abnormal tasks are processed by making a strategy in time, so that the task processing efficiency and the fault tolerance are improved.
A task processing device according to an embodiment of the present invention will be described in detail with reference to fig. 3. It should be noted that the task processing device shown in fig. 3 is used for executing the method according to the embodiment of the present invention shown in fig. 1-2, and for convenience of description, only the portion related to the embodiment of the present invention is shown, and details of the specific technology are not disclosed, and reference is made to the embodiment of the present invention shown in fig. 1-2.
Referring to fig. 3, which is a schematic structural diagram of a task processing device according to the present invention, the task processing device 30 may include: a receiving module 301, a generating module 302, an assigning module 303, a monitoring module 304, a determining module 305 and a processing module 306.
A receiving module for receiving at least one task;
the generating module is used for generating a scheduling instance according to the at least one task;
the allocation module is used for allocating a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes the corresponding task in the at least one task;
the monitoring module is used for monitoring the processing progress of each working thread on the tasks based on the scheduling instance to obtain the processing progress aiming at each task;
a determining module, configured to determine, if it is determined that an abnormal task exists in the at least one task based on the processing progress of each task, a load value of each worker thread based on the scheduling instance, and determine a target processing policy for the abnormal task according to the load value of each worker thread;
and the processing module is used for processing the abnormal task according to the target processing strategy to obtain a processing result aiming at the abnormal task.
In one implementation, the determining module 305 is further configured to:
determining a type corresponding to each task in the at least one task;
determining a storage area of each task in a database based on the type of each task;
and storing each task in the database according to the storage area of each task in the database, so that the scheduling instance acquires at least one task from the database and completes the distribution of the at least one task.
In one implementation, the allocating module 303 is specifically configured to:
obtaining a residual load value corresponding to each working thread in at least one working thread;
preprocessing the at least one task according to the residual load value of each working thread to obtain a plurality of preprocessing tasks, wherein the task quantity of each preprocessing task is correspondingly matched with the residual load value of each working thread, and the preprocessing mode comprises splitting processing or merging processing;
and determining a working thread corresponding to each preprocessing task based on the principle that the task amount is matched with the residual load value, and distributing each preprocessing task in the plurality of preprocessing tasks to the corresponding working thread based on the scheduling example.
In one implementation, the allocating module 303 is specifically configured to:
determining a first number of idle worker threads and a second number of the at least one task based on the scheduling instance, the load value of the idle worker threads being less than a preset load value;
if the first number is larger than the second number, screening M tasks from the at least one task for splitting processing to obtain a first number of split tasks, and distributing the first number of split tasks to corresponding idle working threads based on the scheduling instance, wherein M is a positive integer;
if the first number is smaller than the second number, screening K tasks from the at least one task for merging processing to obtain a first number of merged tasks, and distributing the first number of merged tasks to corresponding idle working threads based on the scheduling instance, wherein K is a positive integer;
in one implementation, the monitoring module 304 is specifically configured to:
when monitoring conditions are met, the scheduling instance is called to send a progress query instruction to each working thread, and the progress query instruction is used for acquiring the processing progress of each working thread on the at least one task;
calling the scheduling instance to receive progress information returned by each working thread;
updating a working thread state table recorded in the scheduling instance based on the progress information, wherein the working thread state table records the current processing progress of each working thread to the task;
and the terminal inquires the processing progress of each working thread for the at least one task from the updated working thread state table.
In an implementation manner, the processing progress indicates a processing duration for each task, and the determining module 305 is specifically configured to:
acquiring a task type corresponding to a target task, and determining an average processing time corresponding to the task type, wherein the target task is any one of the at least one task;
calculating a difference value between a target processing time length for a target task indicated in the processing progress and the average processing time length;
and if the difference value is larger than a preset difference value, determining that the target task is an abnormal task.
In one implementation, the processing module 306 is specifically configured to:
broadcasting the load value of each working thread and the target processing strategy in a block chain, so that each node in the block chain performs consensus check on the load value of each working thread and the target processing strategy;
if the result that the consensus check passes is received, packing the load values of the working threads and the target processing strategy into blocks;
and storing the blocks into a block chain, so that nodes in the block chain refer to the load values of the work threads and the target processing strategies.
In the embodiment of the present invention, a receiving module 301 receives at least one task, a generating module 302 generates a scheduling instance according to the at least one task, an allocating module 303 allocates a corresponding working thread to each task in the at least one task based on the scheduling instance, and a monitoring module 304 monitors the processing progress of each working thread on the task based on the scheduling instance to obtain the processing progress for each task; if it is determined that an abnormal task exists in the at least one task based on the processing progress of each task, the determining module 305 determines a load value of each working thread based on the scheduling instance, and determines a target processing strategy for the abnormal task according to the load value of each working thread; the processing module 306 processes the abnormal task according to the target processing strategy to obtain a processing result for the abnormal task. The tasks are distributed and monitored through the generated scheduling instance, and abnormal tasks are handled by formulating a processing strategy in time, thereby improving task processing efficiency and fault tolerance.
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in fig. 4, the terminal includes: at least one processor 401, an input device 403, an output device 404, a memory 405, and at least one communication bus 402. The communication bus 402 is used to enable connection and communication between these components. The input device 403 may be a control panel or a microphone, and the output device 404 may be a display screen. The memory 405 may be a high-speed RAM or a non-volatile memory, such as at least one disk memory. The memory 405 may alternatively be at least one storage device located remotely from the aforementioned processor 401. The processor 401 may be combined with the apparatus described in fig. 3; the memory 405 stores a set of program codes, and the processor 401, the input device 403, and the output device 404 call the program codes stored in the memory 405 to perform the following operations:
a processor 401, configured to receive at least one task and generate a scheduling instance according to the at least one task;
a processor 401, configured to allocate a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes the corresponding task in the at least one task;
the processor 401 is configured to monitor the processing progress of each working thread on the task based on the scheduling instance, so as to obtain a processing progress for each task;
a processor 401, configured to determine, based on the scheduling instance, a load value of each working thread if it is determined that an abnormal task exists in the at least one task based on the processing progress of each task, and determine, according to the load value of each working thread, a target processing strategy for the abnormal task;
and the processor 401 is configured to process the abnormal task according to the target processing strategy to obtain a processing result for the abnormal task.
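To tie the above operations together, the following non-limiting Java sketch walks through the whole flow on a thread pool: tasks are dispatched to working threads, their completion is watched against a time budget, and a task that runs abnormally long is cancelled and re-dispatched once. The class name SchedulingInstance, the fixed pool size, the ABNORMAL_MILLIS budget, and the simple re-dispatch strategy are all assumptions for illustration; they are not the only processing strategy contemplated above.

import java.util.*;
import java.util.concurrent.*;

// Illustrative end-to-end sketch; names and the re-dispatch strategy are assumptions.
public class SchedulingInstance {
    private static final long ABNORMAL_MILLIS = 1_000;                  // stands in for the preset difference
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    void run(List<Runnable> tasks) throws InterruptedException {
        Map<Future<?>, Runnable> inFlight = new HashMap<>();
        for (Runnable t : tasks) inFlight.put(workers.submit(t), t);    // allocate working threads
        long deadline = System.currentTimeMillis() + ABNORMAL_MILLIS;
        for (Map.Entry<Future<?>, Runnable> e : inFlight.entrySet()) {  // monitor processing progress
            try {
                e.getKey().get(Math.max(1, deadline - System.currentTimeMillis()), TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException ex) {        // abnormal task detected
                e.getKey().cancel(true);
                workers.submit(e.getValue());                           // simple strategy: re-dispatch once
            }
        }
        workers.shutdown();                                             // lets re-dispatched tasks finish
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable fast = () -> { };
        Runnable slow = () -> {
            try { Thread.sleep(3_000); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
        };
        new SchedulingInstance().run(List.of(fast, slow));              // "slow" exceeds the budget
    }
}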
In one implementation, the processor 401 is specifically configured to:
determining a type corresponding to each task in the at least one task;
determining a storage area of each task in a database based on the type of each task;
and storing each task in the database according to the storage area of each task in the database, so that the scheduling instance acquires at least one task from the database and completes the distribution of the at least one task.
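A non-limiting Java sketch of this type-based routing is shown below. An in-memory map stands in for the database, and the mapping from task types to storage areas (for example, batch versus online areas) is purely illustrative; in practice each area could be a dedicated table or partition that the scheduling instance later reads from.

import java.util.*;

// Sketch of routing tasks into per-type storage areas; the map stands in for the database.
public class TaskStore {
    record Task(String id, String type, String payload) {}

    private final Map<String, List<Task>> storageAreas = new HashMap<>();   // area name -> stored tasks

    private String areaFor(String type) {                                   // the type decides the storage area
        return switch (type) {
            case "report", "export" -> "batch_area";
            case "query"            -> "online_area";
            default                 -> "default_area";
        };
    }

    void store(Task task) {
        storageAreas.computeIfAbsent(areaFor(task.type()), k -> new ArrayList<>()).add(task);
    }

    // The scheduling instance later reads an area to perform the allocation step.
    List<Task> fetchArea(String area) {
        return storageAreas.getOrDefault(area, List.of());
    }

    public static void main(String[] args) {
        TaskStore store = new TaskStore();
        store.store(new Task("T1", "report", "..."));
        store.store(new Task("T2", "query", "..."));
        System.out.println(store.fetchArea("batch_area"));   // contains only the "report" task
    }
}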
In one implementation, the processor 401 is specifically configured to:
obtaining a residual load value corresponding to each working thread in at least one working thread;
preprocessing the at least one task according to the residual load value of each working thread to obtain a plurality of preprocessed tasks, wherein the task amount of each preprocessed task matches the residual load value of a corresponding working thread, and the preprocessing comprises splitting processing or merging processing;
and determining the working thread corresponding to each preprocessed task based on the principle that the task amount matches the residual load value, and distributing each of the plurality of preprocessed tasks to the corresponding working thread based on the scheduling instance.
In one implementation, the processor 401 is specifically configured to:
determining a first number of idle working threads and a second number of the at least one task based on the scheduling instance, wherein the load value of an idle working thread is smaller than a preset load value;
if the first number is larger than the second number, screening M tasks from the at least one task for splitting processing to obtain a first number of split tasks, and distributing the first number of split tasks to corresponding idle working threads based on the scheduling instance, wherein M is a positive integer;
if the first number is smaller than the second number, screening K tasks from the at least one task for merging processing to obtain a first number of merged tasks, and distributing the first number of merged tasks to corresponding idle working threads based on the scheduling instance, wherein K is a positive integer.
In one implementation, the processor 401 is specifically configured to:
when a monitoring condition is met, calling the scheduling instance to send a progress query instruction to each working thread, wherein the progress query instruction is used for acquiring the processing progress of each working thread on the at least one task;
calling the scheduling instance to receive progress information returned by each working thread;
updating a working thread state table recorded in the scheduling instance based on the progress information, wherein the working thread state table records the current processing progress of each working thread on its task;
and the terminal inquires, from the updated working thread state table, the processing progress of each working thread for the at least one task.
In one implementation, the processor 401 is specifically configured to:
acquiring a task type corresponding to a target task, and determining an average processing duration corresponding to the task type, wherein the target task is any one of the at least one task;
calculating a difference value between the target processing duration of the target task indicated in the processing progress and the average processing duration;
and if the difference value is larger than a preset difference value, determining that the target task is an abnormal task.
In one implementation, the processor 401 is specifically configured to:
broadcasting the load value of each working thread and the target processing strategy in a block chain, so that each node in the block chain performs consensus check on the load value of each working thread and the target processing strategy;
if the result that the consensus check passes is received, packing the load values of the working threads and the target processing strategy into blocks;
and storing the blocks into the block chain, so that nodes in the block chain can refer to the load values of the working threads and the target processing strategy.
In the embodiment of the present invention, the processor 401 receives at least one task and generates a scheduling instance according to the at least one task; the processor 401 allocates a corresponding working thread to each task in the at least one task based on the scheduling instance, and monitors the processing progress of each working thread on the task based on the scheduling instance to obtain the processing progress for each task; if it is determined that an abnormal task exists in the at least one task based on the processing progress of each task, the processor 401 determines a load value of each working thread based on the scheduling instance, and determines a target processing strategy for the abnormal task according to the load value of each working thread; and the processor 401 processes the abnormal task according to the target processing strategy to obtain a processing result for the abnormal task. The tasks are distributed and monitored through the generated scheduling instance, and abnormal tasks are handled by formulating a processing strategy in time, thereby improving task processing efficiency and fault tolerance.
The modules in the embodiment of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application-Specific Integrated Circuit).
It should be understood that, in the embodiments of the present invention, the processor 401 may be a Central Processing Unit (CPU), and the processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The bus 402 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus 402 may be divided into an address bus, a data bus, a control bus, and so on. For convenience of illustration, fig. 4 shows only one thick line, but this does not mean that there is only one bus or only one type of bus.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer storage medium and may include the processes of the embodiments of the methods described above when executed. The computer storage medium may be a magnetic disk, an optical disk, a Read-only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A method for processing a task, the method comprising:
receiving at least one task and generating a scheduling instance according to the at least one task;
distributing a corresponding working thread for each task in the at least one task based on the scheduling instance, so that each working thread processes the corresponding task in the at least one task;
monitoring the processing progress of each working thread to the tasks based on the scheduling instance to obtain the processing progress aiming at each task;
if it is determined, based on the processing progress of each task, that an abnormal task exists in the at least one task, determining the load value of each working thread based on the scheduling instance, and determining a target processing strategy for the abnormal task according to the load value of each working thread;
and processing the abnormal task according to the target processing strategy to obtain a processing result for the abnormal task.
2. The method of claim 1, wherein after receiving at least one task, the method further comprises:
determining a type corresponding to each task in the at least one task;
determining a storage area of each task in a database based on the type of each task;
and storing each task in the database according to the storage area of each task in the database, so that the scheduling instance acquires at least one task from the database and completes the distribution of the at least one task.
3. The method of claim 1, wherein the distributing a corresponding working thread to each task in the at least one task based on the scheduling instance comprises:
obtaining a residual load value corresponding to each working thread in at least one working thread;
preprocessing the at least one task according to the residual load value of each working thread to obtain a plurality of preprocessed tasks, wherein the task amount of each preprocessed task matches the residual load value of a corresponding working thread, and the preprocessing comprises splitting processing or merging processing;
and determining the working thread corresponding to each preprocessed task based on the principle that the task amount matches the residual load value, and distributing each of the plurality of preprocessed tasks to the corresponding working thread based on the scheduling instance.
4. The method of claim 1, wherein the distributing a corresponding working thread to each task in the at least one task based on the scheduling instance comprises:
determining a first number of idle working threads and a second number of the at least one task based on the scheduling instance, the load value of an idle working thread being less than a preset load value;
if the first number is larger than the second number, screening M tasks from the at least one task for splitting processing to obtain a first number of split tasks, and distributing the first number of split tasks to corresponding idle working threads based on the scheduling instance, wherein M is a positive integer;
if the first number is smaller than the second number, screening K tasks from the at least one task for merging to obtain a first number of merged tasks, and distributing the first number of merged tasks to corresponding idle working threads based on the scheduling instance, wherein K is a positive integer.
5. The method according to claim 1, wherein the monitoring the processing progress of each working thread on the tasks based on the scheduling instance to obtain the processing progress for each task comprises:
when a monitoring condition is met, calling the scheduling instance to send a progress query instruction to each working thread, wherein the progress query instruction is used for acquiring the processing progress of each working thread on the at least one task;
calling the scheduling instance to receive progress information returned by each working thread;
updating a working thread state table recorded in the scheduling instance based on the progress information, wherein the working thread state table records the current processing progress of each working thread on its task;
and inquiring, from the updated working thread state table, the processing progress of each working thread for the at least one task.
6. The method of claim 1, wherein the processing progress indicates a processing duration for each task, and the determining that an abnormal task exists in the at least one task based on the processing progress of each task comprises:
acquiring a task type corresponding to a target task, and determining an average processing duration corresponding to the task type, wherein the target task is any one of the at least one task;
calculating a difference value between the target processing duration of the target task indicated in the processing progress and the average processing duration;
and if the difference value is larger than a preset difference value, determining that the target task is an abnormal task.
7. The method of claim 1, wherein after the determining a target processing strategy for the abnormal task according to the load value of each working thread, the method further comprises:
broadcasting the load value of each working thread and the target processing strategy in a block chain, so that each node in the block chain performs consensus check on the load value of each working thread and the target processing strategy;
if the result that the consensus check passes is received, packing the load values of the working threads and the target processing strategy into blocks;
and storing the blocks into the block chain, so that nodes in the block chain can refer to the load values of the working threads and the target processing strategy.
8. A task processing apparatus, characterized in that the apparatus comprises:
a receiving module for receiving at least one task;
a generating module, used for generating a scheduling instance according to the at least one task;
an allocating module, used for allocating a corresponding working thread to each task in the at least one task based on the scheduling instance, so that each working thread processes the corresponding task in the at least one task;
a monitoring module, used for monitoring the processing progress of each working thread on the tasks based on the scheduling instance to obtain the processing progress for each task;
a determining module, configured to determine, if it is determined that an abnormal task exists in the at least one task based on the processing progress of each task, a load value of each working thread based on the scheduling instance, and determine a target processing strategy for the abnormal task according to the load value of each working thread;
and a processing module, used for processing the abnormal task according to the target processing strategy to obtain a processing result for the abnormal task.
9. A terminal, comprising a processor, an input interface, an output interface, and a memory, the processor, the input interface, the output interface, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-7.
CN202011249736.4A 2020-11-10 2020-11-10 Task processing method, device, terminal and storage medium Pending CN112363834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011249736.4A CN112363834A (en) 2020-11-10 2020-11-10 Task processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011249736.4A CN112363834A (en) 2020-11-10 2020-11-10 Task processing method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112363834A true CN112363834A (en) 2021-02-12

Family

ID=74509812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011249736.4A Pending CN112363834A (en) 2020-11-10 2020-11-10 Task processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112363834A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785874A (en) * 2022-06-16 2022-07-22 成都中科合迅科技有限公司 Method for providing high-availability transmission channel based on multi-network protocol
CN115658269A (en) * 2022-11-01 2023-01-31 上海玫克生储能科技有限公司 Heterogeneous computing terminal for task scheduling
CN115658269B (en) * 2022-11-01 2024-02-27 上海玫克生储能科技有限公司 Heterogeneous computing terminal for task scheduling
CN116860436A (en) * 2023-06-15 2023-10-10 重庆智铸达讯通信有限公司 Thread data processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112363834A (en) Task processing method, device, terminal and storage medium
CN107688496B (en) Task distributed processing method and device, storage medium and server
CN107832126B (en) Thread adjusting method and terminal thereof
CN109104336B (en) Service request processing method and device, computer equipment and storage medium
CN108268372B (en) Mock test processing method and device, storage medium and computer equipment
CN111049705A (en) Method and device for monitoring distributed storage system
CN111459659A (en) Data processing method, device, scheduling server and medium
CN111400008A (en) Computing resource scheduling method and device and electronic equipment
CN111209110B (en) Task scheduling management method, system and storage medium for realizing load balancing
CN110928655A (en) Task processing method and device
CN111061570B (en) Image calculation request processing method and device and terminal equipment
CN112231108A (en) Task processing method and device, computer readable storage medium and server
CN111262783B (en) Dynamic routing method and device
CN111538572A (en) Task processing method, device, scheduling server and medium
CN112162839A (en) Task scheduling method and device, computer equipment and storage medium
CN111538585A (en) Js-based server process scheduling method, system and device
CN111274017B (en) Resource processing method and device, electronic equipment and storage medium
CN117032987A (en) Distributed task scheduling method, system, equipment and computer readable medium
CN112596880A (en) Data processing method, device, equipment and storage medium
CN109670932B (en) Credit data accounting method, apparatus, system and computer storage medium
CN107832140B (en) RPC request control method, storage medium, electronic device and system
CN107369088B (en) Method and device for processing account transaction
CN115061796A (en) Execution method and system for calling between subtasks and electronic equipment
CN113608847A (en) Task processing method, device, equipment and storage medium
CN109101260B (en) Node software upgrading method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination