CN117827114A - Task processing method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number: CN117827114A
Application number: CN202410021881.9A
Authority: CN (China)
Prior art keywords: task, write, writing, thread, carrier
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 梁欣玲, 孙昊, 邸忠辉
Original and current assignee: Zhengzhou Yunhai Information Technology Co Ltd (the listed assignee may be inaccurate)
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN202410021881.9A
Publication of CN117827114A

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The invention relates to the technical field of data storage and discloses a task processing method, device, equipment, and computer-readable storage medium. An operation instruction issued by a server is acquired; when the operation instruction is a write task, the write task is added to the task linked list of a write task carrier, and the thread scheduling execution unit of the write task carrier is added to the thread execution queue. When a new write task is acquired, it is likewise added to the task linked list of the write task carrier. When the thread scheduling execution units of the write task carriers are in the thread execution queue, the tasks corresponding to each thread scheduling execution unit in the queue are executed sequentially in time order, subject to the write upper limit value of each write task carrier. Batch processing of multiple write tasks is thereby realized through the task linked list: the thread scheduling execution unit of the write task carrier need not be added to the thread execution queue for every write task, which effectively reduces write latency.

Description

Task processing method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of data storage technologies, and in particular, to a method, an apparatus, a device, and a computer readable storage medium for processing a task.
Background
A storage system is a complex theoretical model that combines a series of independent disks into a Redundant Array of Independent Disks (RAID) for storing data. The current evaluation standard for a storage system is no longer limited to reliability; increasing emphasis is placed on its performance.
The storage system model currently in use is a random 7:3 read/write small-block model: for Input/Output (IO) applications, the server issues small-block IOs that are 70% reads and 30% writes. The IO flow is divided into small stages, and the tasks of each stage have a task carrier (batch) that processes that stage's tasks. When an IO is issued, a task (task) is added to the corresponding batch, and then the thread scheduling execution unit (fiber) in the batch structure is added to the thread execution queue. The fibers are invoked in turn, in the order they were added, to execute their corresponding tasks.
However, in practical application, a write task suffers write amplification: a single small write IO is amplified into 6 disk flows, comprising 3 disk-read flows and 3 disk-write flows. Every time a disk-write flow is executed, the corresponding fiber is added to the thread execution queue, and after the disk-write flow completes, the fiber is removed from the queue. When the next disk-write flow must be executed, the corresponding fiber must be added to the thread execution queue again, and removed again when that flow completes, and so on until all operations corresponding to the current task are finished. Throughout this process the fiber is added to the thread execution queue repeatedly, and each addition incurs lock time, so the write latency generated by the current processing of write tasks is very high and the performance of the storage system is reduced.
It can be seen that how to reduce the write latency generated during the write task processing to improve the performance of the storage system is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the invention aims to provide a task processing method, device, equipment, and computer-readable storage medium that can solve the problem of excessive write latency generated during write-task processing.
In order to solve the above technical problems, an embodiment of the present invention provides a task processing method, including:
acquiring an operation instruction issued by a server;
when the operation instruction is a writing task, adding the writing task into a task linked list of a writing task carrier, and adding a thread scheduling execution unit of the writing task carrier into a thread execution queue;
under the condition that a new writing task is acquired, adding the new writing task into a task linked list of the writing task carrier, and judging whether a thread scheduling execution unit of the writing task carrier is in the thread execution queue or not;
and under the condition that the thread scheduling execution units of the write task carriers are in the thread execution queue, sequentially executing tasks corresponding to the thread scheduling execution units in the thread execution queue according to time sequence and the write upper limit value corresponding to each write task carrier.
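The claim steps above can be sketched in C. This is a minimal illustration under stated assumptions, not the patent's implementation: the names `write_batch`, `task_node`, and `enqueue_write` are hypothetical, and the real carrier holds a fiber object rather than a boolean flag.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical write task carrier: a FIFO task linked list plus a flag
 * standing in for "is this carrier's fiber in the thread execution queue?". */
typedef struct task_node {
    int io_id;                 /* identifies the write IO */
    struct task_node *next;
} task_node;

typedef struct write_batch {
    task_node *fifo_head;      /* task linked list (FIFO) */
    task_node *fifo_tail;
    bool fiber_queued;         /* fiber already in the thread execution queue? */
    int tasks;                 /* pending write-task count */
    int tasks_write_max;       /* write upper limit per scheduling round */
} write_batch;

/* Add a write task to the carrier's FIFO; enqueue the fiber only if it is
 * not already in the thread execution queue. Returns true if the fiber had
 * to be (re-)queued, i.e. one lock acquisition was paid. */
bool enqueue_write(write_batch *b, int io_id) {
    task_node *n = malloc(sizeof *n);
    n->io_id = io_id;
    n->next = NULL;
    if (b->fifo_tail) b->fifo_tail->next = n; else b->fifo_head = n;
    b->fifo_tail = n;
    b->tasks++;
    if (!b->fiber_queued) {
        b->fiber_queued = true;  /* first task of the batch pays the enqueue */
        return true;
    }
    return false;                /* fiber already queued: no extra lock time */
}
```

Subsequent write tasks reuse the already-queued fiber, which is the core of the claimed latency reduction.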
In one aspect, after the adding the new write task to the task linked list of the write task carrier, the method further includes:
and under the condition that the thread scheduling execution units of the write task carriers are not in the thread execution queue, adding the thread scheduling execution units of the write task carriers into the thread execution queue, and executing the task corresponding to each thread scheduling execution unit in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
In one aspect, the sequentially executing the tasks corresponding to the thread scheduling execution units in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier includes:
determining a current thread scheduling execution unit of the thread execution queue according to the time sequence;
judging whether the task type corresponding to the current thread scheduling execution unit is a read type or not;
when the task type corresponding to the current thread scheduling and executing unit is a read type, calling a first executing function bound with the current thread scheduling and executing unit to execute a read task; removing the current thread scheduling execution unit from the thread execution queue when the read task is completed;
When the task type corresponding to the current thread scheduling and executing unit is a writing type, calling a second executing function bound with the current thread scheduling and executing unit to read a target writing task from a task linked list of a corresponding writing task carrier, and executing the target writing task;
deleting the target writing task from the corresponding task linked list and adding one to the writing task count after each target writing task is completed;
judging whether the write task count reaches a corresponding write upper limit value;
returning to call a second execution function bound with the current thread scheduling execution unit to read a target write task from a task linked list of a corresponding write task carrier and execute the target write task under the condition that the write task count does not reach the corresponding write upper limit value;
removing the current thread scheduling execution unit from the thread execution queue under the condition that the write task count reaches the corresponding write upper limit value;
judging whether a thread scheduling execution unit exists in the thread execution queue;
returning to the step of determining the current thread scheduling execution unit of the thread execution queue according to the time sequence under the condition that the thread scheduling execution unit exists in the thread execution queue;
And ending the operation when the thread scheduling execution unit does not exist in the thread execution queue.
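The capped execution loop above can be modeled as follows; `run_round` and its counters are illustrative stand-ins for the second execution function and the write task count, not the patent's actual identifiers.

```c
#include <stdbool.h>

/* One scheduling round for a write carrier, as in the steps above: execute
 * target write tasks from the FIFO until it empties or the write upper
 * limit (tasks_write_max) is reached. Returns the number of tasks executed;
 * *stay_queued reports whether the carrier still has pending tasks. */
int run_round(int *pending_tasks, int tasks_write_max, bool *stay_queued) {
    int done = 0;
    while (*pending_tasks > 0 && done < tasks_write_max) {
        (*pending_tasks)--;   /* execute one target write task, delete it from the list */
        done++;               /* add one to the write task count */
    }
    /* leftover tasks mean the carrier still needs service (occupation-mark path) */
    *stay_queued = (*pending_tasks > 0);
    return done;
}
```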
In one aspect, before the current thread scheduling execution unit is removed from the thread execution queue if the write task count reaches the corresponding write upper limit value, the method further includes:
judging whether the residual writing tasks exist in the task linked list or not under the condition that the writing task count reaches the corresponding writing upper limit value;
executing the step of removing the current thread scheduling execution unit from the thread execution queue under the condition that no residual writing task exists in the task linked list;
setting an occupation mark for the current thread scheduling execution unit under the condition that the residual writing task exists in the task linked list, and judging whether the residual thread scheduling execution unit exists in the thread execution queue except the thread scheduling execution unit with the occupation mark;
under the condition that no residual thread scheduling execution units exist in the thread execution queue except the thread scheduling execution units with the occupation marks, a second execution function bound with the current thread scheduling execution unit is called to read a target write task from a task linked list of a corresponding write task carrier, and the target write task is executed; deleting the target writing task from a corresponding task linked list when one target writing task is completed until no residual writing task exists in the task linked list, and executing the step of removing the current thread scheduling execution unit from the thread execution queue;
And under the condition that the thread execution queue has residual thread scheduling execution units except the thread scheduling execution unit with the occupation mark, taking the next thread scheduling execution unit adjacent to the current thread scheduling execution unit as the latest current thread scheduling execution unit, and returning to the step of judging whether the task type corresponding to the current thread scheduling execution unit is a read type.
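The occupation-mark decision in the steps above reduces to a three-way choice, sketched here with hypothetical names (`after_cap`, `next_action`); in the real flow the mark is also set on the fiber itself before the choice is made.

```c
/* Outcomes after the write upper limit is reached for the current fiber. */
typedef enum { KEEP_DRAINING, YIELD_TO_NEXT, DEQUEUE } next_action;

/* remaining_tasks: write tasks still in this carrier's task linked list;
 * other_fibers: fibers in the thread execution queue other than those
 * carrying the occupation mark. */
next_action after_cap(int remaining_tasks, int other_fibers) {
    if (remaining_tasks == 0)
        return DEQUEUE;        /* no leftovers: remove fiber from the queue */
    /* leftovers exist: the occupation mark is set here in the real flow */
    return other_fibers > 0 ? YIELD_TO_NEXT   /* let the adjacent fiber run */
                            : KEEP_DRAINING;  /* nobody waiting: keep going */
}
```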
In one aspect, the method further comprises:
and under the condition that the operation instruction is a read task, adding the read task into a default read task carrier of a storage system, and adding a thread scheduling execution unit of the read task carrier into a thread execution queue.
In one aspect, the method further comprises:
and adjusting the writing upper limit value corresponding to each writing task carrier according to the using frequency of each writing task carrier.
In one aspect, adjusting the write upper limit value corresponding to each write task carrier according to the use frequency of each write task carrier includes:
counting the number of writing tasks added to a task linked list of each writing task carrier in a preset time period;
determining a theoretical writing upper limit value matched with the writing task number of the target writing task carrier based on a corresponding relation between the set writing task number range and the theoretical writing upper limit value; the target write task carrier is any write task carrier in all the write task carriers;
Judging whether the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value;
and when the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value, adjusting the write upper limit value of the target write task carrier to be the theoretical write upper limit value.
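The adjustment rule above can be sketched as follows. The patent does not specify the correspondence table between write-task-number ranges and theoretical write upper limits, so the breakpoints (64, 256) and tier values below are purely illustrative assumptions.

```c
/* Hypothetical range-to-limit table: maps the number of write tasks added
 * to a carrier's task linked list in the preset time window to a
 * theoretical write upper limit. Breakpoints are assumptions. */
int theoretical_write_max(int tasks_in_window) {
    if (tasks_in_window < 64)  return 8;
    if (tasks_in_window < 256) return 16;
    return 32;
}

/* Per the claim, the limit is only raised: it is adjusted to the
 * theoretical value when the current limit is smaller than it. */
int adjust_write_max(int current_max, int tasks_in_window) {
    int t = theoretical_write_max(tasks_in_window);
    return current_max < t ? t : current_max;
}
```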
The embodiment of the invention also provides a task processing device, which comprises an acquisition unit, a first adding unit, a second adding unit, a judging unit and an executing unit;
the acquisition unit is used for acquiring an operation instruction issued by the server;
the first adding unit is used for adding the writing task to a task linked list of a writing task carrier and adding a thread scheduling executing unit of the writing task carrier to a thread executing queue under the condition that the operation instruction is the writing task;
the second adding unit is used for adding the new writing task to a task linked list of the writing task carrier under the condition that the new writing task is acquired;
the judging unit is used for judging whether the thread scheduling executing unit of the write task carrier is in the thread executing queue or not;
the execution unit is used for sequentially executing tasks corresponding to the thread scheduling execution units in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier under the condition that the thread scheduling execution unit of the write task carrier is in the thread execution queue.
In one aspect, the device further comprises a third adding unit;
the third adding unit is configured to add the thread scheduling execution unit of the write task carrier to the thread execution queue when the thread scheduling execution unit of the write task carrier is not in the thread execution queue, and trigger the execution unit to execute the task corresponding to each thread scheduling execution unit in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
In one aspect, the execution unit includes a determination subunit, a first execution subunit, a first removal subunit, a second execution subunit, a deletion subunit, a count subunit, a second determination subunit, a second removal subunit, and a third determination subunit;
the determining subunit is used for determining the current thread scheduling execution unit of the thread execution queue according to the time sequence;
the first judging subunit is configured to judge whether a task type corresponding to the current thread scheduling execution unit is a read type;
the first execution subunit is configured to invoke a first execution function bound to the current thread scheduling execution unit to execute a read task when the task type corresponding to the current thread scheduling execution unit is a read type;
The first removing subunit is configured to remove the current thread scheduling execution unit from the thread execution queue when the read task is completed;
the second execution subunit is configured to, when the task type corresponding to the current thread scheduling execution unit is a write type, invoke a second execution function bound with the current thread scheduling execution unit to read a target write task from a task linked list of a corresponding write task carrier, and execute the target write task;
the deleting subunit is used for deleting the target writing task from the corresponding task linked list when each target writing task is completed;
the counting subunit is used for adding one to the write task count;
the second judging subunit is configured to judge whether the write task count reaches a write upper limit value corresponding to the write task count; triggering the second execution subunit to execute the second execution function bound with the current thread scheduling execution unit to read a target write task from a task linked list of a corresponding write task carrier and execute the target write task under the condition that the write task count does not reach the corresponding write upper limit value;
The second removing subunit is configured to remove the current thread scheduling execution unit from the thread execution queue when the write task count reaches a write upper limit value corresponding to the write task count;
the third judging subunit is configured to judge whether a thread scheduling execution unit exists in the thread execution queue; triggering the determining subunit to execute the step of determining the current thread scheduling execution unit of the thread execution queue according to the time sequence under the condition that the thread scheduling execution unit exists in the thread execution queue; and ending the operation when the thread scheduling execution unit does not exist in the thread execution queue.
In one aspect, the device further includes a first judging unit, a setting unit, a second judging unit and a designating unit, which operate before the current thread scheduling execution unit is removed from the thread execution queue in the case that the write task count reaches the corresponding write upper limit value;
the first judging unit is used for judging whether the residual writing task exists in the task linked list or not under the condition that the writing task count reaches the corresponding writing upper limit value; triggering the second removing subunit to execute the step of removing the current thread scheduling execution unit from the thread execution queue under the condition that no residual writing task exists in the task linked list;
The setting unit is used for setting an occupation mark for the current thread scheduling execution unit under the condition that the residual writing task exists in the task linked list;
the second judging unit is used for judging whether the thread execution queue has residual thread scheduling executing units except the thread scheduling executing units with the occupation marks; triggering the second execution subunit to execute the target write task from a task linked list of a corresponding write task carrier by calling a second execution function bound with the current thread scheduling execution unit under the condition that no residual thread scheduling execution units exist in the thread execution queue except the thread scheduling execution unit with the occupation mark, and executing the target write task; triggering the deleting subunit to execute the step of deleting the target writing task from the corresponding task linked list until no residual writing task exists in the task linked list, and triggering the second removing subunit to execute the step of removing the current thread scheduling execution unit from the thread execution queue;
the designating unit is configured to, when there are remaining thread scheduling execution units in the thread execution queue other than the thread scheduling execution unit with the set occupation mark, take the next thread scheduling execution unit adjacent to the current thread scheduling execution unit as the current thread scheduling execution unit, and trigger the first determination subunit to execute the step of determining whether the task type corresponding to the current thread scheduling execution unit is a read type.
In one aspect, the device further comprises a fourth adding unit;
the fourth adding unit is configured to add the read task to a default read task carrier of a storage system when the operation instruction is a read task, and add a thread scheduling execution unit of the read task carrier to a thread execution queue.
In one aspect, the device further comprises an adjusting unit;
the adjusting unit is used for adjusting the writing upper limit value corresponding to each writing task carrier according to the using frequency of each writing task carrier.
In one aspect, the adjusting unit is configured to count the number of write tasks added to the task linked list of each write task carrier in a preset time period; determining a theoretical writing upper limit value matched with the writing task number of the target writing task carrier based on a corresponding relation between the set writing task number range and the theoretical writing upper limit value; the target write task carrier is any write task carrier in all the write task carriers; judging whether the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value; and when the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value, adjusting the write upper limit value of the target write task carrier to be the theoretical write upper limit value.
The embodiment of the invention also provides a task processing device, which comprises:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the task processing method described above.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the task processing method described above.
According to the technical scheme, an operation instruction issued by the server is acquired; when the operation instruction is a write task, the write task is added to the task linked list of a write task carrier, and the thread scheduling execution unit of the write task carrier is added to the thread execution queue. When a new write task is acquired, the new write task is added to the task linked list of the write task carrier, and whether the thread scheduling execution unit of the write task carrier is in the thread execution queue is judged. When the thread scheduling execution units of the write task carriers are in the thread execution queue, the tasks corresponding to each thread scheduling execution unit in the queue are executed sequentially in time order, subject to the write upper limit value of each write task carrier. The advantage of the invention is that a dedicated write task carrier is provided for write tasks, with a task linked list on the carrier, so that multiple write tasks can be stored through the task linked list. The thread scheduling execution unit of the write task carrier need not be added to the thread execution queue for each write task, so batch processing of write tasks is realized. Compared with the traditional mode, in which the thread scheduling execution unit must be added to the thread execution queue for every write task, storing multiple write tasks in the task linked list reduces how frequently the thread scheduling execution unit of the corresponding write task carrier is added to the thread execution queue and effectively reduces write latency.
In addition, the invention can effectively avoid the problem that the tasks corresponding to other task carriers are not executed in a short time by limiting the writing upper limit value of each writing task carrier.
Drawings
For a clearer description of embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described, it being apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a flow chart of a task processing method provided by an embodiment of the invention;
FIG. 2 is a flowchart of a method for sequentially executing tasks corresponding to each thread scheduling execution unit in a thread execution queue according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present invention;
fig. 4 is a block diagram of a task processing device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present invention.
The terms "comprising" and "having" in the description of the invention and the claims and in the above-mentioned figures, as well as any variations thereof that relate to "comprising" and "having", are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may include other steps or elements not expressly listed.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
A storage system is a complex theoretical model in which a series of independent disks form a RAID array, and RAID array models come in several levels, namely RAID 0, RAID 1, RAID 5, and RAID 6. RAID 0 stripes data across the disk array with no redundancy; once a disk fails, the user's data is lost, so it is rarely used in practice. RAID 1 backs up the user's data to an additional disk, the two disks being mirror images of each other: when the data of one disk is lost, the other disk can make up for it. Its drawbacks are that disk space is wasted given the low probability of disk failure, and that every write IO must be flushed to two disks, which is time-consuming. RAID 5 aims to remedy these defects: it has a redundancy of 1, so that when any one disk of the array is lost, the data of the faulty disk can be reconstructed from the data of the other disks. Through continued development, models with higher redundancy have been created, the one in current widespread use being RAID 6.
Because RAID 6 allows any two disks in the disk array to fail at the same time, it has higher fault tolerance and is widely used; the task processing scheme provided by the embodiment of the invention is implemented on the basis of RAID 6.
Next, a task processing method provided by the embodiment of the present invention is described in detail. Fig. 1 is a flowchart of a task processing method provided in an embodiment of the present invention, where the method includes:
s101: and acquiring an operation instruction issued by the server.
In practical application, the server can send an operation instruction to the storage system to realize the reading and writing operation of data. The operation instructions may include a read task and a write task.
Under the condition that the storage system receives the operation instruction issued by the server, the storage system can distinguish whether the operation instruction is a writing task or a reading task according to the format of the instruction or the carried identifier. In the case that the operation instruction is a read task, the read task may be directly executed according to a conventional read task processing manner. Namely, under the condition that the operation instruction is a read task, the read task can be added into a default read task carrier of the storage system, and a thread scheduling execution unit of the read task carrier is added into a thread execution queue.
S102: and when the operation instruction is a writing task, adding the writing task into a task linked list of a writing task carrier, and adding a thread scheduling execution unit of the writing task carrier into a thread execution queue.
In practical applications, the main reason for high write latency is the lock time generated each time a thread scheduling execution unit is added to the thread execution queue. Under the current write-task processing mode, the thread scheduling execution unit is automatically removed from the thread execution queue after each disk flow is executed. One write flow includes 3 disk-read flows and 3 disk-write flows, so executing a single write requires 6 operations of adding the thread scheduling execution unit to the thread execution queue. The lock time incurred in this process is very high, the write latency is far higher than the read latency, and the performance of the storage system is reduced.
Therefore, in the embodiment of the invention, in order to reduce the write time delay generated by the execution of the write task, a task carrier is newly created, and a task linked list is added on the task carrier. The task linked list may be a first-in first-out (First In First Out, FIFO) linked list.
The newly created task carrier can adopt a batch structure type; batchwrite is the type corresponding to a write batch, and there can be 8 write batches. On the original basis, a task_fifo member of FIFO linked-list type is added, so that when a task (task) issued to a write batch exists, the task is added into this linked-list structure; a write upper limit (tasks_write_max) member of int type is added to limit the maximum number of tasks from the batch's task_fifo linked list that a thread executes in one pass over the write batch; and a member tasks, initialized to 0, is added to mark the number of write tasks currently pending for the thread. The structure of batchwrite is as follows:
struct batchwrite
{
    int tasks_write_max;  /* write upper limit for one scheduling pass */
    int tasks;            /* pending write-task count, initialized to 0 */
    Plfb fibre;           /* thread scheduling execution unit (fiber) */
    Fifo task_fifo;       /* FIFO linked list of pending write tasks */
};
Where fifo is a linked list structure type.
The write batches are all designed with the struct batchwrite type; the specific batch instances are as follows:
Rd6_Vol_IoDone3batch_write
rd_agt_submit_start_tcb_from_batch_write
Rd6_StartPhase1FromBatch_write
Rd6_FinishStripOperationsBatch_write
Rd6_Xor_PlfbToXorDonebatch_write
Rd6_StartPhase2_Xorbatch_write
plmm_xor_fbrbatchbatch_write
Rd6_StartPhase2writebatch_write。
To facilitate differentiation from the default task carriers of the storage system, the newly created task carriers for write tasks may be referred to as write task carriers. There may be multiple write task carriers, and each is processed in a similar way, so a single write task carrier is taken as an example in the following description.
Each write task carrier is provided with a corresponding FIFO linked list, which can store multiple write tasks. Each write task carrier also has its own corresponding thread scheduling execution unit.
In the case that the acquired operation instruction is a write task, the write task may be added to the task linked list of the write task carrier, and the thread scheduling execution unit of the write task carrier may be added to the thread execution queue.
S103: in the case that a new write task is acquired, add the new write task to the task linked list of the write task carrier, and judge whether the thread scheduling execution unit of the write task carrier is in the thread execution queue.
When a new write task is acquired, it can be appended to the task linked list of the write task carrier; at this point, whether the thread scheduling execution unit of the write task carrier is in the thread execution queue can be judged.
When the thread scheduling execution unit of the write task carrier is already in the thread execution queue, it does not need to be added again; the write tasks are executed directly by the function corresponding to the thread scheduling execution unit, which reduces the lock overhead generated by adding the thread scheduling execution unit to the thread execution queue.
Thus, S104 may be performed when the thread scheduling execution unit of the write task carrier is in the thread execution queue.
S104: sequentially execute the tasks corresponding to each thread scheduling execution unit in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
In the case that the thread scheduling execution unit of the write task carrier is in the thread execution queue, the tasks corresponding to each thread scheduling execution unit in the thread execution queue can be executed sequentially according to the time sequence and the write upper limit value corresponding to each write task carrier. Therefore, S104 may be executed after the thread scheduling execution unit of the write task carrier is added to the thread execution queue in S102, or after S103 determines that the thread scheduling execution unit of the write task carrier is already in the thread execution queue.
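The enqueue path of S102 and S103 can be sketched as a minimal, single-threaded Python simulation. This is an illustrative sketch only, not the patented implementation: the names WriteTaskCarrier, ThreadExecQueue, and submit_write are assumptions, and a real storage system would protect the in-queue check and the enqueue with a lock.

```python
from collections import deque

class WriteTaskCarrier:
    """A write task carrier: a FIFO task linked list plus one flag that
    records whether its thread scheduling execution unit is queued."""
    def __init__(self, tasks_write_max):
        self.task_fifo = deque()                # FIFO linked list of write tasks
        self.tasks_write_max = tasks_write_max  # write upper limit per round
        self.in_queue = False                   # scheduling unit queued?

class ThreadExecQueue:
    """Thread execution queue holding scheduling units in arrival order."""
    def __init__(self):
        self.units = deque()

    def submit_write(self, carrier, task):
        # S102/S103: always append the write task to the carrier's FIFO list.
        carrier.task_fifo.append(task)
        # Enqueue the scheduling unit only if it is not already queued,
        # avoiding the per-task lock overhead of repeated enqueues.
        if not carrier.in_queue:
            self.units.append(carrier)
            carrier.in_queue = True
```

With this check, six write tasks submitted in a burst cost one enqueue of the scheduling unit rather than six.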
In practical applications, the storage system has default task carriers in addition to the newly created write task carriers. For ease of description, the write task carriers and the default task carriers may be collectively referred to as task carriers. The thread execution queue contains a plurality of storage locations that can be used to record different thread scheduling execution units, and each task carrier has its own corresponding thread scheduling execution unit.
For a default task carrier, in the case that it receives a read task, the read task is added to the task carrier and the thread scheduling execution unit corresponding to the task carrier is added to the thread execution queue. After the read task has been executed by the function bound to the task carrier, the thread scheduling execution unit corresponding to the task carrier can be removed from the thread execution queue.
Taking a write task carrier as an example, when the write task carrier receives a write task, the write task may be added to the task linked list corresponding to the write task carrier, and the thread scheduling execution unit corresponding to the write task carrier may be added to the thread execution queue. The write tasks recorded in the task linked list are then batch-processed by the function bound to the write task carrier. When the number of completed write tasks reaches the corresponding write upper limit value, the thread scheduling execution unit corresponding to the write task carrier can be removed from the thread execution queue.
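One round of the batch processing described above might look like the following sketch. This is illustrative Python with assumed names (run_one_round and the dict fields); in the patent the carrier is the batchwrite structure and the bound function does the actual disk I/O.

```python
from collections import deque

def run_one_round(carrier, thread_queue, execute):
    """Drain up to tasks_write_max write tasks from the carrier's FIFO list,
    then remove the carrier's scheduling unit from the thread execution queue."""
    count = 0
    while carrier["task_fifo"] and count < carrier["tasks_write_max"]:
        task = carrier["task_fifo"].popleft()  # delete from the linked list
        execute(task)                          # run the bound function on it
        count += 1                             # write task count plus one
    thread_queue.remove(carrier)               # unit leaves the queue
    return count
```

Twelve queued tasks with a write upper limit of 8 leave four tasks in the FIFO list for a later round.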
In the case that the thread scheduling execution unit of the write task carrier is not in the thread execution queue, the thread scheduling execution unit of the write task carrier can be added to the thread execution queue, and the step of sequentially executing the tasks corresponding to each thread scheduling execution unit in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier is then performed.
The task processing method provided by the embodiment of the invention can be applied to different types of storage system models, such as a random 7:3 read/write small-block model, a random 8:2 read/write small-block model, and the like.
According to the technical scheme, the operation instruction issued by the server is acquired; in the case that the operation instruction is a write task, the write task is added to the task linked list of a write task carrier, and the thread scheduling execution unit of the write task carrier is added to the thread execution queue. In the case that a new write task is acquired, the new write task is added to the task linked list of the write task carrier, and whether the thread scheduling execution unit of the write task carrier is in the thread execution queue is judged. In the case that the thread scheduling execution unit of the write task carrier is in the thread execution queue, the tasks corresponding to each thread scheduling execution unit in the thread execution queue are executed sequentially according to the time sequence and the write upper limit value corresponding to each write task carrier. The advantage of the invention is that a dedicated write task carrier is provided for write tasks, with a task linked list on the write task carrier, so that multiple write tasks can be stored in the task linked list. For multiple write tasks, the thread scheduling execution unit of the write task carrier does not need to be added to the thread execution queue for each write task, realizing batch processing of write tasks. Compared with the conventional approach, in which each write task requires its thread scheduling execution unit to be added to the thread execution queue, setting up the task linked list allows multiple write tasks to be stored, reduces the frequency with which the thread scheduling execution unit of the write task carrier is added to the thread execution queue, and effectively reduces the write latency.
In addition, by limiting the write upper limit value of each write task carrier, the invention can effectively avoid the problem that tasks corresponding to other task carriers cannot be executed in time.
Fig. 2 is a flowchart of a method for sequentially executing tasks corresponding to each thread scheduling execution unit in a thread execution queue according to an embodiment of the present invention, where the method includes:
S201: determine the current thread scheduling execution unit of the thread execution queue according to the time sequence.
Each thread scheduling execution unit is bound to its own execution function, and the task corresponding to the thread scheduling execution unit is completed using that execution function.
The thread execution queue often records multiple thread scheduling execution units. In practical application, the tasks corresponding to the thread scheduling execution units can be executed sequentially in the order in which the units were added to the thread execution queue.
S202: and judging whether the task type corresponding to the current thread scheduling execution unit is a read type.
And if the task type corresponding to the current thread scheduling execution unit is a read type, S203 is executed. And if the task type corresponding to the current thread scheduling execution unit is a write type, executing S204.
S203: calling a first execution function bound with a current thread scheduling execution unit to execute a read task; in the event that a read task is completed, the current thread scheduling execution unit is removed from the thread execution queue.
For convenience of distinction, the execution function corresponding to the read task may be collectively referred to as a first execution function, and the execution function corresponding to the write task may be collectively referred to as a second execution function.
Under the condition that the task type corresponding to the current thread scheduling and executing unit is a read type, a first executing function bound with the current thread scheduling and executing unit can be called to execute the read task; after completion of the read task, the current thread scheduling execution unit may be removed from the thread execution queue and then S208 is performed.
S204: and calling a second execution function bound with the current thread scheduling execution unit to read the target write task from the task linked list of the corresponding write task carrier, and executing the target write task.
In the case that the task type corresponding to the current thread scheduling execution unit is a write type, the second execution function bound to the current thread scheduling execution unit can be called to read the target write task from the task linked list of the corresponding write task carrier and execute the target write task.
S205: and deleting the target writing task from the corresponding task linked list and adding one to the writing task count after completing one target writing task.
Considering that the number of write tasks recorded in the task linked list may be large, the write tasks may be read from the task linked list in sequence; each time one target write task is completed, the target write task may be deleted from the corresponding task linked list and the write task count incremented by one.
S206: and judging whether the write task count reaches the corresponding write upper limit value.
In practical application, if the number of write tasks recorded in the task linked list is large and those write tasks were executed without limit, the tasks corresponding to other thread scheduling execution units could not be executed in time.
In the case that the write task count has not reached the corresponding write upper limit value, the write tasks corresponding to the current thread scheduling execution unit may continue to be executed. At this point the flow returns to S204, calling the second execution function bound to the current thread scheduling execution unit to read the target write task from the task linked list of the corresponding write task carrier and execute the target write task.
When the write task count reaches the write upper limit value corresponding to the write task count, it indicates that more write tasks corresponding to the current thread scheduling execution unit have been executed, and in order to ensure that tasks corresponding to other thread scheduling execution units can be executed in time, S207 can be executed at this time.
S207: the current thread scheduling execution unit is removed from the thread execution queue.
In the event that the write task count reaches its corresponding write upper limit, the current thread scheduling execution unit may be removed from the thread execution queue.
S208: and judging whether a thread scheduling execution unit exists in the thread execution queue.
After the current thread scheduling execution unit is removed from the thread execution queue, it may be determined whether a thread scheduling execution unit is present in the thread execution queue.
In the case that a thread scheduling execution unit exists in the thread execution queue, tasks remain to be executed; at this point the flow can return to S201, determining the current thread scheduling execution unit of the thread execution queue according to the time sequence.
In the case where there is no thread scheduling execution unit in the thread execution queue, this indicates that all tasks have been performed to completion, at which point the operation may end.
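The S201-S208 loop of Fig. 2 can be sketched as follows: a hypothetical single-threaded Python simulation in which each unit is a plain dict, read units carry one task, and write units carry a FIFO list plus their write upper limit.

```python
from collections import deque

def dispatch(queue):
    """S201-S208: take units in time order; a read unit executes its single
    task and is removed; a write unit executes tasks from its FIFO list until
    the list empties or its write upper limit is reached, then is removed."""
    log = []
    while queue:                                    # S208: any units left?
        unit = queue.popleft()                      # S201: earliest unit first
        if unit["type"] == "read":                  # S202 -> S203
            log.append(unit["task"])                # first execution function
        else:                                       # S202 -> S204..S207
            count = 0                               # write task count
            while unit["fifo"] and count < unit["write_max"]:
                log.append(unit["fifo"].popleft())  # second execution function
                count += 1                          # S205; S206 checks the limit
            # S207: the unit is removed (already popped; not re-queued here)
    return log
```

A read unit, a write unit with three tasks and a limit of 2, and another read unit yield r1, w1, w2, r2: the third write task waits for a later round rather than delaying the second read.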
In the embodiment of the invention, the tasks corresponding to each thread scheduling execution unit in the thread execution queue are executed sequentially according to the time sequence. By setting the write upper limit value, the task corresponding to the next thread scheduling execution unit can be executed promptly once the number of executed write tasks reaches the write upper limit value. This avoids the situation in which one thread scheduling execution unit with many write tasks monopolizes execution so that the tasks corresponding to other thread scheduling execution units cannot be executed in time, ensures balanced processing of different tasks, and improves task processing performance.
In practical applications, when the write task count reaches the corresponding write upper limit value, there may still be unexecuted write tasks corresponding to the current thread scheduling execution unit. Therefore, before the current thread scheduling execution unit is removed from the thread execution queue, it may be judged whether remaining write tasks exist in the task linked list once the write task count reaches the corresponding write upper limit value.
In the event that there are no remaining write tasks in the task linked list, the step of removing the current thread scheduling execution unit from the thread execution queue is performed.
In the case that remaining write tasks exist in the task linked list, in order to ensure that the remaining write tasks corresponding to the current thread scheduling execution unit can continue to be executed in the next round, an occupancy mark can be set for the current thread scheduling execution unit, and whether remaining thread scheduling execution units exist in the thread execution queue besides the thread scheduling execution unit with the occupancy mark is judged.
The occupancy mark of a thread scheduling execution unit is used to indicate that the number of write tasks executed in the current round has reached the write upper limit value, but that remaining write tasks still exist for the thread scheduling execution unit.
In the case that no remaining thread scheduling execution units exist in the thread execution queue besides the thread scheduling execution unit with the occupancy mark, only the current thread scheduling execution unit remains in the thread execution queue. At this point, the second execution function bound to the current thread scheduling execution unit can be called to read the target write task from the task linked list of the corresponding write task carrier and execute the target write task; each time one target write task is completed, the target write task is deleted from the corresponding task linked list, until no remaining write tasks exist in the task linked list, whereupon the step of removing the current thread scheduling execution unit from the thread execution queue is performed.
In the case that remaining thread scheduling execution units exist in the thread execution queue besides the thread scheduling execution unit with the occupancy mark, other thread scheduling execution units besides the current one are present in the thread execution queue. At this point, the next thread scheduling execution unit adjacent to the current one can be taken as the latest current thread scheduling execution unit, the flow returns to the step of judging whether the task type corresponding to the current thread scheduling execution unit is a read type, and the processing of the tasks corresponding to the new round of thread scheduling execution units begins.
In the embodiment of the invention, if the number of write tasks executed for the current thread scheduling execution unit has reached the write upper limit value but the current thread scheduling execution unit still has unexecuted write tasks, setting an occupancy mark for the current thread scheduling execution unit keeps it in the thread execution queue. In the next round of task execution, the write tasks corresponding to the thread scheduling execution unit with the occupancy mark can then be executed directly, which reduces the lock overhead generated by re-adding the thread scheduling execution unit to the thread execution queue and ensures, to the greatest extent, the execution efficiency of the write tasks corresponding to that thread scheduling execution unit.
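The occupancy-mark behaviour can be sketched like this (again an illustrative Python simulation with assumed names; only write units are modelled, and rotating the deque stands in for letting the next queued unit run).

```python
from collections import deque

def dispatch_with_mark(queue):
    """A write unit that hits its write upper limit while tasks remain gets
    an occupancy mark and stays queued, so the next round resumes it without
    a fresh (lock-protected) enqueue; a drained unit leaves the queue."""
    log = []
    while queue:
        unit = queue[0]
        count = 0
        while unit["fifo"] and count < unit["write_max"]:
            log.append(unit["fifo"].popleft())
            count += 1
        if unit["fifo"]:
            unit["mark"] = True                  # occupancy mark: stay queued
            if any(u is not unit for u in queue):
                queue.rotate(-1)                 # yield to the next unit
            # if it is the only unit left, the loop keeps draining it
        else:
            queue.popleft()                      # finished: leave the queue
    return log
```

Unit A (five tasks, limit 2) runs two tasks, keeps its place via the mark while unit B runs, then resumes without being re-enqueued.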
Considering that, in practical applications, some write task carriers may be used frequently while others are used rarely, the write upper limit value corresponding to each write task carrier can be adjusted according to the usage frequency of each write task carrier, so that the write upper limit values set better match actual application requirements.
In the embodiment of the invention, the number of the writing tasks added to the task linked list of each writing task carrier in a preset time period can be counted; determining a theoretical writing upper limit value matched with the writing task number of the target writing task carrier based on a corresponding relation between the set writing task number range and the theoretical writing upper limit value; the target write task carrier is any write task carrier in all write task carriers; judging whether the writing upper limit value of the target writing task carrier is smaller than the matched theoretical writing upper limit value; when the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value, the write upper limit value of the target write task carrier is adjusted to be the theoretical write upper limit value.
For example, suppose the preset time period is 1 hour, within which 100 write tasks correspond to the first write task carrier and 5 write tasks correspond to the second write task carrier. If the write upper limit value corresponding to every write task carrier is set to 10, then 100/10 = 10 rounds are needed to complete all the write tasks corresponding to the first write task carrier. Therefore, in practical application, the write upper limit value corresponding to the first write task carrier can be adjusted to 100, so that all the write tasks corresponding to the first write task carrier can be executed quickly in a short time.
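The adjustment rule can be sketched as follows. The band table mapping task-count ranges to theoretical write upper limits is an invented example; the patent only requires that some such correspondence be set and that a carrier's limit be raised when it is below the matched theoretical value.

```python
def adjust_write_max(task_counts, current_max, bands):
    """For each write task carrier, look up the theoretical write upper limit
    matching its recent task count and raise the current limit if lower."""
    new_max = {}
    for carrier, n in task_counts.items():
        theoretical = next(limit for lo, hi, limit in bands if lo <= n <= hi)
        new_max[carrier] = max(current_max[carrier], theoretical)
    return new_max

# Hypothetical correspondence: (range low, range high, theoretical limit).
BANDS = [(0, 9, 10), (10, 49, 20), (50, 10**9, 100)]
```

With the counts from the example above (100 and 5 tasks per hour), the first carrier's limit rises from 10 to 100 while the second carrier's stays at 10.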
In the embodiment of the invention, the write upper limit value corresponding to each write task carrier is adjusted according to the use frequency of each write task carrier, so that the rationality of setting the write upper limit value corresponding to different task carriers is ensured.
Fig. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present invention, which includes an obtaining unit 31, a first adding unit 32, a second adding unit 33, a judging unit 34, and an executing unit 35;
an obtaining unit 31, configured to obtain an operation instruction issued by the server;
a first adding unit 32, configured to add a write task to a task linked list of a write task carrier and add a thread scheduling execution unit of the write task carrier to a thread execution queue when the operation instruction is the write task;
A second adding unit 33, configured to add a new writing task to a task linked list of the writing task carrier when the new writing task is acquired;
a judging unit 34, configured to judge whether the thread scheduling execution unit of the write task carrier is in the thread execution queue;
the execution unit 35 is configured to, in the case that the thread scheduling execution unit of the write task carrier is in the thread execution queue, sequentially execute the tasks corresponding to each thread scheduling execution unit in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
In some embodiments, a third adding unit is further included;
and the third adding unit is configured to, in the case that the thread scheduling execution unit of the write task carrier is not in the thread execution queue, add the thread scheduling execution unit of the write task carrier to the thread execution queue and trigger the execution unit to perform the step of sequentially executing the tasks corresponding to each thread scheduling execution unit in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
In some embodiments, the execution unit includes a determining subunit, a first judging subunit, a first execution subunit, a first removing subunit, a second execution subunit, a deleting subunit, a counting subunit, a second judging subunit, a second removing subunit, and a third judging subunit;
The determining subunit is used for determining the current thread scheduling execution unit of the thread execution queue according to the time sequence;
the first judging subunit is used for judging whether the task type corresponding to the current thread scheduling execution unit is a read type or not;
the first execution subunit is used for calling a first execution function bound with the current thread scheduling execution unit to execute a read task under the condition that the task type corresponding to the current thread scheduling execution unit is a read type;
a first removing subunit, configured to remove, when the read task is completed, the current thread scheduling execution unit from the thread execution queue;
the second execution subunit is used for calling a second execution function bound with the current thread scheduling execution unit to read a target write task from a task linked list of a corresponding write task carrier and executing the target write task under the condition that the task type corresponding to the current thread scheduling execution unit is a write type;
the deleting subunit is used for deleting the target writing task from the corresponding task linked list when one target writing task is completed;
a counting subunit for incrementing the write task count by one;
the second judging subunit is used for judging whether the write task count reaches the corresponding write upper limit value; triggering a second execution subunit to execute and call a second execution function bound with the current thread scheduling execution unit to read a target write task from a task linked list of a corresponding write task carrier and execute the target write task under the condition that the write task count does not reach the corresponding write upper limit value;
The second removing subunit is used for removing the current thread scheduling execution unit from the thread execution queue under the condition that the write task count reaches the corresponding write upper limit value;
the third judging subunit is used for judging whether a thread scheduling executing unit exists in the thread executing queue; triggering the determining subunit to execute the current thread scheduling executing unit of the thread execution queue according to the time sequence under the condition that the thread scheduling executing unit exists in the thread execution queue; in the event that there is no thread scheduling execution unit in the thread execution queue, then the operation ends.
In some embodiments, in the case that the write task count reaches the corresponding write upper limit value, before the current thread scheduling execution unit is removed from the thread execution queue, the apparatus further includes a first judging unit, a setting unit, a second judging unit, and an updating unit;
the first judging unit is configured to judge whether remaining write tasks exist in the task linked list in the case that the write task count reaches the corresponding write upper limit value; and, in the case that no remaining write tasks exist in the task linked list, trigger the second removing subunit to perform the step of removing the current thread scheduling execution unit from the thread execution queue;
the setting unit is configured to set an occupancy mark for the current thread scheduling execution unit in the case that remaining write tasks exist in the task linked list;
the second judging unit is configured to judge whether remaining thread scheduling execution units exist in the thread execution queue besides the thread scheduling execution unit with the occupancy mark; in the case that no remaining thread scheduling execution units exist besides the thread scheduling execution unit with the occupancy mark, trigger the second execution subunit to call the second execution function bound to the current thread scheduling execution unit to read the target write task from the task linked list of the corresponding write task carrier and execute it; and trigger the deleting subunit to delete the target write task from the corresponding task linked list each time one target write task is completed, until no remaining write tasks exist in the task linked list, whereupon the second removing subunit is triggered to remove the current thread scheduling execution unit from the thread execution queue;
and the updating unit is configured to, in the case that remaining thread scheduling execution units exist in the thread execution queue besides the thread scheduling execution unit with the occupancy mark, take the next thread scheduling execution unit adjacent to the current one as the latest current thread scheduling execution unit and trigger the first judging subunit to perform the step of judging whether the task type corresponding to the current thread scheduling execution unit is a read type.
In some embodiments, a fourth adding unit is further included;
and the fourth adding unit is used for adding the read task into a default read task carrier of the storage system and adding a thread scheduling execution unit of the read task carrier into a thread execution queue under the condition that the operation instruction is the read task.
In some embodiments, the apparatus further comprises an adjustment unit;
and the adjusting unit is used for adjusting the writing upper limit value corresponding to each writing task carrier according to the use frequency of each writing task carrier.
In some embodiments, the adjusting unit is configured to count the number of write tasks added to the task linked list of each write task carrier in a preset period of time; determining a theoretical writing upper limit value matched with the writing task number of the target writing task carrier based on a corresponding relation between the set writing task number range and the theoretical writing upper limit value; the target write task carrier is any write task carrier in all write task carriers; judging whether the writing upper limit value of the target writing task carrier is smaller than the matched theoretical writing upper limit value; when the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value, the write upper limit value of the target write task carrier is adjusted to be the theoretical write upper limit value.
According to the technical scheme, the operation instruction issued by the server is acquired; in the case that the operation instruction is a write task, the write task is added to the task linked list of a write task carrier, and the thread scheduling execution unit of the write task carrier is added to the thread execution queue. In the case that a new write task is acquired, the new write task is added to the task linked list of the write task carrier, and whether the thread scheduling execution unit of the write task carrier is in the thread execution queue is judged. In the case that the thread scheduling execution unit of the write task carrier is in the thread execution queue, the tasks corresponding to each thread scheduling execution unit in the thread execution queue are executed sequentially according to the time sequence and the write upper limit value corresponding to each write task carrier. The advantage of the invention is that a dedicated write task carrier is provided for write tasks, with a task linked list on the write task carrier, so that multiple write tasks can be stored in the task linked list. For multiple write tasks, the thread scheduling execution unit of the write task carrier does not need to be added to the thread execution queue for each write task, realizing batch processing of write tasks. Compared with the conventional approach, in which each write task requires its thread scheduling execution unit to be added to the thread execution queue, setting up the task linked list allows multiple write tasks to be stored, reduces the frequency with which the thread scheduling execution unit of the write task carrier is added to the thread execution queue, and effectively reduces the write latency.
In addition, by limiting the write upper limit value of each write task carrier, the invention can effectively avoid the problem that tasks corresponding to other task carriers cannot be executed in time.
Fig. 4 is a block diagram of a task processing device according to an embodiment of the present invention, where, as shown in fig. 4, the task processing device includes: a memory 40 for storing a computer program;
a processor 41, configured to implement the steps of the task processing method of the embodiments described above when executing the computer program.
The task processing device provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Processor 41 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 41 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 41 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 41 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 41 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 40 may include one or more computer-readable storage media, which may be non-transitory. Memory 40 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 40 at least stores a computer program 401 which, when loaded and executed by the processor 41, implements the relevant steps of the task processing method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 40 may further include an operating system 402, data 403, and the like, stored transiently or permanently. The operating system 402 may be, among others, Windows, Unix, or Linux. The data 403 may include, but is not limited to, write tasks and the like.
In some embodiments, the task processing device may further include a display 42, an input/output interface 43, a communication interface 44, a power supply 45, and a communication bus 46.
Those skilled in the art will appreciate that the structure shown in fig. 4 is not limiting of the task processing device and may include more or fewer components than shown.
It will be appreciated that, if the task processing methods of the embodiments described above are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied, in essence or in the part contributing to the prior art, in whole or in part in the form of a software product stored in a storage medium and performing all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, a magnetic disk, an optical disk, and so on.
Based on this, the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the task processing method described above.
The task processing method, device, equipment and computer-readable storage medium provided by the embodiments of the present invention have been described in detail above. In this description, the embodiments are described in a progressive manner, each focusing on its differences from the others; for the parts they share, the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above describes in detail a task processing method, device, apparatus and computer readable storage medium provided by the present invention. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (10)

1. A method for processing a task, comprising:
acquiring an operation instruction issued by a server;
when the operation instruction is a writing task, adding the writing task into a task linked list of a writing task carrier, and adding a thread scheduling execution unit of the writing task carrier into a thread execution queue;
under the condition that a new writing task is acquired, adding the new writing task into a task linked list of the writing task carrier, and judging whether a thread scheduling execution unit of the writing task carrier is in the thread execution queue or not;
and under the condition that the thread scheduling execution unit of the write task carrier is in the thread execution queue, sequentially executing tasks corresponding to the thread scheduling execution units in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
2. The method of task processing according to claim 1, further comprising, after said adding the new write task to the task linked list of the write task carrier:
and under the condition that the thread scheduling execution unit of the write task carrier is not in the thread execution queue, adding the thread scheduling execution unit of the write task carrier to the thread execution queue, and sequentially executing tasks corresponding to the thread scheduling execution units in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier.
3. The method for processing tasks according to claim 2, wherein sequentially executing the tasks corresponding to the thread scheduling execution units in the thread execution queue according to the chronological order and the write upper limit value corresponding to each write task carrier comprises:
determining a current thread scheduling execution unit of the thread execution queue according to the time sequence;
judging whether the task type corresponding to the current thread scheduling execution unit is a read type or not;
when the task type corresponding to the current thread scheduling and executing unit is a read type, calling a first executing function bound with the current thread scheduling and executing unit to execute a read task; removing the current thread scheduling execution unit from the thread execution queue when the read task is completed;
when the task type corresponding to the current thread scheduling and executing unit is a writing type, calling a second executing function bound with the current thread scheduling and executing unit to read a target writing task from a task linked list of a corresponding writing task carrier, and executing the target writing task;
deleting the target writing task from the corresponding task linked list and incrementing a write task count by one each time a target writing task is completed;
judging whether the write task count reaches the corresponding write upper limit value;
returning to call a second execution function bound with the current thread scheduling execution unit to read a target write task from a task linked list of a corresponding write task carrier and execute the target write task under the condition that the write task count does not reach the corresponding write upper limit value;
removing the current thread scheduling execution unit from the thread execution queue under the condition that the write task count reaches the corresponding write upper limit value;
judging whether a thread scheduling execution unit exists in the thread execution queue;
returning to the step of determining the current thread scheduling execution unit of the thread execution queue according to the time sequence under the condition that the thread scheduling execution unit exists in the thread execution queue;
and ending the operation when the thread scheduling execution unit does not exist in the thread execution queue.
4. A method of task processing according to claim 3, wherein, in the case where the write task count reaches its corresponding write upper limit value, before removing the current thread scheduling execution unit from the thread execution queue, further comprising:
judging whether remaining write tasks exist in the task linked list under the condition that the write task count reaches the corresponding write upper limit value;
executing the step of removing the current thread scheduling execution unit from the thread execution queue under the condition that no remaining write task exists in the task linked list;
setting an occupation mark for the current thread scheduling execution unit under the condition that remaining write tasks exist in the task linked list, and judging whether remaining thread scheduling execution units exist in the thread execution queue other than the thread scheduling execution units with the occupation mark;
under the condition that no remaining thread scheduling execution unit exists in the thread execution queue other than the thread scheduling execution units with the occupation mark, calling the second execution function bound with the current thread scheduling execution unit to read a target write task from the task linked list of the corresponding write task carrier and executing the target write task; deleting each target write task from the corresponding task linked list when it is completed, until no remaining write task exists in the task linked list, and then executing the step of removing the current thread scheduling execution unit from the thread execution queue;
and under the condition that remaining thread scheduling execution units exist in the thread execution queue other than the thread scheduling execution unit with the occupation mark, taking the next thread scheduling execution unit adjacent to the current thread scheduling execution unit as the latest current thread scheduling execution unit, and returning to the step of judging whether the task type corresponding to the current thread scheduling execution unit is a read type.
5. The method for processing a task according to claim 1, further comprising:
and under the condition that the operation instruction is a read task, adding the read task into a default read task carrier of a storage system, and adding a thread scheduling execution unit of the read task carrier into a thread execution queue.
6. A method of processing a task according to any one of claims 1 to 5, further comprising:
and adjusting the writing upper limit value corresponding to each writing task carrier according to the usage frequency of each writing task carrier.
7. The method according to claim 6, wherein adjusting the write upper limit value corresponding to each of the write task carriers according to the frequency of use of each of the write task carriers comprises:
counting the number of writing tasks added to the task linked list of each writing task carrier in a preset time period;
determining a theoretical writing upper limit value matched with the writing task number of a target writing task carrier based on a set correspondence between writing task number ranges and theoretical writing upper limit values, the target writing task carrier being any one of the writing task carriers;
judging whether the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value;
and when the write upper limit value of the target write task carrier is smaller than the matched theoretical write upper limit value, adjusting the write upper limit value of the target write task carrier to be the theoretical write upper limit value.
8. The task processing device is characterized by comprising an acquisition unit, a first adding unit, a second adding unit, a judging unit and an executing unit;
the acquisition unit is used for acquiring an operation instruction issued by the server;
the first adding unit is used for adding the writing task to a task linked list of a writing task carrier and adding a thread scheduling executing unit of the writing task carrier to a thread executing queue under the condition that the operation instruction is the writing task;
the second adding unit is used for adding the new writing task to a task linked list of the writing task carrier under the condition that the new writing task is acquired;
the judging unit is used for judging whether the thread scheduling executing unit of the write task carrier is in the thread executing queue or not;
the execution unit is used for sequentially executing tasks corresponding to the thread scheduling execution units in the thread execution queue according to the time sequence and the write upper limit value corresponding to each write task carrier under the condition that the thread scheduling execution unit of the write task carrier is in the thread execution queue.
9. A processing apparatus for a task, comprising:
a memory for storing a computer program;
a processor for executing the computer program to perform the steps of the task processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the task processing method according to any one of claims 1 to 7.
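The write-limit adjustment of claims 6 and 7 can be sketched as follows. The count ranges, limit values, and function names below are illustrative assumptions, not taken from the patent; only the rule itself — map the number of tasks added in a window to a theoretical limit, and raise the carrier's limit when it is smaller than that theoretical value — follows the claims.

```python
import bisect

# Assumed correspondence between writing-task-count ranges and theoretical
# write upper limit values: a carrier that received more tasks in the
# preset time period is granted a larger per-round limit.
RANGE_STARTS = [0, 100, 1000]      # lower bounds of the count ranges
THEORETICAL_LIMITS = [4, 16, 64]   # matching theoretical write upper limits


def theoretical_limit(task_count):
    """Map the number of tasks added in the window to a theoretical limit."""
    i = bisect.bisect_right(RANGE_STARTS, task_count) - 1
    return THEORETICAL_LIMITS[i]


def adjust_limit(current_limit, tasks_added_in_window):
    """Raise the carrier's write upper limit when it is smaller than the
    matched theoretical value; claim 7 only covers this 'smaller than' case,
    so the limit is left unchanged otherwise."""
    target = theoretical_limit(tasks_added_in_window)
    return target if current_limit < target else current_limit
```

A busy carrier (say 2000 tasks in the window) would thus be promoted to the largest limit, while a quiet carrier keeps whatever limit it already has.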
CN202410021881.9A 2024-01-05 2024-01-05 Task processing method, device, equipment and computer readable storage medium Pending CN117827114A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410021881.9A CN117827114A (en) 2024-01-05 2024-01-05 Task processing method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410021881.9A CN117827114A (en) 2024-01-05 2024-01-05 Task processing method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117827114A true CN117827114A (en) 2024-04-05

Family

ID=90505700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410021881.9A Pending CN117827114A (en) 2024-01-05 2024-01-05 Task processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117827114A (en)

Similar Documents

Publication Publication Date Title
US20230138736A1 (en) Cluster file system-based data backup method and apparatus, and readable storage medium
CN106569891B (en) Method and device for scheduling and executing tasks in storage system
JP2006351004A (en) Memory management method of mobile terminal
CN110806925B (en) Audio playing method and equipment
CN109960589B (en) Method and device for realizing system software layer of embedded system and readable medium
CN110515917B (en) Method, device and medium for controlling reconstruction speed
CN112783831A (en) File migration method and device
CN116450287A (en) Method, device, equipment and readable medium for managing storage capacity of service container
WO2024119930A1 (en) Scheduling method and apparatus, and computer device and storage medium
CN113553216A (en) Data recovery method and device, electronic equipment and storage medium
CN117827114A (en) Task processing method, device, equipment and computer readable storage medium
CN116243868A (en) Task processing method, device, equipment and computer readable storage medium
CN116339643A (en) Formatting method, formatting device, formatting equipment and formatting medium for disk array
CN115934999A (en) Video stream data storage method, device and medium based on block file
CN114328280A (en) Log access method and device based on Flash, storage medium and terminal
CN112860376B (en) Snapshot chain manufacturing method and device, electronic equipment and storage medium
CN115328696A (en) Data backup method in database
CN109359093B (en) Rule file updating method and system
CN113836112A (en) Data migration method, system, device and medium
CN111176571A (en) Method, device, equipment and medium for managing local object
CN115794446B (en) Message processing method and device, electronic equipment and storage medium
CN107862095B (en) Data processing method and device
CN111625192B (en) Metadata object access method, device, equipment and medium
CN109634874A (en) A kind of data processing method, device, electronic equipment and storage system
CN108959499A (en) Distributed file system performance analysis method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination