CN110018880B - IO thread execution method and device - Google Patents

IO thread execution method and device

Info

Publication number
CN110018880B
CN110018880B (application CN201810018487.4A)
Authority
CN
China
Prior art keywords
thread
task
maintainer
delayed loading
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810018487.4A
Other languages
Chinese (zh)
Other versions
CN110018880A (en
Inventor
王晓龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201810018487.4A priority Critical patent/CN110018880B/en
Publication of CN110018880A publication Critical patent/CN110018880A/en
Application granted granted Critical
Publication of CN110018880B publication Critical patent/CN110018880B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45579I/O management, e.g. providing access to device drivers or storage
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an IO thread execution method comprising the following steps: receiving an upload task or a delayed loading task; selecting, through a consistent hash algorithm, the thread maintainer corresponding to the upload task or the delayed loading task; adding the upload or delayed loading task, through a coroutine mechanism, to the input/output thread maintained by that thread maintainer; and executing the upload or delayed loading task on that input/output thread. Because the thread maintainer is selected by a consistent hash algorithm and the task is added to the maintained IO thread by a coroutine mechanism before being executed there, the IO thread is time-shared, which avoids the starvation and wasted thread resources that arise when a task cannot be processed for a long time.

Description

IO thread execution method and device
Technical Field
The application relates to the field of public cloud block storage, in particular to an IO thread execution method. The application also relates to an IO thread execution device.
Background
The cloud block storage service is a virtual disk offering, a new concept extended from cloud computing and an emerging network storage technology. To meet virtual disk disaster-recovery requirements, the project must implement a virtual disk snapshot function: chiefly, uploading snapshot data of a local virtual disk to a cloud server (upload) and creating a disk from snapshot data held on the cloud server (lazyload), with the snapshot data downloaded by delayed loading. For example, the Alibaba Cloud Object Storage Service (OSS) backs an ultra-high-performance virtual disk product that implements such a snapshot function. The OSS bottom layer is a low-speed storage system built on SATA (Serial Advanced Technology Attachment) disks, so the number of read/write (I/O) operations it can process per second is limited by the underlying storage; upload and delayed-loading completion times are subject to SLA (Service-Level Agreement) requirements, and delayed-loading read operations have latency requirements. To serve the snapshot function, snapshot tasks are split out uniformly into an independent snapshot server component. Which IO thread model the snapshot server should use to manage snapshot task operation is therefore an urgent problem to be solved.
The existing snapshot-module IO thread model works as follows: every IO thread periodically scans the task map to see whether any task needs processing and, if so, runs it to perform the upload or delayed loading function; meanwhile, a thread is not allowed to process any other task until its current task completes.
This solution has the following disadvantages. Because a thread cannot take on another task until its current one finishes, other tasks may go unprocessed for a long time and starve, wasting thread resources. The global task map needs lock protection, which increases maintenance complexity. Delayed loading read operations are completed through a client retry mechanism, which greatly increases read latency.
Disclosure of Invention
The invention provides an IO thread execution method, which aims to solve the problems of starvation and thread resource waste caused by the fact that tasks in the prior art cannot be processed for a long time. The invention further provides an IO thread execution device.
The application provides an IO thread execution method, which comprises the following steps:
receiving an uploading task or a delayed loading task;
selecting a thread maintainer corresponding to the uploading task or the delayed loading task through a consistent hash algorithm;
adding the uploading or delayed loading task on the input and output thread maintained by the thread maintainer by a coroutine mechanism;
and executing the uploading or delayed loading task on the input and output thread.
Optionally, after the step of selecting, by using a consistent hash algorithm, a thread maintainer corresponding to the upload task or the deferred load task, the method further includes:
and adding the uploading task or the delayed loading task to a task map corresponding to the thread maintainer on an input/output thread maintained by the thread maintainer.
Optionally, a task deletion operation is performed on the task map.
Optionally, the step of selecting, by using a consistent hash algorithm, a thread maintainer corresponding to the upload task or the deferred load task further includes:
initializing the thread maintainer list.
Optionally, the implementation manner of the step of initializing the thread maintainer list includes:
each thread maintainer corresponds to one input and output thread, and each thread maintainer maintains one task map.
Optionally, the method further includes:
obtaining a delayed loading read request on an input/output candidate list of a specific delayed loading task;
and in the execution process of the specific deferred load task, preferentially processing the deferred load read request.
Optionally, before the step of obtaining the deferred load read request on the input/output candidate list of the deferred load task, the method further includes:
receiving a delayed loading read request of the specific delayed loading task;
selecting the thread maintainer corresponding to the delayed loading read request through a consistent hash algorithm;
acquiring the corresponding specific delayed loading task by a coroutine mechanism on an input/output thread maintained by the thread maintainer;
and adding the deferred load read request to an input and output candidate list of the specific deferred load task.
Optionally, after the step of selecting, by using a consistent hashing algorithm, the thread maintainer corresponding to the deferred load read request further includes:
and judging, through related data in the thread maintainer list, whether the content to be read by the delayed loading read request resides on the cloud server or locally; if it resides on the cloud server, entering the next step of acquiring the corresponding specific delayed loading task, through a coroutine mechanism, on an input/output thread maintained by the thread maintainer.
Optionally, time-division multiplexing of tasks on the same input/output thread is realized through a coroutine mechanism.
The present application further provides an IO thread execution apparatus, including:
the task receiving module is used for receiving the uploading task or the delayed loading task;
the thread maintainer selecting module is used for selecting a thread maintainer corresponding to the uploading task or the delayed loading task through a consistent hash algorithm;
the task adding module is used for adding the uploading or delayed loading task on the input and output thread maintained by the thread maintainer by a coroutine mechanism;
and the task execution module is used for executing the uploading or delayed loading task on the input and output thread.
Optionally, the apparatus further comprises:
and the task map adding module is used for adding the uploading task or the delayed loading task to a task map corresponding to the thread maintainer on an input/output thread maintained by the thread maintainer.
Optionally, the apparatus further comprises:
an initialization module to initialize the thread maintainer list.
Optionally, the apparatus further comprises:
a delayed loading read request acquisition module, configured to acquire a delayed loading read request on an input/output candidate list of the delayed loading task;
and the delayed loading read request execution module is used for preferentially processing the delayed loading read request in the delayed loading task execution process.
Optionally, the apparatus further comprises:
a delayed loading read request receiving module, configured to receive a delayed loading read request of a specific delayed loading task;
the corresponding thread maintainer selecting module is used for selecting the thread maintainer corresponding to the delayed loading read request through a consistent hash algorithm;
the corresponding specific delayed loading task acquisition module is used for acquiring the corresponding specific delayed loading task on the input and output thread maintained by the thread maintainer through a coroutine mechanism;
and the delayed loading read request adding module is used for adding the delayed loading read request to the input and output candidate list of the specific delayed loading task.
Optionally, the apparatus further comprises:
and the judging module is used for judging, through related data in the thread maintainer list, whether the content to be read by the delayed loading read request resides on the cloud server or locally and, if it resides on the cloud server, entering the next step of acquiring the corresponding specific delayed loading task through a coroutine mechanism on an input/output thread maintained by the thread maintainer.
Compared with the prior art, the invention has the following advantages:
the IO thread execution method selects the thread maintainer corresponding to the uploading task or the delayed loading task through a consistent Hash algorithm, adds the uploading or delayed loading task on the input and output thread maintained by the thread maintainer through a coroutine mechanism, and then executes the uploading or delayed loading task on the input and output thread to realize the execution of the task on the IO thread. The uploading or delayed loading task is added to the IO thread maintained by the thread maintainer through a coroutine mechanism, so that the time sharing of the IO thread is realized, and the problems of starvation and thread resource waste caused by the fact that the task cannot be processed for a long time can be avoided.
Furthermore, on the basis of the IO thread execution method, the task map is kept thread-local and its additions and deletions are executed within that thread, so lock-free processing is realized and maintenance complexity is reduced. In the delayed loading process, delayed loading reads are no longer completed by a client retry mechanism; instead the delayed loading read request is processed preferentially, which reduces repeated work and greatly lowers request-processing latency.
Drawings
Fig. 1 is a flowchart of an IO thread execution method according to a first embodiment of the present application;
fig. 2 is another flowchart of an IO thread execution method according to a first embodiment of the present application;
fig. 3 is a block diagram illustrating an IO thread execution apparatus according to a second embodiment of the present application;
fig. 4 is another structural block diagram of an IO thread execution apparatus according to a second embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The invention may, however, be embodied in many forms other than those set forth herein; those skilled in the art can make similar generalizations without departing from its substance, and the invention is therefore not limited by the specific embodiments disclosed below.
A first embodiment of the present application provides an IO thread execution method. Please refer to fig. 1, which is a flowchart illustrating a first embodiment of the present application. The following describes a method for executing an IO thread according to a first embodiment of the present application in detail with reference to fig. 1.
As shown in fig. 1, a flowchart of an IO thread execution method provided in a first embodiment of the present application is shown, and includes the following steps.
And step S101, receiving an uploading task or a delayed loading task.
This step receives a task of uploading snapshot data of a local virtual disk to a cloud server, or a delayed loading task of downloading snapshot data from the cloud server to the local machine.
A virtual disk is a virtual disk instance built on a back-end distributed storage system; it can be read and written on the cloud server like an ordinary disk.
A virtual disk snapshot is a complete record of a virtual disk's contents at a certain moment, used mainly for backup and data recovery.
An upload task (upload) uploads snapshot data of the virtual disk to a cloud server, for example the Alibaba Cloud Object Storage Service (OSS), a massive, secure, low-cost, and highly reliable cloud storage service.
A delayed loading task (lazyload) creates a disk locally from snapshot data held in cloud storage; the disk is usable immediately, and the snapshot data is downloaded by delayed loading.
As noted in the background, the cloud block storage service must implement a virtual disk snapshot function to meet disaster-recovery requirements. Unlike a traditional backup, the data in a virtual disk snapshot must be downloaded before use: snapshot data of the local virtual disk is uploaded to the cloud server, a disk is created from the cloud server's snapshot data, and that data is downloaded by delayed loading. When a virtual machine starts, its disk data is loaded gradually from a disk snapshot, and the virtual machine can run before loading completes. Because the OSS bottom layer is a low-speed SATA-based storage system, the I/O operations it can process per second are limited by the underlying storage, upload and delayed-loading completion times carry SLA requirements, and delayed loading reads carry latency requirements; snapshot tasks are therefore split out uniformly into an independent snapshot server component.
In the snapshot server, a task of uploading local virtual disk snapshot data to the cloud server, or a delayed loading task of downloading the cloud server's snapshot data to the local machine, is received for use in the subsequent IO thread scheduling and execution of the related tasks.
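The flow of steps S101 through S105 can be sketched as follows. This is an illustrative sketch rather than the patent's implementation: the names `ThreadMaintainer`, `select_maintainer`, `submit_task`, and `NUM_MAINTAINERS` are assumptions, and a simple hash-modulo dispatch stands in for the consistent hash of step S102, which the description develops below.

```python
import hashlib

# Hypothetical sketch of steps S101-S105; all names here are illustrative.
NUM_MAINTAINERS = 4

class ThreadMaintainer:
    """One maintainer per IO thread; holds that thread's local task map."""
    def __init__(self, index):
        self.index = index
        self.task_map = {}  # task_id -> task descriptor

maintainers = [ThreadMaintainer(i) for i in range(NUM_MAINTAINERS)]

def select_maintainer(task_id):
    """Map a task to a maintainer by hashing its id (a simplified stand-in
    for the consistent-hash selection of step S102)."""
    digest = hashlib.md5(task_id.encode()).hexdigest()
    return maintainers[int(digest, 16) % NUM_MAINTAINERS]

def submit_task(task_id, kind):
    """S101: receive an upload or lazyload task; S102: pick its maintainer;
    S105: record it in that maintainer's local task map."""
    m = select_maintainer(task_id)
    m.task_map[task_id] = {"kind": kind, "state": "pending"}
    return m

m = submit_task("snapshot-001", "upload")
assert "snapshot-001" in m.task_map
```

Because the mapping depends only on the task id, resubmitting or querying the same task always reaches the same maintainer and the same task map.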
And step S102, selecting a thread maintainer corresponding to the uploading task or the delayed loading task through a consistent hash algorithm.
The step is used for selecting the thread maintainer corresponding to the uploading task or the delayed loading task through a consistent hash algorithm.
A hash algorithm transforms an input of arbitrary length into a fixed-length output, the hash value. "Hash algorithm" names a class of algorithms rather than one specific algorithm. A hash algorithm, also called a hashing algorithm, generally satisfies the relationship f(data) = key: data of any length is input and, after processing by the hash algorithm, a fixed-length key is output. The process is irreversible; the data cannot be deduced back from the key.
A consistent hash algorithm uses a hash function to map a large amount of data evenly across different storage targets, guaranteeing lookup accuracy while also ensuring that, when one storage target fails, its load is balanced across the remaining storage targets.
A thread maintainer (worker) is a multithreaded execution construct, analogous to a web worker created independently of an HTML page; it maintains a thread and can also record and maintain data. Here the thread maintainer records the association between local data and cloud server data, through which the upload and download progress between the local machine and the cloud server can be checked and the related operations executed.
A thread maintainer lets developers write long-running daemon code that executes transactions or logic without interruption by the user while keeping the page responsive. In short, multiple threads may be created, but the child threads remain fully under the control of the master thread, so the more time-consuming computations can be handed to a thread maintainer. In the present application, one thread maintainer maintains exactly one thread. A specific hash value is set through the consistent hash algorithm, and the thread maintainer corresponding to the upload or delayed loading task is addressed by matching that hash value. When a new task is added to a thread or cancelled, the correspondence between the thread's existing tasks and their thread maintainers does not change, so the consistency of the thread maintainer mapping is preserved.
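The consistent-hash selection of step S102 can be illustrated with a standard virtual-node hash ring. This is a sketch under the assumption that "addressing the thread maintainer by matching the hash value" corresponds to the usual clockwise-successor lookup; the class and parameter names are not from the patent.

```python
import bisect
import hashlib

# A minimal consistent-hash ring; names and the replica count are illustrative.
class ConsistentHashRing:
    def __init__(self, maintainers, replicas=64):
        self._ring = []  # sorted list of (hash point, maintainer)
        for m in maintainers:
            for r in range(replicas):
                self._ring.append((self._hash(f"{m}#{r}"), m))
        self._ring.sort()
        self._points = [p for p, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def select(self, task_id):
        """Walk clockwise from the task's hash to the next maintainer point."""
        idx = bisect.bisect(self._points, self._hash(task_id)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["worker-0", "worker-1", "worker-2"])
# The same task id always lands on the same maintainer.
assert ring.select("task-42") == ring.select("task-42")
```

The property the patent relies on is that adding or removing a maintainer disturbs only the tasks whose successor point belonged to that maintainer; every other task keeps its original mapping.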
Preferably, referring to fig. 2, after the step S102, a step S105 is further included, where the upload task or the delayed load task is added to a task map corresponding to the thread maintainer on an input/output thread maintained by the thread maintainer.
The step is used for adding the uploading task or the delayed loading task to a task map corresponding to the thread maintainer on the input and output thread maintained by the thread maintainer.
A Process (Process) is a running activity of a program in a computer on a data set, is a basic unit of resource allocation and scheduling of a system, and is the basis of an operating system structure. In early process-oriented design of computer architectures, processes were the basic execution entities of programs; in contemporary computer architectures designed for threads, a process is a container for threads. A process is a running activity of a program with independent functions on a certain data set, can apply for and own system resources, is a dynamic concept, is an active entity, and is an 'executing program'.
A thread, sometimes called a lightweight process (LWP), is the smallest unit of a program's execution flow. A thread is an entity within a process and the basic unit that the system independently schedules and dispatches; it owns no system resources of its own beyond those essential to running, but it shares all the resources owned by its process with the other threads of that process. One thread can create and tear down another, and running multiple threads simultaneously in a single program to accomplish different tasks is called multithreading. Because threads constrain one another, a thread runs discontinuously and has three basic states: ready, blocked, and running. Every program has at least one thread; if a program has only one thread, that thread is the program itself.
When a client issues an input/output request, the server creates an IO thread to execute it.
A task map (taskmap) records and maintains the tasks to be executed. A task map may be global or thread-local; modifying tasks in a global task map requires global lock protection.
The simplest way to preserve data integrity and state synchronization across multiple threads is locking. Here, instead, the task map records the list of tasks each maintainer must execute: the upload or delayed loading task is added to the thread maintainer's local task map on the input/output thread that the maintainer itself owns, and because only that thread modifies the map, lock-free processing is achieved.
Preferably, a task deletion operation is also performed on the task map. The task map records the list of tasks to be executed; tasks can be deleted from it as well as added, and the task map is processed without locking.
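The lock-free discipline above can be sketched by funneling every map mutation through the owning IO thread's work queue, so the map is only ever touched by one thread. This is an assumption about the mechanism, not the patent's code, and the `IOThread` class and its method names are illustrative.

```python
import queue
import threading

# Sketch of a lock-free, thread-local task map: all add/delete operations
# run on the owning IO thread, so the dict itself needs no lock.
class IOThread:
    def __init__(self):
        self.task_map = {}            # touched only by self._thread
        self._ops = queue.Queue()     # the queue, not the map, is the sync point
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            op = self._ops.get()
            if op is None:            # sentinel: shut down
                break
            op()                      # mutation executes here, on the IO thread

    def add_task(self, task_id, task):
        self._ops.put(lambda: self.task_map.__setitem__(task_id, task))

    def delete_task(self, task_id):
        self._ops.put(lambda: self.task_map.pop(task_id, None))

    def stop(self):
        self._ops.put(None)
        self._thread.join()

io = IOThread()
io.add_task("t1", {"kind": "upload"})
io.delete_task("t1")
io.add_task("t2", {"kind": "lazyload"})
io.stop()
assert io.task_map == {"t2": {"kind": "lazyload"}}
```

Callers on other threads only enqueue closures; the queue provides the ordering, and the map itself never needs lock protection.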
Preferably, referring to fig. 2, the step S102 further includes: step S106, initializing the thread maintainer list.
This step is used to initialize the thread maintainer list.
Thread maintainers are a multi-threaded execution technique that can be created independently of HTML pages, maintain threads, and record and maintain data. Initializing the thread maintainer list before acquiring the thread maintainer step corresponding to the uploading task or the delayed loading task.
The thread maintainer records the association between local data and cloud server data, through which the upload and download progress between the local machine and the cloud server can be checked so that subsequent related operations are executed. Through the upload and download progress recorded in the thread maintainer list, it is judged whether the content to be read by a delayed loading read request resides on the cloud server or locally; if it resides on the cloud server, the next step is entered and the corresponding specific delayed loading task is acquired through a coroutine mechanism on the input/output thread maintained by the thread maintainer (as shown in fig. 2, step S108).
Preferably, the step of initializing the thread maintainer list comprises the following implementation modes: each thread maintainer corresponds to one input/output thread, and each thread maintainer maintains a task map.
Each thread maintainer maintains one input/output thread and one task map, so each input/output thread corresponds to exactly one task map; by performing the additions and deletions of upload or delayed loading tasks on that thread-local task map, lock-free processing of the task map is achieved.
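The initialization of step S106 can be sketched as building the one-maintainer, one-IO-thread, one-task-map correspondence up front. The names `Maintainer` and `init_maintainer_list` are assumptions for illustration.

```python
# Illustrative initialization of the thread maintainer list (step S106):
# each maintainer owns exactly one IO thread and exactly one task map,
# so a thread never needs a lock to touch its own map.
class Maintainer:
    def __init__(self, name):
        self.name = name
        self.task_map = {}                 # one private map per maintainer
        self.io_thread = name + "-io"      # placeholder for the real IO thread

def init_maintainer_list(n):
    return [Maintainer(f"worker-{i}") for i in range(n)]

maintainers = init_maintainer_list(4)
assert len(maintainers) == 4
assert all(m.task_map == {} for m in maintainers)
```

Each maintainer's map is a distinct object, which is what makes the later per-thread add/delete operations lock-free.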
Step S103, adding the upload or delayed load task on the input/output thread maintained by the thread maintainer by a coroutine mechanism.
The step is used for adding the uploading or delayed loading task on the input and output thread maintained by the thread maintainer by a coroutine mechanism.
A coroutine is a lightweight multitasking mechanism in operating system user space. The programmer assumes responsibility for scheduling, which can improve performance, though coroutines give up a standard thread's ability to use multiple CPUs. A coroutine retains the state of its last call (a specific combination of all its local state); each time it is re-entered, it resumes that state. Put another way, it continues from the position in its logic flow where it last exited.
A coroutine is a program control mechanism with which cooperative multitasking can be conveniently implemented. A program may contain multiple coroutines, just as a process may contain multiple threads. Threads are relatively independent, each with its own context, and are switched under the control of the system; coroutines are also relatively independent with their own contexts, but switching is controlled by the coroutines themselves: the currently running coroutine decides when to switch to another. On the input/output thread maintained by the thread maintainer, the upload or delayed loading task is added under program control; when multiple tasks must execute, it is not the operating system that decides which task runs when (preemptive multitasking) but the tasks themselves, each voluntarily yielding when it judges the moment appropriate and resuming likewise (cooperative multitasking). The same thread can therefore process other tasks before one task has finished, so no task goes unprocessed for a long time. Moreover, because a coroutine retains the state of its last call and resumes that state on re-entry, task-processing efficiency improves and thread resources are well utilized.
Preferably, the task time-sharing multiplexing on the same input and output thread is realized through a coroutine mechanism.
The same thread is allowed to process other tasks before one task finishes, so no task goes unprocessed for a long time. The unpredictable starvation and wasted thread resources caused by the inherent synchronization constraints of the prior scheme are avoided, and time-sharing of the IO thread is achieved.
For example, suppose a single-core operating system with no other programs running has two threads A and B whose tasks are pure arithmetic, with no contention or data sharing between them. If A and B run "in parallel", the operating system must switch continuously between them, achieving only pseudo-parallelism. Using coroutines instead, coroutine A runs on a single thread and yields to coroutine B at a point of its own choosing, and B later yields back to A; because switching is controlled by the current coroutine itself, the tasks genuinely interleave on the one thread, and time-sharing of the IO thread is achieved. Each coroutine retains the state of its last call, and each re-entry resumes that state.
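The A/B example above can be sketched with Python generators, which are a simple coroutine form: each task yields at points of its own choosing, and a single thread interleaves them in round-robin order. This is an illustrative sketch of the time-sharing property, not the patent's scheduler; all names are assumptions.

```python
# Cooperative multitasking on one thread: each task voluntarily yields,
# so neither task can starve the other.
def upload_task(log):
    for chunk in range(3):
        log.append(f"upload:{chunk}")
        yield                      # voluntarily hand control back

def lazyload_task(log):
    for chunk in range(3):
        log.append(f"lazyload:{chunk}")
        yield

def run_on_one_thread(tasks):
    """Round-robin scheduler: resume each coroutine in turn until all finish.
    Each resume continues from where the coroutine last yielded."""
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)
            tasks.append(task)     # not finished: requeue for its next turn
        except StopIteration:
            pass                   # finished: drop it

log = []
run_on_one_thread([upload_task(log), lazyload_task(log)])
assert log == ["upload:0", "lazyload:0", "upload:1",
               "lazyload:1", "upload:2", "lazyload:2"]
```

The interleaved log shows the two tasks sharing one thread in time slices, with each task's loop state preserved across yields.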
Preferably, referring to fig. 2, the IO thread execution method further includes the following steps:
in step S112, a delayed loading read request on the input/output candidate list of the specific delayed loading task is acquired.
This step acquires the delayed loading read request on the input/output candidate list of a particular delayed loading task.
A delayed loading read request arises when, during the delayed loading of snapshot data, the client urgently needs to read some of that data.
The specific deferred load task is a specific deferred load task corresponding to the deferred load read request, namely the deferred load task to be read by the deferred load read request.
The deferred loading task has a corresponding task execution candidate list in the execution process, executes a specific task process according to the sequence on the candidate list, and needs to read related data urgently at a client side and needs to acquire a deferred loading reading request on an input/output candidate list of a specific deferred loading task.
Preferably, referring to fig. 2, the step S112 further includes:
step S107, receiving a delayed loading read request of a specific delayed loading task;
step S108, selecting a thread maintainer corresponding to the delayed loading read request through a consistent Hash algorithm;
step S110, acquiring the corresponding specific delay loading task on the input/output thread maintained by the thread maintainer by a coroutine mechanism;
step S111, adding the load-deferred read request to the input/output candidate list of the specific load-deferred task.
The steps are used for receiving the delayed loading read request of a specific delayed loading task and adding the delayed loading read request to an input and output candidate list of the specific delayed loading task.
A deferred loading read request for a specific deferred loading task is received, and the thread maintainer corresponding to the request is selected through a consistent hash algorithm. This maintainer is the same one selected for the ordinary deferred loading task: the corresponding thread maintainer does not change because the task changes. In the step of acquiring the corresponding specific deferred loading task by a coroutine mechanism on the input/output thread maintained by the thread maintainer, a task for acquiring the specific deferred loading task needs to be established in advance; completing that task yields the specific deferred loading task, i.e., the one on which the read operation must be performed first. Establishing and executing the task that acquires the specific deferred loading task follow the same procedure as an ordinary task, i.e., steps S101 to S104, which are not repeated here.
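The key property used above is that a consistent hash maps the same task key to the same thread maintainer, so a later read request finds the maintainer that owns the original task. A minimal sketch of such a hash ring (class and key names are assumptions, not from the patent):

```python
import bisect
import hashlib

# Hypothetical sketch of the maintainer-selection step: a consistent-hash
# ring maps a task key (e.g. a snapshot id) to one thread maintainer.
class MaintainerRing:
    def __init__(self, maintainers, replicas=100):
        self.ring = []                       # sorted (hash, maintainer) points
        for m in maintainers:
            for r in range(replicas):        # virtual nodes smooth the load
                self.ring.append((self._hash(f"{m}#{r}"), m))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def select(self, task_key):
        """Pick the first ring point clockwise from the key's hash."""
        h = self._hash(task_key)
        i = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[i][1]

ring = MaintainerRing(["maintainer-0", "maintainer-1", "maintainer-2"])
# The same key always maps to the same maintainer, so a deferred-load read
# request is routed to the maintainer that owns the original task.
assert ring.select("snapshot-42") == ring.select("snapshot-42")
```

The determinism shown by the final assertion is what guarantees that "the corresponding thread maintainer does not change because the task changes".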
Preferably, referring to fig. 2, after the step S108, a step S109 is further included:
judging, through the related data in the thread maintainer list, whether the content to be read by the deferred loading read request is on the cloud server or local; if it is on the cloud server, proceeding to the next step of acquiring the corresponding specific deferred loading task by a coroutine mechanism on the input/output thread maintained by the thread maintainer.
The thread maintainer records and maintains data, including the association between local data and cloud-server data; it can therefore check the upload and download progress between local storage and the cloud server, and judge whether the content to be read by the deferred loading read request is on the cloud server or local. If the content to be read is on the cloud server and has not yet been downloaded locally, so that it needs to be deferred-loaded from the cloud server, the next step S110 is executed: the specific deferred loading task is acquired by a coroutine mechanism on the input/output thread maintained by the thread maintainer, so that the deferred loading read request can be added to the input/output candidate list of that task and executed preferentially when the task runs.
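The judging step above only needs the maintainer's record of which parts of the data have already been downloaded. A minimal sketch of that record, with all names and the byte-range representation being assumptions for illustration:

```python
# Hypothetical sketch of the judging step: the thread maintainer tracks
# which byte ranges of a snapshot have already been downloaded. A read
# falling inside a downloaded range is served locally; otherwise the data
# must first be deferred-loaded from the cloud server.
class ThreadMaintainer:
    def __init__(self):
        self.downloaded = []                      # list of (start, end) ranges

    def mark_downloaded(self, start, end):
        self.downloaded.append((start, end))

    def is_local(self, offset, length):
        """True if [offset, offset+length) lies in a downloaded range."""
        return any(s <= offset and offset + length <= e
                   for s, e in self.downloaded)

m = ThreadMaintainer()
m.mark_downloaded(0, 4096)
print(m.is_local(1024, 512))    # True  -> serve the read locally
print(m.is_local(8192, 512))    # False -> deferred-load from the cloud first
```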
Step S113, preferentially processing the deferred load read request during the specific deferred load task execution process.
This step is used for the execution of the deferred load read request.
The specific deferred loading task executes its work in the order given by the candidate list. When the client urgently needs related data and a deferred loading read request is found on the task's input/output candidate list, that read request is processed preferentially and the corresponding read operation is performed. The snapshot server no longer relies on a client retry mechanism to complete the read; instead it processes the deferred loading read request first, which reduces repeated operations and greatly lowers request-processing latency.
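The preferential handling described above can be sketched as a candidate list where ordinary work items run in order but an urgent read request is pushed to the front. This is an illustrative sketch; the item names are invented:

```python
from collections import deque

# Hypothetical sketch of the candidate list: ordinary deferred-load work
# items run in order, but an urgent client read request is placed at the
# head so it is processed first, instead of relying on client retries.
class CandidateList:
    def __init__(self):
        self.items = deque()

    def add_task_item(self, item):
        self.items.append(item)          # normal order: tail of the list

    def add_read_request(self, item):
        self.items.appendleft(item)      # urgent read: head of the list

    def run(self):
        order = []
        while self.items:
            order.append(self.items.popleft())
        return order

c = CandidateList()
c.add_task_item("load-block-0")
c.add_task_item("load-block-1")
c.add_read_request("read-block-7")       # client urgently needs block 7
print(c.run())
# -> ['read-block-7', 'load-block-0', 'load-block-1']
```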
And step S104, executing the uploading or delayed loading task on the input and output thread.
This step is used for the execution of the upload or deferred load task.
After receiving the upload task or the deferred loading task, the snapshot server adds it by a coroutine mechanism to the input/output thread maintained by the thread maintainer, and executes it on that thread in the order set by the coroutine mechanism. The same thread can process other tasks before one task finishes, so no task goes unprocessed for a long time, and time-sharing of the IO thread is realized.
As shown in fig. 3, a block diagram of an IO thread execution apparatus according to a second embodiment of the present application is shown, where the block diagram includes the following modules:
the task receiving module 301 is configured to receive an upload task or a delayed load task;
a thread maintainer selecting module 302, configured to select, through a consistent hash algorithm, a thread maintainer corresponding to the upload task or the delayed load task;
a task adding module 303, configured to add the upload or delayed load task in a coroutine mechanism on the input/output thread maintained by the thread maintainer;
a task execution module 304, configured to execute the upload or deferred load task on the input/output thread.
As shown in fig. 4, another structural block diagram of an IO thread execution apparatus according to a second embodiment of the present application is shown, where the structural block diagram includes the following modules:
and a task map adding module 305, configured to add the upload task or the delayed load task to a task map corresponding to the thread maintainer on an input/output thread maintained by the thread maintainer.
An initialization module 306 to initialize the thread maintainer list.
A delayed loading read request obtaining module 312, configured to obtain a delayed loading read request on an input/output candidate list of the delayed loading task;
a delayed load read request execution module 313, configured to preferentially process the delayed load read request during the execution process of the delayed load task.
A delayed loading read request receiving module 307, configured to receive a delayed loading read request of a specific delayed loading task;
the corresponding thread maintainer selecting module 309, configured to select, by using a consistent hash algorithm, the thread maintainer corresponding to the delayed loading read request;
the corresponding specific delayed loading task obtaining module 310 is configured to obtain, on the input/output thread maintained by the thread maintainer, the corresponding specific delayed loading task by a coroutine mechanism;
a deferred load read request adding module 311, configured to add the deferred load read request to the input/output candidate list of the specific deferred load task.
The determining module 308 is configured to determine, through the related data in the thread maintainer list, whether the content to be read by the deferred load read request is in a cloud server or a local area, and if the content is in the cloud server, enter the next step on the input/output thread maintained by the thread maintainer to obtain the corresponding specific deferred load task by a coroutine mechanism.
According to the IO thread execution method, the thread maintainer corresponding to the upload task or the deferred loading task is selected through a consistent hash algorithm; the upload or deferred loading task is added by a coroutine mechanism on the input/output thread maintained by that maintainer and then executed on that thread, realizing task execution on the IO thread. Because the upload or deferred loading task is added to the maintainer's IO thread through a coroutine mechanism, time-sharing of the IO thread is realized, and the starvation and thread-resource waste caused when tasks go unprocessed for a long time can be avoided.
Furthermore, on the basis of the IO thread execution method, a thread-local task map whose add and delete operations are executed inside the owning thread achieves lock-free processing and reduces maintenance complexity. During deferred loading, the deferred-load read operation is no longer completed through a client retry mechanism; instead the deferred loading read request is processed preferentially, reducing repeated operations and greatly lowering request-processing latency.
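The lock-free design mentioned above rests on a single-owner rule: only the IO thread that owns the task map mutates it, and other threads post add/delete operations to that thread rather than touching the map directly. A minimal sketch under those assumptions (class names and the queue-based hand-off are illustrative, not from the patent):

```python
import queue
import threading

# Hypothetical sketch of the lock-free task map: the map is touched only by
# its owning IO thread. Other threads never mutate it directly; they post
# add/delete operations to the owner's queue, so the map needs no lock.
class IOThread:
    def __init__(self):
        self.ops = queue.Queue()
        self.task_map = {}                       # owner-only (thread-local) state
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while True:
            op = self.ops.get()
            if op is None:                       # shutdown sentinel
                break
            action, key, value = op
            if action == "add":
                self.task_map[key] = value       # executed on the owning thread
            elif action == "delete":
                self.task_map.pop(key, None)

    def post(self, action, key, value=None):
        self.ops.put((action, key, value))       # safe to call from any thread

    def stop(self):
        self.ops.put(None)
        self.thread.join()

io = IOThread()
io.post("add", "upload-1", "pending")
io.post("add", "lazy-load-2", "pending")
io.post("delete", "upload-1")
io.stop()
print(io.task_map)   # {'lazy-load-2': 'pending'}
```

Only the queue is shared between threads; the map itself has a single writer, which is what makes the add and delete operations lock-free from the map's point of view.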
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (14)

1. An IO thread execution method, comprising:
receiving an uploading task or a delayed loading task;
selecting a thread maintainer corresponding to the uploading task or the delayed loading task through a consistent hash algorithm;
adding the uploading or delayed loading task on the input and output thread maintained by the thread maintainer by a coroutine mechanism, wherein one program comprises a plurality of coroutines, and the coroutines are a multitask mechanism of an operating system user state;
and executing the uploading or delayed loading task on the input and output threads, allowing the same input and output thread to process other tasks before one task is processed during multi-task execution, and realizing time-sharing multiplexing of the tasks on the same input and output thread through a coroutine mechanism.
2. The IO thread execution method according to claim 1, wherein the step of selecting the thread maintainer corresponding to the upload task or the deferred load task by using a consistent hashing algorithm further includes:
and adding the uploading task or the delayed loading task to a task map corresponding to the thread maintainer on an input/output thread maintained by the thread maintainer.
3. The IO thread execution method according to claim 2, wherein a task deletion operation is performed on the task map.
4. The IO thread execution method according to claim 2, wherein the step of selecting the thread maintainer corresponding to the upload task or the deferred load task by using a consistent hash algorithm further includes:
initializing the thread maintainer list.
5. The IO thread execution method of claim 4 wherein the step of initializing the thread maintainer list is implemented by:
each thread maintainer corresponds to one input/output thread, and each thread maintainer maintains a task map.
6. The IO thread execution method of claim 4, wherein the method further comprises:
obtaining a delayed loading read request on an input/output candidate list of a specific delayed loading task;
and preferentially processing the delayed loading read request in the execution process of the specific delayed loading task.
7. The IO thread execution method of claim 6, wherein the obtaining of a deferred load read request on the input/output candidate list of the specific deferred load task is preceded by:
receiving a delayed loading read request of the specific delayed loading task;
selecting the thread maintainer corresponding to the delayed loading read request through a consistent hash algorithm;
acquiring the corresponding specific delay loading task by a coroutine mechanism on the input and output thread maintained by the thread maintainer;
and adding the deferred load read request to an input and output candidate list of the specific deferred load task.
8. The IO thread execution method of claim 7, further comprising, after the step of selecting the thread maintainer corresponding to the deferred load read request by a consistent hashing algorithm:
and judging whether the content to be read by the delayed loading read request is in a cloud server or a local place through related data in the thread maintainer list, and if the content is in the cloud server, entering the next step to obtain the corresponding specific delayed loading task on an input/output thread maintained by the thread maintainer through a coroutine mechanism.
9. An IO thread execution apparatus, comprising:
the task receiving module is used for receiving the uploading task or the delayed loading task;
the thread maintainer selecting module is used for selecting a thread maintainer corresponding to the uploading task or the delayed loading task through a consistent hash algorithm;
the task adding module is used for adding the uploading or delayed loading task on the input/output thread maintained by the thread maintainer by a coroutine mechanism, wherein one program comprises a plurality of coroutines, and the coroutines are a multi-task mechanism of an operating system user state;
and the task execution module is used for executing the uploading or delayed loading task on the input and output threads, allowing the same input and output thread to process other tasks before one task is processed during multi-task execution, and realizing time-sharing multiplexing of the tasks on the same input and output thread through a coroutine mechanism.
10. The IO thread execution apparatus of claim 9, further comprising:
and the task map adding module is used for adding the uploading task or the delayed loading task to a task map corresponding to the thread maintainer on the input and output thread maintained by the thread maintainer.
11. The IO thread execution apparatus of claim 9, further comprising:
an initialization module to initialize the thread maintainer list.
12. The IO thread execution apparatus of claim 9, further comprising:
a delayed loading read request acquisition module, configured to acquire a delayed loading read request on an input/output candidate list of the delayed loading task;
and the delayed loading read request execution module is used for preferentially processing the delayed loading read request in the delayed loading task execution process.
13. The IO thread execution apparatus of claim 9, further comprising:
a delayed loading read request receiving module, configured to receive a delayed loading read request of a specific delayed loading task;
the corresponding thread maintainer selecting module is used for selecting the thread maintainer corresponding to the delayed loading read request through a consistent hash algorithm;
the corresponding specific delayed loading task acquisition module is used for acquiring the corresponding specific delayed loading task on the input and output thread maintained by the thread maintainer by a co-program mechanism;
and the delayed loading read request adding module is used for adding the delayed loading read request to the input and output candidate list of the specific delayed loading task.
14. The IO thread execution apparatus of claim 9, further comprising:
and the judging module is used for judging whether the content to be read by the delayed loading read request is in a cloud server or a local area through related data in the thread maintainer list, and if the content is in the cloud server, entering the next step on an input/output thread maintained by the thread maintainer to acquire a corresponding specific delayed loading task through a coroutine mechanism.
CN201810018487.4A 2018-01-09 2018-01-09 IO thread execution method and device Active CN110018880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810018487.4A CN110018880B (en) 2018-01-09 2018-01-09 IO thread execution method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810018487.4A CN110018880B (en) 2018-01-09 2018-01-09 IO thread execution method and device

Publications (2)

Publication Number Publication Date
CN110018880A CN110018880A (en) 2019-07-16
CN110018880B true CN110018880B (en) 2023-04-14

Family

ID=67187570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810018487.4A Active CN110018880B (en) 2018-01-09 2018-01-09 IO thread execution method and device

Country Status (1)

Country Link
CN (1) CN110018880B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983339B1 (en) * 2000-09-29 2006-01-03 Intel Corporation Method and apparatus for processing interrupts of a bus
CN1967487A (en) * 2005-11-18 2007-05-23 Sap股份公司 Cooperative scheduling using coroutines and threads
CN107172174A (en) * 2017-06-01 2017-09-15 重庆高略启达智慧城市信息技术有限公司 The quick upload of file is realized between clients and servers
CN107368362A (en) * 2017-06-29 2017-11-21 上海阅文信息技术有限公司 A kind of multithreading/multi-process for disk read-write data is without lock processing method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10430190B2 (en) * 2012-06-07 2019-10-01 Micron Technology, Inc. Systems and methods for selectively controlling multithreaded execution of executable code segments
US10212120B2 (en) * 2016-04-21 2019-02-19 Confluent, Inc. Distributed message queue stream verification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983339B1 (en) * 2000-09-29 2006-01-03 Intel Corporation Method and apparatus for processing interrupts of a bus
CN1967487A (en) * 2005-11-18 2007-05-23 Sap股份公司 Cooperative scheduling using coroutines and threads
CN107172174A (en) * 2017-06-01 2017-09-15 重庆高略启达智慧城市信息技术有限公司 The quick upload of file is realized between clients and servers
CN107368362A (en) * 2017-06-29 2017-11-21 上海阅文信息技术有限公司 A kind of multithreading/multi-process for disk read-write data is without lock processing method and system

Also Published As

Publication number Publication date
CN110018880A (en) 2019-07-16

Similar Documents

Publication Publication Date Title
US11182196B2 (en) Unified resource management for containers and virtual machines
Warneke et al. Exploiting dynamic resource allocation for efficient parallel data processing in the cloud
US11061729B2 (en) Throttling logging processes
US11188364B1 (en) Compilation strategy for a sharable application snapshot
JP2004288162A (en) Operating system architecture using synchronous task
US20210303327A1 (en) Gpu-remoting latency aware virtual machine migration
Kontagora et al. Benchmarking a MapReduce environment on a full virtualisation platform
Mavridis et al. Orchestrated sandboxed containers, unikernels, and virtual machines for isolation‐enhanced multitenant workloads and serverless computing in cloud
US20190347129A1 (en) User space pre-emptive real-time scheduler
CN116225728B (en) Task execution method and device based on coroutine, storage medium and electronic equipment
US20240086261A1 (en) Critical Section Speedup Using Help-Enabled Locks
CN113918336A (en) FreeRTOS task running method and device based on multi-core CPU
US20230376357A1 (en) Scaling virtualization resource units of applications
CN110018880B (en) IO thread execution method and device
WO2020248512A1 (en) Method for constructing runtime model of terminal application behavior
US11809275B2 (en) FaaS in-memory checkpoint restore
Liu et al. A large-scale rendering system based on hadoop
CN116382713A (en) Method, system, device and storage medium for constructing application mirror image
Wang et al. Reg: An ultra-lightweight container that maximizes memory sharing and minimizes the runtime environment
US20220350596A1 (en) Computing node allocation based on build process specifications in continuous integration environments
Lufei et al. vStarCloud: An operating system architecture for Cloud computing
Fireman et al. Prebaking runtime environments to improve the FaaS cold start latency
US7908375B2 (en) Transparently externalizing plug-in computation to cluster
US11762672B2 (en) Dynamic linker for loading and running an application over a plurality of nodes
Bussler Multi-Tenancy: A Concept Whose Time Has Come and (Almost) Gone.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40010943

Country of ref document: HK

GR01 Patent grant